Now that spring elections have passed, our minds have already fast-forwarded to the next round of polls. The question is: What have we learnt from this past round of elections and how does it add to what we know about electoral politics?
Over the past decade, electoral politics has become more professionalised, and so has the tiny universe of electoral analysts. Experts, pollsters, and political consultancy shops have mushroomed to offer their services to parties, the media, and private companies, and sometimes to all three at once.
This industry produces a great deal of data, but that data has two problems. First, much of it is used to predict electoral outcomes, not to understand the mechanisms that lead to them. Most pollsters are interested in voters’ self-declared voting intentions a few weeks ahead of an election, not in grasping how people’s lives have changed over the past years and how that might affect the choices they make. The voter holds no interest as such, except as an instrument for choosing one party over another.
As a result, there is a paucity of information about how voters make their choices, and much of the post-poll analysis consists of matching one’s forecasts with the results. This often boils down to explaining success in terms of success and defeat in terms of faulty electoral strategy.
The main problem with this sort of analysis is that it reads electoral outcomes as an extension of what parties did or did not do, or reduces the shaping of outcomes to the behaviour of large groups of voters, based on caste, gender or religion, without much fine distinction. This kind of analysis also ignores the social and political context in which elections take place.
The recent West Bengal elections are a case in point. The Trinamool Congress’s victory has been attributed to a mix of clever campaigning and Mamata Banerjee’s own chutzpah. The BJP’s defeat, on the other hand, has been largely attributed to hubris, poor strategy, and the failure to build a strong local organisation.
All of this may be true, but the production of electoral outcomes is complex, and their reading therefore needs to be far more nuanced. For instance, the impact of state governments’ social schemes on voting is assumed rather than measured. In fact, we know little about the determinants of electoral behaviour.
Thanks to academic surveys, we know how people voted by large categories such as gender, age, class, caste and religion. This is precious information, but it is not meant to tell us why voters voted the way they did.
The second problem is that most of the data one would need to properly contextualise an election either does not exist or is not accessible. A comprehensive electoral survey would need to set political attitudes against a range of underlying information on employment, access to information, access to public facilities and services, scheme implementation and so on. The data on all these issues is scarce, incomplete, inaccessible, and barely usable. India lacks an open data ecosystem that would enable the interlinking of such information.
Besides, most of the data generated around elections gets lost once elections are over, as its production takes place inside methodological black boxes and under proprietary embargoes. No cumulative knowledge about elections is generated in the process.
What data should we rely on, then? Any form of open data that passes the tests of transparency and accountability, is publicly available and documented, and contributes to building a baseline of information that can be used to refine political analysis.
This includes clean election result repositories, demographic density measures derived from satellite imagery, polling booth data, and large surveys such as the National Family Health Survey or the India Human Development Survey, whose authors publish raw data that can be matched to political boundaries.
All such data help us ask relevant questions, whose answers can mostly be found through the old-school methods of immersive reporting and fieldwork, neither of which happens much anymore. There is no substitute for the insights one gains from spending time on the ground, talking to voters, party workers, local observers and journalists, and listening to their woes and views.
This requires greater engagement than asking generic questions at tea stalls or speaking to party spokespersons. Ideally, data work and fieldwork should go hand in hand, as no ground investigation can provide the larger picture without the backing of empirical evidence. Instead, we move from one election to the next without paying attention to what happens between them.
There is an element of collective failure in our not having enough people trained in data work, ground reporting and ethnography. What is also needed is an open data environment that adheres to data transparency, and investment by the media in immersive reporting. Without these, experts will keep collecting data that does not add to our cumulative knowledge.
Views expressed above are the author’s own.