Poll forecasting: Dial M for mistake
In its exit poll, Axis APM predicted 169-183 seats for the grand alliance (GA) in the recently held assembly elections in Bihar. The pollster was the only one close to the final result, but ironically, its results did not appear on CNN-IBN, where they were scheduled to be aired. The channel allegedly refused to air them over doubts about the prediction methods, which gave a massive victory to the GA. The pollster then put the results online, where several media organisations later picked them up.
Almost all other exit pollsters had predicted a close finish between the Bharatiya Janata Party (BJP)-led alliance and the Nitish Kumar-led GA, or in some cases, predicted victory for the former. Exit polls are largely considered more accurate than opinion polls. But this was not the case in Bihar.
It is widely believed that a huge margin of victory is always easy to predict. The Bihar poll upended that belief too. The science of psephology in India, then, is still struggling with the fundamentals, not only in opinion polls but also in exit polls. “Poor application of psephology is the real reason for pollsters’ failure,” says Yogendra Yadav, a poll analyst.
Psephology is a complex science, and poll analysts are experimenting with various methods to predict elections accurately. In India, psephologists have been conducting opinion polls and predicting elections for over 30 years.
Experts say problems may occur at two levels: arriving at a concrete estimate of the vote share, and converting that vote share into seats. If a pollster gets even one level wrong, the forecast errs. In the case of Bihar, pollsters say the failure was due to very poor sampling: surveyors were interviewing only upper-caste, rich and urban people.
This is one reason why even the exit polls favoured the BJP; the earlier opinion polls had followed the same path. “It is always cheaper to cater to the rich people,” says Yadav.
Only 11.5 percent of the voters in Bihar live in urban areas; the rest live in rural areas. Vinay Kanth, a mathematics professor at Patna University, says, “I doubt whether these agencies included rural voters in their samples.” Praveen Rai, a political analyst with the Centre for the Study of Developing Societies (CSDS), says, “Forecasting agencies do not reveal their methodologies and related details about their surveys—samples, the profile of people interviewed and location.”
Election forecasting in India is based on opinion polls, which ascertain the party preference of the sampled electorate and calculate the vote share of each party contesting the election. The final vote share of each party is calculated by assigning ‘weightages’ based on the actual vote share of political parties in the previous election. The weighted vote share is then fed into a forecasting model, which translates it into the number of seats each party is most likely to win, he adds.
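The weighting step described here can be sketched roughly as a blend of the fresh sample with the previous election's actual result. A minimal illustration follows; the vote shares and the blend factor are invented for the example, since agencies do not publish their actual weights.

```python
# Hedged sketch of the 'weightage' step; all numbers are hypothetical.

def weighted_vote_share(sample_share, previous_share, weight=0.7):
    """Blend the raw sampled vote share with the party's actual share
    in the previous election, giving `weight` to the new sample."""
    return weight * sample_share + (1 - weight) * previous_share

# Hypothetical party: polls 44% in the survey but won 40% last time.
blended = weighted_vote_share(0.44, 0.40, weight=0.7)
print(round(blended, 3))  # 0.428
```

The blended figure, not the raw sample share, would then go into the seat-forecasting model.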
The statistical challenge begins at the very first step. Pradeep Gupta, the chairperson of Axis APM, says correct sampling is a herculean task. “That’s why we conduct a survey before conducting the general survey to predict election results. This first survey is meant to understand the demography and the local issues influencing the results, thus identifying the right sample,” he adds.
The election forecast models developed by various organisations use statistical techniques such as the Multiplier Effect, the Index of Opposition Unity (a measure of how far one party’s vote bank supports its alliance partner), the Cube Law (under which seat shares change by a multiplying factor for every one percent change in vote share) and the Probabilistic Count (a statistical tool to decipher large data sets).
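Of these, the Cube Law is the simplest to illustrate: in its classic two-party form, the ratio of seats won is taken to vary as the cube of the ratio of votes, so a small lead in votes is amplified into a much larger lead in seats. A minimal sketch of that form follows; the vote shares are invented for the example.

```python
# Hedged sketch of the classic two-party Cube Law; figures are illustrative.

def cube_law_seat_share(votes_a, votes_b):
    """Cube Law: seats_a / seats_b = (votes_a / votes_b) ** 3.
    Returns party A's predicted share of seats in a two-party contest."""
    ratio = (votes_a / votes_b) ** 3
    return ratio / (1 + ratio)

# Hypothetical: 52% vs 48% of the two-party vote.
share = cube_law_seat_share(0.52, 0.48)
print(round(share, 3))  # 0.56 -- a 4-point vote lead becomes a 12-point seat lead
```

This amplification is why converting votes to seats is so sensitive: a sampling error of a point or two in vote share can swing the seat forecast dramatically.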
A well-known face of poll analysis in India, Rajeev L Karandikar, director of the Chennai Mathematical Institute, who works with Lokniti, reveals the statistical method he uses: “We first decide upon the total number of constituencies to be sampled, somewhere between 100 and 280, selected via circular random sampling or systematic sampling. However, this is not possible in exit polls. Generally, agencies pick about 10 voters at each booth they have selected.”
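Circular systematic sampling, as Karandikar describes it, takes every k-th unit from a random starting point, wrapping around the list so every unit has the same chance of selection. A minimal sketch follows; the sample size of 120 is an assumption for the example (243 is the size of the Bihar assembly).

```python
import random

def circular_systematic_sample(population_size, sample_size, seed=None):
    """Pick roughly every k-th unit from a random start, wrapping
    circularly, so each unit has equal inclusion probability."""
    rng = random.Random(seed)
    k = population_size / sample_size          # sampling interval
    start = rng.randrange(population_size)     # random starting point
    return sorted({(start + round(i * k)) % population_size
                   for i in range(sample_size)})

# Illustrative: sample 120 of Bihar's 243 assembly constituencies.
sample = circular_systematic_sample(243, 120, seed=1)
print(len(sample))  # 120
```

The appeal of the method is its simplicity: a single random draw fixes the whole sample, and the fixed interval spreads it evenly across the list of constituencies.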
The toughest challenge is converting the vote percentage into seats. “I use a swing model along with opinion poll data to estimate votes for each major party in each seat and use a probabilistic count method (which he devised) to convert these into seat estimates,” says Karandikar. If there is a huge gap between the vote percentages, predicting the winner is easy; one way to identify winnable seats for a party is to go to every seat and find out the likely winner. But where there is no huge margin between the contestants, prediction becomes problematic.

In such cases, experts use a method called backtesting. Under this, pollsters take the previous vote share of the different parties and compare it with their survey findings. The change in the percentage of votes is called a swing. After dividing the state into regions, they estimate the swing at the regional level and base their results on it. Often, when the predicted vote share in each constituency is compared with the actual result, there are large variations. Experts say pollsters need a more representative sample base to estimate the vote share of the major parties in each constituency.
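The swing calculation described above can be sketched as a uniform regional swing: compute each party's change in vote share at the regional level, apply it to every constituency's previous result, and award the seat to the new leader. All party names and figures below are invented for illustration.

```python
# Hedged sketch of a uniform regional swing model; all numbers are invented.

def apply_swing(previous_results, regional_swing):
    """previous_results: {seat: {party: last-election vote share}}
    regional_swing: {party: change in vote share implied by the poll}
    Returns the predicted winner of each seat after applying the swing."""
    predictions = {}
    for seat, shares in previous_results.items():
        adjusted = {party: share + regional_swing.get(party, 0.0)
                    for party, share in shares.items()}
        predictions[seat] = max(adjusted, key=adjusted.get)
    return predictions

# Hypothetical region with two seats: the poll shows the GA up 5 points
# and the rival alliance down 5 points from the last election.
previous = {"Seat A": {"GA": 0.38, "NDA": 0.42},
            "Seat B": {"GA": 0.45, "NDA": 0.40}}
swing = {"GA": 0.05, "NDA": -0.05}
print(apply_swing(previous, swing))  # both seats flip or stay with the GA
```

The model's weakness is visible in the sketch itself: it assumes the regional swing applies uniformly to every constituency, which is exactly where the large constituency-level variations mentioned above creep in.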
The common perception that political parties or corporates influence poll findings does not hold water, as the pollster’s credibility is at stake, says Yadav. India is distinctive in its voting patterns: voters’ choices can turn on caste, religion, education and class. Forecasting science needs to account for all these factors, in rural as well as urban areas.
Kanth says that seat conversion is not always proportionate to the vote share. The constituency-wise distribution is tough to decipher, he says, adding that it is not clear whether the booths chosen represented the entire constituency. Agencies use corrective methods to reduce the margin of error. But it is clear from the Bihar polls that the pollsters themselves are sitting on an embarrassing error.
(The author is a senior reporter with Down to Earth. Views expressed are strictly personal)