Published April 26, 2022

## In partnership with the Toronto Star, Vox Pop Labs launches The Signal, an election forecast for the 2022 Ontario provincial election.

The Signal, now available at https://www.thestar.com/politics/ontario-election/2022/election-forecast.html, is based on a Bayesian dynamic linear model. This type of model underlies forecasting models currently in use for U.S. elections with which many will be familiar, such as those by the New York Times and FiveThirtyEight.

Our variant of the model accounts for two biases in the polling industry. First, we account for the fact that pollsters differ systematically from one another with respect to whether they over- or under-represent certain voters. For example, compared to the polling industry average, some pollsters might over-represent Progressive Conservative party voters; others, NDP voters. The model accounts for these differences dynamically, such that each poll that is released is filtered for our current estimate of that bias. Polls from multiple years are used to calculate these “house biases”, and the biases themselves are recalculated each time a new poll is released. Second, we account for bias in the polling industry as a whole by using data from previous elections. In the Ontario context, these biases are relatively small, although not insignificant: even small differences in provincial vote share can have relatively large effects on seat share.
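As a rough illustration of the first correction (not the production model), filtering a poll for a house bias amounts to subtracting the pollster's estimated systematic deviation from its raw numbers. The pollster names and bias values below are hypothetical; in the real model the biases are estimated jointly with vote intention from the full polling history.

```python
# Illustrative sketch: adjusting raw poll numbers for estimated "house bias".
# Bias values are placeholders, expressed relative to the industry average.

house_bias = {           # pollster -> estimated bias (pct points) on PC vote share
    "Pollster A": +1.5,  # tends to over-represent PC voters
    "Pollster B": -0.8,  # tends to under-represent PC voters
}

def debias(pollster: str, raw_pc_share: float) -> float:
    """Return a poll's PC vote share filtered for the pollster's house bias."""
    return raw_pc_share - house_bias.get(pollster, 0.0)

print(debias("Pollster A", 37.0))  # 35.5
print(debias("Pollster B", 34.0))
```

A pollster with no estimated bias (or too little history to estimate one) passes through unchanged.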

Because there are many days on which polls are not released and because polls contain sampling error, the model uses information about where vote intention stood on one day to inform where it stands the next day. If a new poll is released, vote share estimates for that day effectively become a weighted average of information from the newly released poll and from information about where vote intention stood on the previous day. This means that outlier (and all other) polls are effectively pulled in toward the previous day’s forecast. Visually, this means that vote intention across time will appear relatively smooth, as we would expect it to in reality.
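A minimal sketch of this day-to-day filtering, assuming a simple Gaussian random walk for the "true" vote share and Gaussian poll sampling error (all variances and numbers below are illustrative, not the model's actual values):

```python
def daily_update(prev_mean, prev_var, drift_var, poll=None, poll_var=None):
    """One day of a Gaussian random-walk filter for vote share.

    Without a poll, the estimate carries forward and uncertainty grows
    by the daily drift variance. With a poll, the posterior is a
    precision-weighted average of the prior (yesterday's forecast plus
    drift) and the new poll, so outliers are pulled toward the prior.
    """
    mean, var = prev_mean, prev_var + drift_var   # predict step
    if poll is not None:
        w = var / (var + poll_var)                # weight given to the new poll
        mean = (1 - w) * mean + w * poll          # weighted average
        var = (1 - w) * var                       # uncertainty shrinks
    return mean, var

# No poll today: estimate unchanged, uncertainty grows.
m, v = daily_update(35.0, 1.0, drift_var=0.25)
# A 40% poll with large sampling variance only partially moves the estimate.
m, v = daily_update(m, v, drift_var=0.25, poll=40.0, poll_var=4.0)
print(round(m, 2))  # lands between 35 and 40, closer to yesterday's 35
```

This is the mechanism behind the smooth trend lines: each poll nudges the estimate rather than resetting it.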

This differs from other forecasting models in Ontario, whose forecasts can jump around drastically from poll release to poll release. Unfortunately, this leads many commentators to speculate that large changes are occurring in the electorate for reasons related to the campaign, even when no substantial changes are occurring in reality. An added benefit of our approach is that a new forecast can be released for each day of the campaign, even if no new poll is released, so we are able to estimate vote intention for every day of the campaign.

To estimate regional-level vote share, we run a separate model with the same basic structure as the provincial model. We then adjust the regional vote share results proportionally so that they match estimates from the provincial-level forecast. At the riding level, we use the vote share achieved by each party in each riding during the 2018 Ontario provincial election, adjusting these proportions to match the estimated regional (and provincial) vote share forecasts. It is worth noting that the degree of uncertainty in these projections is highest at the individual riding level. Because riding-level predictions are derived from provincial vote share estimates and vote share in the 2018 provincial election rather than from local polling data (which do not exist in sufficient numbers), they should be interpreted with due caution.
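The proportional-adjustment step can be sketched as a uniform swing applied to 2018 riding results, renormalized so shares sum to 100. The party labels and numbers below are hypothetical:

```python
def adjust_riding(riding_2018, regional_2018, regional_forecast):
    """Scale each party's 2018 riding share by its forecast regional swing,
    then renormalize so the riding's shares sum to 100."""
    scaled = {
        party: share * (regional_forecast[party] / regional_2018[party])
        for party, share in riding_2018.items()
    }
    total = sum(scaled.values())
    return {party: 100 * s / total for party, s in scaled.items()}

riding_2018 = {"PC": 45.0, "NDP": 35.0, "LIB": 20.0}        # 2018 riding result
regional_2018 = {"PC": 40.0, "NDP": 35.0, "LIB": 25.0}      # 2018 regional result
regional_forecast = {"PC": 36.0, "NDP": 38.0, "LIB": 26.0}  # current forecast
print(adjust_riding(riding_2018, regional_2018, regional_forecast))
```

A party forecast to slip regionally (here the PCs, 40 to 36) sees its riding share scaled down proportionally, and vice versa.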

Visit The Signal at https://www.thestar.com/politics/ontario-election/2022/election-forecast.html.

I don’t see any allowance for the many polls that are online or automated interactive telephone polls. No thanks. I’ll stick with Nanos, who do telephone interviews on both landlines and cell phones. I’ll also stick with them because they have been the most accurate over the years when their numbers are compared to final outcomes.

Hi Elmo. The model actually accounts explicitly for biases in survey mode (IVR, telephone, online) and for each polling organization’s “house bias” (i.e. whether a pollster historically polls higher for a given party compared to others).

I understand your skepticism of polls, given recent elections in Canada. One benefit of the way our model is set up is that it explicitly accounts for these problems dynamically, using past election and polling results. The bias parameters in the model are also recalculated each time The Signal is updated.

If you run your model with information from the 2011 election, what are the results?

Thanks!

Assumption of bias bothers me. Cannot the assumption be wrong just as easily as it is right? Most poll weighting rests on previous electoral behaviour and weights incumbency strongly. In this election, there are ~10% new ridings and some redistribution. There are no priors for these ridings.

The parties and the leaders are much different today than four years ago. The NDP has gone from a rockstar to a stoic. The Liberals, from an unlikable to a popular leader. Four years ago the CPC head was tolerated; now he is disliked. Given this dynamic, are priors relevant?

The CPC has lost ~20% of their incumbents to retirement. Among these are a number of cabinet ministers with high profiles. This would seem to skew the importance of incumbency and prior results.

Based on prior results, polling is weighted to account for the rural efficiency of the CPC vote. In most predictions, they are losing rural ridings in the North, Atlantic Canada and BC, with smaller losses indicated in Ontario and the Prairies. They may pick up a couple of non-urban ridings in Quebec. Again, prior electoral behaviour may have less bearing on this election than ever before.

I distrust the reliance on correlation with past elections and prefer frequent polls with decent sample sizes, live pollsters, neutral questions.

Hi John,

Thanks for your thoughts about the model and its potential problems. I address some of your concerns in turn below.

In the model, there’s actually no assumption of bias because the bias parameters are estimated from the data themselves (relative to the industry average), and, moreover, these parameters are not static: they are re-estimated each time a new poll is released. If there is no systematic difference in vote intention estimates between polling organizations, the bias parameters will all be roughly 0 for every pollster.

With respect to the new ridings, Statistics Canada has taken data from the 2011 polling station results and transposed them to the new riding boundaries. Although this is less than ideal, it provides a decent approximation to where results will stand in the upcoming election.

The loss and gain of incumbents can be modelled at the riding level, so one can account for the fact that an incumbent who had previously enjoyed an incumbency bump in support has stepped down. This bump will naturally not carry over to his or her successor, which, again, can be modelled.
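One way to sketch this adjustment (the function name and bump size are hypothetical; the actual model estimates the bump rather than fixing it):

```python
def riding_baseline(share_2018: float, incumbency_bump: float,
                    incumbent_running_again: bool) -> float:
    """Baseline vote share for a party in a riding: if the incumbent has
    stepped down, remove the estimated personal incumbency bump, since
    the successor does not inherit it."""
    if incumbent_running_again:
        return share_2018
    return share_2018 - incumbency_bump

# A retiring incumbent's party starts from a lower baseline.
print(riding_baseline(48.0, 3.0, incumbent_running_again=False))  # 45.0
print(riding_baseline(48.0, 3.0, incumbent_running_again=True))   # 48.0
```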

The model only partially relies on past polling estimates as compared to past election results: I put a “prior” on the bias terms related to differences between the polls and the results in past elections, such that these biases are pulled toward zero. This is done because 1) pollsters are expected to modify their methods to generate better results, and 2) each election is only a single data point, with error of its own. Note that the alternative would be to assume either that the polling industry’s average bias is zero _or_ that bias from previous elections carries through fully to the upcoming election. Our model is a compromise between these two extremes, essentially taking a weighted average of the assumption of zero bias and the bias estimates derived from past election results and polls.
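The compromise described here can be sketched as a weighted average of zero and the historically estimated industry bias. The weight below is illustrative; in the model it falls out of the prior rather than being set by hand:

```python
def shrunk_bias(historical_bias: float, weight_on_history: float) -> float:
    """Shrink the industry-wide bias estimate toward zero.

    weight_on_history = 0 reproduces the assumption of no bias;
    weight_on_history = 1 carries past bias through fully.
    The model sits between these extremes.
    """
    return weight_on_history * historical_bias + (1 - weight_on_history) * 0.0

print(shrunk_bias(2.0, 0.5))  # 1.0: halfway between "no bias" and "full carry-over"
```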

I hope this clears up some of your concerns, but if you have any more questions please let us know!

It strikes me as somewhat spurious to have a model that accurately accounts for the differences in results based on the survey method. The only way it seems to me that one could theoretically do this is to run the same survey during the same time period using all three data collection methods (live telephone interviewer, IVR or net), and then, assuming one has an election within a day or two of the data collection, compare the results. And most importantly, one trial would not be sufficient to make an inference about the differences that result from the three common data collection procedures (live person et al). So it would be of interest to have you explain how you actually account for differences in survey methods, since whether you use regression or other methods to determine appropriate weights to apply to data collected by different survey methods, you must have a number of trials using the same survey, collected by each survey method, and a way to validate the findings after the fact (e.g., an election). So can you tell us how you arrive at a valid set of weights? Thanks

Hi Average Joe,

Your intuition for how this is done is a good one. The model takes all polls across multiple years and (effectively, though not technically) compares pollsters’ estimates to one another when poll releases occur relatively close together. This occurs dynamically, such that biases are re-estimated with each poll that is released: new polls tell us something about biases and vote share in the past as well.

To gain some intuition for how this works, let’s say that day to day we expect the “true” value of vote share in the electorate for a given party to move, but not in leaps and bounds (e.g. we might expect few, if any, 5% jumps in a single day). In fact, we can estimate how much vote share moves day to day on average as part of the model, which we do. Pollsters provide us with an estimate of where the vote stands on a given day or, at least, over the range of days during which the poll is in the field. Some of the change we observe day to day will be due to sampling error; some due to real changes in party support among the electorate; and some “change” due to bias. Because we have a forecast for a given day, an unbiased pollster is expected to release a poll with estimates that fall in and around that forecast, although of course rarely directly on it. However, if a pollster is systematically high or systematically low relative to where we expect, we capture this deviation in the form of a parameter, which can be interpreted as the bias of that pollster.
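To make the intuition concrete, here is a crude non-Bayesian approximation: a pollster's house bias is roughly its average deviation from the model's forecast on the days it polled. The forecast values and polls below are synthetic; the real model estimates biases and the underlying vote share jointly rather than in two passes.

```python
# Crude approximation of house-bias estimation: average each pollster's
# deviation from the day's forecast (synthetic data).
forecast = {1: 35.0, 5: 35.4, 9: 36.0, 14: 36.2}   # day -> forecast PC share
polls = [  # (day, pollster, reported PC share)
    (1, "Pollster A", 37.2), (5, "Pollster A", 36.9), (9, "Pollster A", 38.1),
    (1, "Pollster B", 34.1), (14, "Pollster B", 35.6),
]

deviations = {}
for day, pollster, share in polls:
    deviations.setdefault(pollster, []).append(share - forecast[day])

house_bias = {p: sum(d) / len(d) for p, d in deviations.items()}
print(house_bias)  # Pollster A systematically high, Pollster B slightly low
```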

To make interpretation of the bias term more concrete, you might want to think of it as the sum of various characteristics within a polling house that, in aggregate, cause vote intention estimates to systematically deviate from the true value in the population. These characteristics might include, for example, differences in survey mode, population coverage, response rate, and question wording. (I want to stress that I’m not talking about partisan bias in the sense of pollsters trying to sway the electorate one way or another.)

We chose to write up the methodology on this page in a way that is as accessible as possible, to provide readers with an intuition for how the model works. Formal notation was therefore avoided. I might write up a more technical document shortly, however, so stay tuned.

In the meantime, if you have any more questions, please let me know.

And if you’re really interested, please take a look at the following two articles which represent the basis for the current state-of-the-art in election forecasting (both of which inform our own model).

http://votamatic.org/wp-content/uploads/2013/07/Linzer-JASA13.pdf

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.397.6111&rep=rep1&type=pdf

[Note that the latter article explains how polling bias is estimated as part of the model; the former effectively assumes that the bias cancels out in aggregate (second-to-last paragraph, p. 126).]

Do you also account for certain groups being less “poll-able”, i.e. the common belief that Conservative voters have a higher tendency to not comment on voting intentions?

Hi Andy. Thanks for the question.

We do, although we do so indirectly. The under-representation of certain groups that are less “poll-able” is effectively soaked up in the model’s bias parameters for each polling organization: a given polling organization will typically under- or over-estimate support for each party due to certain groups not being captured in its sample, in addition to a variety of other factors.