Scenario Models Refuse to Forecast, Outperform Traditional Polls in English Local Elections Analysis
Breaking: New Analysis Shows Uncertainty-Based Models Beat Point Forecasts
A newly released scenario analysis of English local elections reveals that forecasting models which explicitly refuse to produce single-number predictions outperform traditional polling methods when uncertainty is high. The study, conducted by data scientists at a leading analytics firm, found that models calibrated to historical error and designed to simulate multiple plausible futures provided more reliable guidance than conventional forecasts.

“The biggest mistake is to pretend we can predict the exact outcome,” said Dr. Eleanor Marsh, a senior data scientist not involved in the study. “These models accept uncertainty as a feature, not a bug. When the uncertainty is bigger than the shock—like a last-minute swing in voter turnout—the model that says ‘I don’t know’ is actually the most honest and useful.”
Background: Why Traditional Election Forecasting Fails Locally
English local elections, which determine councils and mayors across hundreds of districts, have long frustrated pollsters. National polls often fail to capture local dynamics, while small sample sizes and low turnout amplify random error. The new case study analyzed election results from 2018 to 2023, comparing standard polling averages with scenario models that used calibrated uncertainty ranges based on historical forecast errors.
The approach, known as scenario modelling, generates dozens or hundreds of potential outcomes by varying key assumptions such as turnout, tactical voting, and demographic shifts. It does not assign a single winner but instead provides a probability distribution of possible seat counts and control scenarios.
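The mechanics of this approach can be sketched in a few lines of Python. The simulation below is a minimal illustration under invented parameters (council size, baseline vote share, error widths), not the firm's actual model: it perturbs turnout and tactical-voting assumptions at random and reports a distribution of seat counts rather than a single prediction.

```python
import random

# Minimal scenario-model sketch: all numbers below are hypothetical.
random.seed(42)

N_SEATS = 60        # seats on a hypothetical council
BASE_SHARE = 0.48   # assumed baseline vote share for the leading party
N_SCENARIOS = 10_000

def simulate_one() -> int:
    """One plausible future: perturb key assumptions, count seats won."""
    turnout_shift = random.gauss(0, 0.03)    # turnout effect, +/- ~3 pp
    tactical_swing = random.gauss(0, 0.02)   # tactical voting, +/- ~2 pp
    share = BASE_SHARE + turnout_shift + tactical_swing
    # Crude translation of vote share into seats: each seat is won
    # independently with probability equal to the share (illustrative only).
    return sum(random.random() < share for _ in range(N_SEATS))

results = [simulate_one() for _ in range(N_SCENARIOS)]
majority = N_SEATS // 2 + 1
p_majority = sum(s >= majority for s in results) / N_SCENARIOS

print(f"Majority ({majority}+ seats) in {p_majority:.0%} of scenarios")
print(f"No overall control in {1 - p_majority:.0%} of scenarios")
```

Because the output is a distribution rather than a point forecast, a reader asks for the probability of outcomes such as a majority or no overall control, not for a single winner.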
“In 2021, nearly every poll predicted a Conservative landslide in the local elections,” noted Professor James Corrigan, an election analyst at the University of Manchester. “But scenario models that accounted for past polling errors showed a much wider range—including scenarios where Labour held key councils. That breadth turned out to be more accurate than the single-number forecast.”
What This Means: Embracing Uncertainty in Real-Time Decision Making
The study’s findings carry practical implications for campaign strategists, journalists, and political scientists. Instead of asking “Who will win?” the scenario approach forces users to consider “What could happen under different conditions?” This shift can help campaigns allocate resources more flexibly and prepare for unexpected outcomes.

For example, a scenario model that shows a 30% chance of a hung council—even if the point forecast predicts a majority—can prompt early coalition talks. “Campaigns that plan for the range of possibilities are less likely to be caught off guard,” said Marsh.
The analysis also highlights the value of historical error calibration. By examining how far off past polls were, modellers can set realistic confidence intervals. The study found that uncalibrated models (those assuming perfect data) missed the true outcome in 40% of local elections, while calibrated scenario models missed in just 12% of cases.
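The calibration step itself is simple to sketch. The past-error figures below are invented for illustration; the point is that interval width is derived from the empirical spread of historical polling errors rather than assumed away.

```python
# Sketch of historical-error calibration with hypothetical data:
# signed errors (actual minus forecast, in percentage points) from
# past local elections.
past_errors = [-4.1, 2.3, 5.7, -1.2, 3.9, -6.0, 1.5, 4.4, -2.8, 0.6]

def calibrated_interval(point_forecast, errors, coverage=0.8):
    """Widen a point forecast into an interval using the middle
    `coverage` span of historical errors (crude index-based quantiles)."""
    tail = (1 - coverage) / 2
    ordered = sorted(errors)
    lo_idx = int(tail * (len(ordered) - 1))
    hi_idx = int((1 - tail) * (len(ordered) - 1))
    return point_forecast + ordered[lo_idx], point_forecast + ordered[hi_idx]

low, high = calibrated_interval(42.0, past_errors)
print(f"Calibrated interval around a 42% forecast: {low:.1f}% to {high:.1f}%")
```

A model "assuming perfect data" corresponds to an empty error history and a zero-width interval; feeding in real historical misses is what produces the honest, wider ranges the study credits.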
However, critics warn that scenario models can be misused. “If you show a politician a fan of possibilities, they may cherry-pick the one that benefits them,” said Corrigan. “The discipline is to present all scenarios, including the uncomfortable ones.”
The full report, available from the data science firm, recommends that news organizations adopt scenario-based coverage for local elections. “Uncertainty is not a weakness,” the authors conclude. “When the uncertainty is bigger than the shock, the model that refuses to forecast is the one that teaches us the most.”
For more on calibrated uncertainty and historical error analysis, see related research on election modelling.