2016 state-by-state final prediction - frozen noon, November 7th

Shown below is the final forecast for the Electoral Map in the 2016 Presidential election between Republican Donald Trump and Democrat Hillary Clinton. This forecast was frozen at noon the day before the election.

ElectoralMap.net - analyzing the 2016 election forecast of the Pivit prediction market and the polls.

With these state-by-state probabilities, Clinton wins 80.5% of the time.

Post-election analysis - Nov 30, 2016

Post-election summary
The biggest takeaway from this year's election is that we are not in a golden age of forecasting: no matter how firm our grasp of statistics and sampling, polling can and will fail us. We still have to actually hold an election to see what will happen. We have also been reminded that a widely accepted consensus of experts can be wrong.

The forecasting was so badly lopsided this year that we have to consider the possibility that Democrat turnout was suppressed by extreme overconfidence. Nate Silver, a respected analyst praised by both the right and the left, was actually attacked for saying Donald Trump had a 30% chance to win, with his detractors insisting the number was closer to 1%. A serious narrative developed, at one point, that Texas could flip blue this year; Trump went on to carry the state by almost 10 points. The New York Times ran a hit piece on the L.A. Times for daring to say that Trump was winning in its "Daybreak" poll. The media and the forecasting community gave Democrats good reason to avoid the hassle of going out and voting.

Credit where it's due
The L.A. Times actually had to defend itself when it came under attack by the New York Times for the sample it was using in its "Daybreak" tracking poll. Shown below is the data from said poll, which bucked the consensus and showed Trump with a lead for much of the cycle. This poll was one of very few which picked Trump to win, and the only one to be pushed by one of the country's largest newspapers. Detractors are still insisting, even now, that the poll is wrong because Clinton carried the popular vote. Given the state of the rest of forecasting in this election, I don't find that argument convincing. They picked the winner using a nontraditional methodology and sample, and they deserve credit for getting it right.
They aren't forecasters or analysts, but an honorable mention in the "credit where it's due" category goes to Ann Coulter on the right and Michael Moore on the left. Michael Moore gets credit for being a lone voice on the left predicting a Trump victory, and demonstrating that he really does understand the working people of the rust belt. Ann Coulter gets credit for calling a Trump victory before he even secured the nomination, and enduring ridicule and condescending laughter on the Bill Maher show for doing so. Like Moore, Coulter called the rust belt with 100% accuracy and was one of very few conservative pundits saying that Florida wasn't a requirement for a Trump win - which ended up being true.

Battleground polling bias
Call it error or call it bias: whatever the label, the battleground polling this year had it in spades. Not only was the polling bad, it was bad where it mattered most. In Pennsylvania, Wisconsin, and Michigan the average bias in Clinton's favor was 4.6 points. Wisconsin claims this year's honor for the largest gap between the RCP poll average and the actual outcome, at a massive 7.3 points. In ten of fourteen battleground states the polling was biased toward Clinton. Most of the error seems to have come from reusing turnout models from 2008 and 2012 and vastly oversampling Democrats.

State            RCP average     Actual          Error/Bias
Wisconsin        Clinton +6.5    Trump +0.8      Clinton +7.3
Missouri         Trump +11.0     Trump +19.1     Clinton +7.1
Iowa             Trump +3.0      Trump +9.5      Clinton +6.5
Ohio             Trump +3.5      Trump +8.0      Clinton +4.5
Michigan         Clinton +3.4    Trump +0.2      Clinton +3.6
Pennsylvania     Clinton +1.9    Trump +1.1      Clinton +3.0
North Carolina   Trump +1.0      Trump +3.7      Clinton +2.7
Florida          Trump +0.2      Trump +1.1      Clinton +0.9
Georgia          Trump +4.8      Trump +5.2      Clinton +0.4
New Hampshire    Clinton +0.6    Clinton +0.3    Clinton +0.3
Arizona          Trump +4.0      Trump +3.5      Trump +0.5
Virginia         Clinton +5.0    Clinton +5.5    Trump +0.5
Colorado         Clinton +2.9    Clinton +4.9    Trump +2.0
Nevada           Trump +0.8      Clinton +2.4    Trump +3.2
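The Error/Bias column is simply the RCP margin minus the actual margin, with Clinton leads taken as positive. A minimal recomputation over a few rows (a hypothetical sketch, not the site's actual code):

```python
# Signed margins from the table above: positive = Clinton lead,
# negative = Trump lead. Pairs are (RCP average, actual result).
margins = {
    "Wisconsin":    (+6.5, -0.8),
    "Michigan":     (+3.4, -0.2),
    "Pennsylvania": (+1.9, -1.1),
    "Nevada":       (-0.8, +2.4),
}

# Positive bias = polls overstated Clinton; negative = overstated Trump.
bias = {state: poll - actual for state, (poll, actual) in margins.items()}

for state, b in bias.items():
    direction = "Clinton" if b > 0 else "Trump"
    print(f"{state}: {direction} +{abs(b):.1f}")

# Average bias across the three decisive rust-belt states (the 4.6 quoted above).
rust_belt = sum(bias[s] for s in ("Pennsylvania", "Wisconsin", "Michigan")) / 3
print(f"PA/WI/MI average bias: Clinton +{rust_belt:.1f}")
```

Running this reproduces Wisconsin's 7.3-point miss and Nevada's 3.2-point miss in the opposite direction.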

The true outlier this year is Nevada
The rust belt and industrial midwest surprised everyone this year, but not much has been said about the surprise out west. Using the Real Clear Politics poll averages, 19 states forecast for Hillary Clinton went to Clinton, and 27 states forecast for Donald Trump went to Trump. Three states forecast for Clinton went to Trump instead. But just one state forecast for Trump ended up going to Clinton, and that state was Nevada.

In the battleground states the average polling bias was +2.2 points in Clinton's favor. In Nevada the apparent bias is 3.2 points in Trump's favor, a deviation of roughly 5.4 points from the battleground average. Furthermore, that average is itself pulled about 0.4 points toward Trump just by Nevada's presence in the calculation; without Nevada, the average bias toward Clinton is even greater. The worst of the battleground polling was at least consistently bad, in that it universally favored Clinton. Nevada's polling was not only bad but inconsistent with that general trend.
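A quick way to see how far Nevada sits from the pack is to recompute the averages directly from the Error/Bias column of the table above (a rough sketch, not the site's methodology):

```python
# Error/Bias values, signed toward Clinton (negative = Trump-favoring).
biases = [
    7.3, 7.1, 6.5, 4.5, 3.6, 3.0, 2.7, 0.9, 0.4, 0.3,  # ten Clinton-favoring states
    -0.5, -0.5, -2.0, -3.2,                             # AZ, VA, CO, NV
]
nevada = -3.2

avg_all = sum(biases) / len(biases)                      # about +2.2 toward Clinton
avg_without_nv = (sum(biases) - nevada) / (len(biases) - 1)
nv_deviation = avg_all - nevada                          # Nevada's gap from the pack

print(f"average bias, all 14:   {avg_all:+.2f}")
print(f"average without Nevada: {avg_without_nv:+.2f}")
print(f"Nevada deviation:       {nv_deviation:.2f}")
```

Dropping Nevada pushes the average further toward Clinton, which is the skew described above.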

Prediction markets failed as well
Prediction markets failed to shrug off the forecasting consensus this year, and the "wisdom of the crowds" proved no better than the wisdom of the experts. In 2008 and 2012 prediction markets did very well, but so did the general forecasting community. This year would appear to vindicate those who say that prediction markets reinforce the expert consensus rather than reaching an independent conclusion. PredictWise had Clinton as an 89% favorite to win, and PredictIt at 82%. The Bitcoin-based prediction market Predictious was down at 75%, five points higher than Nate Silver, who had Clinton at 70% going into election day. Pivit hovered around the 80% mark for most of October.

Because of outages and problems getting data from the site, ElectoralMap.net did not use the Pivit prediction market for the final forecast -- and ended up with a more accurate prediction after changing data sources. Staying with Pivit would have meant getting North Carolina and Florida wrong as well.

Pivit had a good early season
Pivit was at peak accuracy in late September and early October, between roughly 9/20 and 10/5. It was at this time that Pivit had 47/50 states correct, missing only Pennsylvania, Michigan, and Wisconsin. Missing those states means missing the election, of course, but nevertheless this was the most accurate period for Pivit. Shown below is the forecast from October 5, which reflects the relatively stable betting of late September and early October.

During that timeframe, I noted that FiveThirtyEight.com was diverging from Pivit rather rapidly. Pivit had Trump maintaining a lead in Florida, Ohio, North Carolina, and Iowa while the 538 forecast shifted toward a Clinton victory in all of those states. The divergence was most extreme in Florida and North Carolina, two states that were blue in the Nate Silver forecast but went to Trump. I believed this was significant because it showed a prediction market going against the grain and shrugging off polls and other forecasts. Ultimately, that changed.

Something happened to Pivit around October 23
As I noted in a previous news post, Pivit experienced enormous shifts in betting over a roughly 100-hour period. These shifts were larger than any previous jumps in Pivit's betting data, and they moved Florida and North Carolina out of Trump's column and into Clinton's. We now know that Trump did indeed carry those two states, and that Pivit was more accurate before the large and sudden shifts. So what happened? We can only speculate. Since Pivit is a play-money prediction market, it's not unreasonable to consider the possibility that it was gamed. There is no way to know without access to Pivit's internals. But the following stands:
  1. Pivit was accurate in early October when it held Florida, North Carolina, and other swing states for Trump in spite of most other forecasts shifting toward Clinton.
  2. Pivit lost accuracy in late October, suddenly and erratically, when there were jolting shifts (as large as 24% in the case of Florida) toward Clinton.
  3. After the shifts the Pivit forecast ended up identical to the 538 forecast.
  4. After these events, ElectoralMap.net made a more accurate forecast by removing Pivit as a data source.
Going forward ElectoralMap.net will probably move to a more custom forecast, rather than rely on a single prediction market for data.

The Recounts
What are the odds that the proposed recounts change anything? Let's put it this way: the prediction markets haven't even created events for the recounts in Michigan, Wisconsin, and Pennsylvania. Michigan, the tightest of the three races, has already been counted twice, and Trump won both times with a margin of roughly 10,000 votes. For reference, the most enthusiastic Al Gore supporters claim that a statewide Florida recount in 2000 would have netted Gore a swing of about six hundred votes out of six million cast.
