[HN Gopher] An election forecast that's 50-50 is not "giving up"
___________________________________________________________________
An election forecast that's 50-50 is not "giving up"
Author : luu
Score : 27 points
Date : 2025-03-08 22:20 UTC (2 days ago)
(HTM) web link (statmodeling.stat.columbia.edu)
(TXT) w3m dump (statmodeling.stat.columbia.edu)
| bryanlarsen wrote:
| Nate Silver also had an article saying that the most likely
| scenario would be a blowout. There was a 25% chance that Trump
| would win all 7 battleground states, and a 15% chance Harris would win
| all 7. No other permutation of the 7 states was anywhere close.
| bombcar wrote:
| If they're not completely independent variables, that should be
| the expected outcome (e.g., they all trend the same direction).
|
| And if they're not that close, they're not battleground.
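|
| A minimal sketch of that correlation effect (all numbers here
| are hypothetical): give each state a 50-50 "true" margin plus a
| shared national error, and the two sweeps dominate the outcome
| distribution.
|
|       import numpy as np
|       from collections import Counter
|
|       rng = np.random.default_rng(0)
|       n_sims, n_states = 100_000, 7
|       # Shared national error plus state-level noise; every
|       # state is centered on an exact tie.
|       national = rng.normal(0, 3.0, (n_sims, 1))
|       state = rng.normal(0, 1.5, (n_sims, n_states))
|       margins = national + state     # >0 means candidate A wins
|
|       outcomes = Counter(tuple(map(bool, row))
|                          for row in (margins > 0))
|       print("A sweeps:", outcomes[(True,) * n_states] / n_sims)
|       print("B sweeps:", outcomes[(False,) * n_states] / n_sims)
|       # The two sweeps are the modal outcomes by a wide margin;
|       # no mixed permutation comes close.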
| fullshark wrote:
| They are where the campaign money and attention are going;
| what about "battleground" implies independence? Whoever wins
| them will win the election.
| chicagobob wrote:
| True, but the word "blowout" in this case is just a crazy side-
| effect of our weird electoral college system.
|
| Everyone knows that in all the swing states (except Arizona),
| the final vote margin was just a few percent, and that was well
| within the MOE for all the "50-50" polling in each of those
| states.
|
| No one seriously believes that any President has had a blowout
| election since maybe Obama in 2008 or Bush in 2004, but the
| media sure loves the word "blowout".
| nightski wrote:
| So basically, if you ignore how the entire system works, then
| it wasn't a blowout lol. I'm guessing the media was taking
| into account that we indeed use an electoral college system
| so that is all that matters.
| culi wrote:
| Trump won the popular vote by 1.5%. That's the 8th closest
| election in all of US history.
|
| Maybe he meant an EC blowout, but that's easier to predict:
| most polling had almost all the swing states as extremely
| close, and the outcome was likely to swing in the same
| direction across all of them, so an EC blowout was likely.
| wakawaka28 wrote:
| As I recall, none of the big polls the mainstream media was
| pushing projected Trump to win. They even refused to call the
| election until the wee hours of the morning, when the results
| were pretty clear. Just like in 2016, the polls were intended
| to deceive people into thinking there was no hope for an
| alternative candidate.
| bitshiftfaced wrote:
| An accurate model can output a 50-50 prediction. Sure, no problem
| there. But there is a human bias that does tend to make 50% more
| likely in these cases. It is the maximum percentage you can
| assign to the uncomfortable possibility without it being higher
| than the comfortable possibility.
|
| 538 systematically magnified this kind of bias when they decided
| to rate polls, not based on their absolute error, but based on
| how close their bias was relative to other polls' biases
| (https://x.com/andrei__roman/status/1854328028480115144).
| This down-weighted pollsters like Atlas Intel who would've
| otherwise improved 538's forecast.
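|
| A toy numerical version of that scoring difference (all numbers
| hypothetical): when the whole field shares a bias, grading each
| pollster against the field average punishes the one pollster
| that was closest to the actual result.
|
|       # Signed errors in points; positive = overestimated the
|       # Democrat.
|       polls = {"A": 3.0, "B": 2.5, "C": -1.0, "D": 3.5}
|       field_avg = sum(polls.values()) / len(polls)   # +2.0
|
|       for name, err in polls.items():
|           print(name, "abs error:", abs(err),
|                 "error vs field:", abs(err - field_avg))
|       # C has the smallest absolute error (1.0) but the largest
|       # deviation from the field (3.0), so a field-relative
|       # score ranks it worst.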
| culi wrote:
| I'm not sure how to verify your comment since 538 was cut by
| ABC a month or 2 ago. But Nate Silver's pollster rating
| methodology is pretty much the same as 538's was during his
| tenure there and can be found here:
| https://www.natesilver.net/p/pollster-ratings-silver-bulleti...
|
| It actually explicitly looks for statistical evidence of
| "herding" (e.g., not publishing poll results that might go
| against the grain) and penalizes those pollsters.
|
| In both rating systems, polls with a long history of going
| against the grain and being correct, like Ann Selzer's Iowa
| poll, were weighted very heavily. Selzer went heavily against
| the grain 3 elections in a row and was almost completely
| correct the first 2 times. This year she was off by a massive
| margin (ultimately costing her her career). Polls that go
| heavily against the grain but DON'T have a polling history
| simply aren't weighted heavily in general.
| tantalor wrote:
| 538 was cut by ABC 6 days ago.
|
| https://archive.is/E2nre
| culi wrote:
| Wow, thanks. As a former regular reader, it's felt a lot
| longer.
|
| They've had several major cuts in the past couple of years,
| so maybe that's why it's felt like that.
| bitshiftfaced wrote:
| > I'm not sure how to verify your comment
|
| Here's how 538 explains how they factor in bias into their
| grade:
|
| > Think about this another way. If most polls in a race
| overestimate the Democratic candidate by 10 points in a given
| election, but Pollster C's surveys overestimate Republicans
| by 5, there may be something off about the way Pollster C
| does its polls even if its accuracy is higher. We wouldn't
| necessarily expect it to keep outperforming other pollsters
| in subsequent elections since the direction of polling bias
| bounces around unpredictably from election to election.
|
| - https://abcnews.go.com/538/best-pollsters-
| america/story?id=1...
| jdoliner wrote:
| I don't know about the framing of "giving up." But I think anyone
| who's been following election models since the original 538 in
| 2008 has probably gotten the feeling that they have less alpha in
| them than they did back then. I think there are some obvious
| reasons for this that the forecasters would probably agree with.
|
| The biggest one seems to be a case of Goodhart's Law, leading to
| herding. Pollsters care a lot now about what their rating is in
| forecasting models, so they're reluctant to publish outlier
| results; those outlier results are very valuable for the models
| but are likely to get a pollster punished in the ratings next
| cycle.
|
| Lots of changes to polling methods have been made due to polls
| underestimating Trump. Polls have become like mini models unto
| themselves. Due to their inability to poll a representative slice
| of the population, they try to correct by adjusting their results
| to compensate for the difference between who they've polled and
| the likely makeup of the electorate. This makes sense in theory,
| but of course introduces a whole bunch of new variables that need
| to be tuned correctly.
|
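| A minimal sketch of that kind of adjustment, assuming a single
| demographic variable (real weighting schemes cross many
| variables and layer likely-voter models on top):
|
|       # (share of sample, candidate A support) per age group
|       sample = {"18-44": (0.25, 0.58), "45+": (0.75, 0.44)}
|       electorate = {"18-44": 0.40, "45+": 0.60}  # assumed mix
|
|       raw = sum(s * p for s, p in sample.values())
|       weighted = sum(electorate[g] * p
|                      for g, (_, p) in sample.items())
|       print(f"raw {raw:.1%} -> weighted {weighted:.1%}")
|       # raw 47.5% -> weighted 49.6%; every entry in
|       # `electorate` is a new variable that has to be tuned.
|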
| On top of all this is the fact that the process is very high
| stakes and emotional, with pollsters and modellers alike bringing
| their own political biases and only being able to resist pressure
| from political factions so much.
|
| The analogy I kept coming back to watching election models during
| this last cycle was that it looked like an ML model that didn't
| have the data it needed to make good predictions and so was
| making the safest prediction it could make given what it did
| have. Basically getting stuck in a local minimum at 50-50 that
| was least likely to be off by a lot.
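|
| That intuition is easy to reproduce (a sketch, not a claim
| about any specific forecaster's model): fit a classifier on
| pure-noise features for a roughly balanced outcome, and its
| predictions collapse toward the base rate.
|
|       import numpy as np
|       from sklearn.linear_model import LogisticRegression
|
|       rng = np.random.default_rng(0)
|       X = rng.normal(size=(5000, 10))    # no signal at all
|       y = rng.integers(0, 2, size=5000)  # ~50-50, independent
|
|       probs = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
|       print(f"mean={probs.mean():.3f} sd={probs.std():.3f}")
|       # Predictions cluster tightly around 0.5 -- the "safest"
|       # output given data that can't discriminate.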
| genewitch wrote:
| okay, now change "election prediction models" to "Climate
| models" and see if you feel like downvoting me merely for
| pointing out the (slight?) hypocrisy in "excusing" every other
| model we humans use for being "inaccurate" or "not having the
| full details" or the "whole slice of"...
|
| when none of the models tend to agree... and the IPCC
| literature, published however often it is, hangs upon the
| framework of _models_.
| sojournerc wrote:
| All models are wrong, but some are useful.
|
| Climate modeling is way messier than the media portrays, yet
| even optimistic models show drastic change.
|
| I'm not in the catastrophe camp, but it's worth preparing for
| climate change regardless of origin. It's good for humanity
| to be resilient to a hostile planet.
| lyu07282 wrote:
| At the end of the day it was 312 vs. 226 in the Electoral
| College. It seems a bit odd that this was supposedly impossible
| to predict with any useful amount of certainty. But perhaps
| that says more about the nature of the Electoral College than it
| says about pollsters.
|
| https://en.wikipedia.org/wiki/Efforts_to_reform_the_United_S...
| culi wrote:
| Almost every major model rated this outcome as quite likely.
|
| There was an inordinate number of very close swing states in
| 2024, and whichever way the swing went was likely to apply
| across the board.
|
| If you're going by popular vote, then this was actually the 8th
| closest election in all of US history.
| alphazard wrote:
| Highly recommend this video by NNT about why prediction markets
| tend towards 50-50.
|
| https://www.youtube.com/watch?v=YRvPF__du9w
|
| Prediction markets are usually implemented as binary options.
| Like vanilla options, their price depends not just on the most
| likely outcome, but the whole distribution. When uncertainty
| increases (imagine squishing a mound of clay), you end up pushing
| lots of probability mass (clay) to the other side of the bet and
| the expectation of the payoff (used to make a price) tends
| towards 1/2.
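|
| The squished-clay effect in one formula: hold the expected
| margin fixed and widen the distribution, and the binary's fair
| price drifts to 1/2 (a sketch assuming a normally distributed
| margin).
|
|       from math import erf, sqrt
|
|       def binary_price(mu, sigma):
|           # P(margin > 0) for margin ~ Normal(mu, sigma)
|           return 0.5 * (1 + erf(mu / (sigma * sqrt(2))))
|
|       for sigma in (1, 2, 4, 8, 16):
|           print(sigma, round(binary_price(2, sigma), 3))
|       # 0.977, 0.841, 0.691, 0.599, 0.55 -- toward 1/2 as
|       # uncertainty grows, with the same +2 expected margin.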
| moduspol wrote:
| Agreed with the points in OP. Though we did have the story that
| came out shortly after the election that apparently internal
| polling for the Harris campaign never showed her ahead [1].
|
| Obviously it says in the article that they did eventually fight
| it to a dead heat, which is in line with a 50-50 forecast, but I
| do wonder what, if anything, failed such that this key detail was
| never reported on publicly until after the election.
|
| As the article notes, public polls started appearing in late
| September showing Harris ahead, which they never saw internally.
| Are internal polls just that much better than public ones? Is the
| media just incentivized to report on the most interesting outcome
| (a dead heat)?
|
| [1]
| https://www.usatoday.com/story/news/politics/elections/2024/...
| delichon wrote:
| There's a pretty clear test of whether a pollster is about
| reporting or influencing: the partisanship of their errors.
| Neutral pollsters' errors against the final results are randomly
| distributed across parties. Propagandists' errors favor
| their patrons. This essay leans on the magnitude of the errors,
| but that's less probative than their distribution.
|
| Does any poll aggregator order by randomness of polling error?
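|
| One simple version of that test is a sign test on which party
| each poll's error favored (a hypothetical sketch, not any
| aggregator's published method):
|
|       from scipy.stats import binomtest
|
|       # Signed errors for one hypothetical pollster; positive
|       # means it overestimated party X.
|       errors = [1.2, 0.8, 2.1, -0.3, 1.7, 0.9, 1.4, 2.0, 0.6,
|                 1.1]
|       favors_x = sum(e > 0 for e in errors)
|       p = binomtest(favors_x, n=len(errors), p=0.5).pvalue
|       print(f"{favors_x}/{len(errors)} favor X, p={p:.3f}")
|       # 9/10 favor X, p~0.021: a lean this consistent is
|       # unlikely to be random error.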
| culi wrote:
| This mixes up accuracy with precision, and 538 has written at
| length about this.
|
| There's a big difference between pollster bias correction and
| pollster rating.
|
| There are many pollsters that are pretty consistently +2D/R but
| are reliably off from the final result in that direction. These
| polls are actually extremely valuable once you make the
| correction in your model. Meanwhile, polls that can be off from
| the final result by a large amount but _average out_ to about
| correct should not be trusted. This is a yellow flag for
| herding.
|
| A pollster can have an A+ rating while having a very consistent
| bias in one direction or another. The rating is meant to
| capture consistency/honesty of methodology more than raw
| accuracy.
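|
| A toy example of that distinction (hypothetical numbers): a
| consistent house effect is a one-line correction, while a
| noisy-but-unbiased pollster leaves nothing to correct.
|
|       import statistics
|
|       steady = [2.1, 1.9, 2.2, 2.0, 1.8]     # reliably ~+2D
|       erratic = [4.0, -3.5, 3.8, -4.2, 0.1]  # mean ~0, huge spread
|
|       for name, errs in [("steady", steady),
|                          ("erratic", erratic)]:
|           print(name,
|                 "house effect:", round(statistics.mean(errs), 2),
|                 "spread:", round(statistics.stdev(errs), 2))
|       # Subtract the steady pollster's +2.0 house effect and
|       # only ~0.16 of noise remains; the erratic pollster has
|       # no stable bias to remove.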
| notfried wrote:
| In each of the 7 swing states in 2024, the winner was <1% ahead on
| average, so what good are these polls if the results are going to
| be within their margin of error?
|
| They need to either find a more accurate way, or... give up!
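|
| The arithmetic behind that complaint: a typical poll's sampling
| error alone dwarfs a sub-1% winning margin (the standard 95%
| margin-of-error formula for a proportion).
|
|       from math import sqrt
|
|       def moe(p, n, z=1.96):
|           # 95% margin of error for proportion p, n respondents
|           return z * sqrt(p * (1 - p) / n)
|
|       for n in (600, 1000, 2000):
|           print(f"n={n}: +/-{moe(0.5, n):.1%}")
|       # +/-4.0%, +/-3.1%, +/-2.2% -- several times a <1% margin,
|       # before any non-sampling error is even counted.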
| culi wrote:
| We had a pretty weird year in general. Harris did poorly across
| most safe states but seemed to do much better than her average
| in swing states (not enough to win them, but much better than
| she did in non-competitive states)
|
| Many election models rely heavily on historical correlation.
| States like OH and IN might vote quite differently but their
| _swings_ tend to be in the same direction.
|
| The weirdness this year (possibly caused by the Harris campaign
| having a particularly strong ground game in swing states)
| definitely challenged a lot of baked-in assumptions of
| forecasts.
| rachofsunshine wrote:
| What they're good for is telling you that things are close. A
| tied poll or a 50-50 model can tell you that if you think it's
| 99% to go one way, you're probably overconfident,
| and should be more prepared for it to go the other way.
|
| I cared about the result, because it was going to decide
| whether I settled down in the US or whether I wanted to find a
| different place to live. And because I paid attention to those
| polls, I knew that what happened was not particularly unlikely.
| I prepared early, and that enabled me to be ready to celebrate
| my new year on a Europe-bound plane over the Atlantic.
|
| A lot of people I know thought it couldn't happen. They ignored
| the evidence in front of them, because it was distasteful to
| them (just as it was to me). And they were caught flat-footed
| in a way that I wasn't.
|
| That's not the benefit of hindsight: I brought receipts. You
| can see the 5,000 equally-likely outcomes I had at the start of
| the night (and how they evolved as I added the votes coming in)
| here:
| https://docs.google.com/spreadsheets/d/11nn9y9fusd-6LQKCof3_...
| .
| randomNumber7 wrote:
| Tell me you don't know the difference between bias and variance.
| The longest article/comment wins.
___________________________________________________________________
(page generated 2025-03-10 23:01 UTC)