[HN Gopher] A deep critique of AI 2027's bad timeline models
___________________________________________________________________
A deep critique of AI 2027's bad timeline models
Author : paulpauper
Score : 67 points
Date : 2025-06-23 18:51 UTC (4 hours ago)
(HTM) web link (www.lesswrong.com)
(TXT) w3m dump (www.lesswrong.com)
| brcmthrowaway wrote:
| So much bikeshedding and armchair expertise displayed in this
| field.
| goatlover wrote:
| Which would that be, the arguments for ASI being near and how
| that could be apocalyptic, or the push back on those timelines
| and doomsday (or utopian) proclamations?
| evilantnie wrote:
| I don't think the real divide is "doom tomorrow" vs "nothing
| to worry about." The crux is a pretty straightforward
| philosophical question: what does it even mean to generalize
| intelligence and agency, and how much can scaling laws tell
| us about that?
|
| The back-and-forth over s2's and growth exponents feels like
| theatrics that bury the actual debate.
| vonneumannstan wrote:
| >The crux is a pretty straightforward philosophical
| question: what does it even mean to generalize intelligence
| and agency, and how much can scaling laws tell us about
| that?
|
| Truly a bizarre take. I'm sure the dinosaurs also debated
| the possible smell and taste of the asteroid that was about
| to hit them. The real debate. lol.
| evilantnie wrote:
| The dinosaurs didn't create the asteroid that hit them,
| so they never had the chance for a real debate.
| ysofunny wrote:
| with all that signaling... it's almost like they're trying to
| communicate!!! who would've thought!?
| TimPC wrote:
| This critique is fairly strong and offers a lot of insight into
| the critical thinking behind it. The parts of the math I've
| looked at do check out.
| yodon wrote:
| So... both authors predict superhuman intelligence (defined
| as AI that can complete tasks that would take humans
| hundreds of hours) to be a thing "sometime in the next few
| years"; both authors predict "probably not before 2027, but
| maybe"; both authors predict "probably not longer than 2032,
| but maybe"; and one author seems to think their estimates
| are wildly better than those of the other author.
|
| That's not quite the level of disagreement I was expecting given
| the title.
| jollyllama wrote:
| That's not very investor of you
| LegionMammal978 wrote:
| As far as I can tell, the author of the critique specifically
| avoids espousing a timeline of his own. Indeed, he dislikes how
| these sorts of timeline models are used in general:
|
| > I'm not against people making shoddy toy models, and I think
| they can be a useful intellectual exercise. I'm not against
| people sketching out hypothetical sci-fi short stories, I've
| done that myself. I am against people treating shoddy toy
| models as rigorous research, stapling them to hypothetical
| short stories, and then taking them out on podcast circuits to
| go viral. What I'm most against is people taking shoddy toy
| models seriously and _basing life decisions on them_, as I
| have seen happen for AI2027. This is just a model for a tiny
| slice of the possibility space for how AI will go, and in my
| opinion it is implemented poorly even if you agree with the
| author's general worldview.
|
| In particular, I wouldn't describe the author's position as
| "probably not longer than 2032" (give or take the usual
| quibbles over what tasks are a necessary part of "superhuman
| intelligence"). Indeed, he rates social issues from AI as a
| more plausible near-term threat than dangerous AGI takeoff [0],
| and he is very skeptical about how well any software-based AI
| can revolutionize the physical sciences [1].
|
| [0] https://titotal.substack.com/p/slopworld-2035-the-dangers-of...
|
| [1] https://titotal.substack.com/p/ai-is-not-taking-over-materia...
| ysofunny wrote:
| but what is the difference between a shoddy toy model and
| real-world, pro-grade "rigorous research"?
|
| it's like asking about the difference between amateur toy
| audio gear and real pro-level audio gear... (which is not a
| simple thing, given that "prosumer" products dominate the
| landscape)
|
| the only point of betting on when "real AGI" will happen
| boils down to the payouts from gambling on it. are such
| gambles a zero-sum game? does that depend on who escrows the
| bet??
|
| what do I get if I am correct? how should the incorrect lose?
| LegionMammal978 wrote:
| If you believe that there's any plausible chance of AGI
| causing a major catastrophe short of the immediate end of
| the world, then its precise nature can have all sorts of
| effects on how the catastrophe could unfold and how people
| should respond to it.
| vonneumannstan wrote:
| For rationalists this is about as bad as disagreements can
| get...
| TimPC wrote:
| He grants it might be possible based on the model math, but
| he doesn't actually say what his own prediction is. He also
| argues it's possible we are on an s-curve that levels out
| before superhuman intelligence.
| sweezyjeezy wrote:
| I don't think the author of this article is making any strong
| prediction, in fact I think a lot of the article is a critique
| of whether such an extrapolation can be done meaningfully.
|
| _Most of these models predict superhuman coders in the near
| term, within the next ten years. This is because most of them
| share the assumption that a) current trends will continue for
| the foreseeable future, b) that "superhuman coding" is possible
| to achieve in the near future, and c) that the METR time
| horizons are a reasonable metric for AI progress. I don't agree
| with all these assumptions, but I understand why people that do
| think superhuman coders are coming soon._
|
| Personally I think any model that puts zero weight on the idea
| that there could be some big stumbling blocks ahead, or even a
| possible plateau, is not a good model.
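| To make the kind of extrapolation being critiqued concrete,
| here is a minimal sketch in Python. The starting horizon,
| doubling time, and threshold are illustrative assumptions,
| not METR's or AI 2027's actual numbers:
|
|   import math
|
|   h0 = 1.0               # assumed current time horizon, hours
|   doubling_months = 7.0  # assumed doubling time of the trend
|   target = 200.0         # "hundreds of hours" threshold
|
|   # pure trend-following: h(t) = h0 * 2 ** (t / doubling_months)
|   months = doubling_months * math.log2(target / h0)
|   print(f"trend reaches {target:.0f} h in {months / 12:.1f} years")
|   # ~4.5 years. The same early data is equally consistent with
|   # an s-curve that saturates below the threshold and never
|   # gets there; the fit alone can't tell you which curve you
|   # are on, which is the critique's point.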
| XorNot wrote:
| The primary question is always whether they'd have made
| these sorts of predictions based on the results they were
| seeing in the field the same amount of time in the past.
|
| Pre-ChatGPT, I very much doubt the bullish predictions on AI
| would've been made the way they are now.
| lubujackson wrote:
| These predictions seem wildly reductive in any case, and
| extrapolating AI's ability to complete tasks that would take
| a human 30 seconds -> 10 minutes is far different from going
| from 10 minutes to 5 years. For one, a 5-year task generally
| requires much more input and intent than a 10-minute task.
| We have already ramped up from "enter a paragraph" to
| complicated Cursor rules and rich context prompts to get to
| where we are today. This is completely overlooked in these
| simple "graphs go up" predictions.
| echelon wrote:
| I'm also interested in how error rates compound across
| simple tasks.
|
| A human can do a long sequence of easy tasks without error,
| or can easily correct mistakes as they go. Can a model do
| the same?
| kingstnap wrote:
| The recent Apple "LLMs can't reason yet" paper was exactly
| this. They just tested if models could run an exponential
| number of steps.
|
| Of course, they gave it a terrible clickbait title and
| framed the question and graphs incorrectly. But done better,
| the study would have been "How long a sequence of
| algorithmic steps can LLMs execute before making a mistake
| or giving up?"
| kypro wrote:
| As someone in the P(doom) > 90% category, I think making
| overly precise predictions is in general a really bad way to
| highlight AI risks (assuming that was the goal of AI 2027).
|
| Making predictions that are too specific just opens you up to
| pushback from people who are more interested in critiquing the
| exact details of your softer predictions (such as those around
| timelines) rather than your hard predictions about likely
| outcomes. And while I think articles like this are valuable to
| refine timeline predictions, I find a lot of people use them as
| evidence to dismiss the stronger predictions made about the risks
| of ASI.
|
| I think people like Nick Bostrom make much more convincing
| arguments about AI risk because they don't depend on overly
| detailed predictions which can be easily nit-picked, but are
| instead much more general and focus on the unique nature of
| the risks AI presents.
|
| For me, the problem with timelines is that they're
| unknowable due to the unpredictable nature of ASI. The fact
| that we are rapidly developing a technology which most
| people would accept carries at least some existential risk,
| whose progress curve we can't predict, and whose solutions
| would come with significant coordination problems, should
| concern people without anyone having to say it will happen
| in x number of years.
|
| I think AI 2027 is interesting as science fiction about
| potential futures we could be heading towards, but that's
| really it.
|
| The problem with being an AI doomer is that you can't say "I
| told you so" if you're right, so any personal predictions
| you make have close to no expected pay-out, either socially
| or economically. This is different to other risks, which, if
| you predict them accurately when others don't, you can still
| benefit from.
|
| I have no meaningful voice in this space, so I'll just keep
| saying we're fucked, because what does it matter what I
| think. But I wish there were more people with influence out
| there who were seriously thinking about how they can best
| use that influence, rather than stroking their own egos with
| future predictions, which, even if I happen to agree with
| them, do next to nothing to improve the distribution of
| outcomes.
| Fraterkes wrote:
| I'm not trying to be disingenuous, but in what ways have you
| changed your life now that you believe there's a >90% chance
| of an end to civilization/humanity? Are you living like a
| terminal cancer patient?
|
| (I'm sorry, I know it's a crass question)
| allturtles wrote:
| The person you're replying to said "For me the risk of
| timelines is that they're unknowable due to the unpredictable
| nature of ASI." So they are predicting >90% chance of doom,
| but not when that will happen. Given that there is already a
| 100% chance of death at some unknown point in the future, why
| would this cause GP to start living like a terminal cancer
| patient (presumably defined as someone with a >99% chance of
| death in the next year)?
| lava_pidgeon wrote:
| I'd like to point out that the existence of AGI in the
| future does change my potential future planning. I am 35. Do
| I need to save for a pension? Does it make sense to start a
| family? These aren't 1-year questions but 20-years-ahead
| questions...
| amarcheschi wrote:
| If you're so terrified of AI that you won't start a family
| despite wanting one and being able to, it must be miserable
| to eventually live through the years the way everyone who
| tried to predict the end of the world has (except for those
| who died of other causes before the predicted end)
| siddboots wrote:
| I think both approaches are useful. AI2027 presents a specific
| timeline in which a) the trajectory of tech is at least
| somewhat empirically grounded, and b) each step of the plot arc
| is plausible. There's a chance of it being convincing to a
| skeptic who had otherwise thought of the whole "rogue AI"
| scenario as a kind of magical thinking.
| kypro wrote:
| I agree, but I think you're assuming a certain type of person
| who understands that a detailed prediction can be both wrong
| and right simultaneously. And that it's not so much about
| getting all the details right, but being in the right
| ballpark.
|
| Unfortunately there's a huge number of people who get
| obsessed with details and then nit-pick. I see this with
| Eliezer Yudkowsky all the time, where 90% of the criticism
| of his views is just nit-picking of the weaker predictions
| he makes while ignoring his stronger predictions regarding
| the core risks which could result in those bad things
| happening. I think Yudkowsky opens himself up to this,
| though, because he often makes very detailed predictions
| about how things might play out, and this is largely why
| he's so controversial, in my opinion.
|
| I really liked AI 2027 personally. I thought the tabletop
| exercises specifically were a nice heuristic for predicting
| how actors might behave in certain scenarios. I also agree
| that it presented a plausible narrative for how things could
| play out, and I'm glad they didn't wimp out with the bad
| ending. Another problem I have with people who are concerned
| about AI risk is that they shy away from speaking plainly
| about the fact that if things go poorly, your loved ones
| will in a few years probably be either dead, in suspended
| animation on a memory chip, or in a literal digital hell.
| boznz wrote:
| I expect the predictions for fusion back in the 1950s and
| 1960s generated similar essays; they had not reached
| ignition, but the science was solid. The 'science' of moving
| from AGI to ASI is not really that solid: we have yet to
| achieve 'AI ignition' even in the lab. (Any AIs that have
| achieved consciousness, feel free to disagree.)
| fasthands9 wrote:
| I do agree generally with this, but AI 2027 and other writings
| have moved my concern from 0% to 10%.
|
| I know I sound crazy writing it out, but many of the really
| bad scenarios don't require consciousness or anything like
| that. They just require that the systems be self-replicating
| and able to operate without humans shutting them off.
| staunton wrote:
| This is a lot of text, detail, and hair-splitting just to
| say "modeling things like this is bullshit". It's engaging
| "seriously" and "on the merits" with something that from the
| very start was just marketing fluff packaged as some kind of
| prediction.
|
| I'm not sure the author did anyone a favor with this
| write-up. More than anything, it buries the main point
| ("this kind of forecasting is fundamentally bullshit") under
| a bunch of complicated-sounding details that lend
| credibility to the original predictions, which the original
| authors now get to argue about and thank people for pointing
| out "minor issues which we have now addressed in the updated
| version".
| ed wrote:
| Anyone old enough to remember EPIC 2014? It was a viral flash
| video, released in 2004, about the future of Google and news
| reporting. I imagine AI 2027 will age similarly well.
|
| https://youtu.be/LZXwdRBxZ0U
| mlsu wrote:
| My niece weighed 3 kg one year ago. Now, she weighs 8.9 kg. By my
| modeling, she will weigh more than the moon in approximately 50
| years. I've analyzed the errors in my model; regrettably, the
| conclusion is always the same: it will certainly happen within
| our lifetimes.
|
| Everyone needs to be planning for this -- all of this urgent talk
| of "AI" (let alone "climate change" or "holocene extinction") is
| of positively no consequence compared to the prospect I've
| outlined here: a mass of HUMAN FLESH the size of THE MOON growing
| on the surface of our planet!
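| For anyone checking the arithmetic behind the joke, a
| minimal sketch; the moon's mass is the real figure, the
| weights are the ones above:
|
|   import math
|
|   m0, m1 = 3.0, 8.9   # kg at birth and at age one
|   growth = m1 / m0    # ~2.97x per year, naively held constant
|   moon_kg = 7.35e22   # approximate mass of the moon
|   years = math.log(moon_kg / m1) / math.log(growth)
|   print(f"moon mass reached in ~{years:.0f} more years")
|   # ~46 years: an exponential fit to two data points, which
|   # is the whole joke about "graphs go up" forecasting.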
| habinero wrote:
| LOL, exactly. All of this weird AGI/doomer bullshit, or
| whatever we're calling it, feels like exactly this: people
| who think they're too smart to fall prey to groupthink and
| confirmation bias, and yet predictably fall prey to
| groupthink and confirmation bias.
| mlsu wrote:
| I have more fun reading it as a kind of collaborative
| real-time sci-fi story. It reads like something straight out
| of a Lem novel.
| Workaccount2 wrote:
| We have watched many humans grow so we have a pretty good idea
| of the curve. A better analogy: an alien blob appeared one
| day and went from 3 kg to 9 kg in a year. We have never seen
| one of these before, so we don't know what its growth curve
| looks like. But it keeps eating food and keeps getting
| bigger.
| mlsu wrote:
| Mine's different. She's cuter.
|
| On a more serious note: have these AI doom guys ever dealt
| with one of these cutting-edge models on _out of
| distribution data_? They suck so, so bad. There's only so
| much data available, and the models have basically slurped
| it all up.
|
| Let alone the basic thermodynamics of it. There's only so
| much entropy out there in cyberspace to harvest; at some
| point you run into a wall, and then you have to build real
| robots to go collect more in the real world. And how's that
| going for them?
|
| Also I can't help remarking: the metaphor you chose is
| _science fiction_.
___________________________________________________________________
(page generated 2025-06-23 23:01 UTC)