[HN Gopher] Analyzing a Critique of the AI 2027 Timeline Forecasts
___________________________________________________________________
Analyzing a Critique of the AI 2027 Timeline Forecasts
Author : jsnider3
Score : 31 points
Date : 2025-06-24 19:44 UTC (3 hours ago)
(HTM) web link (thezvi.substack.com)
(TXT) w3m dump (thezvi.substack.com)
| f38zf5vdt wrote:
 | I think the author is right that AI will only accelerate to
 | the next frontier once AI takes over AI research. If the
 | timelines are correct and that happens in the next few years,
 | the widely desired job of AI researcher may not even exist by
 | then -- it'll all be a machine-based research feedback loop
 | where humans only hinder the process.
|
| Every other intellectual job will presumably be gone by then too.
| Maybe AI will be the second great equalizer, after death.
| goatlover wrote:
 | Except we have no evidence of AI being able to take over AI
 | research any more than we have evidence so far that automation
 | this time will significantly reduce human labor. It's all
 | speculation based on extrapolating what some researchers think
 | will happen as models scale up, or what funders hope will
 | happen as they pour more billions into the hype machine.
| dinfinity wrote:
 | It's also extrapolating from _what already exists_. We are
 | _way_ beyond 'just some academic theories'.
|
 | One can argue all day about timelines, but AI has progressed
 | from being _entirely nonexistent_ to a level rivaling and
 | surpassing quite some humans in quite some things in less
 | than 100 years. Arguably, all the evidence we have points to
 | AI being able to take over AI research at some point in the
 | near future.
| suddenlybananas wrote:
| >surpassing quite some humans
|
| I don't really think this is true, unless you'd be willing
| to say calculators are smarter than humans (or else you're
| a misanthrope who would do well to actually talk to other
| people).
| pier25 wrote:
| > _all the evidence we have points to AI being able to take
| over AI research at some point in the near future._
|
| Does it?
|
 | That's like looking at a bicycle or car and saying "all the
 | evidence points to us being able to do interstellar travel
 | in the future".
| KaiserPro wrote:
| _bangs head against the table._
|
| Look, fitting a single metric to a curve and projecting from that
| only gets you a "model" that conforms to your curve fitting.
|
| "proper" AI, where it starts to remove 10-15% of jobs will cause
| an economic blood bath.
|
 | The current rate of AI expansion requires almost exponential
 | amounts of cash injections. That cash comes from petro-dollars
 | and advertising sales (and the ability of investment banks to
 | print money based on those investments). Those sources of cash
 | require a functioning world economy.
|
 | Given that the US economy is three Fox News headlines away
 | from collapse[1], an exponential money supply looks a bit
 | dicey.
|
 | If you, in the space of two years, remove 10-15% of all jobs,
 | you will spark revolutions. This will cause loans to be called
 | in, banks to fail, and the dollar, presently run by obvious
 | dipshits, to evaporate.
|
| This will stop investment in AI, which means no exponential
| growth.
|
| Sure you can talk about universal credit, but unless something
| radical changes, the people who run our economies will not
| consent to giving away cash to the plebs.
|
| AI 2027 is unmitigated bullshit, but with graphs, so people think
| there is a science to it.
|
 | [1] Trump needs a "good" economy. If the Fed, who are
 | currently mostly independent, need to raise interest rates,
 | and Fox News doesn't like it, then Trump will remove its
 | independence. This will really raise the chance of the dollar
 | being dumped for something else (and it's either the euro or
 | the renminbi, but more likely the latter).
|
 | That'll also kill the UK because, for some reason, we hold
 | ~1.2 times our GDP in US short-term bonds.
|
| TLDR: you need an exponential supply of cash for AI 2027 to even
| be close to working.
| goatlover wrote:
| It's certainly hard to imagine the political situation in the
| US resulting in UBI anytime soon, while at the same time the
| party in control wants unregulated AI development for the next
| decade.
| bcrosby95 wrote:
| It's the '30s with no FDR in sight. It won't end well for
| anyone.
| gensym wrote:
| > AI 2027 is unmitigated bullshit, but with graphs, so people
| think there is a science to it.
|
| AI 2027 is classic Rationalist/LessWrong/AI Doomer Motte-Bailey
| - it's a science fiction story that pretends to be rigorous and
| predictive but in such a way that when you point out it's
| neither, the authors can fall back to "it's just a story".
|
 | At first I was surprised at how much traction this thing got,
 | but this is the type of argument that community has been
 | refining for decades at this point, and it's pretty effective
 | on people who lack the antibodies for it.
| tux3 wrote:
 | It's the other way around entirely: the story is the
 | unrigorous bailey; when confronted, they fall back to the
 | actual research behind it.
 |
 | And you can certainly criticize the research, but you've got
 | the motte and the bailey backwards.
| mitthrowaway2 wrote:
| I'm very much an AI doomer myself, and even I don't think AI
| 2027 holds water. I find myself quite confused about what its
| proponents (including Scott Alexander) are even expecting to
| get from the project, because it seems to me like the median
 | result will be a big loss of AI-doomer credibility in 2028
 | when the talking point shifts to "but it's a long-tailed
 | prediction!"
| hollerith wrote:
| Same here. I ask the reader _not_ to react to _AI 2027_ by
| dismissing the possibility that it is quite dangerous to
| let the AI labs continue with their labbing.
| elefanten wrote:
 | This is feeling like a retread of climate change
 | messaging: a serious problem requiring serious thought
 | (even without "AI doom" as the scenario, the political,
 | economic, and social disruptions suffice) but being most
 | loudly championed via aggressive timelines and
 | significant exaggerations.
 |
 | The overreaction (on both sides) will be followed by
 | fatigue and disinterest.
| adastra22 wrote:
| Or maybe, just maybe, AI doom isn't a serious problem,
| and the lack of credible arguments for it should be
| evidence of such.
| 098799 wrote:
| Because if we're unlucky, Scott will think in the final
| seconds of his life as he watches the world burn "I could
| have tried harder and worried less about my reputation".
| stego-tech wrote:
| It got traction because it supported everyone's position in
| some way:
|
| * Pro-safety folks could point at it and say this is why AI
| development should slow down or stop
|
| * LLM-doomer folks (disclaimer: it me) can point at it and
| mock its pie-in-the-sky charts and milestones, as well as its
 | hand-waving away of any actual issues LLMs have at present, or
| even just mock the persistent BS nonsense of "AI will
| eliminate jobs but the economy [built atop consumer spending]
| will grow exponentially forever so it'll be fine" that's so
| often spewed like sewage
|
| * AI boosters and accelerationists can point to it as why we
| should speed ahead even faster, because you see, everyone
| will likely be fine in the end and you can totes trust us to
| slow down and behave safely at the right moment, swearsies
|
| Good fiction always tickles the brain across multiple
| positions and knowledge domains, and AI 2027 was no
| different. It's a parable warning about the extreme dangers
| of AI, but fails to mention how immediate they are (such as
| already being deployed to Kamikaze drones) and ultimately
| wraps it all up as akin to a coin toss between an American or
| Chinese Empire. It makes a _lot_ of assumptions to sell its
| particular narrative, to serve its own agenda.
| OgsyedIE wrote:
 | I disagree with the forecast too, but your critique is off-
 | base. The claim that exponential cash is required assumes
 | that subexponential capex can't chug along gradually without
 | the industry collapsing into mass bankruptcy. Additionally, the
| investment cash that the likes of Softbank are throwing away
| comes from private holdings like pensions and has little to
| nothing to do with the sovereign holdings of OPEC+ nations. The
 | reason it doesn't hold water is the bottlenecks on
| compute production. TSMC is still the only supplier of anything
| useful for foundation model training and their expansions only
| appear big and/or fast if you read the likes of Forbes.
| pier25 wrote:
| > _AI 2027 is unmitigated bullshit, but with graphs, so people
| think there is a science to it._
|
| One of the best things I've read all day.
| JimDabell wrote:
| It's not just changing economics that will derail the
| projections. The story gives them enough compute and
| intelligence to massively sway public opinion and elections,
 | but then seems to assume the world will just keep working
| the same way on those fronts. They think ASI will be invented,
| but 60% of the public will disapprove; I guess a successful PR
| campaign is too difficult for the "country of geniuses in a
| datacenter"?
| jvalencia wrote:
 | It's like the invention of the washing machine. People didn't
 | stop doing chores; they just did them more efficiently.
 |
 | Coders won't stop existing; they'll just do more and compete
 | at higher levels. The losers are the ones who won't or can't
 | adapt.
| falcor84 wrote:
| I suppose that those who stayed in the washing business and
| competed at a higher level are the ones running their own
| laundromats; are they the big winners of this technological
| shift?
| alganet wrote:
| What are you even talking about?
|
| The article is not about AI replacing jobs. It doesn't even
| touch this subject.
| fasthands9 wrote:
| Yeah. For understandable reasons that is covered a lot too,
| but AI 2027 is really about the risk of self-replicating AI.
| Is an AI virus possible, and could it be easily stopped by
| humans and our military?
| bgwalter wrote:
| No, all washing machines were centralized in the OpenWash
| company. In order to do your laundry, you needed a subscription
| and had to send your clothes to San Francisco and back.
| jgalt212 wrote:
| Excellent analogy
| stego-tech wrote:
| Reading through the comments, I am _so glad_ I'm not the only one
| beyond done with these stupid clapbacks between boosters and
| doomers over a work of fiction that conveniently ignores present
| harms and tangible reality in knowledge domains outside of AI -
| like physics, biology, economics, etc.
|
 | If I didn't know better, I'd say it's _almost_ like there's a
 | vested interest in propping these things up rather than
 | letting them stand freely and letting the "invisible hand of
 | the free market" decide if they're of value.
___________________________________________________________________
(page generated 2025-06-24 23:00 UTC)