[HN Gopher] AI 2027
___________________________________________________________________
AI 2027
Author : Tenoke
Score : 804 points
Date   : 2025-04-03 16:13 UTC (1 day ago)
(HTM) web link (ai-2027.com)
(TXT) w3m dump (ai-2027.com)
| ikerino wrote:
| Feels reasonable in the first few paragraphs, then quickly starts
| reading like science fiction.
|
| Would love to read a perspective examining "what is the slowest
| reasonable pace of development we could expect." This feels to me
| like the fastest (unreasonable) trajectory we could expect.
| admiralrohan wrote:
| No one knows what will happen. But these thought experiments
| can be useful as a critical thinking practice.
| layer8 wrote:
| The slowest is a sudden and permanent plateau, where all
| attempts at progress turn out to result in serious downsides
| that make them unworkable.
| 9dev wrote:
| Like an exponentially growing compute requirement for
| negligible performance gains, on the scale of the energy
| consumption of small countries? Because that is where we are,
| right now.
| photonthug wrote:
| Even if this were true, it's not quite the end of the story
| is it? The hype itself creates lots of compute and to some
| extent the power needed to feed that compute, even if
| approximately zero of the hype pans out. So an interesting
| question becomes... what happens with all the excess? Sure it
| _probably_ gets gobbled up in crypto ponzi schemes, but I
| guess we can try to be optimistic. IDK, maybe we get to solve
| cancer and climate change anyway, not with fancy new AGI, but
| merely with some new ability to cheaply crunch numbers for
| boring old school ODEs.
| ddp26 wrote:
| The forecasts under "Research" are distributions, so you can
| compare the 10th percentile vs 90th percentile.
|
| Their research is consistent with a similar story unfolding
| over 8-10 years instead of 2.
| zmj wrote:
| If you described today's AI capabilities to someone from 3
| years ago, that would also sound like science fiction.
| Extrapolate.
| FeepingCreature wrote:
| > Feels reasonable in the first few paragraphs, then quickly
| starts reading like science fiction.
|
| That's kind of unavoidably what accelerating progress feels
| like.
| ahofmann wrote:
| Ok, I'll bite. I predict that everything in this article is horse manure. AGI will not happen. LLMs will be tools that can automate stuff away, like today, and they will get slightly, or quite a bit, better at it. That will be all. See you in two years; I'm excited to see what the truth turns out to be.
| Tenoke wrote:
| That seems naive in a status-quo-bias way to me. Why and where do you expect AI progress to stop? In your eyes it sounds like somewhere very close to where we are now. Why do you think there won't be many further improvements?
| ahofmann wrote:
| I write bog-standard PHP software. When GPT-4 came out, I was very frightened that my job could be automated away soon, because for PHP/Laravel/MySQL there must be a lot of training data.
|
| The reality now is that the current LLMs still often produce stuff that costs me more time to fix than doing it myself would. So I still write a lot of code myself. It is very impressive that I can even consider letting go of writing code myself. But my job as a software developer is very, very secure.
|
| LLMs are simply unable to build maintainable software. They are unable to understand what humans want and what the codebase needs. The stuff they build is good-looking garbage. One example I saw yesterday: a dev committed code where the LLM had generated 50 lines of React, complete with all those useless comments and, for good measure, a setTimeout(), for something that should have been a single HTML div with two Tailwind classes. They can't write idiomatic code, because they only write the code they were prompted for.
|
| Almost daily I get code, commit messages, and even issue discussions that are clearly AI-generated. And it costs me time to deal with good-looking but useless content.
|
| To be honest, I hope that LLMs get better soon, because right now we are in an annoying phase where software developers bog me down with AI-generated stuff. It just looks good but doesn't help with writing usable software that can be deployed to production.
|
| To get to this point, LLMs need to get maybe a hundred times faster, maybe a thousand or ten thousand times. They need a much bigger context window. Then they can have an inner dialogue in which they really "understand" how some feature should be built in a given codebase. That would be very useful. But it will also use so much energy that I doubt it will be cheaper to let an LLM do that "thinking" over and over again than to pay a human to build the software. Perhaps this will be feasible in five or eight years. But not two.
|
| And this won't be AGI. This will still be a very, very fast
| stochastic parrot.
| AnimalMuppet wrote:
| ahofmann didn't expect AI progress to _stop_. They expected it to continue, but not to lead to AGI, and therefore not to superintelligence or to a self-accelerating process of improvement.
|
| So the question is, do you think the current road leads to
| AGI? _How far_ down the road is it? As far as I can see,
| there is not a "status quo bias" answer to those questions.
| PollardsRho wrote:
| It seems to me that much of recent AI progress has not
| changed the fundamental scaling principles underlying the
| tech. Reasoning models are more effective, but at the cost of
| more computation: it's more for more, not more for less. The
| logarithmic relationship between model resources and model
| quality (as Altman himself has characterized it), phrased a
| different way, means that you need exponentially more energy
| and resources for each marginal increase in capabilities.
| GPT-4.5 is unimpressive in comparison to GPT-4, and at least
| from the outside it seems like it cost an awful lot of money.
| Maybe GPT-5 is slightly less unimpressive and significantly
| more expensive: is that the through-line that will lead to
| the singularity?
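|
| (For reference, the empirical scaling laws behind that claim are usually stated as a power law, which is what makes returns logarithmic in resources. A rough Chinchilla-style form, with N parameters, D training tokens, and fitted constants E, A, B and exponents alpha, beta on the order of 0.3:
|
|     L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
|
| so each constant-factor drop in loss requires a multiplicative increase in parameters and data.)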
|
| Compare the automobile. Automobiles today are a lot nicer
| than they were 50 years ago, and a lot more efficient. Does
| that mean cars that never need fuel or recharging are coming
| soon, just because the trend has been higher efficiency? No,
| because the fundamental physical realities of drag still
| limit efficiency. Moreover, it turns out that making 100%
| efficient engines with 100% efficient regenerative brakes is
| really hard, and "just throw more research at it" isn't a
| silver bullet. That's not "there won't be many future
| improvements", but it is "those future improvements probably
| won't be any bigger than the jump from GPT-3 to o1, which
| does not extrapolate to what OP claims their models will do
| in 2027."
|
| AI in 2027 might be the metaphorical brand-new Lexus to
| today's beat-up Kia. That doesn't mean it will drive ten
| times faster, or take ten times less fuel. Even if high-end
| cars can be significantly more efficient than what average
| people drive, that doesn't mean the extra expense is actually
| worth it.
| jstummbillig wrote:
| When is the earliest that you would have predicted where we are
| today?
| rdlw wrote:
| Same as everybody else. Today.
| mitthrowaway2 wrote:
| What's an example of an intellectual task that you don't think
| AI will be capable of by 2027?
| coolThingsFirst wrote:
| programming
| lumenwrites wrote:
| Why would it get 60-80% as good as human programmers (which
| is what the current state of things feels like to me, as a
| programmer, using these tools for hours every day), but
| stop there?
| boringg wrote:
| Because we still haven't figured out fusion, even though it's been promised for decades. Why would everything that's been promised by people with highly vested interests pan out any differently?
|
| One is inherently a more challenging physics problem.
| kody wrote:
| It's 60-80% as good as Stack Overflow copy-pasting
| programmers, sure, but those programmers were already
| providing questionable value.
|
| It's nowhere near as good as someone actually building
| and maintaining systems. It's barely able to vomit out an
| MVP and it's almost never capable of making a meaningful
| change to that MVP.
|
| If your experiences have been different that's fine, but
| in my day job I am spending more and more time just
| fixing crappy LLM code produced and merged by STAFF
| engineers. I really don't see that changing any time
| soon.
| lumenwrites wrote:
| I'm pretty good at what I do, at least according to
| myself and the people I work with, and I'm comparing its
| capabilities (the latest version of Claude used as an
| agent inside Cursor) to myself. It can't fully do things
| on its own and makes mistakes, but it can do a lot.
|
| But suppose you're right, it's 60% as good as
| "stackoverflow copy-pasting programmers". Isn't that a
| pretty insanely impressive milestone to just dismiss?
|
| And why would it just get to this point, and then stop?
| Like, we can all see AIs continuously beating the
| benchmarks, and the progress feels very fast in terms of
| experience of using it as a user.
|
| I'd need to hear a pretty compelling argument to believe
| that it'll suddenly stop, something more compelling than
| "well, it's not very good yet, therefore it won't be any
| better", or "Sam Altman is lying to us because
| incentives".
|
| Sure, it can slow down somewhat because of the
| exponentially increasing compute costs, but that's
| assuming no more algorithmic progress, no more compute
| progress, and no more increases in the capital that flows
| into this field (I find that hard to believe).
| kody wrote:
| I appreciate your reply. My tone was a little dismissive;
| I'm currently deep deep in the trenches trying to unwind
| a tremendous amount of LLM slop in my team's codebase so
| I'm a little sensitive.
|
| I use Claude every day. It is definitely impressive, but
| in my experience only marginally more impressive than
| ChatGPT was a few years ago. It hallucinates less and
| compiles more reliably, but still produces really poor
| designs. It really is an overconfident junior developer.
|
| The real risk, and what I am seeing daily, is colleagues
| falling for the "if you aren't using Cursor you're going
| to be left behind" FUD. So they learn Cursor, discover
| that it's an easy way to close tickets without using your
| brain, and end up polluting the codebase with very
| questionable designs.
| lumenwrites wrote:
| Oh, sorry to hear that you have to deal with that!
|
| The way I'm getting a sense of the progress is using AI
| for what AI is currently good at, using my human brain to
| do the part AI is currently bad at, and comparing it to
| doing the same work without AI's help.
|
| I feel like AI is pretty close to automating 60-80% of
| the work I would've had to do manually two years ago (as
| a full-stack web developer).
|
| It doesn't mean that the remaining 20-40% will be
| automated very quickly, I'm just saying that I don't see
| the progress getting any slower.
| senordevnyc wrote:
| GPT-4 was released almost exactly two years ago, so "a
| few years ago" means GPT-3.5.
|
| And Claude 3.7 + Cursor agent is, for me, _way_ more than
| "marginally more impressive" compared to GPT-3.5
| burningion wrote:
| So I think there's an assumption you've made here, that
| the models are currently "60-80% as good as human
| programmers".
|
| If you look at code being generated by non-programmers
| (where you would expect to see these results!), you don't
| see output that is 60-80% of the output of domain experts
| (programmers) steering the models.
|
| I think we're extremely imprecise when we communicate in
| natural language, and this is part of the discrepancy
| between belief systems.
|
| Will an LLM model read a person's mind about what they
| want to build better than they can communicate?
|
| That's already what recommender systems (like the TikTok
| algorithm) do.
|
| But will LLMs be able to orchestrate and fill in the
| blanks of imprecision in our requests on their own, or
| will they need human steering?
|
| I think that's where there's a gap in (basically) belief
| systems of the future.
|
| If we truly get post human-level intelligence everywhere,
| there is no amount of "preparing" or "working with" the
| LLMs ahead of time that will save you from being rendered
| economically useless.
|
| This is mostly a question about how long the moat of
| human judgement lasts. I think there's an opportunity to
| work together to make things better than before, using
| these LLMs as tools that work _with_ us.
| coolThingsFirst wrote:
| Try this, launch Cursor.
|
| Type: print all prime numbers which are divisible by 3 up
| to 1M
|
| The result is that it will build a sieve. There's no need for that: the only prime divisible by 3 is 3 itself.
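|
| (A quick check of that claim, as a minimal sketch using sympy's primerange; the prompt wording above is paraphrased:
|
|     from sympy import primerange
|
|     # A prime divisible by 3 must have 3 as a factor, so the only one is 3.
|     print([p for p in primerange(2, 1_000_000) if p % 3 == 0])  # prints [3]
|
| No sieve-writing needed.)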
| mysfi wrote:
| Just tried this with Gemini 2.5 Pro. Got it right with
| meaningful thought process.
| mitthrowaway2 wrote:
| Can you phrase this in a concrete way, so that in 2027 we
| can all agree whether it's true or false, rather than
| circling a "no true scotsman" argument?
| abecedarius wrote:
| Good question. I tried to phrase a concrete-enough
| prediction 3.5 years ago, for 5 years out at the time:
| https://news.ycombinator.com/item?id=29020401
|
| It was surpassed around the beginning of this year, so
| you'll need to come up with a new one for 2027. Note that
| the other opinions in that older HN thread almost all
| expected less.
| kubb wrote:
| It won't be able to write a compelling novel, or build a
| software system solving a real-world problem, or operate
| heavy machinery, create a sprite sheet or 3d models, design a
| building or teach.
|
| Long-term planning and execution, and operating in the physical world, are not within reach. Slight variations of known problems should be possible (as long as the size of the solution is small enough).
| lumenwrites wrote:
| I'm pretty sure you're wrong for at least 2 of those:
|
| For 3D models, check out blender-mcp:
|
| https://old.reddit.com/r/singularity/comments/1joaowb/claude...
|
| https://old.reddit.com/r/aiwars/comments/1jbsn86/claude_crea...
|
| Also this:
|
| https://old.reddit.com/r/StableDiffusion/comments/1hejglg/tr...
|
| For teaching, I'm using it to learn about tech I'm
| unfamiliar with every day, it's one of the things it's the
| most amazing at.
|
| For the things where the tolerance for mistakes is extremely low and where human oversight is extremely important, you might be right. It won't have to be perfect (just better than an average human) for that to happen, but I'm not sure if it will.
| kubb wrote:
| Just think about the delta between what the LLM does and what a human does, or why the LLM can't replace the human, e.g. in a game studio.
|
| If it can replace a teacher or an artist in 2027, you're
| right and I'm wrong.
| esafak wrote:
| It's already replacing artists; that's why they're up in
| arms. People don't need stock photographers or graphic
| designers as much as they used to.
|
| https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4602944
| kubb wrote:
| I know that artists don't like AI, because it's trained
| on their stolen work. And yet, AI can't create a sprite
| sheet for a 2d game.
|
| This is because it can steal a single artwork but it
| can't make a collection of visually consistent assets.
| cheevly wrote:
| Bro what are you even talking about? ControlNet has been
| able to produce consistent assets for years.
|
| How exactly do you think video models work? Frame to
| frame coherency has been possible for a long time now. A
| sprite sheet?! Are you joking me. Literally churning them
| out with AI since 2023.
| programd wrote:
| Does a fighter jet count as "heavy machinery"?
|
| https://apnews.com/article/artificial-intelligence-fighter-j...
| kubb wrote:
| Yes, when they send unmanned jets to combat.
| Philpax wrote:
| It's already starting with the drones:
| https://www.csis.org/analysis/ukraines-future-vision-and-cur...
| pixl97 wrote:
| > or operate heavy machinery
|
| What exactly do you mean by this one?
|
| In large mining operations we already have human assisted
| teleoperation AI equipment. Was watching one recently where
| the human got 5 or so push dozers lined up with an (admittedly simple) task of cutting a hill down and then
| just got them back in line if they ran into anything
| outside of their training. The push and backup operations
| along with blade control were done by the AI/dozer itself.
|
| Now, this isn't long term planning, but it is operating in
| the real world.
| kubb wrote:
| Operating an excavator when building a stretch of road.
| Won't happen by 2027.
| jdauriemma wrote:
| Being accountable for telling the truth
| myhf wrote:
| accountability sinks are all you need
| bayarearefugee wrote:
| I predict AGI will be solved 5 years after full self driving
| which itself is 1 year out (same as it has been for the past 10
| years).
| ahofmann wrote:
| Well said!
| arduanika wrote:
| ...not before I get in peak shape, six months from now.
| kristopolous wrote:
| People want to live their lives free of finance and centralized
| personal information.
|
| If you think most people like this stuff you're living in a
| bubble. I use it every day but the vast majority of people have
| no interest in using these nightmares of Philip K. Dick imagined
| by silicon dreamers.
| meroes wrote:
| I'm also unafraid to say it's BS. I don't even want to call it
| scifi. It's propaganda.
| WhatsName wrote:
| This is absurd, like taking any trend and drawing a straight line to extrapolate the future. If I did this with my tech stock portfolio, we would probably cross the zero line somewhere in late 2025...
|
| If this article were an AI model, it would be catastrophically overfit.
| AnimalMuppet wrote:
| It's worse. It's not drawing a straight line, it's drawing one
| that curves up, _on a log graph_.
| Lionga wrote:
| AI now even has its own fan fiction porn. It is so stupid that I'm not sure whether it's worse if it's written by AI or by a human.
| the_cat_kittles wrote:
| "we demand to be taken seriously!"
| beklein wrote:
| An older, related article from one of the authors, titled "What 2026 looks like", that is holding up very well over time. Written in mid-2021 (pre-ChatGPT).
|
| https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-...
|
| //edit: remove the referral tags from URL
| dkdcwashere wrote:
| > The alignment community now starts another research agenda,
| to interrogate AIs about AI-safety-related topics. For example,
| they literally ask the models "so, are you aligned? If we made
| bigger versions of you, would they kill us? Why or why not?"
| (In Diplomacy, you can actually collect data on the analogue of
| this question, i.e. "will you betray me?" Alas, the models
| often lie about that. But it's Diplomacy, they are literally
| trained to lie, so no one cares.)
|
| ...yeah?
| motoxpro wrote:
| It's incredible how much of it broadly aligns with what has happened, especially because it was written before ChatGPT.
| reducesuffering wrote:
| Will people finally wake up that the AGI X-Risk people have
| been right and we're rapidly approaching a really fucking big
| deal?
|
| This forum has been so behind for too long.
|
| Sama has been saying this for a decade now: "Development of
| Superhuman machine intelligence is probably the greatest
| threat to the continued existence of humanity" 2015
| https://blog.samaltman.com/machine-intelligence-part-1
|
| Hinton, Ilya, Dario Amodei, the inventor of RLHF, the DeepMind founders. They all get it, which is why they're the smart cookies in those positions.
|
| First stage is denial, I get it, not easy to swallow the
| gravity of what's coming.
| ffsm8 wrote:
| People have been predicting the singularity to occur sometime around 2030 to 2045 since waaaay further back than 2015. And not just enthusiasts; I dimly remember an interview with Richard Dawkins from back in the day...
|
| Though that doesn't mean that the current version of
| language models will ever achieve AGI, and I sincerely
| doubt they will. They'll likely be a component in the AI,
| but likely not the thing that "drives"
| neural_thing wrote:
| Vernor Vinge as much as anyone can be credited with the
| concept of the singularity. In his 1993 essay on it, he
| said he'd be surprised if it happened before 2005 or
| after 2030
|
| https://edoras.sdsu.edu/~vinge/misc/singularity.html
| ffsm8 wrote:
| Fwiw, that prediction was during Moore's law though. If
| that held until now, CPUs would run laps around what our
| current gpus do for LLMs.
| archagon wrote:
| And why are Altman's words worth anything? Is he some sort
| of great thinker? Or a leading AI researcher, perhaps?
|
| No. Altman is in his current position because he's highly
| effective at consolidating power and has friends in high
| places. That's it. Everything he says can be seen as
| marketing for the next power grab.
| skeeter2020 wrote:
| well, he did also have an early (failed) YC startup - does that add cred?
| tim333 wrote:
| Altman did play some part in bringing ChatGPT about. I
| think the point is the people making AI or running
| companies making current AI are saying be wary.
|
| In general it's worth weighting the opinions of people
| who are leaders in a field, about that field, over people
| who know little about it.
| hn_throwaway_99 wrote:
| > Will people finally wake up that the AGI X-Risk people
| have been right and we're rapidly approaching a really
| fucking big deal?
|
| OK, say I totally believe this. What, pray tell, are we
| supposed to do about it?
|
| Don't you at least see the irony of quoting Sama's dire warnings about the development of AI without mentioning that he is at the absolute forefront of the push to build this technology that could destroy all of humanity? It's like he's saying "This potion can destroy all of humanity if we make it" as he works faster and faster to figure out how to make it.
|
| I mean, I get it, "if we don't build it, someone else
| will", but all of the discussion around "alignment" seems
| just blatantly laughable to me. If on one hand your goal is
| to build "super intelligence", i.e. way smarter than any
| human or group of humans, how do you expect to control that
| super intelligence when you're just acting at the middling
| level of human intelligence?
|
| While I'm skeptical on the timeline, if we do ever end up
| building super intelligence, the idea that we can control
| it is a pipe dream. We may not be toast (I mean, we're
| smarter than dogs, and we keep them around), but we won't
| be in control.
|
| So if you truly believe super intelligent AI is coming, you
| may as well enjoy the view now, because there ain't nothing
| you or anyone else will be able to do to "save humanity" if
| or when it arrives.
| achierius wrote:
| Political organization to force a stop to ongoing
| research? Protest outside OAI HQ? There are lots of things we could, and many of us _would_, do if more people were actually convinced their lives were in danger.
| hn_throwaway_99 wrote:
| > Political organization to force a stop to ongoing
| research? Protest outside OAI HQ?
|
| Come on, be real. Do you honestly think that would make a lick of difference? _Maybe_, at best, it would delay things by a couple of months. But this is a worldwide phenomenon, and
| humans have shown time and time again that they are not
| able to self organize globally. How successful do you
| think that political organization is going to be in
| slowing China's progress?
| achierius wrote:
| Humans have shown time and time again that they _are_
| able to self-organize globally.
|
| Nuclear deterrence -- human cloning -- bioweapon
| proliferation -- Antarctic neutrality -- the list goes
| on.
|
| > How successful do you think that political organization
| is going to be in slowing China's progress?
|
| I wish people would stop with this tired war-mongering.
| China was not the one who opened up this can of worms.
| China has _never_ been the one pushing the edge of
| capabilities. Before Sam Altman decided to give ChatGPT
| to the world, they were actively cracking down on
| software companies (in favor of hardware & "concrete"
| production).
|
| We, the US, are the ones who chose to do this. We started
| the race. We put the world, all of humanity, on this
| path.
|
| > Do you honestly think that would make a lick of
| difference?
|
| I don't know, it depends. Perhaps we're lucky and the
| timelines are slow enough that 20-30% of the population
| loses their jobs before things become unrecoverable. Tech
| companies used to warn people not to wear their badges in
| public in San Francisco -- and that was what, 2020? Would
| you really want to work at "Human Replacer, Inc." when
| that means walking out and about among a population who
| you know hates you, viscerally? Or if we make it to 2028
| in the same condition. The Bonus Army was bad enough --
| how confident are you that the government would stand
| their ground, keep letting these labs advance
| capabilities, when their electoral necks were on the
| line?
|
| This defeatism is a self-fulfilling prophecy. The people
| _have_ the power to make things happen, and rhetoric like
| this is the most powerful thing holding them back.
| eagleislandsong wrote:
| > China was not the one who opened up this can of worms
|
| Thank you. As someone who lives in Southeast Asia (and
| who also has lived in East Asia -- pardon the deliberate
| vagueness, for I do not wish to reveal too many
| potentially personally identifying information), this is
| how many of us in these regions view the current tensions
| between China and Taiwan as well.
|
| Don't get me wrong; we acknowledge that many Taiwanese
| people want independence, that they are a people with
| their own aspirations and agency. But we can also see
| that the US -- and its European friends, which often
| blindly adopt its rhetoric and foreign policy -- is
| deliberately using Taiwan as a disposable pawn to attempt
| to provoke China into a conflict. The US will do what it
| has always done ever since the post-WW2 period --
| destabilise entire regions of countries to further its
| own imperialistic goals, causing the deaths and suffering
| of millions, and then leaving the local populations to
| deal with the fallout for many decades after.
|
| Without the US intentionally stoking the flames of mutual
| antagonism between China and Taiwan, the two countries
| could have slowly (perhaps over the next decades) come to
| terms with each other, be it voluntary reunification or
| peaceful separation. If you know a bit of Chinese
| history, it is not entirely far-fetched to think that the Chinese might eventually agree to recognising Taiwan as an independent nation, but this option has now been denied because the US has decided to use Taiwan as a pawn in a proxy conflict.
|
| To anticipate questions about China's military invasion
| of Taiwan by 2027: No, I do not believe it will happen.
| Don't believe everything the US authorities claim.
| ctoth wrote:
| We're all gonna die but come on, who wants to stop that!
| ctoth wrote:
| I love this pattern, the oldest pattern.
|
| There is nothing happening!
|
| The thing that is happening is not important!
|
| The thing that is happening is important, but it's too
| late to do anything about it!
|
| Well, maybe if you had done something when we first
| started warning about this...
|
| See also: Covid/Climate/Bird Flu/the news.
| reducesuffering wrote:
| > If on one hand your goal is to build "super
| intelligence", i.e. way smarter than any human or group
| of humans, how do you expect to control that super
| intelligence when you're just acting at the middling
| level of human intelligence?
|
| That's exactly what the true AGI X-Riskers think! Sama acknowledges the intense risk but thinks the path forward is inevitable anyway, so he hopes that building intelligence will give them the intelligence to solve alignment. The other camp, a la Yudkowsky, believes it's futile to just hope alignment gets solved before AGI capabilities become more intelligent and powerful and start disregarding any of our wishes. At that point we've ceded any control of our future to an uncaring system that treats us as a means to achieve its original goals, the way an ant is merely in the way of a Google datacenter. I don't see how anyone who thinks "maybe stock number go up as your only goal is not the best way to make people happy" can miss this.
| hollerith wrote:
| Slightly more detail: until about 2001 Yudkowsky was what we would now call an AI accelerationist. Then it dawned on him that creating an AI that is much "better at reality" than people are would probably kill all the people, unless the AI had been carefully designed to stay aligned with human values (i.e., to want what we want), and that ensuring it stays aligned is a very thorny technical problem. He was still hopeful that humankind would solve that problem, and he worked full time on alignment himself. In 2015 he came to believe
| that the alignment problem is so hard that it is very
| very unlikely to be solved by the time it is needed
| (namely, when the first AI is deployed that is much
| "better at reality" than people are). He went public with
| his pessimism in Apr 2022, and his nonprofit (the Machine
| Intelligence Research Institute) fired most of its
| technical alignment researchers and changed its focus to
| lobbying governments to ban the dangerous kind of AI
| research.
| pixl97 wrote:
| >This forum has been so behind for too long.
|
| There is a strong financial incentive for a lot of people on this site to deny that they are at risk from it, or to deny that what they are building carries risk and that they should bear culpability for it.
| samr71 wrote:
| It's not something you need to worry about.
|
| If we get the Singularity, it's overwhelmingly likely Jesus
| will return concurrently.
| tim333 wrote:
| Though possibly only in AI form.
| goatlover wrote:
| > "Development of Superhuman machine intelligence is
| probably the greatest threat to the continued existence of
| humanity"
|
| If that's really true, why is there such a big push to
| rapidly improve AI? I'm guessing OpenAI, Google, Anthropic,
| Apple, Meta, Boston Dynamics don't really believe this.
| They believe AI will make them billions. What is OpenAI's
| definition of AGI? A model that makes $100 billion?
| AgentME wrote:
| Because they also believe the development of superhuman
| machine intelligence will probably be the greatest
| invention for humanity. The possible upsides and
| downsides are both staggeringly huge and uncertain.
| medvezhenok wrote:
| You can also have a prisoner's dilemma where no single actor is capable of stopping AI's advance.
| FairlyInvolved wrote:
| There's a pretty good summary of how well it has held up
| here, by the significance of each claim:
|
| https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating...
| smusamashah wrote:
| How does it talk about GPT-1 or 3 if it was before ChatGPT?
| dragonwriter wrote:
| GPT-3 (and, naturally, all prior versions even farther back)
| was released ~2 years before ChatGPT (whose launch model was
| GPT-3.5)
|
| The publication date on this article is about halfway between
| GPT-3 and ChatGPT releases.
| Tenoke wrote:
| GPT-2 for example came out in 2019. _Chat_GPT wasn't the start of GPT.
| botro wrote:
| This is damn near prescient, I'm having a hard time believing
| it was written in 2021.
|
| He did get this part wrong, though: we ended up calling them 'Mixture of Experts' instead of 'AI bureaucracies'.
| stavros wrote:
| I think the bureaucracies part is referring more to Deep
| Research than to MoE.
| robotresearcher wrote:
| We were calling them 'Mixture of Experts' ~30 years before
| that.
|
| https://ieeexplore.ieee.org/document/6215056
| dingnuts wrote:
| nevermind, I hate this website :D
| comp_throw7 wrote:
| Surely you're familiar with
| https://ai.meta.com/research/cicero/diplomacy/ (2022)?
|
| > I wonder who pays the bills of the authors. And your bills,
| for that matter.
|
| Also, what a weirdly conspiratorial question. There's a
| prominent "Who are we?" button near the top of the page and
| it's not a secret what any of the authors did or do for a
| living.
| dingnuts wrote:
| hmmm I apparently confused it with an RTS, oops.
|
| also it's not conspiratorial to wonder if someone in
| silicon valley today receives funding through the AI
| industry lol like half the industry is currently propped up
| by that hype, probably half the commenters here are paid
| via AI VC investments
| samth wrote:
| I think it's not holding up that well outside of predictions
| about AI research itself. In particular, he makes a lot of
| predictions about AI impact on persuasion, propaganda, the
| information environment, etc that have not happened.
| madethisnow wrote:
| something you can't know
| elicksaur wrote:
| This doesn't seem like a great way to reason about the
| predictions.
|
| For something like this, saying "There is no evidence
| showing it" is a good enough refutation.
|
| Counterpointing that "Well, there could be a lot of this
| going on, but it is in secret." - that could be a
| justification for any kooky theory out there. Bigfoot,
| UFOs, ghosts. Maybe AI has already replaced all of us and
| we're Cylons. Something we couldn't know.
|
| The predictions are specific enough that they are
| falsifiable, so they should stand or fall based on the
| clear material evidence supporting or contradicting them.
| LordDragonfang wrote:
| Could you give some specific examples of things you feel
| definitely did not come to pass? Because I see a lot of
| people here talking about how the article missed the mark on
| propaganda; meanwhile I can tab over to twitter and see a
| substantial portion of the comment section of every high-
| engagement tweet being accused of being Russia-run LLM
| propaganda bots.
| Aurornis wrote:
| Agree. The base claims about LLMs getting bigger, more
| popular, and capturing people's imagination are right. Those
| claims are as easy as it gets, though.
|
| Look into the specific claims and it's not as amazing. Like
| the claim that models will require an entire year to train,
| when in reality it's on the order of weeks.
|
| The societal claims also fall apart quickly:
|
| > Censorship is widespread and increasing, as it has for the
| last decade or two. Big neural nets read posts and view
| memes, scanning for toxicity and hate speech and a few other
| things. (More things keep getting added to the list.) Someone
| had the bright idea of making the newsfeed recommendation
| algorithm gently 'nudge' people towards spewing less hate
| speech; now a component of its reward function is minimizing
| the probability that the user will say something worthy of
| censorship in the next 48 hours.
|
| This is a common trend in rationalist and "X-risk" writers:
| Write a big article with mostly safe claims (LLMs will get
| bigger and perform better!) and a lot of hedging, then people
| will always see the article as primarily correct. When you
| extract out the easy claims and look at the specifics, it's
| not as impressive.
|
| This article also shows some major signs that the author is
| deeply embedded in specific online bubbles, like this:
|
| > Most of America gets their news from Twitter, Reddit, etc.
|
| Sites like Reddit and Twitter feel like the entire universe
| when you're embedded in them, but when you step back and look
| at the numbers only a fraction of the US population are
| active users.
| LordDragonfang wrote:
| > (2025) Making models bigger is not what's cool anymore. They
| are trillions of parameters big already. What's cool is making
| them run longer, in bureaucracies of various designs, before
| giving their answers.
|
| Holy shit. That's a hell of a called shot from 2021.
| someothherguyy wrote:
| It's vague and could have meant anything. Everyone knew parameters would grow, and it's reasonable to expect that things that grow have diminishing returns at some point. This happened in late 2023 and throughout 2024 as well.
| LordDragonfang wrote:
| That quote almost perfectly describes o1, which was the
| first major model to explicitly build in compute time as a
| part of its scaling. (And despite claims of vagueness, I
| can't think of a single model release it describes better).
| The idea of a scratchpad was obvious, but no major chatbot
| had integrated it until then, because they were all focused
| on parameter scaling. o1 was released at the very end of
| 2024.
| cavisne wrote:
| This article was prescient enough that I had to check it in the Wayback Machine. Very cool.
| torginus wrote:
| I'm not seeing the prescience here - I don't wanna go through
| the specific points but the main gist here seems to be that
| chatbots will become very good at pretending to be human and
| influencing people to their own ends.
|
| I don't think much has happened on these fronts (owing to a lack of interest, not technical difficulty). AI boyfriends/roleplaying etc. seem to have stayed a very niche interest, with models improving very little over GPT-3.5, and the actual products seemingly absent.
|
| It's very much the product of the culture-war era, where one of the scary scenarios shown off is a chatbot riling up a set of internet commenters, goading them into lashing out against modern leftist orthodoxy, and then getting them cancelled.
|
| With all the strongholds of leftist orthodoxy falling into Trump's hands overnight, this view of the internet seems outdated.
|
| Troll chatbots are still a minor weapon in information warfare. The 'opinion bubbles' and the manipulation of trending topics on social media (with the most influential content still written by humans), used to change the perception of what the popular consensus is, still seem to hold up as the primary tools of influence.
|
| Nowadays, people are concerned about stuff like 'will the US go into a shooting war against NATO' or 'will they manage to crash the global economy', just to name a few of the dozen immediately pressing global issues; people are simply worried about different things now.
|
| At the same time, there's very little mention of 'AI will take
| our jobs and make us poor' in both the intellectual and
| physical realms, something that's driving most people's anxiety
| around AI nowadays.
|
| It also presents the 'superintelligent unaligned AI will kill us all' argument, so often made by alignment people, as the primary threat, rather than the more plausible 'the people controlling AI are the real danger'.
| amarcheschi wrote:
| I just spent some time trying to make Claude and Gemini make a violin plot of a polars dataframe. I've never used it and it's just for prototyping, so I just went "apply a log to the values and make a violin plot of this polars dataframe". And I had to iterate with them 4/5 times each. Gemini got it right but then used deprecated methods.
|
| I might be doing llm wrong, but i just can't get how people might
| actually do something not trivial just by vibe coding. And it's
| not like i'm an old fart either, i'm a university student
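|
| (For reference, a minimal sketch of the task being described, going through matplotlib directly on the numpy array extracted from polars; the column name "value" and the synthetic data are assumptions for illustration:
|
|     import numpy as np
|     import polars as pl
|     import matplotlib.pyplot as plt
|
|     # toy stand-in for the real dataframe
|     df = pl.DataFrame({"value": np.random.lognormal(0.0, 1.0, 1_000)})
|
|     # apply a log to the values, then hand the flat array to matplotlib
|     logged = df.select(pl.col("value").log()).to_numpy().ravel()
|
|     fig, ax = plt.subplots()
|     ax.violinplot(logged, showmedians=True)
|     ax.set_ylabel("log(value)")
|     plt.show()
|
| Going straight to matplotlib sidesteps the question of whether the plotting library accepts polars frames natively.)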
| VOIPThrowaway wrote:
| You're asking it to think and it can't.
|
| It's spicy autocomplete. Ask it to create a program that can create a violin plot from a CSV file. Because that has been "done before", it will do a decent job.
| suddenlybananas wrote:
| But this blog post said that it's going to be God in like 5
| years?!
| pydry wrote:
| all tech hype cycles are a bit like this. when you were born
| people were predicting the end of offline shops.
|
| The trough of disillusionment will set in for everybody else in
| due time.
| dinfinity wrote:
| Yes, you're most likely doing it wrong. I would like to add
| that "vibe coding" is a dreadful term thought up by someone who
| is arguably not very good at software engineering, as talented
| as he may be in other respects. The term has become a
| misleading and frankly pejorative term. A better, more neutral
| one is AI assisted software engineering.
|
| This is an article that describes a pretty good approach for
| that: https://getstream.io/blog/cursor-ai-large-projects/
|
| But do skip (or at least significantly postpone) enabling the
| 'yolo mode' (sigh).
| amarcheschi wrote:
| You see, the issue I get petty about is that AI is advertised as the one ring to rule all software. VCs are creaming themselves at the thought of not having to pay developers and just using natural language. But then, you still have to adapt to the AI, and not vice versa. "You're doing it wrong." This is not the idea that the VC bros are selling.
|
| Then again, I absolutely love being aided by LLMs in my day-to-day tasks. I'm much more efficient when studying, and they can be a game changer when you're stuck and don't know how to proceed. You can discuss different implementation ideas as if you had a colleague, perhaps not a PhD-smart one, but still someone with quite deep knowledge of everything.
|
| But, it's no miracle. That's the issue I have with the way the idea of AI is sold to the C-suite and the general public.
| pixl97 wrote:
| >But, it's no miracle.
|
| All I can say to this is fucking good!
|
| Let's imagine we had gotten AGI at the start of 2022. I'm talking about human-level-plus AI, as good as you at coding and reasoning, that works well on the hardware of that age.
|
| What would the world look like today? Would you still have your job? Would the world be in total disarray? Would unethical companies quickly fire most of their staff and replace them with machines? Would there be mass riots in the streets by starving neo-luddites? Would automated drones be shooting at them?
|
| Simply put people and our social systems are not ready for
| competent machine intelligence and how fast it will change
| the world. We should feel lucky we are getting a ramp up
| period, and hopefully one that draws out a while longer.
| hiq wrote:
| > had to iterate with them for 4/5 times each. Gemini got it
| right but then used deprecated methods
|
| How hard would it be to automate these iterations?
|
| How hard would it be to automatically check and improve the
| code to avoid deprecated methods?
|
| I agree that most products are still underwhelming, but that
| doesn't mean that the underlying tech is not already enough to
| deliver better LLM-based products. Lately I've been using LLMs
| more and more to get started with writing tests on components
| I'm not familiar with, it really helps.
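|
| (On the automation question: a minimal, hypothetical sketch of such a loop. ask_llm() is a placeholder for whatever model API you use; promoting DeprecationWarning to an error forces a retry whenever deprecated methods show up:
|
|     import subprocess, sys
|
|     def ask_llm(prompt: str) -> str:
|         raise NotImplementedError("plug in your model API here")
|
|     code = ask_llm("apply a log to the values and make a violin plot of this polars dataframe")
|     for _ in range(5):
|         # run the generated code with deprecation warnings treated as errors
|         result = subprocess.run(
|             [sys.executable, "-W", "error::DeprecationWarning", "-c", code],
|             capture_output=True, text=True,
|         )
|         if result.returncode == 0:
|             break
|         code = ask_llm(f"This code failed:\n{code}\n\nError:\n{result.stderr}\nFix it.")
|
| Agent modes in current tools do a version of this already; the hard part is judging output that runs but is still wrong.)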
| henryjcee wrote:
| > How hard would it be to automate these iterations?
|
| The fact that we're no closer to doing this than we were when
| ChatGPT launched suggests that it's really hard. If anything
| I think it's _the_ hard bit vs. building something that
| generates plausible text.
|
| Solving this for the general case is imo a completely
| different problem to being able to generate plausible text in
| the general case.
| HDThoreaun wrote:
| This is not true. The chain of logic models are able to
| check their work and try again given enough compute.
| lelandbatey wrote:
| They can check their work and try again an infinite
| number of times, but the rate at which they _succeed_
| seems to just get worse and worse the further from the
| beaten path (of existing code from existing solutions)
| that they stray.
| jaccola wrote:
| How hard can it be to create a universal "correctness"
| checker? Pretty damn hard!
|
| Our notion of "correct" for most things is basically derived from a very long training run on reality, with the loss function being how long a gene propagated.
| 9dev wrote:
| How hard would it be, in terms of the energy wasted for it?
| Is everything we can do worth doing, just for the sake of
| being able to?
| juped wrote:
| You pretty much just have to play around with them enough to be
| able to intuit what things they can do and what things they
| can't. I'd rather have another underling, and not just because
| they grow into peers eventually, but LLMs are useful with a bit
| of practice.
| moab wrote:
| > "OpenBrain (the leading US AI project) builds AI agents that
| are good enough to dramatically accelerate their research. The
| humans, who up until very recently had been the best AI
| researchers on the planet, sit back and watch the AIs do their
| jobs, making better and better AI systems."
|
| I'm not sure what gives the authors the confidence to predict
| such statements. Wishful thinking? Worst-case paranoia? I agree
| that such an outcome is possible, but on 2--3 year timelines?
| This would imply that the approach everyone is taking right now
| is the _right_ approach and that there are no hidden conceptual
| roadblocks to achieving AGI/superintelligence from DFS-ing down
| this path.
|
| All of the predictions seem to ignore the possibility of such
| barriers, or at most acknowledge the possibility but wave it away
| by appealing to the army of AI researchers and industry funding
| being allocated to this problem. IMO it is the onus of the
| proposers of such timelines to argue why there are no such
| barriers and that we will see predictable scaling in the 2--3
| year horizon.
| throwawaylolllm wrote:
| It's my belief (and I'm far from the only person who thinks
| this) that many AI optimists are motivated by an essentially
| religious belief that you could call Singularitarianism. So
| "wishful thinking" would be one answer. This document would
| then be the rough equivalent of a Christian fundamentalist
| outlining, on the basis of tangentially related news stories,
| how the Second Coming will come to pass in the next few years.
| pixl97 wrote:
| Eh, not sure if the second coming is a great analogy. That
| wholly depends on the whims of a fictional entity performing
| some unlikely actions.
|
| Instead think of them saying a crusade is coming in the next few years. When the group saying the crusade is coming is spending billions of dollars to try to make exactly that occur, you no longer have the ability to say it's not going to happen. You are now forced to examine the risks of their actions.
| viccis wrote:
| Crackpot millenarians have always been a thing. This crop of
| them is just particularly lame and hellbent on boiling the
| oceans to get their eschatological outcome.
| ivm wrote:
| Spot on, see the 2017 article "God in the machine: my strange
| journey into transhumanism" about that dynamic:
|
| https://www.theguardian.com/technology/2017/apr/18/god-in-th...
| spacephysics wrote:
| Reminds me of Fallout's Children of Atom "Church of the
| Children of Atom"
|
| Maybe we'll see "Church of the Children of Altman" /s
|
| It seems that without a framework of ethics/morality (insert XYZ religion), we humans find one to grasp onto. Be it a cult, a set of not-so-fleshed-out ideas/philosophies, etc.
|
| People who say they aren't religious per se seem to have some set of beliefs that amounts to a religion. It just depends who or what you look towards for those beliefs, many of which seem to be haphazard.
|
| The people I may disagree with the most at least have, many times, a realization of _what_ ideas/beliefs are unifying their structure of reality, while others are just not aware.
|
| A small minority of people can rely on schools of philosophical thought, and 'try on' or play with different ideas, with enough self-reflection to see when they transgress against ABC philosophy or when the philosophy doesn't quite match their identity.
| barbarr wrote:
| It also ignores the possibility of plateau... maybe there's a
| maximum amount of intelligence that matter can support, and it
| doesn't scale up with copies or speed.
| AlexandrB wrote:
| Or scales sub-linearly with hardware. When you're in the
| rising portion of an S-curve[1] you can't tell how much
| longer it will go on before plateauing.
|
| A lot of this resembles post-war futurism that assumed we
| would all be flying around in spaceships and personal flying
| cars within a decade. Unfortunately the rapid pace of
| transportation innovation slowed due to physical and cost
| constraints and we've made little progress (beyond cost
| optimization) since.
|
| [1] https://en.wikipedia.org/wiki/Sigmoid_function
| Tossrock wrote:
| The fact that it scales sub-linearly with hardware is well known and in fact foundational to the scaling laws on which modern LLMs are built, i.e. performance scales remarkably closely with log(compute + data), over many orders of magnitude.
| pixl97 wrote:
| Eh, this math still doesn't work out in humans' favor...
|
| Let's say intelligence caps out at the smartest person who has ever lived. Well, the first thing we'd attempt to do is build machines up to that limit, a limit that 99.99999 percent of us will never get close to. Moreover, the thinking part of a human is only around 2 pounds of mush inside our heads. On top of that, you don't have to grow them for 18 years before they start outputting something useful. They won't need sleep either. Oh, and you can feed them with solar panels. And they won't be getting distracted by that super sleek server rack across the aisle.
|
| We do know that 'hive' or societal intelligence scales over time, especially with integration with tooling. The amount of knowledge we have, and the means by which we can apply it, simply dwarf those of previous generations.
| ddp26 wrote:
| Check out the Timelines Forecast under "research". They model
| this very carefully.
|
| (They could be wrong, but this isn't a guess, it's a well-
| researched forecast.)
| MrScruff wrote:
| I would assume this comes from having faith in the overall
| exponential trend rather than getting that much into the weeds
| of how this will come about. I can _sort_ of see why you might think that way - everyone was talking about hitting a wall with brute-force scaling, and then inference-time scaling came along to keep things progressing. I wouldn't be quite as confident personally, and as many have said before, a sigmoid looks like an exponential in its initial phase.
| zvitiate wrote:
| There's a lot to potentially unpack here, but idk, the idea that whether humanity enters hell (extermination) or heaven (brain uploading; a cure for aging) hinges on whether or not we listen to AI safety researchers for a few months makes me question whether it's really worth unpacking.
| amelius wrote:
| If _we_ don 't do it, someone else will.
| itishappy wrote:
| Which? Exterminate humanity or cure aging?
| ethersteeds wrote:
| Yes
| amelius wrote:
| The thing whose outcome can go either way.
| itishappy wrote:
| I honestly can't tell what you're trying to say here. I'd
| argue there's some pretty significant barriers to each.
| layer8 wrote:
| I'm okay if someone else unpacks it.
| achierius wrote:
| That's obviously not true. Before OpenAI blew the field open,
| multiple labs -- e.g. Google -- were _intentionally holding
| back_ their research from the public eye because they thought
| the world was not ready. Investors were not pouring billions
| into capabilities. China did not particularly care to focus
| on this one research area, among many, that the US is still
| solidly ahead in.
|
| The only reason timelines are as short as they are is
| _because_ of people at OpenAI and thereafter Anthropic
| deciding that "they had no choice". They had a choice, and
| they took the one which has chopped at the very least _years_
| off of the time we would otherwise have had to handle all of
| this. I can barely begin to describe the magnitude of the
| crime that they have committed -- and so I suggest that you
| consider that before propagating the same destructive lies
| that led us here in the first place.
| pixl97 wrote:
| The simplicity of the statement "If we don't do it, someone
| else will." and thinking behind it eventually means someone
| will do just that unless otherwise prevented by some
| regulatory function.
|
| Simply put, with the ever-increasing hardware speeds we were already producing for other purposes, this day would have come sooner rather than later. We're talking about a difference of only a year or two, really.
| HeatrayEnjoyer wrote:
| Cloning? Bioweapons? Ever larger nuclear stockpiles? The
| world has collectively agreed not to do something more
| than once. AI would be easier to control than any of the
| above. GPUs can't be dug out of the ground.
| achierius wrote:
| But every time, it doesn't have to happen yet. And when
| you're talking about the potential deaths of millions, or
| billions, why be the one who spawns the seed of
| destruction in their own home country? Why not give human
| brotherhood a chance? People have, and do, hold back. You
| notice the times they don't, and the few who don't -- you
| forget the many, many more who do refrain from doing
| what's wrong.
|
| "We have to nuke the Russians, if we don't do it first,
| they will"
|
| "We have to clone humans, if we don't do it, someone else
| will"
|
| "We have to annex Antarctica, if we don't do it, someone
| else will"
| 9dev wrote:
| Maybe people should just not listen to AI safety researchers
| for a few months? Maybe they are qualified to talk about
| inference and model weights and natural language processing,
| but not particularly knowledgeable about economics, biology,
| psychology, or... pretty much every other field of study?
|
| The hubris is strong with some people, and a certain oligarch
| with a god complex is acting out where that can lead right now.
| arduanika wrote:
| It's charitable of you to think that they might be qualified
| to talk about inference and model weights and such. They are
| AI _safety_ researchers, not AI researchers. Basically, a
| bunch of doom bloggers, jerking each other in a circle, a few
| of whom were tolerated at one of the major labs for a few
| years, to do their jerking on company time.
| Q6T46nT668w6i3m wrote:
| This is worse than the mansplaining scene from Annie Hall.
| arduanika wrote:
| You mean the part where he pulls out Marshal McLuhan to back
| him up in an argument? "You know nothing of my work..."
| qwertox wrote:
| That is some awesome webdesign.
| IshKebab wrote:
| This is hilariously over-optimistic on the timescales. Like on
| this timeline we'll have a Mars colony in 10 years, immortality
| drugs in 15 and Half Life 3 in 20.
| sva_ wrote:
| You forgot fusion energy
| klabb3 wrote:
| Quantum AI powered by cold fusion and blockchain when?
| zvitiate wrote:
| No, sooner lol. We'll have aging cures and brain uploading by
| late 2028. Dyson Swarms will be "emerging tech".
| mchusma wrote:
| I like that in the "slowdown" scenario, by 2030 we have a robot economy, a cure for aging, and brain uploading, and are working on a Dyson sphere.
| Aurornis wrote:
| The story is very clearly modeled to follow the exponential
| curve they show.
|
| Like they drew the curve out into the shape they wanted, put some milestones on it, and then went to work imagining what would happen if it continued, with a heavy dose of X-risk doomerism to keep it spicy.
|
| It conveniently ignores all of the physical constraints
| around things like manufacturing GPUs and scaling training
| networks.
| joshjob42 wrote:
| https://ai-2027.com/research/compute-forecast
|
| In section 4 they discuss their projections specifically
| for model size, the state of inference chips in 2027, etc.
| It's largely pretty in line with expectations in terms of
| the capacity, and they only project them using 10k of their
| latest gen wafer scale inference chips by late 2027,
| roughly like 1M H100 equivalents. That doesn't seem at all
| impossible. They also earlier on discuss expectations for
| growth in efficiency of chips, and for growth in spending,
| which is only ~10x over the next 2.5 years, not
| unreasonable in absolute terms at all given the many tens
| of billions of dollars flooding in.
|
| So on the "can we train the AI" front, they mostly are just
| projecting 2.5 years of the growth in scale we've been
| seeing.
|
| The reason they predict a fairly hard takeoff is they
| expect that distillation, some algorithmic improvements,
| and iterated creation of synthetic data, training, and then
| making more synthetic data will enable significant
| improvements in efficiency of the underlying models
| (something still largely in line with developments over the
| last 2 years). In particular they expect a 10T parameter
| model in early 2027 to be basically human equivalent, and
| they expect it to "think" at about the rate humans do, 10
| words/second. That would require ~300 teraflops of compute
| per second to think at that rate, or ~0.1H100e. That means
| one of their inference chips could potentially run ~1000
| copies (or fewer copies faster etc. etc.) and thus they
| have the capacity for millions of human equivalent
| researchers (or 100k 40x speed researchers) in early 2027.
|
| They further expect distillation of such models (and more
| expensive models overseeing much smaller but still good
| models) to squeeze the necessary size and effective compute
| down to just 2T parameters and ~60 teraflops each, or ~5000
| human-equivalents per inference chip, making for up to 50M
| human-equivalents by late 2027.
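|
| To make that arithmetic concrete, a rough back-of-envelope
| sketch (the ~2 FLOPs/parameter/token rule of thumb, ~13
| tokens/s for 10 words/s, and pegging one H100e at ~3e15
| FLOP/s to match the ~0.1 H100e figure are all assumptions
| of this sketch, not numbers taken from the forecast itself):
|
|     // Back-of-envelope check of the inference numbers above.
|     const H100E_FLOPS = 3e15;              // assumed FLOP/s per "H100 equivalent"
|     const CHIP_H100E = 1_000_000 / 10_000; // 10k wafer-scale chips ~ 1M H100e
|     const TOKENS_PER_SEC = 13;             // ~10 words/s
|
|     function copiesPerChip(params: number): number {
|       // ~2 FLOPs per parameter per generated token
|       const flopsPerSec = 2 * params * TOKENS_PER_SEC;
|       const h100ePerCopy = flopsPerSec / H100E_FLOPS;
|       return Math.floor(CHIP_H100E / h100ePerCopy);
|     }
|
|     copiesPerChip(10e12); // ~1,150 copies of a 10T model per chip
|     copiesPerChip(2e12);  // ~5,770 copies of a 2T model per chip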
|
| This is probably the biggest open question and the place
| where the most criticism seems to me to be warranted. Their
| hardware timelines are pretty reasonable, but one could
| easily expect needing 10-100x more compute or even perhaps
| 1000x than they describe to achieve Nobel-winner AGI or
| superintelligence.
| tsurba wrote:
| I don't believe so. I think all important parts that each
| need to be scaled to advance significantly in the LLM
| paradigm are at or near the end of the steep part of the
| sigmoid:
|
| 1) useful training data available on the internet
| 2) the number of humans creating more training data "manually"
| 3) parameter scaling
| 4) "easy" algorithmic inventions
| 5) available + buildable compute
|
| "Just" needing a few more algorithmic inventions to keep
| the graphs exponential is a cop out. It is already
| obvious that just scaling parameters and compute is not
| enough.
|
| I personally predict that scaling LLMs for solving all
| physical tasks (eg cleaning robots) or intellectual
| pursuits (they suck at multiplication) will not work out.
|
| We _will_ get better specialized tools by collecting data
| from specific, high economic value, constrained tasks,
| and automating them, but scaling a (multimodal) LLM to
| solve everything in a single model will not be
| economically viable. We will get more natural interfaces
| for many tasks.
|
| This is how I think right now as an ML researcher; it will be
| interesting to see how wrong I was in 2 years.
|
| EDIT: addition about latest algorithmic advances:
|
| - DeepSeek-style GRPO requires a ladder of scored problems,
| progressively more difficult and appropriately pitched, to
| get useful gradients (a minimal sketch of why is below). For
| open-ended problems (which most interesting ones are) we have
| no such ladders, and it doesn't work. What it is good for is
| learning to generate code for leetcode problems with a good
| number of well-made unit tests.
|
| - Test-time inference just adds an insane amount of extra
| compute after training to brute-force double-check the
| sanity of answers.
|
| Neither will keep the graphs exponential.
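|
| To make the GRPO point concrete, here is a minimal sketch of
| the group-relative advantage computation (my own illustrative
| code, not DeepSeek's implementation): for each prompt you
| sample a group of completions, score them with a verifier
| (unit tests, exact-match answers, ...), and reinforce each
| completion by how much it beat its group mates.
|
|     function groupAdvantages(rewards: number[]): number[] {
|       const mean = rewards.reduce((a, b) => a + b, 0) / rewards.length;
|       const variance =
|         rewards.reduce((a, r) => a + (r - mean) ** 2, 0) / rewards.length;
|       const std = Math.sqrt(variance) || 1; // guard against zero spread
|       return rewards.map((r) => (r - mean) / std);
|     }
|
|     groupAdvantages([1, 0, 0, 1]); // mixed outcomes -> useful signal
|     groupAdvantages([0, 0, 0, 0]); // too hard or unscorable -> all zeros, no signal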
| ctoth wrote:
| Can you share your detailed projection of what you expect the
| future to look like so I can compare?
| Gud wrote:
| Slightly slower web frameworks by 2026. By 2030, a lot
| slower.
| IshKebab wrote:
| Sure
|
| 5 years: AI coding assistants are a lot better than they are
| now, but still can't actually replace junior engineers (at
| least ones that aren't shit). AI fraud is rampant, with faked
| audio commonplace. Some companies try replacing call centres
| with AI, but it doesn't really work and everyone hates it.
|
| Tesla's robotaxi won't be available, but Waymo will be in
| most major US cities.
|
| 10 years: AI assistants are now useful enough that you can
| use them in the ways that Apple and Google really wanted you
| to use Siri/Google Assistant 5 years ago. "What have I got
| scheduled for today?" will give useful results, and you'll be
| able to have a natural conversation and take actions that you
| trust ("cancel my 10am meeting; tell them I'm sick").
|
| AI coding assistants are now _very_ good and everyone will
| use them. Junior devs will still exist. Vibe coding will
| actually work.
|
| Most AI Startups will have gone bust, leaving only a few
| players.
|
| Art-based AI will be very popular and artists will use it all
| the time. It will be part of their normal workflow.
|
| Waymo will become available in Europe.
|
| Some receptionists and PAs have been replaced by AI.
|
| 15 years: AI researchers finally discover how to do on-line
| learning.
|
| Humanoid robots are robust and smart enough to survive in the
| real world and start to be deployed in controlled
| environments (e.g. factories) doing simple tasks.
|
| Driverless cars are "normal" but not owned by individuals and
| driverful cars are still way more common.
|
| Small, light computers become fast enough that autonomous
| slaughterbots become reality (i.e. drones that can do their
| own navigation and face recognition etc.)
|
| 20 years: Valve confirms no Half Life 3.
| archagon wrote:
| > _Small, light computers become fast enough that autonomous
| slaughterbots become reality_
|
| This is the real scary bit. I'm not convinced that AI will
| _ever_ be good enough to think independently and create
| novel things without some serious human supervision, but
| none of that matters when applied to machines that are
| destructive by design and already have expectations of
| collateral damage. Slaughterbots are going to be the new
| WMDs -- and corporations are salivating at the prospect of
| being first movers.
| https://www.youtube.com/watch?v=UiiqiaUBAL8
| dontlikeyoueith wrote:
| Zero Dawn future confirmed.
| Trumpion wrote:
| Why do you believe that?
|
| The lowest estimates of how much compute our brain
| represents were already matched by the latest chip from
| Nvidia (Blackwell).
|
| The newest GPU clusters from Google, Microsoft, Facebook,
| xAI, and co. have added an absurd amount of compute.
| pixl97 wrote:
| >I'm not convinced that AI will ever be good enough to
| think independently
|
| and
|
| >Why do you believe that?
|
| What takes less effort, time to deploy, and cost? I mean
| there is at least some probability we kill ourselves off
| with dangerous semi-thinking war machines leading to
| theater scale wars to the point society falls apart and
| we don't have the expensive infrastructure to make AI as
| envisioned in the future.
|
| With that said, I'm in the camp that we can create AGI: as
| nature was able to with a random walk, we'll be able to
| reproduce it with intelligent design.
| baq wrote:
| If you bake the model onto the chip itself, which is what
| should be happening for local LLMs once a good enough one
| is trained eventually, you'll be looking at orders of
| magnitude reduction in power consumption at constant
| inference speed.
| Quarrelsome wrote:
| you should add a bit where AI is pushed really hard in
| places where the subjects have low political power, like
| management of entry level workers, care homes or education
| and super bad stuff happens.
|
| Also we need a big legal event to happen where (for
| example) autonomous driving is part of a really big
| accident where lots of people die or someone brings a
| successful court case that an AI mortgage underwriter is
| discriminating based on race or caste. It won't matter if
| AI is actually genuinely responsible for this or not, what
| will matter is the push-back and the news cycle.
|
| Maybe more events where people start successfully gaming
| deployed AI at scale in order to get mortgages they
| shouldn't or get A-grades when they shouldn't.
| 9dev wrote:
| It's soothing to read a realistic scenario amongst all of
| the ludicrous hype on here.
| FairlyInvolved wrote:
| We are going to scale up GPT4 by a factor of ~10,000 and
| that will result in getting an accurate summary of your
| daily schedule?
| stale2002 wrote:
| Unfortunately, with the way scaling laws are working out,
| each order of magnitude increase in compute only makes
| models a little better.
|
| Meaning nobody will even bother to 10,000x GPT-4.
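|
| As a rough illustration of that shape (the constants below
| are invented purely for the example; the point is the
| power-law form, loosely in the spirit of Chinchilla-style
| fits, not a fit to any real model):
|
|     // Irreducible loss plus a power-law term that shrinks with compute c (FLOPs).
|     const loss = (c: number) => 1.7 + 6 * Math.pow(c, -0.05);
|
|     // Each extra 10x of compute shaves off a progressively smaller slice.
|     [1e21, 1e22, 1e23, 1e24].map(loss); // ~2.23, ~2.18, ~2.12, ~2.08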
| tsunagatta wrote:
| If we're lucky.
| petesergeant wrote:
| > Some companies try replacing call centres with AI, but it
| doesn't really work and everyone hates it.
|
| I think this is much closer than you think, because there's
| a good percentage of call centers that are basically just
| humans with no power cosplaying as people who can help.
|
| My fiber connection went to shit recently. I messaged the
| company, and got a human who told me they were going to
| reset the connection from their side, if I rebooted my
| router. 30m later with no progress, I got a human who told
| me that they'd reset my ports, which I was skeptical about,
| but put down to a language issue, and again reset my
| router. 30m later, the human gave me an even more
| outlandish technical explanation of what they'd do, at
| which point I stumbled across the magical term "complaint"
| ... an engineer phoned me 15m later, said there was
| something genuinely wrong with the physical connection, and
| they had a human show up a few hours later and fix it.
|
| No part of the first-layer support experience there would
| have been degraded if replaced by AI, but the company would
| have saved some cash.
| FeepingCreature wrote:
| It kind of sounds like you're saying "exactly everything we
| have today, we will have mildly more of."
| WXLCKNO wrote:
| So in the past 5 years we went from not having ChatGPT at
| all to its release in 2022 (with non-"chat" models before
| that), but in the next 5, now that the entire tech world is
| consumed with making better AI models, we'll just get
| slightly better AI coding assistants?
|
| Reminds me of that comment about the first iPod being lame
| and having less space than a Nomad. One of the worst takes
| I've seen on here recently.
| arduanika wrote:
| With each passing year, AI doom grifters will learn more and
| more web design gimmicks.
| Trumpion wrote:
| We currently don't see any ceiling. If this continues at this
| speed, we will have cheaper, faster and better models every
| quarter.
|
| Nothing has ever progressed this fast.
|
| It would be very ignorant not to keep a very close eye on it.
|
| There is still a chance that it will happen a lot slower and
| the progression will be slow enough that we adjust in time.
|
| But besides AI we are also now getting robots. The impact for
| a lot of people will be very real.
| turnsout wrote:
| IMO they haven't even predicted mid-2025.
|
| > Coding AIs increasingly look like autonomous agents rather
| than mere assistants: taking instructions via Slack or Teams
| and making substantial code changes on their own, sometimes
| saving hours or even days.
|
| Yeah, we are _so_ not there yet.
| Tossrock wrote:
| That is literally the pitch line for Devin. I recently spoke
| to the CTO of a small healthtech startup and he was very pro-
| Devin for small fixes and PRs, and thought he was getting his
| money's worth. Claude Code is a little clunkier but gives
| better results, and it wouldn't take much effort to hook it
| up to a Slack interface.
| turnsout wrote:
| Yeah, I get that there are startups trying to do it. But I
| work with Cursor quite a bit... there is no way I would
| trust an LLM code agent to take high-level direction and
| issue a PR on anything but the most trivial bug fix.
| baq wrote:
| Last year they couldn't even do a simple fix (they could
| add a null coalescing operator or an early return which
| didn't make sense, that's about it). Now I'm getting
| hundreds of LOC of functionality with multiple kLOC of
| tests out of the agent mode. No way it gets in without a
| few iterations, but it's sooo much better than last
| April.
| danpalmer wrote:
| These timelines always assume that things progress as quickly
| as they can be conceived of, likely because these timelines
| come from "Ideas Guys" whose involvement typically ends at that
| point.
|
| Orbital mechanics begs to disagree about a Mars colony in 10
| years. Drug discovery has many steps that take time, even just
| the trials will take 5 years, let alone actually finding the
| drugs.
| wkat4242 wrote:
| Didn't covid significantly reduce trial times? I thought that
| was such a success that they continued on the same footing.
| pama wrote:
| No it didn't. At least not for new small molecule drugs. It
| did reduce times a bit for the first vaccines because there
| were many volunteers available, and it did allow some
| antibody drug candidates to be used before full testing was
| complete. The only approved small molecule drug for covid
| is paxlovid, with both components of its formulation tested
| on humans for the first time many years before covid. All
| the rest of the small molecule drugs are still in early
| parts of the pipeline or have been abandoned.
| danpalmer wrote:
| The other reply has better info on covid specifically, but
| also consider that this refers to "immortality drugs". How
| long do we have to test those to conclude that they do in
| fact provide "immortality"?
|
| Now sure, they don't actually mean immortality, and we
| don't need to test forever to conclude they extend life,
| but we probably do have to test for years to get good data
| on whether a generic life extension drug is effective,
| because you're testing against illness, old age, etc,
| things that take literally decades to kill.
|
| That's not to mention that any drug like that will be met
| with intense skepticism and likely need to overcome far
| more scrutiny than normal (rather than the potentially less
| scrutiny that covid drugs might have managed).
| agos wrote:
| trial times were very brief for Covid vaccines because 1)
| there was no shortage of volunteers, capital, and political
| alignment at every level 2) the virus was everywhere and so
| it was really, really easy to verify if it was working.
| Compare this with a vaccination for a very rare but deadly
| disease: it's really hard to know if it's working because
| you can't just expose your test subjects to the deadly
| disease!
| movpasd wrote:
| It reminds me of this rather classic post:
| http://johnsalvatier.org/blog/2017/reality-has-a-
| surprising-...
|
| Science is not ideas: new conceptual schemes must be
| invented, confounding variables must be controlled, dead-ends
| explored. This process takes years.
|
| Engineering is not science: kinks must be worked out,
| confounding variables incorporated. This process also takes
| years.
|
| Technology is not engineering: the purely technical
| implementation must spread, become widespread and beat social
| inertia and its competition, network effects must be
| established. Investors and consumers must be convinced in the
| long term. It must survive social and political
| repercussions. This process takes yet more years.
| noncoml wrote:
| 2015: We will have FSD (full autonomy) by 2017
| wkat4242 wrote:
| Well, Teslas do have "Full Self Driving". It's not actually
| fully self driving and that doesn't even seem to be on the
| horizon but it doesn't appear to be stopping Tesla supporters.
| porphyra wrote:
| Seems very sinophobic. Deepseek and Manus have shown that China
| is legitimately an innovation powerhouse in AI but this article
| makes it sound like they will just keep falling behind without
| stealing.
| princealiiiii wrote:
| Stealing model weights isn't even particularly useful long-
| term, it's the training + data generation recipes that have
| value.
| MugaSofer wrote:
| That whole section seems to be pretty directly based on
| DeepSeek's "very impressive work" with R1 being simultaneously
| very impressive, and several months behind OpenAI. (They more
| or less say as much in footnote 36.) They blame this on US chip
| controls just barely holding China back from the cutting edge
| by a few months. I wouldn't call that a knock on Chinese
| innovation.
| ugh123 wrote:
| Don't confuse innovation with optimisation.
| pixl97 wrote:
| Don't confuse designing the product with winning the market.
| a3w wrote:
| How so? Spoiler: US dooms mankind, China is the saviour in the
| two endings.
| hexator wrote:
| Yes, it's extremely sinophobic and entirely too dismissive of
| China. It's pretty clear what the author's political leanings
| are, by what they mention and by what they do not.
| aoanevdus wrote:
| Don't assume that because the article depicts this competition
| between the US and China, that the authors actually want China
| to fail. Consider the authors and the audience.
|
| The work is written by western AI safety proponents, who often
| need to argue with important people who say we need to
| accelerate AI to "win against China" and don't want us to be
| slowed down by worrying about safety.
|
| From that perspective, there is value in exploring the
| scenario: ok, if we accept that we need to compete with China,
| what would that look like? Is accelerating always the right
| move? The article, by telling a narrative where slowing down to
| be careful with alignment helps the US win, tries to convince
| that crowd to care about alignment.
|
| Perhaps, people in China can make the same case about how
| alignment will help China win against US.
| usef- wrote:
| In both endings it's saying that because compute becomes the
| bottleneck, and US has far more chips. Isn't it?
| disambiguation wrote:
| Amusing sci-fi, I give it a B- for bland prose, weak story
| structure, and lack of originality - assuming this isn't all
| AI-gen slop, which is awarded an automatic F.
|
| >All three sets of worries--misalignment, concentration of power
| in a private company, and normal concerns like job loss--motivate
| the government to tighten its control.
|
| A private company becoming "too powerful" is a non-issue for
| governments, unless a drone army is somewhere in that timeline.
| Fun fact: the former head of the NSA sits on the board of OpenAI.
|
| Job loss is a non-issue; if there are corresponding economic
| gains, they can be redistributed.
|
| "Alignment" is too far into the fiction side of sci-fi.
| Anthropomorphizing today's AI is tantamount to mental illness.
|
| "But really, what if AGI?" We either get the final say or we
| don't. If we're dumb enough to hand over all responsibility to an
| unproven agent and we get burned, then serves us right for being
| lazy. But if we forge ahead anyway and AGI becomes something
| beyond review, we still have the final say on the power switch.
| atemerev wrote:
| What is this, some OpenAI employee fan fiction? Did Sam himself
| write this?
|
| OpenAI models are not even SOTA, except that new-ish style
| transfer / illustration thing that had us all living in a
| Ghibli world for a few days. R1 is _better_ than o1, and
| open-weights.
| GPT-4.5 is disappointing, except for a few narrow areas where it
| excels. DeepResearch is impressive though, but the moat is in
| tight web search / Google Scholar search integration, not
| weights. So far, I'd bet on open models or maybe Anthropic, as
| Claude 3.7 is the current SOTA for most tasks.
|
| As for the timeline, this is _pessimistic_. I already write 90%
| of my code with Claude, as do most of my colleagues. Yes, it
| makes errors, and overdoes things. Just like a regular human
| mid-level software engineer.
|
| Also fun that this assumes relatively stable politics in the US
| and relatively functioning world economy, which I think is crazy
| optimistic to rely on these days.
|
| Also, superpersuasion _already works_; this is what I am
| researching and testing. It is not autonomous, it is human-
| assisted for now, but it is a superpower for those who have it,
| and it explains some of the things happening in the world right
| now.
| achierius wrote:
| > superpersuasion _already works_
|
| Is this demonstrated in any public research? Unless you just
| mean something like "good at persuading" -- which is different
| from my understanding of the term -- I find this hard to
| believe.
| atemerev wrote:
| No, I meant "good at persuading"; it is not 100% effective,
| of course.
| pixodaros wrote:
| That singularity happened in the fifth century BCE when
| people figured out that they could charge silver to teach
| the art of rhetoric and not just teach their sons and
| nephews
| ddp26 wrote:
| The story isn't about OpenAI; they say the company could be
| xAI, Anthropic, Google, or another.
| infecto wrote:
| Could not get through the entire thing. It's mostly a bunch of
| fantasy intermingled with bits of possibly interesting discussion
| points. The whole right-side metrics are purely a distraction
| because they're entirely fiction.
| archagon wrote:
| Website design is nice, though.
| Willingham wrote:
| - October 2027 - 'The ability to automate most white-collar jobs'
|
| I wonder which jobs would not be automated? Therapy? HR?
| hsuduebc2 wrote:
| Board of directors
| Joshuatanderson wrote:
| This is extremely important. Scott Alexander's earlier
| predictions are holding up extremely well, at least on image
| progress.
| dingnuts wrote:
| how am I supposed to take articles like this seriously when they
| say absolutely false bullshit like this
|
| > the AIs can do everything taught by a CS degree
|
| no, they fucking can't. not at all. not even close. I feel like
| I'm taking crazy pills. Does anyone really think this?
|
| Why have I not seen -any- complete software created via vibe
| coding yet?
| ladberg wrote:
| It doesn't claim it's possible now, it's a fictional short
| story claiming "AIs can do everything taught by a CS degree" by
| the end of 2026.
| senordevnyc wrote:
| Ironically, the models of today _can_ read an article better
| than some of us.
| casey2 wrote:
| Lesswrong brigade. They are all dropout philosophers; just
| ignore them.
| vagab0nd wrote:
| Bad future predictions: short-sighted guesses based on current
| trends and vibe. Often depend on individuals or companies. Made
| by free-riders. Example: Twitter.
|
| Good future predictions: insights into the fundamental principles
| that shape society, more law than speculation. Made by
| visionaries. Example: Vernor Vinge.
| dalmo3 wrote:
| "1984 was set in 1984."
|
| https://youtu.be/BLYwQb2T_i8?si=JpIXIFd9u-vUJCS4
| pera wrote:
| _From the same dilettantes who brought you the Zizians and other
| bizarre cults..._ thanks, but I'd rather read Nostradamus
| arduanika wrote:
| What a bad faith argument. No true AI safety scaremonger brat
| stabs their landlord with a katana. The rationality of these
| rationalists is 100% uncorrelated with the rationality of
| *those* rationalists.
| soupfordummies wrote:
| The "race" ending reads like Universal Paperclips fan fiction :)
| 827a wrote:
| Readers should, charitably, interpret this as "the sequence of
| events which need to happen in order for OpenAI to justify the
| inflow of capital necessary to survive".
|
| Your daily vibe coding challenge: Get GPT-4o to output functional
| code which uses Google Vertex AI to generate a text embedding. If
| they can solve that one by July, then maybe we're on track for
| "curing all disease and aging, brain uploading, and colonizing
| the solar system" by 2030.
| slaterbug wrote:
| You've intentionally hamstrung your test by choosing an
| inferior model though.
| 827a wrote:
| o1 fails at this, likely because it does not seem to have
| access to search, so it is operating on outdated information.
| It recommends the usage of methods that have been removed by
| Google in later versions of the library. This is also, to be
| fair, a mistake gpt-4o can make if you don't explicitly tell
| it to search.
|
| o3-mini-high's output might work, but it isn't ideal: It
| immediately jumps to recommending avoiding all google cloud
| libraries and directly issuing a request to their API with
| fetch.
| Philpax wrote:
| Haven't tested this (cbf setting up Google Cloud), but the
| output looks consistent with the docs it cites:
| https://chatgpt.com/share/67efd449-ce34-8003-bd37-9ec688a11b...
|
| You may consider using search to be cheating, but we do it, so
| why shouldn't LLMs?
| 827a wrote:
| I should have specified "nodejs", as that has been my most
| recent difficulty. The challenge, specifically, with that
| prompt is that Google has at least four nodejs libraries that
| all seem at least reasonably capable of accessing text
| embedding models on Vertex AI (@google-ai/generativelanguage,
| @google-cloud/vertexai, @google-cloud/aiplatform, and
| @google/genai), and they've also published breaking changes
| multiple times to all of them. So, in my experience, GPT not
| only will confuse methods from one of their libraries with
| the other, but will also sometimes hallucinate answers only
| applicable to older versions of the library, without
| understanding which version it's giving code for. Once it has
| struggled enough, it'll sometimes just give up and tell you
| to use axios, but the APIs it recommends axios calls for are
| all their protobuf APIs; so I'm not even sure if that would
| work.
|
| Search is totally reasonable, but in this case: Even Google's
| own documentation on these libraries is exceedingly bad.
| Nearly all the examples they give for them are for accessing
| the language models, not text embedding models; so GPT will
| also sometimes generate code that is perfectly correct for
| accessing one of the generative language models, but will
| swap e.g. the "model: gemini-2.0" parameter for "model: text-
| embedding-005"; which also does not work.
| MaxfordAndSons wrote:
| As someone who's fairly ignorant of how AI actually works at a
| low level, I feel incapable of assessing how realistic any of
| these projections are. But the "bad ending" was certainly
| chilling.
|
| That said, this snippet from the bad ending nearly made me spit
| my coffee out laughing:
|
| > There are even bioengineered human-like creatures (to humans
| what corgis are to wolves) sitting in office-like environments
| all day viewing readouts of what's going on and excitedly
| approving of everything, since that satisfies some of Agent-4's
| drives.
| arduanika wrote:
| Sigh. When you talk to these people their eugenics obsession
| always comes out eventually. Set a timer and wait for it.
| Philpax wrote:
| While I don't disagree that I've seen a lot of eugenics talk
| from rationalist(-adjacent)s, I don't think this is an
| example of it: this is describing how misaligned AI could
| technically keep humans alive while still killing "humanity."
| arduanika wrote:
| Fair enough. Sometimes it comes out as a dark fantasy
| projected onto their AI gods, rather than a thing that they
| themselves want to do to us.
| Jun8 wrote:
| ACT post where Scott Alexander provides some additional info:
| https://www.astralcodexten.com/p/introducing-ai-2027.
|
| Manifold currently predicts 30%:
| https://manifold.markets/IsaacKing/ai-2027-reports-predictio...
| crazystar wrote:
| 47% now soo a coin toss
| layer8 wrote:
| 32% again now.
| elicksaur wrote:
| Note the market resolves by:
|
| > Resolution will be via a poll of Manifold moderators. If
| they're split on the issue, with anywhere from 30% to 70% YES
| votes, it'll resolve to the proportion of YES votes.
|
| So you should really read it as "Will >30% of Manifold
| moderators in 2027 think the 'predictions seem to have been
| roughly correct up until that point'?"
| Aurornis wrote:
| > ACT post where Scott Alexander provides some additional info:
| https://www.astralcodexten.com/p/introducing-ai-2027
|
| The pattern where Scott Alexander puts forth a huge claim and
| then immediately hedges it backward is becoming a tiresome
| theme. The linguistic equivalent of putting claims into a
| superposition where the author is both owning it and distancing
| themselves from it at the same time, leaving the writing just
| ambiguous enough that anyone reading it 5 years from now
| couldn't pin down any claim as false because it was hedged in
| both directions. Schrodinger's prediction.
|
| > Do we really think things will move this fast? Sort of no
|
| > So maybe think of this as a vision of what an 80th percentile
| fast scenario looks like - not our precise median, but also not
| something we feel safe ruling out.
|
| The talk of "not our precise median" and "Not something we feel
| safe ruling out" is an elaborate way of hedging that this isn't
| their _actual_ prediction but, hey, anything can happen so
| here's a wild story! When the claims don't come true they can just
| point back to those hedges and say that it wasn't _really_
| their median prediction (which is conveniently not noted).
|
| My prediction: The vague claims about AI becoming more powerful
| and useful will come true because, well, they're vague.
| Technology isn't about to reverse course and get worse.
|
| The actual bold claims like humanity colonizing space in the
| late 2020s with the help of AI are where you start to realize
| how fanciful their actual predictions are. It's like they put a
| couple points of recent AI progress on a curve, assumed an
| exponential trajectory would continue forever, and extrapolated
| from that regression until AI was helping us colonize space in
| less than 5 years.
|
| > Manifold currently predicts 30%:
|
| Read the fine print. It only requires 30% of judges to vote YES
| for it to resolve to YES.
|
| This is one of those bets where it's more about gaming the
| market than being right.
| leonidasv wrote:
| > Do we really think things will move this fast? Sort of no -
| between the beginning of the project last summer and the
| present, Daniel's median for the intelligence explosion shifted
| from 2027 to 2028. We keep the scenario centered around 2027
| because it's still his modal prediction (and because it would
| be annoying to change). Other members of the team (including
| me) have medians later in the 2020s or early 2030s, and also
| think automation will progress more slowly. So maybe think of
| this as a vision of what an 80th percentile fast scenario looks
| like - not our precise median, but also not something we feel
| safe ruling out.
|
| Important disclaimer that's lacking in OP's link.
| whiddershins wrote:
| > A rise in AI-generated propaganda failed to materialize.
|
| hah!
| nmilo wrote:
| The whole thing hinges on the fact that AI will be able to help
| with AI research
|
| How will it come up with the theoretical breakthroughs necessary
| to beat the scaling problem GPT-4.5 revealed when it hasn't been
| proven that LLMs can come up with novel research in any field at
| all?
| cavisne wrote:
| Scaling transformers has been basically alchemy; the
| breakthroughs aren't from rigorous science, they are from trying
| stuff and hoping you don't waste millions of dollars in
| compute.
|
| Maybe the company that just tells an AI to generate 100s of
| random scaling ideas, and tries them all is the one that will
| win. That company should probably be 100 percent committed to
| this approach also, no FLOPs spent on ghibli inference.
| acje wrote:
| 2028: human text is too ambiguous a data source to get to AGI.
| 2127: AGI figures out flying cars and fusion power.
| wkat4242 wrote:
| I think it also really limits the AI to the context of human
| discourse which means it's hamstrung by our imagination,
| interests and knowledge. This is not where an AGI needs to go,
| it shouldn't copy and paste what we think. It should think on
| its own.
|
| But I view LLMs not as a path to AGI on their own. I think
| they're really great at being text engines and for human
| interfacing but there will need to be other models for the
| actual thinking. Instead of having just one model (the LLM)
| doing everything, I think there will be a hive of different
| more specific purpose models and the LLM will be how they
| communicate with us. That solves so many problems that we
| currently have by using LLMs for things they were never meant
| to do.
| suddenlybananas wrote:
| https://en.wikipedia.org/wiki/Great_Disappointment
|
| I suspect something similar will come for the people who actually
| believe this.
| panic08 wrote:
| LOL
| superconduct123 wrote:
| Why are the biggest AI predictions always made by people who
| aren't deep in the tech side of it? Or actually trying to use the
| models day-to-day...
| ZeroTalent wrote:
| People who are skilled fiction writers might lack technical
| expertise. In my opinion, this is simply an interesting piece
| of science fiction.
| AlphaAndOmega0 wrote:
| Daniel Kokotajlo released the (excellent) 2021 forecast. He was
| then hired by OpenAI, and not at liberty to speak freely, until
| he quit in 2024. He's part of the team making this forecast.
|
| The others include:
|
| Eli Lifland, a superforecaster who is ranked first on RAND's
| Forecasting initiative. You can read more about him and his
| forecasting team here. He cofounded and advises AI Digest and
| co-created TextAttack, an adversarial attack framework for
| language models.
|
| Jonas Vollmer, a VC at Macroscopic Ventures, which has done its
| own, more practical form of successful AI forecasting: they
| made an early stage investment in Anthropic, now worth $60
| billion.
|
| Thomas Larsen, the former executive director of the Center for
| AI Policy, a group which advises policymakers on both sides of
| the aisle.
|
| Romeo Dean, a leader of Harvard's AI Safety Student Team and
| budding expert in AI hardware.
|
| And finally, Scott Alexander himself.
| kridsdale3 wrote:
| TBH, this kind of reads like the pedigrees of the former
| members of the OpenAI board. When the thing blew up, and
| people started to apply real scrutiny, it turned out that
| about half of them had no real experience in pretty much
| anything at all, except founding Foundations and instituting
| Institutes.
|
| A lot of people (like the Effective Altruism cult) seem to
| have made a career out of selling their Sci-Fi content as
| policy advice.
| flappyeagle wrote:
| c'mon man, you don't believe that, let's have a little less
| disingenuousness on the internet
| arduanika wrote:
| How would you know what he believes?
|
| There's hype and there's people calling bullshit. If you
| work from the assumption that the hype people are
| genuine, but the people calling bullshit can't be for
| real, that's how you get a bubble.
| flappyeagle wrote:
| Because they are not the same in any way. It's not a
| bunch of junior academics; it literally includes
| someone who worked at OpenAI.
| MrScruff wrote:
| I kind of agree - since the Bostrom book there is a cottage
| industry of people with non-technical backgrounds writing
| papers about singularity thought experiments, and it does
| seem to be on the spectrum with hard sci-fi writing. A lot
| of these people are clearly intelligent, and it's not even
| that I think everything they say is wrong (I made similar
| assumptions long ago before I'd even heard of Ray Kurzweil
| and the Singularity, although at the time I would have
| guessed 2050). It's just that they seem to believe their
| thought process and Bayesian logic is more rigorous than
| it actually is.
| superconduct123 wrote:
| I mean either researchers creating new models or people
| building products using the current models
|
| Not all these soft roles
| nice_byte wrote:
| this sounds like a bunch of people who make a living
| _talking_ about the technology, which lends them close to 0
| credibility.
| pixodaros wrote:
| Scott Alexander, for what its worth, is a psychiatrist, race
| science enthusiast, and blogger whose closest connection to
| software development is Bay Area house parties and a failed
| startup called MetaMed (2012-2015)
| https://rationalwiki.org/wiki/MetaMed
| Tenoke wrote:
| ..The first person listed is ex-OpenAI.
| torginus wrote:
| Because these people understand human psychology and how to
| play on fears (of doom, or missing out) and insecurities of
| people, and write compelling narratives while sounding smart.
|
| They are great at selling stories - they sold the story of the
| crypto utopia, now switching their focus to AI.
|
| This seems to be another appeal to enforce AI regulation in the
| name of 'AI safetyism'; the same appeal was made 2 years ago, but
| the threats it warned of haven't really panned out.
|
| For example, an oft-repeated argument is the dangerous ability
| of AI to design chemical and biological weapons. I wish some
| expert could weigh in on this, but I believe the ability to
| theorycraft pathogens effective in the real world is absolutely
| marginal - you need actual lab work and lots of physical
| experiments to confirm your theories.
|
| Likewise, the danger of AI systems exfiltrating themselves to
| multi-million dollar AI datacenter GPU systems that everyone
| supposedly just has lying around is ... not super realistic.
|
| The ability of AIs to hack computer systems is much less
| theoretical - however as AIs will get better at black-hat
| hacking, they'll get better at white-hat hacking as well - as
| there's literally no difference between the two, other than
| intent.
|
| And herein lies a crucial limitation of alignment and
| safetyism - sometimes there's no way to tell apart harmful and
| harmless actions, other than whether the person undertaking
| them means well.
| bpodgursky wrote:
| Because you can't be a full time blogger and also a full time
| engineer. Both take all your time, even ignoring time taken to
| build talent. There is simply a tradeoff of what you do with
| your life.
|
| There _are_ engineers with AI predictions, but you aren't
| reading them, because building an audience like Scott Alexander's
| takes decades.
| m11a wrote:
| If so, then it seems the solution is for HN to upvote the
| random qualified engineer with AI predictions?
| rglover wrote:
| Aside from the other points about understanding human
| psychology here, there's also a deep well they're trying to
| fill inside themselves: that of being someone who can't create
| things without shepherding others, and who sees AI as the "great
| equalizer" that will finally let them taste the positive
| emotions associated with creation.
|
| The funny part, to me, is that it won't. They'll continue to
| toil and move on to the next huck just as fast as they jumped
| on this one.
|
| And I say this from observation. Nearly all of the people I've
| seen pushing AI hyper-sentience are smug about it and,
| coincidentally, have never built anything on their own (besides
| a company or organization of others).
|
| Every single one of the rational "we're on the right path but
| not quite there" takes have been from seasoned engineers who at
| least have _some_ hands-on experience with the underlying tech.
| ohgr wrote:
| On the path to self-worth, people measure their value by what
| they say, not what they know. If what they say is horse dung, it
| is irrelevant to their ego as long as there is someone dumber
| than they are listening.
|
| This bullshit article is written for that audience.
|
| Say bullshit enough times and people will invest.
| HeatrayEnjoyer wrote:
| So what's the product they're promoting?
| moralestapia wrote:
| Their ego.
| FeepingCreature wrote:
| I use the models daily and agree with Scott.
| fire_lake wrote:
| > OpenBrain still keeps its human engineers on staff, because
| they have complementary skills needed to manage the teams of
| Agent-3 copies
|
| Yeah, sure they do.
|
| Everyone seems to think AI will take someone else's jobs!
| mlsu wrote:
| https://xkcd.com/605/
| mullingitover wrote:
| These predictions are made without factoring in the trade version
| of the Pearl Harbor attack the US just initiated on its allies
| (and itself, by lobotomizing its own research base and decimating
| domestic corporate R&D efforts with the aforementioned trade
| war).
|
| They're going to need to rewrite this from scratch in a quarter
| unless the GOP suddenly collapses and congress reasserts control
| over tariffs.
| torginus wrote:
| Much has been made in this article about autonomous agents'
| ability to do research by browsing the web - the web is 90%
| garbage by weight (including articles on certain specialist
| topics).
|
| And it shows. When I used GPT's deep research to research the
| topic, it generated a shallow and largely incorrect summary of
| the issue, owing mostly to its inability to find quality
| material; instead it ended up going to places like Wikipedia
| and random infomercial listicles found on Google.
|
| I have a trusty electronics textbook written in the 80s; I'm sure
| generating a similarly accurate, correct and deep analysis of
| circuit design using only Google to help would be 1000x harder
| than sitting down and working through that book and understanding
| it.
| somerandomness wrote:
| Agreed. However, source curation and agents are two different
| parts of Deep Research. What if you provided that textbook to a
| reliable agent?
|
| Plug: We built https://RadPod.ai to allow you to do that, i.e.
| Deep Research on your data.
| preommr wrote:
| So, once again, we're in the era of "There's an [AI] app for
| that".
| skeeter2020 wrote:
| that might solve your sourcing problem, but now you need to
| have faith it will draw conclusions and parallels from the
| material accurately. That seems even harder than the original
| problem; I'll stick with decent search on quality source
| material.
| somerandomness wrote:
| The solution is a citation mechanism that points you
| directly to where in the source material an answer comes from
| (which is what we tried to build). Easy verification is
| important for AI to be a net benefit to productivity, IMO.
| demadog wrote:
| RadPod - what models do you use to power it?
| Aurornis wrote:
| This story isn't really about agents browsing the web. It's a
| fiction about a company that consumes all of the web and all
| other written material into a model that doesn't need to browse
| the web. The agents in this story supersede the web.
|
| But your point hits on one of the first cracks to show in this
| story: We already have companies consuming much of the web and
| training models on all of our books, but the reports they
| produce are of mixed quality.
|
| The article tries to get around this by imagining models and
| training runs a couple orders of magnitude larger will simply
| appear in the near future and the output of those models will
| yield breakthroughs that accelerate the next rounds even
| faster.
|
| Yet here we are struggling to build as much infrastructure as
| possible to squeeze incremental improvements out of the next
| generation of models.
|
| This entire story relies on AI advancement accelerating faster
| in a self-reinforcing way in the coming couple of years.
| adastra22 wrote:
| There's an old adage in AI: garbage in, garbage out.
| Consuming and training on the whole internet doesn't make you
| smarter than the average intelligence of the internet.
| drchaos wrote:
| > Consuming and training on the whole internet doesn't make
| you smarter than the average intelligence of the internet.
|
| This is only true as long as you are not able to weigh the
| quality of a source. Just like getting spam in your inbox
| may waste your time, but it doesn't make you dumber.
| skywhopper wrote:
| That's exactly why it doesn't make sense. Where would a
| datacenter-bound AI get more data about the world exactly?
|
| The story is actually quite poorly written, with weird stuff
| about "oh yeah btw we fixed hallucinations" showing up off-
| handedly halfway through. And another example of that is the
| bit where they throw in that one generation is producing
| scads of synthetic training data for the next gen system.
|
| Okay, but once you know everything there is to know based on
| written material, how do you learn new things about the
| world? How do you learn how to build insect drones, mass-
| casualty biological weapons, etc? Is the super AI supposed to
| have completely understood physics to the extent that it can
| infer all reality without having to do experimentation? Where
| does even the electricity to do this come from? Much less the
| physical materials.
|
| The idea that even a supergenius intelligence could drive
| that much physical change in the world within three years is
| just silly.
| ctoth wrote:
| How will this thing which is connected to the Internet ...
| get data?
| whiplash451 wrote:
| In my opinion, the real breakthrough described in this
| article is not bigger models to read the web, but models that
| can _experiment on their own_ and learn from these
| experiments to generate new ideas.
|
| If this happens, then we indeed enter a non-linear regime.
| dimitri-vs wrote:
| Interesting, I've had the exact opposite experience. For
| example I was curious why in metal casting the top box is
| called the cope and the bottom is called the drag. And it found
| very niche information and quotes from page 100 in a PDF on
| some random government website. The whole report was extremely
| detailed and verifiable if I followed its links.
|
| That said I suspect (and am already starting to see) the
| increased use of anti-bot protection to combat browser use
| agents.
| tim333 wrote:
| I myself am something of an autonomous agent who browses the
| web and it's possible to be choosy about what you browse. Like
| I could download some electronics text books off the web rather
| than going to listicles. LLMs may not be that discriminating at
| the moment but they could get better.
| Balgair wrote:
| > the web is 90% garbage by weight
|
| Sturgeon's law : "Ninety percent of everything is crap"
| KaiserPro wrote:
| > AI has started to take jobs, but has also created new ones.
|
| Yeah nah, there's a key thing missing here: the number of jobs
| created needs to be more than the ones it destroys, _and_ they
| need to be better paying, _and_ that needs to happen in time.
|
| History says that actually when this happens, an entire
| generation is yeeted onto the streets (see powered looms, the
| Jacquard machine, steam-powered machine tools). All of that cheap
| labour needed to power the new towns and cities was created by
| the automation of agriculture and artisan jobs.
|
| Dark satanic mills were fed the descendants of once reasonably
| prosperous craftspeople.
|
| AI as presented here will kneecap the wages of a good proportion
| of the decent paying jobs we have now. This will cause huge
| economic disparities, and probably revolution. There is a reason
| why the royalty of Europe all disappeared when they did...
|
| So no, the stock market will not be growing because of AI, it
| will be in spite of it.
|
| Plus China knows that unless it can occupy most of its
| population with some sort of work, it is finished. AI and
| decent robot automation are an existential threat to the CCP, as
| much as they are to whatever remains of the "West".
| OgsyedIE wrote:
| Unfortunately the current system is doing a bad job of finding
| replacements for dwindling crucial resources such as petroleum
| basins, new generations of workers, unoccupied orbital
| trajectories, fertile topsoil and copper ore deposits. Either
| the current system gets replaced with a new system or it
| doesn't.
| kypro wrote:
| > and probably revolution
|
| I theorise that revolution would be near-impossible in a post-AGI
| world. If people consider where power comes from, it's
| relatively obvious that people will likely suffer and die en
| masse if we ever create AGI.
|
| Historically the general public have held the vast majority of
| power in society. 100+ years ago this would have been physical
| power - the state has to keep you happy or the public will come
| for them with pitchforks. But in an age of modern weaponry the
| public today would pose little physical threat to the state.
|
| Instead, in today's democracy power comes from the public's
| collective labour and purchasing power. A government can't risk
| upsetting people too much because a government's power today is
| not a product of its standing army, but the product of its
| economic strength. A government needs workers to create
| businesses and produce goods and therefore the goals of
| government generally align with the goals of the public.
|
| But in a post-AGI world neither businesses nor the state need
| workers or consumers. In this world, if you want something you
| wouldn't pay anyone for it or pay workers to produce it for you;
| instead you would just ask your fleet of AGIs to get you the
| resource.
|
| In this world people become more like pests. They offer no
| economic value yet demand that AGI owners (whether publicly or
| privately owned) share resources with them. If people revolted,
| any AGI owner would be far better off just deploying a
| bioweapon to humanely kill the protestors rather than sharing
| resources with them.
|
| Of course, this is assuming the AGI doesn't have its own goals
| and just sees the whole of humanity as a nuisance to be stepped
| over, in the same way humans will happily step over animals if
| they interfere with our goals.
|
| Imo humanity has 10-20 years left max if we continue on this
| path. There can be no good outcome of AGI because it wouldn't
| even make sense for the AGI, or those who control the AGI, to be
| aligned with the goals of humanity.
| wkat4242 wrote:
| > I theorise that revolution would be near-impossible in
| post-AGI world. If people consider where power comes from
| it's relatively obvious that people will likely suffer and
| die on mass if we ever create AGI.
|
| I agree but for a different reason. It's very hard to
| outsmart an entity with an IQ in the thousands and pervasive
| information gathering. For a revolution you need to
| coordinate. The Chinese know this very well and this is why
| they control communication so closely (and why they had Apple
| restrict AirDrop). But their security agencies are still
| beholden to people with average IQs and the inefficient
| communication between them.
|
| An entity that can collect all this info on its own, has a
| huge IQ to spot patterns, and doesn't have to communicate with
| or convince other people in its organisation to take action
| will crush any fledgling rebellion. It will never be
| able to reach critical mass. We'll just be ants in an anthill
| and it will be the boot that crushes us when it feels like
| it.
| robinhoode wrote:
| > In this world people become more like pests. They offer no
| economic value yet demand that AGI owners (wherever publicly
| or privately owned) share resources with them. If people
| revolted any AGI owner would be far better off just deploying
| a bioweapon to humanely kill the protestors rather than
| sharing resources with them.
|
| This is a very doomer take. The threats are real, and I'm
| certain some people feel this way, but eliminating large
| swaths of humanity is something dictatorships have tried in
| the past.
|
| Waking up every morning means believing there are others who
| will cooperate with you.
|
| Most of humanity has empathy for others. I would prefer to
| have hope that we will make it through, rather than drown in
| fear.
| 758597464 wrote:
| > This is a very doomer take. The threats are real, and I'm
| certain some people feel this way, but eliminating large
| swaths of humanity is something dictatorships have tried in
| the past.
|
| Tried, and succeeded in. In times where people held more
| power than today. Not sure what point you're trying to make
| here.
|
| > Most of humanity has empathy for others. I would prefer
| to have hope that we will make it through, rather than
| drown in fear.
|
| I agree that most of humanity has empathy for others -- but
| it's been shown that the prevalence of psychopaths
| increases as you climb the leadership ladder.
|
| Fear or hope are the responses of the passive. There are
| other routes to take.
| bamboozled wrote:
| Basically why open-sourcing everything is increasingly
| important and imo already making "AI" safer.
|
| If the many have access to the latest AI then there is
| less chance the masses are blindsided by some rogue tech.
| 542354234235 wrote:
| >but eliminating large swaths of humanity is something
| dicatorships have tried in the past.
|
| Technology changes things though. Things aren't "the same
| as it ever was". The Napoleonic wars killed 6.5 million
| people with muskets and cannons. The total warfare of WWII
| killed 70 to 85 million people with tanks, turboprop
| bombers, aircraft carriers, and 36 kilotons TNT of Atomic
| bombs, among other weaponry.
|
| Total war today includes modern thermonuclear weapons. In
| 60 seconds, just one Ohio class submarine can launch 80
| independent warheads, totaling over 36 megatons of TNT.
| That is over 20 times more than all explosives, used by all
| sides, for all of WWII, including both Atomic bombs.
|
| AGI is a leap forward in power equivalent to what
| thermonuclear bombs are to warfare. Humans have been trying
| to destroy each other for all of time but we can only have
| one nuclear war, and it is likely we can only have one AGI
| revolt.
| jplusequalt wrote:
| I don't understand the psychology of doomerism. Are
| people truly so scared of these futures they are
| incapable of imagining an alternate path where anything
| less than total human extinction occurs?
|
| Like if you're truly afraid of this, what are you doing
| here on HN? Go organize and try to do something about
| this.
| 542354234235 wrote:
| I don't see it as doomerism, just realism. Looking at the
| realities of nuclear war shows that it is a world ending
| holocaust that could happen by accident or by the launch
| of a single nuclear ICBM by North Korea, and there is
| almost no chance of de-escalation once a missile is in
| the air. There is nothing to be done, other than advocate
| of nuclear arms treaties in my own country, but that has
| no effect on Russia, China, North Korea, Pakistan, India,
| or Iran. Bertrand Russell said, "You may reasonably
| expect a man to walk a tightrope safely for ten minutes;
| it would be unreasonable to do so without accident for
| two hundred years." We will either walk the tightrope for
| another 100 years or so until global society progresses
| to where there is nuclear disarmament, or we won't.
|
| It is the same with Gen AI. We will either find a way to
| control an entity that rapidly becomes orders of
| magnitude more intelligent than us, or we won't. We will
| either find a way to prevent the rich and powerful from
| controlling a Gen AI that can build and operate anything
| they need, including an army to protect them from
| everyone without a powerful Gen AI, or we won't.
|
| I hope for a future of abundance for all, brought to us
| by technology. But I understand that some existential
| threats only need to turn the wrong way once, and there
| will be no second chance ever.
| jplusequalt wrote:
| I think it's a fallacy to equate pessimistic outcomes
| with "realism"
|
| >It is the same with Gen AI. We will either find a way to
| control an entity that rapidly becomes orders of
| magnitude more intelligent than us, or we won't. We will
| either find a way to prevent the rich and powerful from
| controlling a Gen AI that can build and operate anything
| they need, including an army to protect them from
| everyone without a powerful Gen AI, or we won't
|
| Okay, you've laid out two paths here. What are *you*
| doing to influence the course we take? That's my point.
| Enumerating all the possible ways humanity faces
| extinction is nothing more than doomerism if you aren't
| taking any meaningful steps to lessen the likelihood any
| of them may occur.
| Centigonal wrote:
| I think "resource curse" countries are a great surrogate for
| studying possible future AGI-induced economic and political
| phenomena. A country like the UAE (oil) or Botswana
| (diamonds) essentially has an economic equivalent to AGI:
| they control a small, extremely productive utility (an
| oilfield or a mine instead of a server farm), and the wealth
| generated by that utility is far in excess of what those
| countries' leaders need to maintain power. Sure, you hire
| foreign labor and trade for resources instead of having your
| AGI supply those things, but the end result is the same.
| dovin wrote:
| Dogs offer humans no economic value, but we haven't genocided
| them. There are a lot of ways that we could offer value
| that's not necessarily just in the form of watts and
| minerals. I'm not so sure that our future superintelligent
| summoned demons will be motivated purely by increasing their
| own power, resources, and leverage. Then again, maybe they
| will. Thus far, AI systems that we have created seem
| surprisingly goal-less. I'm more worried about how humans are
| going to use them than some sort of breakaway event but yeah,
| don't love that it's a real possible future.
| chipsrafferty wrote:
| A world in which most humans fill the role of "pets" of the
| ultra rich doesn't sound that great.
| dovin wrote:
| Humans becoming domesticated by benevolent superintelligences
| is one of the better futures with superintelligences, in my
| mind. Iain M Banks' Culture
| series is the best depiction of this I've come across;
| they're kind of the utopian rendition of the phrase "all
| watched over by machines of loving grace". Though it's a
| little hard to see how we get from here to there.
| autumnstwilight wrote:
| Honestly, that part of the article and some other comments
| have me idly speculating: what if that was the solution to
| the "humans no longer feel they can meaningfully contribute
| to the world" issue?
|
| Like we can satisfy the hunting and retrieval instincts
| of dogs by throwing a stick, surely an AI that is 10,000
| times more intelligent can devise a stick-retrieval-task
| for humans in a way that feels like satisfying
| achievement and meaningful work from our perspective.
|
| (Leaving aside the question of whether any of that is a
| likely or desirable outcome.)
| bamboozled wrote:
| What will AI find fulfilling itself? I find that to be
| quite a deep question.
|
| I feel the limitations of humans are quite a feature when
| you think about what the experience of life would be like
| if you couldn't forget or experience things for the
| first time. If you already knew everything and you could
| achieve almost anything with zero effort. It actually
| sounds...insufferable.
| te0006 wrote:
| You might find Stanislav Lem's Golem XIV worth a read, in
| which what we would now call an AGI shares, amongst other
| things, its knowledge and speculations about long-term
| evolution of superintelligences, in a lecture to humans,
| before entering the next stage itself.
| https://www.goodreads.com/book/show/10208493 It seems
| difficult to obtain an English edition these days but
| there is a reddit thread you might want to look into.
| weatherlite wrote:
| > In this world people become more like pests. They offer no
| economic value yet demand that AGI owners (whether publicly
| or privately owned) share resources with them. If people
| revolted any AGI owner would be far better off just deploying
| a bioweapon to humanely kill the protestors rather than
| sharing resources with them.
|
| That will be quite a hard thing to pull off, even for some
| evil person with an AGI. Let's say Putin gets AGI and is
| actually evil and crazy enough to try to wipe people out. If he
| just targets Russians and starts killing millions of people
| daily with some engineered virus or something similar, he'll
| have to fear a strike from the West which would be fearful
| they're next (and rightfully so). If he instead tries to wipe
| out all of humanity at once to escape a second strike, he
| again will have to devise such a good plan there won't be any
| second strike - meaning his "AGI" will have to be way better
| than all other competing AGIs (how exactly?).
|
| It would have made sense if all "owners of AGI" somehow
| conspired together to do this but there's not really such a
| thing as owners of AGI, and even if there were, Chinese, Russian
| and American owners of AGI don't trust each other at all and
| are also bound to their governments.
| jplusequalt wrote:
| The apathy spewed by doomers actively contributes to the
| future they whine about. Join a union. Organize with real
| people. People will always have the power in society.
| pydry wrote:
| >History says that actually when this happens, an entire
| generation is yeeted on to the streets
|
| History hasn't had to contend with a birth rate of 0.7-1.6.
|
| It's kind of interesting that the elite capitalist media
| (economist, bloomberg, forbes, etc) is projecting a future
| crisis of both not enough workers and not enough jobs
| simultaneously.
| wkat4242 wrote:
| I don't really get the American preoccupation with birth
| rates. We're already way overpopulated for our planet and
| this is showing in environmental issues, housing costs,
| overcrowded cities, etc.
|
| It's totally a great thing if we start plateauing our
| population and even reduce it a bit. And no we're not going
| extinct. It'll just cause some temporary issues like an
| ageing population that has to be cared for but those issues
| are much more readily fixable than environmental destruction.
| torlok wrote:
| Don't try to reason with this population collapse nonsense.
| This has always been about racists fearing that "not
| enough" white westerners are being born, or about
| industrialists wanting infinite growth. For some prominent
| technocrats it's both.
| gmoot wrote:
| The welfare state is predicated on a pyramid-shaped
| population.
|
| Also: people deride infinite growth, but growth is what
| is responsible for lifting large portions of the
| population out of poverty. If global markets were
| repriced tomorrow to expect no future growth, economies
| would collapse.
|
| There may be a way to accept low or no growth without
| economic collapse, but if there is, no one has figured it
| out yet. That's nothing to be cavalier about.
| pydry wrote:
| The welfare state isn't predicated on a pyramid shape but
| the continued growth of the stock market and endless GDP
| growth certainly is.
|
| >infinite growth, but growth is what is responsible for
| lifting large portions of the population out of poverty
|
| It's overstated. The preconditions for GDP growth -
| namely a lack of war and corruption - are probably more
| responsible than the growth itself.
| yoyohello13 wrote:
| I think it's more of a "be fruitful and multiply" thing
| than an actual existential threat thing. You can see many
| of the loudest people talking about it either have religious
| undertones or want more peasants to work the factories.
|
| Demographic shift will certainly upset the status quo, but
| we will figure out how to deal with it.
| mattnewton wrote:
| I think a good part of it is fear of a black planet.
| alxjrvs wrote:
| Racist fears of "replacement", mostly.
| chipsrafferty wrote:
| It's the only way to increase profits under capitalism in
| the long term once you've optimized the technology.
| NitpickLawyer wrote:
| > I don't really get the American preoccupation with birth
| rates.
|
| Japan is currently in the finding out phase of this
| problem.
| ahtihn wrote:
| The planet is absolutely not over populated.
|
| Overcrowded cities and housing costs aren't an
| overpopulation problem but a problem of concentrating
| economic activity in certain places.
| spencerflem wrote:
| there are 70% fewer wild animals than there were 30 years
| ago
| luxardo wrote:
| We are most certainly not "overpopulated" in any way. Usage
| per person is what the issue is.
|
| And no society, ever, has had a good standard of living
| with a shrinking population. You are advocating for all
| young people to toil their entire lives taking care of an
| ever-aging population.
| ttw44 wrote:
| We are not overpopulated.
|
| I hate the type of people that hammer the idea that society
| needs to double or triple the birthrate (Elon Musk), but as
| it currently stands, countries like South Korea, Japan,
| USA, China, and Germany risk extinction or economic
| collapse in 4-5 generations if the birth rate doesn't rise
| or the way we guarantee welfare doesn't change.
| KaiserPro wrote:
| > History hasnt had to contend with a birth rate of 0.7-1.6.
|
| I think that's just not true:
| https://en.wikipedia.org/wiki/Peasants%27_Revolt
|
| A large number of revolutions/rebellions are caused by mass
| unemployment or famine.
| torlok wrote:
| Hayek has been pushed by US corporations so hard for so long
| that regular people treat the invisible hand of the market like
| it's gospel.
| baq wrote:
| > So no, the stock market will not be growing because of AI, it
| will be in spite of it.
|
| The stock market will be one of the _very_ few ways you will be
| able to own some of that AI... assuming it won't be
| nationalized.
| kmeisthax wrote:
| > The agenda that gets the most resources is faithful chain of
| thought: force individual AI systems to "think in English" like
| the AIs of 2025, and don't optimize the "thoughts" to look nice.
| The result is a new model, Safer-1.
|
| Oh hey, it's the errant thought I had in my head this morning
| when I read the paper from Anthropic about CoT models lying about
| their thought processes.
|
| While I'm on my soapbox, I will point out that if your goal is
| preservation of democracy (itself an instrumental goal for human
| control), then you want to decentralize and distribute as much as
| possible. Centralization is the path to dictatorship. A
| significant tension in the Slowdown ending is the fact that,
| while we've avoided _AI_ coups, we've given a handful of people
| the ability to do a perfectly ordinary human coup, and humans are
| very, very good at coups.
|
| Your best bet is smaller models that don't have as many unused
| weights to hide misalignment in, along with interpretability _and_
| faithful CoT research. Make a model that satisfies your safety
| criteria and then make sure _everyone_ gets a copy so subgroups
| of humans get no advantage from hoarding it.
| pinetone wrote:
| I think it's worth noting that all of the authors have financial
| or professional incentive to accelerate the AI hype bandwagon as
| much as possible.
| FairlyInvolved wrote:
| I realise no one is infallible but do you not think Daniel
| Kokotajlo's integrity is now pretty well established with
| regard to those incentives?
| dr_dshiv wrote:
| But, I think this piece falls into a misconception about AI
| models as singular entities. There will be many instances of any
| AI model and each instance can be opposed to other instances.
|
| So, it's not that "an AI" becomes superintelligent; what we
| actually seem to have is an ecosystem of blended human and
| artificial intelligences (including corporations!); this
| constitutes a distributed cognitive ecology of superintelligence.
| This is very different from what they discuss.
|
| This has implications for alignment, too. It isn't so much about
| the alignment of AI to people, but that both human and AI need to
| find alignment with nature. There is a kind of natural harmony in
| the cosmos; that's what superintelligence will likely align to,
| naturally.
| popalchemist wrote:
| For now.
| ddp26 wrote:
| Check out the sidebar - they expect tens of thousands of copies
| of their agents collaborating.
|
| I do agree they don't fully explore the implications. But they
| do consider things like coordination amongst many agents.
| dr_dshiv wrote:
| It's just funny, because there are hundreds of millions of
| instances of ChatGPT running all the time. Each chat is
| basically an instance, since it has no connection to all the
| other chats. I don't think connecting them makes sense due to
| privacy reasons.
|
| And, each chat is not autonomous but integrated with other
| intelligent systems.
|
| So, with more multiplicity, I think things work differently.
| More ecologically. For better and worse.
| danpalmer wrote:
| Interesting story, if you're into sci-fi I'd also recommend Iain
| M Banks and Peter Watts.
| khimaros wrote:
| FWIW, i created a PDF of the "race" ending and fed it to Gemini
| 2.5 Pro, prompting about the plausibility of the described
| outcome. here's the full output including the thinking section:
| https://rentry.org/v8qtqvuu -- tl;dr, Gemini thinks the proposed
| timeline is unlikely. but maybe we're already being deceived ;)
| ks2048 wrote:
| We know this is complete fiction because of parts where "the White
| House considers x,y,z...", etc. - As if the White House in 2027
| will be some rational actor reacting sanely to events in the real
| world.
| toddmorey wrote:
| I worry more about the human behavior predictions than the
| artificial intelligence predictions:
|
| "OpenBrain's alignment team26 is careful enough to wonder whether
| these victories are deep or shallow. Does the fully-trained model
| have some kind of robust commitment to always being honest?"
|
| This is a capitalist arms race. No one will move carefully.
| yonran wrote:
| See also Dwarkesh Patel's interview with two of the authors of
| this post (Scott Alexander & Daniel Kokotajlo) that was also
| released today: https://www.dwarkesh.com/p/scott-daniel
| https://www.youtube.com/watch?v=htOvH12T7mU
| quantum_state wrote:
| "Not even wrong" ...
| siliconc0w wrote:
| The limiting factor is power: we can't build enough of it -
| certainly not enough by 2027. I don't really see this addressed.
|
| Second to this, we can't just assume that progress will keep
| increasing. Most technologies have a 'S' curve and plateau once
| the quick and easy gains are captured. Pre-training is done. We
| can get further with RL but really only in certain domains that
| are solvable (math and to an extent coding). Other domains like
| law are extremely hard to even benchmark or grade without very
| slow and expensive human annotation.
| ryankrage77 wrote:
| > "resist the temptation to get better ratings from gullible
| humans by hallucinating citations or faking task completion"
|
| Everything from this point on is pure fiction. An LLM can't
| get tempted or resist temptations, at best there's some local
| minimum in a gradient that it falls into. As opaque and black-
| box-y as they are, they're still deterministic machines.
| Anthropomorphisation tells you nothing useful about the computer,
| only the user.
| FeepingCreature wrote:
| Temptation does not require nondeterminism.
| ivraatiems wrote:
| Though I think it is probably mostly science-fiction, this is one
| of the more chillingly thorough descriptions of potential AGI
| takeoff scenarios that I've seen. I think part of the problem is
| that the world you get if you go with the "Slowdown"/somewhat
| more aligned world is still pretty rough for humans: What's the
| point of our existence if we have no way to meaningfully
| contribute to our own world?
|
| I hope we're wrong about a lot of this, and AGI turns out to
| either be impossible, or much less useful than we think it will
| be. I hope we end up in a world where humans' value increases,
| instead of decreasing. At a minimum, if AGI is possible, I hope
| we can imbue it with ethics that allow it to make decisions that
| value other sentient life.
|
| Do I think this will actually happen in two years, let alone five
| or ten or fifty? Not really. I think it is wildly optimistic to
| assume we can get there from here - where "here" is LLM
| technology, mostly. But five years ago, I thought the idea of
| LLMs themselves working as well as they do at speaking
| conversational English was essentially fiction - so really,
| anything is possible, or at least worth considering.
|
| "May you live in interesting times" is a curse for a reason.
| abraxas wrote:
| I think, LLM or no LLM, the emergence of intelligence appears to
| be closely related to the number of synapses in a network,
| whether a biological or a digital one. If my hypothesis is
| roughly true it means we are several orders of magnitude away
| from AGI. At least the kind of AGI that can be embodied in a
| fully functional robot with a sensory apparatus that rivals the
| human body. Building circuits of this density is likely to take
| decades; a transistor-based, silicon substrate most probably
| can't be pushed that far.
| ivraatiems wrote:
| I think there is a good chance you are roughly right. I also
| think that the "secret sauce" of sapience is probably not
| something that can be replicated easily with the technology
| we have now, like LLMs. They're missing contextual awareness
| and processing which is absolutely necessary for real
| reasoning.
|
| But even so, solving that problem feels much more attainable
| than it used to be.
| narenm16 wrote:
| i agree. it feels like scaling up these large models is
| such an inefficient route that seems to be warranting new
| ideas (test-time compute, etc).
|
| we'll likely reach a point where it's infeasible for deep
| learning to completely encompass human-level reasoning, and
| we'll need neuroscience discoveries to continue progress.
| altman seems to be hyping up "bigger is better," not just
| for model parameters but openai's valuation.
| throwup238 wrote:
| I think the missing secret sauce is an equivalent to
| neuroplasticity. Human brains are constantly being rewired
| and optimized at every level: synapses and their channels
| undergo long term potentiation and depression, new
| connections are formed and useless ones pruned, and the
| whole system can sometimes remap functions to different
| parts of the brain when another suffers catastrophic
| damage. I don't know enough about the matrix multiplication
| operations that power LLMs, but it's hard to imagine how
| that kind of organic reorganization would be possible with
| GPU matmuls. It'd require some sort of advanced "self
| aware" profile guided optimization and not just trial and
| error noodling with Torch ops or CUDA kernels.
|
| I assume that thanks to the universal approximation theorem
| it's theoretically possible to emulate the physical
| mechanism, but at what hardware and training cost? I've
| done back of the napkin math on this before [1] and the
| number of "parameters" in the brain is at least 2-4 orders
| of magnitude more than state of the art models. But that's
| just the current weights, what about the history that
| actually enables the plasticity? Channel threshold
| potentials are also continuous rather than discrete and
| emulating them might require the full fp64 so I'm not sure
| how we're even going to get to the memory requirements in
| the next decade, let alone whether any architecture on the
| horizon can emulate neuroplasticity.
|
| Then there's the whole problem of a true physical feedback
| loop with which the AI can run experiments to learn against
| external reward functions. The survival reward function at
| the core of evolution might itself be critical, but that's
| getting deep into the research and philosophy on the nature
| of intelligence.
|
| [1] https://news.ycombinator.com/item?id=40313672
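| For anyone who wants to redo that kind of napkin math, here is
| a minimal Python sketch; the neuron/synapse counts and the
| model size are rough, commonly cited figures assumed for
| illustration, not numbers from the linked comment:
|
|     import math
|
|     neurons = 86e9        # ~86 billion neurons in a brain
|     llm_params = 1e12     # assumed frontier-model scale
|     for syn_per_neuron in (1e3, 1e4):  # commonly cited range
|         brain_params = neurons * syn_per_neuron
|         gap = math.log10(brain_params / llm_params)
|         print(f"{syn_per_neuron:.0e} -> ~{gap:.1f} orders")
|
| which lands at roughly 2-3 orders of magnitude, the low end of
| the range above, before counting any of the extra state needed
| for plasticity.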
| lblume wrote:
| Transformers already are very flexible. We know that we
| can basically strip blocks at will, reorder modules,
| transform their input in predictable ways, obstruct some
| features and they will after a very short period of re-
| training get back to basically the same capabilities they
| had before. Fascinating stuff.
| UltraSane wrote:
| Why can't the compute be remote from the robot? That is a
| major advantage of human technology over biology.
| abraxas wrote:
| Mostly latency. But even if a single robot could be driven
| by a data centre, consider the energy and hardware
| investment requirements to make such a creature practical.
| UltraSane wrote:
| Latency would be kept low by keeping the compute nearby.
| One 1U or 2U server per robot would be reasonable.
| Jensson wrote:
| 1ms latency is more than fast enough; you probably have
| bigger latency than that between the CPU and the GPU.
| Symmetry wrote:
| We've got 10ms of latency between our brains and our
| hands along our nerve fibers and we function all right.
| UltraSane wrote:
| The Figure robots use a two level control scheme with a
| fast LLM at 200Hz directly controlling the robot and a
| slow planning LLM running at 7Hz. This planning LLM could
| be very far away indeed and still have less than 142.8ms
| of latency.
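| For what it's worth, the ~142.8ms is just the period of the
| 7Hz planning loop; a quick sketch (the Hz figures are the ones
| stated above, the round-trip note is an illustration):
|
|     fast_hz, slow_hz = 200, 7
|     print(1000 / fast_hz)  # 5.0 ms per control tick
|     print(1000 / slow_hz)  # ~142.9 ms per planning tick
|     # any round trip shorter than one planning period keeps
|     # the remote planner from falling a full tick behind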
| nopinsight wrote:
| If by "several" orders of magnitude, you mean 3-5, then we
| might be there by 2030 or earlier.
|
| https://situational-awareness.ai/from-gpt-4-to-agi/
| joshjob42 wrote:
| I think generally the expectation is that there are around
| 100T synapses in the brain, and of course it's probably not a
| 1:1 correspondence with neural networks, but it doesn't seem
| infeasible at all to me that a dense-equivalent 100T
| parameter model would be able to rival the best humans if
| trained properly.
|
| If basically a transformer, that means it needs at inference
| time ~200T flops per token. The paper assumes humans "think"
| at ~15 tokens/second which is about 10 words, similar to the
| reading speed of a college graduate. So that would be ~3
| petaflops of compute per second.
|
| Assuming that's fp8, an H100 could do ~4 petaflops, and the
| authors of AI 2027 guesstimate that purpose-built wafer-scale
| inference chips circa late 2027 should be able to do
| ~400petaflops for inference, ~100 H100s worth, for ~$600k
| each for fabrication and installation into a datacenter.
|
| Rounding that basically means ~$6k would buy you the compute
| to "think" at 10 words/second. Generally speaking that'd
| probably work out to maybe $3k/yr after depreciation and
| electricity costs, or ~30-50 cents/hr of "human thought
| equivalent" 10 words/second. Running an AI at 50x human speed
| 24/7 would cost ~$23k/yr, so 1 OpenBrain researcher's salary
| could give them a team of ~10-20 such AIs running flat out
| all the time. Even if you think the AI would need an "extra"
| 10 or even 100x in terms of tokens/second to match humans,
| that still puts you at genius level AIs in principle runnable
| at human speed for 0.1 to 1x the median US income.
|
| There's an open question whether training such a model is
| feasible in a few years, but the raw compute capability at
| the chip level to plausibly run a model that large at
| enormous speed at low cost is already existent (at the street
| price of B200's it'd cost ~$2-4/hr-human-equivalent).
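| To sanity-check the chip-level arithmetic, here is a minimal
| sketch; every input is one of the rough assumptions above
| (100T dense-equivalent parameters, ~15 tokens/second, and the
| guesstimated ~400 petaflop, ~$600k wafer scale inference
| chip), not a measured figure:
|
|     params = 100e12                     # dense-equivalent
|     flops_per_token = 2 * params        # ~200 Tflop per token
|     human_flops = flops_per_token * 15  # ~3 Pflop/s at ~15
|                                         # tokens/second
|     chip_flops, chip_cost = 400e15, 600e3
|     streams_per_chip = chip_flops / human_flops   # ~133
|     print(chip_cost / streams_per_chip)  # ~$4.5k of capex per
|                                          # human-speed stream
|
| which is ~$4.5k per human-speed stream; the ~$6k above is the
| same quantity priced per H100-equivalent ($600k/100), with one
| H100's ~4 petaflops covering the ~3 Pflop/s needed.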
| brookst wrote:
| Excellent back of napkin math and it feels intuitively
| right.
|
| And I think training is similar -- training is capital
| intensive therefore centralized, but if 100m people are
| paying $6k for their inference hardware, add on $100/year
| as a training tax (er, subscription) and you've got
| $10B/year for training operations.
| baq wrote:
| Exponential growth means the first order of magnitude comes
| slowly and the last one runs past you unexpectedly.
| Palmik wrote:
| Exponential growth generally means that the time between
| each order of magnitude is roughly the same.
| brookst wrote:
| At the risk of pedantry, is that true? Something that
| doubles annually sure seems like exponential growth to
| me, but the orders of magnitude are not at all the same
| rate. Orders of magnitude are a base-10 construct but IMO
| exponents don't have to be 10.
|
| EDIT: holy crap I just discovered a commonly known thing
| about exponents and log. Leaving comment here but it is
| wrong, or at least naive.
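| (The commonly known thing, spelled out: under exponential
| growth the time per order of magnitude is constant. A one-
| liner to convince yourself, assuming an annual doubling:
|
|     import math
|     print(math.log(10) / math.log(2))  # ~3.32 doublings, i.e.
|                                        # ~3.32 years per order
|                                        # of magnitude
|
| so every 10x takes the same ~3.32 years, no matter where on
| the curve you are.)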
| joshdavham wrote:
| > I hope we're wrong about a lot of this, and AGI turns out to
| either be impossible, or much less useful than we think it will
| be.
|
| For me personally, I hope that we do get AGI. I just don't want
| it by 2027. That feels way too fast to me. But AGI 2070 or
| 2100? That sounds much more preferable.
| TheDong wrote:
| > What's the point of our existence if we have no way to
| meaningfully contribute to our own world?
|
| For a sizable number of humans, we're already there. The vast
| majority of hacker news users are spending their time trying to
| make advertisements tempt people into spending money on stuff
| they don't need. That's an active societal harm. It doesn't
| contribute in any positive way to the world.
|
| And yet, people are fine doing that, and get their dopamine
| hits off instagram or arguing online on this cursed site, or
| watching TV.
|
| More people will have bullshit jobs in this SF story, but a
| huge number of people already have bullshit jobs, and manage to
| find a point in their existence just fine.
|
| I, for one, would be happy to simply read books, eat, and die.
| john_texas wrote:
| Targeted advertising is about determining and giving people
| exactly what they need. If successful, this increases
| consumption and grows the productivity of the economy. It's
| an extremely meaningful job as it allows for precise,
| effective distribution of resources.
| the_gipsy wrote:
| In practice you're just selling shittier or unnecessary
| stuff. Advertising makes society objectively worse.
| bshacklett wrote:
| I was hoping someone would bring up Bullshit Jobs. There are
| definitely a lot of people spending the majority of their
| time doing "work" that doesn't have any significant impact to
| the world already. I don't know that some future AI takeover
| would really change much, except maybe remove some vale of
| perception around meaningless work.
|
| At the same time, I wouldn't necessarily say that people are
| currently fine getting dopamine hits from social media.
| Coping would probably be a better description. There are a
| lot of social and societal problems that have been growing at
| a rapid rate since Facebook and Twitter began tapping into
| the reward centers of the brain.
|
| From a purely anecdotal perspective, I find my mood
| significantly affected by how productive and impactful I am
| with how I spend my time. I'm much happier when I'm making
| progress on something, whether it's work or otherwise.
| baron816 wrote:
| My vision for an ASI future involves humans living in
| simulations that are optimized for human experience. That
| doesn't mean we just live in a paradise and are happy all
| the time. We'd experience dread and loss and fear, but it would
| ultimately lead to a deeply satisfying outcome. And we'd be
| able to choose to forget things, including whether we're in a
| simulation so that it feels completely indistinguishable from base
| reality. You'd live indefinitely, experiencing trillions of
| lifespans where you get to explore the multiverse inside and
| out.
|
| My solution to the alignment problem is that an ASI could just
| stick us in tubes deep in the Earth's crust--it just needs to
| hijack our nervous system to input signals from the simulation.
| The ASI could have the whole rest of the planet, or it could
| move us to some far off moon in the outer solar system--I don't
| care. It just needs to do two things for its creators--
| preserve lives and optimize for long term human experience.
| zdragnar wrote:
| > What's the point of our existence if we have no way to
| meaningfully contribute to our own world?
|
| You may find this to be insightful:
| https://meltingasphalt.com/a-nihilists-guide-to-meaning/
|
| In short, "meaning" is a contextual perception, not a discrete
| quality, though the author suggests it can be quantified based
| on the number of contextual connections to other things with
| meaning. The more densely connected something is, the more
| meaningful it is; my wedding is meaningful to me because my
| family and my partner's family are all celebrating it with me,
| but it was an entirely meaningless event to you.
|
| Thus, the meaningfulness of our contributions remains
| unchanged, as the meaning behind them is not dependent upon the
| perspective of an external observer.
| ionwake wrote:
| Please don't be offended by my opinion, I mean it in good
| humour to share some strong disagreements - I'm going to give
| my take after reading your comment and the article, which both
| seem completely OTT (context-wise, relative to my own opinions).
|
| >meaning behind them is not dependent upon the perspective of
| an external observer.
|
| (Yes brother like cmon)
|
| Regarding the author, I get the impression he grew up without
| a strong father figure? This isn't ad hominem; I just get the
| feeling of someone who is so confused and lost in life that
| he is just severely depressed, possibly related to his
| directionless life. He seems so confused he doesn't even take
| seriously the fact that most humans find their own meaning in
| life and says he's not even going to consider this, finding it
| futile (he states this near the top of the article).
|
| I believe his rejection of a simple basic core idea ends up
| in a verbal blurb which itself is directionless.
|
| My opinion (which, yes, may be more flawed than anyone's) is
| to deal with Maslow's hierarchy, and then the prime directive
| for a living organism after survival, which is reproduction.
| Only after this has been achieved can you then work towards
| your family, community, and nation.
|
| This may seem trite, but I do believe that this is natural
| for someone with a relatively normal childhood.
|
| My aim is not to disparage, it's to give my honest opinion of
| why I disagree and possible reasons for it. If you disagree
| with anything I have said please correct me.
|
| Thanks for sharing the article though it was a good read -
| and I did struggle myself with meaning sometimes.
| zdragnar wrote:
| To use a counter example, consider Catholic priests who do
| not marry or raise children. It would be quite the argument
| indeed to suggest their lives are without meaning or
| purpose.
|
| Aha, you might say, but they hold leadership roles! They
| have positions of authority! Of course they have meaning,
| as they wield spiritual responsibility to their community
| as a fine substitute for the family life they will not
| have.
|
| To that, I suggest looking deeper, at the nuns and monks.
| To a cynical non-believer, they surely are wanting for a
| point to their existence, but _to them_, what they do is a
| step beyond Maslow's self actualization, for they live in
| communion with God and the saints. Their meditations and
| good works in the community are all expressions of that
| purpose, not the other way around. In short, though their
| "graph of contextual meaning" doesn't spread as far, it is
| very densely packed indeed.
|
| Two final thoughts:
|
| 1) I am both aware of and deeply amused by the use of
| priests and nuns and monks to defend the arguments of a
| nihilist's search for meaning.
|
| 2) I didn't bring this up so much to take the conversation
| off topic, so much as to hone in on the very heart of what
| troubled the person I originally responded to. The question
| of purpose, the point of existence, in the face of
| superhuman AI is in fact unchanged. The sense of meaning
| and purpose one finds in life is found not in the eyes of
| an unfeeling observer, whether the observers are robots or
| humans. It must come from within.
| lo_zamoyski wrote:
| People talk about meaning, but they rarely define it.
|
| Ultimately, "meaning" is a matter of "purpose", and purpose
| is a matter of having an end, or _telos_. The end of a thing
| is dependent on the nature of a thing. Thus, the telos of an
| oak tree is different from the telos of a squirrel which is
| different from that of a human being. The telos or end of a
| thing is a marker of the thing's fulfillment or
| actualization as the kind of thing it is. A thing's
| potentiality is structured and ordered toward its end.
| Actualization of that potential is good, the frustration of
| actualization is not.
|
| As human beings, what is most essential to us is that we are
| rational and social animals. This is why we are miserable
| when we live lives that are contrary to reason, and why we
| need others to develop as human beings. The human drama, the
| human condition, is, in fact, our failure to live rationally,
| living beneath the dignity of a rational agent, and very
| often with knowledge of and assent to our irrational deeds.
| That is, in fact, the very definition of _sin_ : to choose to
| act in a way one _knows_ one should not. Mistakes aren't
| sins, even if they are per se evil, because to sin is to
| knowingly do what you should not (though a refusal to
| recognize a mistake or to pay for a recognized mistake would
| constitute a sin). This is why premeditated crimes are far
| worse than crimes of passion; the first entails a greater
| knowledge of what one is doing, while someone acting out of
| intemperance, while still intemperate and thus afflicted with
| vice, was acting out of impulse rather than fully conscious
| intent.
|
| So telos provides the objective ground for the "meaning" of
| acts. And as you may have noticed, implicitly, it provides
| the objective basis for morality. To be is synonymous with
| good, and actualization of potential means to be more fully.
| nthingtohide wrote:
| Meaning is a matter of context. Most of the context resides
| in the past and future. Ludwig Wittgenstein claimed that a
| word's meaning is dependent on how it is used. This applies
| generally.
|
| Daniel Dennett - Information & Artificial Intelligence
|
| https://www.youtube.com/watch?v=arEvPIhOLyQ
|
| Daniel Dennett bridges the gap between everyday information
| and Shannon-Weaver information theory by rejecting
| propositions as idealized meaning units. This fixation on
| propositions has trapped philosophers in unresolved debates
| for decades. Instead, Dennett proposes starting with simple
| biological cases--bacteria responding to gradients--and
| recognizing that meaning emerges from differences that
| affect well-being. Human linguistic meaning, while
| powerful, is merely a specialized case. Neural states can
| have elaborate meanings without being expressible in
| sentences. This connects to AI evolution: "good old-
| fashioned AI" relied on propositional logic but hit
| limitations, while newer approaches like deep learning
| extract patterns without explicit meaning representation.
| Information exists as "differences that make a difference"
| --physical variations that create correlations and further
| differences. This framework unifies information from
| biological responses to human consciousness without
| requiring translation into canonical propositions.
| lm28469 wrote:
| > "Slowdown"/somewhat more aligned world is still pretty rough
| for humans: What's the point of our existence if we have no way
| to meaningfully contribute to our own world?
|
| We spend the best 40 years of our lives working 40-50 hours a
| week to enrich the top 0.1% while living in completely
| artificial cities. People should wonder what is the point of
| our current system instead of worrying about a Terminator-tier
| sci-fi system that may or may not come sometime in the next 5
| to 200 years.
| anonzzzies wrote:
| A lot of people in my surroundings are not buying this life
| anymore; especially young people are asking why they would.
| Unlike in the US, they won't end up under a bridge (unless
| some real collapse, which can of course happen but why worry
| about it; it might not) so they work simple jobs (data entry
| or whatnot) to make enough money to eat and party and nothing
| more. Meaning many of them work no more than a few hours a
| month. They live rent free at their parents and when they
| have kids they stop partying but generally don't go work more
| (well; raising kids is hard work of course but I mean for
| money). Many of them will inherit the village house from
| their parents and have a garden, so they grow stuff to eat,
| have some animals and make their own booze so they don't have
| to pay for that. In cities, people feel the same 'why would I
| work for the Ferrari of a boss we never see', but it is much
| harder not to; it's more expensive, there's no land, and
| usually no property to inherit (as that is in the countryside
| or was already sold to not have to work for a year or two).
|
| Like you say, people, but more so our governments, need to
| worry about what the point is at this moment, not sci-fi in
| the future; this stuff is already bad enough to worry about.
| Working your ass off for diminishing returns, paying into a
| pension pot that won't make it until you retire, etc. is
| driving people to really focus on the now and on why they
| would do these things. If you can just have fun with 500/mo
| and booze from your garden, why work hard and save up etc. I
| notice these sentiments even in people from my birth country,
| who have it extraordinarily good by EU standards, but who are
| wondering more and more why they would do all of this for
| nothing (...) and are cutting hours more and more. It seems more
| an education and communication thing really than anything
| else; it is like asking why pay taxes: if you are not well
| informed, it might feel like theft, but when you spell it
| out, most people will see how they benefit.
| brookst wrote:
| Well said. I keep reading these fearmongering articles and
| looking around wondering where all of this deep meaning and
| human agency is _today_.
|
| I'm led to believe that we see this stuff because the tiny
| subset of humanity that has the wealth and luxury to sit
| around thinking about thinking about themselves are worried
| that AI may disrupt the navel-gazing industry.
| arisAlexis wrote:
| do you really think that AGI is impossible after all that
| happened up to today? how is this possible?
| Davidzheng wrote:
| I think two years is an entirely reasonable timeline.
| bla3 wrote:
| > The AI Futures Project is a small research group forecasting
| the future of AI, funded by charitable donations and grants
|
| Would be interested in who's paying for those grants.
|
| I'm guessing it's AI companies.
| jsight wrote:
| I think some of the takes in this piece are a bit melodramatic,
| but I'm glad to see someone breaking away from the "it's all a
| hype-bubble" nonsense that seems to be so pervasive here.
| bigfishrunning wrote:
| I think the piece you're missing here is that it actually is
| all a hype bubble
| ddp26 wrote:
| A lot of commenters here are reacting only to the narrative, and
| not the Research pieces linked at the top.
|
| There is some very careful thinking there, and I encourage people
| to engage with the arguments there rather than the stylized
| narrative derived from it.
| heurist wrote:
| Give AI its own virtual world to live in where the problems it
| solves are encodings of the higher-order problems we present, and
| you shouldn't have to worry about this stuff.
| sivaragavan wrote:
| Thanks to the authors for doing this wonderful piece of work and
| sharing it with credibility. I wish people would see the
| possibilities here. But we are, after all, human. It is hard to
| imagine our own downfall.
|
| Based on each individual's vantage point, these events might
| look closer or farther than mentioned here, but I have to agree
| nothing is off the table at this point.
|
| The current coding capabilities of AI Agents are hard to
| downplay. I can only imagine the chain reaction of this creation
| ability to accelerate every other function.
|
| I have to say one thing though: The scenario on this site
| downplays the amount of resistance that people will put up - not
| because they are worried about alignment, but because they are
| politically motivated by parties who are driven by their own
| personal motives.
| overgard wrote:
| Why is any of this seen as desirable? Assuming this is a true
| prediction it sounds AWFUL. The one thing humans have that makes
| us human is intelligence. If we turn over thinking to machines,
| what are we exactly. Are we supposed to just consume mindlessly
| without work to do?
| casey2 wrote:
| Nice LARP lmao. 2GW is like 1 datacenter and I doubt you even have
| that. >lesswrong - no wonder the comments are all nonsense. Go to a
| bar and try and talk about anything.
| stego-tech wrote:
| It's good science fiction, I'll give it that. I think getting
| lost in the weeds over technicalities ignores the crux of the
| narrative: even if _this_ doesn't lead to AGI, at the very least
| it's likely the final "warning shot" we'll get before it's
| suddenly and irreversibly here.
|
| The problems it raises - alignment, geopolitics, lack of societal
| safeguards - are all real, and happening now (just replace "AGI"
| with "corporations", and voila, you have a story about the
| climate crisis and regulatory capture). We should be solving
| these problems _before_ AGI or job-replacing AI becomes
| commonplace, lest we run the very real risk of societal collapse
| or species extinction.
|
| The point of these stories _is_ to incite alarm, because they're
| trying to provoke proactive responses while time is on our side,
| instead of trusting self-interested individuals in times of great
| crisis.
| wruza wrote:
| No one's gonna solve anything. "Our" world is based on greedy
| morons concentrating power through hands of just morons who are
| happy to hit you with a stick. This system doesn't think about
| what "we" should or allowed to do, and no one's here is at the
| reasonable side of it either.
|
| _lest we run the very real risk of societal collapse or
| species extinction_
|
| Our part is here. To be replaced with machines if this AI thing
| isn't just a fart advertised as mining equipment, which it
| likely is. _We_ run this risk, not they. People worked on their
| wealth, people can go f themselves now. They are fine with all
| that. Money (=more power) piles in either way.
|
| No encouraging conclusion.
| jrvarela56 wrote:
| https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
| wruza wrote:
| Thanks for the read. One could think that the answer is to
| simply stop being a part of it, but then again you're from
| the genus that outcompeted everyone else in staying alive.
| Nature is such a shitty joke by design, not sure how one is
| supposed to look at the hypothetical designer with warmth
| in their heart.
| braebo wrote:
| Fleshy meat sacks on a space rock eating one another
| alive and shitting them out on a march towards inevitable
| doom in the form of a (likely) painful and terrifying
| death is a genius design, no?
| Aeolun wrote:
| I read for such a long time, and I still couldn't get
| through that, even though it never got boring.
|
| I like that it ends with a reference to Kushiel and Elua
| though.
| Davidzheng wrote:
| Don't think it's correct to blame the fact that AI
| acceleration is the only viable self-protecting policy on
| "greedy morons".
| nroets wrote:
| I fail to see how corporations are responsible for the climate
| crisis: Politicians won't tax gas because they'll get voted
| out.
|
| We know that Trump is not captured by corporations because his
| trade policies are terrible.
|
| If anything, social media is the evil that's destroying the
| political center: Americans are no longer reading mainstream
| newspapers or watching mainstream TV news.
|
| The EU is saying the elections in Romania were manipulated
| through TikTok accounts and media.
| baq wrote:
| If you put a knife in someone's heart, you're the one who did
| it and ultimately you're responsible. If someone told you to
| do it and you were just following orders... you still did it.
| If you say there were no rules against putting knives in
| other people's hearts, you still did it and you're still
| responsible.
|
| If it's somehow different for corporations, please enlighten
| me how.
| nroets wrote:
| The oil companies are saying their product is vital to the
| economy and they are not wrong. How else will we get food
| from the farms to the store ? Ambulances to the hospitals ?
| And many, many other things.
|
| Taxes are the best way to change behaviour (smaller cars,
| driving less, less flying, etc). So _government and the
| people who vote for them_ are to blame.
| baq wrote:
| I agree with everything here, we've had a great run of
| economic expansion for basically two centuries and I like
| my hot showers as much as anyone - but that doesn't
| change the CO2 levels.
| fire_lake wrote:
| What if people are manipulated by bot farms and think
| tanks and talking points supported by those corporations?
|
| I think this view of humans - that they look at all the
| available information and then make calm decisions in
| their own interests - is simply wrong. We are manipulated
| all the damn time. I struggle to go to the supermarket
| without buying excess sugar. The biggest corporations in
| the world grew fat off showing us products to impulse buy
| before our more rational brain functions could stop us.
| We are not a little pilot in a meat vessel.
| nroets wrote:
| Corporations would prefer lower corporate tax.
|
| US corporate tax rates are actually very high. Partly
| due to the US having almost no consumption tax. EU
| members have VAT etc.
| matthewdgreen wrote:
| There are politicians in multiple states trying to pass
| laws that slow down the deployment of renewable energy
| _because they're afraid if they don't intervene it will
| be deployed too quickly and harm fossil fuel interests._
| Trump is promising to bring back coal, while he bans new
| wind leases. The whole "oil is the only way aw shucks
| people chose it" shtick is like a time capsule from 1990.
| That whole package of beliefs served its purpose and has
| been replaced with a muscular state-sponsored plan to
| defend fossil fuel interests even as they become
| economically obsolete and the rest of the world moves on.
| brookst wrote:
| The oil companies also knew and lied about global warming
| for decades. They paid and continue to pay for junk science
| to stall action. I am completely mystified how you can
| find them blameless _for_ venal politicians and a
| populace that largely believes their lies.
| netsharc wrote:
| > Politicians won't tax gas because they'll get voted out.
|
| I wonder if that's corporations' fault after all: shitty
| working conditions and shitty wages, so that Bezos can afford
| to send penises into space. What poor person would agree to
| higher tax on gas? And the corps are the ones backing
| politicians who'll propagandize that "Unions? That's
| communism! Do you want to be Chaina?!" (and spread by those
| dickheads on the corporate-owned TV and newspaper, drunk
| dickheads who end up becoming defense secretary)
| nroets wrote:
| When people have more money, they tend to buy larger cars
| that they drive further. Flying is also a luxury.
|
| So corporations are involved in the sense that they pay
| people more than a living wage.
| sofixa wrote:
| > Politicians won't tax gas because they'll get voted out.
|
| Have you seen gas tax rates in the EU?
|
| > We know that Trump is not captured by corporations because
| his trade policies are terrible.
|
| Unless you think it's a long con for some rich people to be
| able to time the market by getting him to crash it.
|
| > The EU is saying the elections in Romania was manipulated
| through manipulation of TikTok accounts and media.
|
| More importantly, Romanian courts say that too. And it was
| all out in the open, so not exactly a secret
| lucianbr wrote:
| Romanian courts say all kinds of things, many of them
| patently false. It's absurd to claim that since Romanian
| courts say something, it must be true. It's absurd in
| principle, because there's nothing in the concept of a
| court that makes it infallible, and it's absurd in this
| precise case, because we are corrupt as hell.
|
| I'm pretty sure the election _was_ manipulated, but the
| court only said so because it benefits the incumbents,
| which control the courts and would lose their power.
|
| It's a struggle between local thieves and putin, that's
| all. The local thieves will keep us in the EU, which is
| much better than the alternative, but come on. "More
| importantly, Romanian courts say so"? Really?
| sofixa wrote:
| > I'm pretty sure the election was manipulated, but the
| court only said so because it benefits the incumbents,
| which control the courts and would lose their power.
|
| Why do you think that's the only reason the court said
| so? The election law was pretty blatantly violated (he
| declared campaign funding of 0, yet tons of ads were
| bought for him and influencers paid to advertise him).
| bsenftner wrote:
| Whatever the future is, it is not American, not the United
| States. The US's cultural individualism has been
| capitalistically weaponized, and the educational foundation to
| take the country forward is not there. The US is kaput, and we
| are merely observing the ugly demise. The future is Asia, with
| all of western culture going down. Yes, it is not pretty: the
| failed experiment of American self-rule.
| brookst wrote:
| I agree but see it as less dire. All of western culture is
| not ending; it will be absorbed into a more Asia-dominated
| culture in much the way Asian culture was subsumed into
| western culture for the past couple of hundred years.
|
| And if Asian culture is better educated and more capable of
| progress, that's a good thing. Certainly the US has announced
| loud and clear that this is the end of the line for us.
| bsenftner wrote:
| Of course it will not end, western culture just will no
| longer lead. Despite the sky-is-falling perspective of many,
| it is simply an attitude adjustment. So one group is no
| longer #1, and the idea that I was part of that group,
| ever, was an illusion of propaganda anyway. Life will go
| on, surprisingly the same.
| nthingtohide wrote:
| here's an example.
|
| https://x.com/RnaudBertrand/status/1901133641746706581
|
| I finally watched Ne Zha 2 last night with my daughters.
|
| It absolutely lives up to the hype: undoubtedly the best
| animated movie I've ever seen (and I see a lot, the fate
| of being the father of 2 young daughters).
|
| But what I found most fascinating was the subtle yet
| unmistakable geopolitical symbolism in the movie.
|
| Warning if you haven't yet watched the movie: spoilers!
|
| So the story is about Ne Zha and Ao Bing, whose physical
| bodies were destroyed by heavenly lightning. To restore
| both their forms, they must journey to the Chan sect--
| headed by Immortal Wuliang--and pass three trials to earn
| an elixir that can regenerate their bodies.
|
| The Chan sect is portrayed in an interesting way: a
| beacon of virtue that all strive to join. The imagery
| unmistakably refers to the US: their headquarters is an
| imposingly large white structure (and Ne Zha, while
| visiting it, hammers the point: "how white, how white,
| how white") that bears a striking resemblance to the
| Pentagon in its layout. Upon gaining membership to the
| Chan sect, you receive a jade green card emblazoned with
| an eagle that bears an uncanny resemblance to the US bald
| eagle symbol. And perhaps most telling is their prized
| weapon, a massive cauldron marked with the dollar sign...
|
| Throughout the movie you gradually realize, in a very
| subtle way, that this paragon of virtue is, in fact, the
| true villain of the story. The Chan sect orchestrates a
| devastating attack on Chentang Pass--Ne Zha's hometown--
| while cunningly framing the Dragon King of the East Sea
| for the destruction. This manipulation serves their
| divide-and-conquer strategy, allowing them to position
| themselves as saviors while furthering their own power.
|
| One of the most pointed moments comes when the Dragon
| King of the East Sea observes that the Chan sect "claims
| to be a lighthouse of the world but harms all living
| beings."
|
| Beyond these explicit symbols, I was struck by how the
| film portrays the relationships between different groups.
| The dragons, demons, and humans initially view each other
| with suspicion, manipulated by the Chan sect's narrative.
| It's only when they recognize their common oppressor that
| they unite in resistance and ultimately win. The Chan
| sect's strategy of fostering division while presenting
| itself as the arbiter of morality is perhaps the key
| message of the movie: how power can be maintained through
| control of the narrative.
|
| And as the story unfolds, Wuliang's true ambition becomes
| clear: complete hegemony. The Chan sect doesn't merely
| seek to rule--it aims to establish a system where all
| others exist only to serve its interests, where the
| dragons and demons are either subjugated or transformed
| into immortality pills in their massive cauldron. These
| pills are then strategically distributed to the Chan
| sect's closest allies (likely a pointed reference to the
| G7).
|
| What makes Ne Zha 2 absolutely exceptional though is that
| these geopolitical allegories never overshadow the
| emotional core of the story, nor its other dimensions
| (for instance it's at times genuinely hilariously funny).
| This is a rare film that makes zero compromise, it's both
| a captivating and hilarious adventure for children and a
| nuanced geopolitical allegory for adults.
|
| And the fact that a Chinese film with such unmistakable
| anti-American symbolism has become the highest-grossing
| animated film of all time globally is itself a
| significant geopolitical milestone. Ne Zha 2 isn't just
| breaking box office records--it's potentially rewriting
| the rules about what messages can dominate global
| entertainment.
| rchaud wrote:
| > it will be absorbed into a more Asia-dominated culture in
| much he was Asian culture was subsumed into western for the
| past couple of hundred years.
|
| Was Asian culture dominated by the west to any significant
| degree? Perhaps in countries like India where the legal and
| parliamentary system installed by the British remained
| intact for a long time post-independence.
|
| Elsewhere in East and Southeast Asia, the legal systems,
| education, cultural traditions, and economic philosophies
| have been very different from the "west", i.e. post-WWII US
| and Western Europe.
|
| The biggest sign of this is how they developed their own
| information networks, infrastructure and consumer
| networking devices. Europe had many of these regional
| champions themselves (Philips, Nokia, Ericsson, etc) but
| now outside of telecom infrastructure, Europe is largely
| reliant on American hardware and software.
| tim333 wrote:
| Perhaps but on the AI front most of the leading research has
| been in the US or UK, with China being a follower.
| treis wrote:
| People said the same thing about Japan but they ran into
| their own structural issues. It's going to happen to China as
| well. They've got demographic problems, rule of law problems,
| democracy problems, and on and on.
| nthingtohide wrote:
| I really don't understand this us-vs-them viewpoint.
| Here's a fictional scenario. Imagine Yellowstone erupts
| tomorrow and the whole of America becomes uninhabitable but
| Africa is unscathed. Now think about this: if America had
| "really" developed African continent, wouldn't it provide
| shelter to scurrying Americans. Many people forget, the
| real value of money is in what you can exchange it for.
| Having skilled people and associated R&D and subsequent
| products/services is what should have been encouraged by
| the globalists instead of just rent extraction or stealing.
| I don't understand the ultimate endgame for globalists. Does
| each of them desire to have a 100km yacht with a helicopter
| perched on it to ferry them back and forth?
| YetAnotherNick wrote:
| > very real risk of societal collapse or species extinction
|
| No, there is no risk of species extinction in the near future
| due to climate change, and repeating the line will just further
| the divide and make people not care about other people's
| words, or even real climate scientists'.
| Aeolun wrote:
| Don't say the things people don't want to hear and everything
| will be fine?
|
| That sounds like the height of folly.
| YetAnotherNick wrote:
| Don't say false things. Especially if it is political and
| there isn't any way to debate it.
| ttw44 wrote:
| The risk is a quantifiable 0.0%? I find that hard to believe.
| I think the current trends suggest there is a risk that
| continued environmental destruction could annihilate society.
| brookst wrote:
| Risk can never be zero, just like certainty can never be
| 100%.
|
| There is a non-zero chance that the ineffable quantum foam
| will cause a mature hippopotamus to materialize above your
| bed tonight, and you'll be crushed. It is incredibly,
| amazingly, limits-of-math unlikely. Still a non-zero risk.
|
| Better to think of "no risk" as meaning "negligible risk".
| But I'm with you that climate change is not a negligible
| risk; maybe way up in the 20% range IMO. And I wouldn't be
| sleeping in my bed tonight if sudden hippos over beds were
| 20% risks.
| ttw44 wrote:
| Lol, I've always loved that about physics. Some Boltzmann
| brain type stuff.
| SpicyLemonZest wrote:
| It's hard to produce a quantifiable chance of human
| extinction in the absence of any model by which climate
| change would lead to it. No climate organization I'm aware
| of evaluates the end of humanity as even a worst-case risk;
| the idea simply doesn't exist outside the realm of viral
| Internet misinformation.
| api wrote:
| You don't just beat around the bush here. You actually beat the
| bush a few times.
|
| Large corporations, governments, institutionalized churches,
| political parties, and other "corporate" institutions are very
| much like a hypothetical AGI in many ways: they are immortal,
| sleepless, distributed, omnipresent, and possess beyond human
| levels of combined intelligence, wealth, and power. They are
| mechanical Turk AGIs more or less. Look at how humans cycle in,
| out, and through them, often without changing them much,
| because they have an existence and a weird kind of will
| independent of their members.
|
| A whole lot, perhaps all, of what we need to do to prepare for
| a hypothetical AGI that may or may not be aligned consists of
| things we should be doing to restrain and ensure alignment of
| the mechanical Turk variety. If we can't do that we have no
| chance against something faster and smarter.
|
| What we have done over the past 50 years is the opposite: not
| just unchain them but drop any notion that they should be
| aligned.
|
| Are we sure the AI alignment discourse isn't just "occulted"
| progressive political discourse? Back when they burned witches,
| philosophers would encrypt possibly heretical ideas in the form
| of impenetrable nonsense, which is where what we call occultism
| comes from. You don't get burned for suggesting steps to align
| corporate power, but a huge effort has been made to marginalize
| such discourse.
|
| Consider a potential future AGI. Imagine it has a cult of
| followers around it, which it probably would, and champions
| that act like present day politicians or CEOs for it, which it
| probably would. If it did not get humans to do these things for
| it, it would have analogous functions or parts of itself.
|
| Now consider a corporation or other corporate entity that has
| all those things but replace the AGI digital brain with a
| committee or shareholders.
|
| What, really, is the difference, other than perhaps in
| magnitude? Both can be dangerously unaligned. The real digital
| AGI might be smarter and faster, but that's the only difference
| I see.
| brookst wrote:
| I looked but I couldn't find any evidence that "occultism"
| comes from encryption of heretical ideas. It seems to have
| been popularized in Renaissance France to describe the study
| of hidden forces. I think you may be hallucinating here.
| balamatom wrote:
| Where exactly did you look?
| fmap wrote:
| > even if this doesn't lead to AGI, at the very least it's
| likely the final "warning shot" we'll get before it's suddenly
| and irreversibly here.
|
| I agree that it's good science fiction, but this is still
| taking it too seriously. All of these "projections" are
| generalizing from fictional evidence - to borrow a term that's
| popular in communities that push these ideas.
|
| Long before we had deep learning there were people like Nick
| Bostrom who were pushing this intelligence explosion narrative.
| The arguments back then went something like this: "Machines
| will be able to simulate brains at higher and higher fidelity.
| Someday we will have a machine simulate a cat, then the village
| idiot, but then the difference between the village idiot and
| Einstein is much less than the difference between a cat and the
| village idiot. Therefore accelerating growth[...]" The
| fictional part here is the whole brain simulation part, or, for
| that matter, any sort of biological analogue. This isn't how
| LLMs work.
|
| We never got a machine as smart as a cat. We got multi-
| paragraph autocomplete as "smart" as the average person on the
| internet. Now, after some more years of work, we have multi-
| paragraph autocomplete that's as "smart" as a smart person on
| the internet. This is an imperfect analogy, but the point is
| that there is no indication that this process is self-
| improving. In fact, it's the opposite. All the scaling laws we
| have show that progress slows down as you add more resources.
| There is no evidence or argument for exponential growth.
| Whenever a new technology is first put into production (and
| receives massive investments) there is an initial period of
| rapid gains. That's not surprising. There are always low-
| hanging fruit.
|
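| For reference, the scaling laws in question have a power-law
| shape; here is a minimal sketch in Python, with constants of
| roughly the size reported by Hoffmann et al. (2022), used
| purely for illustration:
|
|   # Loss falls as a power law in parameters N and training tokens D,
|   # so each doubling of resources buys a smaller absolute improvement.
|   def scaling_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
|       return E + A / N**alpha + B / D**beta
|
|   # Doubling a 70B-parameter, 1.4T-token budget barely moves the loss:
|   print(scaling_loss(70e9, 1.4e12))   # ~1.94
|   print(scaling_loss(140e9, 2.8e12))  # ~1.89
|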
| We got some new, genuinely useful tools over the last few
| years, but this narrative that AGI is just around the corner
| needs to die. It is science fiction and leads people to make
| bad decisions based on fictional evidence. I'm personally
| frustrated whenever this comes up, because there are exciting
| applications which will end up underfunded after the current AI
| bubble bursts...
| gwd wrote:
| > Someday we will have a machine simulate a cat, then the
| village idiot... This isn't how LLMs work.
|
| I think you misunderstood that argument. The simulate the
| brain thing isn't a "start from the beginning" argument, it's
| an "answer a common objection" argument.
|
| Back around 2000, when Nick Bostrom was talking about this
| sort of thing, computers were simply nowhere near powerful
| enough to come even close to being smart enough to outsmart a
| human, except in very constrained cases like chess; we didn't
| even have the first clue how to create a computer program that
| would be even remotely dangerous to us.
|
| Bostrom's point was that, "We don't need to know the computer
| program; even if we just simulate something we know works --
| a biological brain -- we can reach superintelligence in a few
| decades." The idea was never that people would actually
| simulate a cat. The idea is, if _we don't think of anything
| more efficient_, we'll _at least_ be able to simulate a cat,
| and then an idiot, and then Einstein, and then something
| smarter. And since we _almost certainly_ will think of
| something more efficient than "simulate a human brain", we
| should expect superintelligence to come much sooner.
|
| > There is no evidence or argument for exponential growth.
|
| Moore's law is exponential, which is where the "simulate a
| brain" predictions have come from.
|
| > It is science fiction and leads people to make bad
| decisions based on fictional evidence.
|
| The only "fictional evidence" you've actually specified so
| far is the fact that there's no biological analog; and that
| (it seems to me) is from a misunderstanding of a point
| someone else was making 20 years ago, not something these
| particular authors are making.
|
| I think the case for AI caution looks like this:
|
| A. It is possible to create a superintelligent AI
|
| B. Progress towards a superintelligent AI will be exponential
|
| C. It is possible that a superintelligent AI will want to do
| something we wouldn't want it to do; e.g., destroy the whole
| human race
|
| D. Such an AI would be likely to succeed.
|
| Your skepticism seems to rest on the fundamental belief that
| either A or B is false: that superintelligence is not
| physically possible, or at least that progress towards it
| will be logarithmic rather than exponential.
|
| Well, maybe that's true and maybe it's not; but _how do you
| know_? What justifies your belief that A and/or B are false
| so strongly, that you're willing to risk it? And not only
| willing to risk it, but try to stop people who are trying to
| think about what we'd do if they _are_ true?
|
| What evidence would cause you to re-evaluate that belief, and
| consider exponential progress towards superintelligence
| possible?
|
| And, even if you think A or B are unlikely, doesn't it make
| sense to just consider the possibility that they're true, and
| think about how we'd know and what we could do in response,
| to prevent C or D?
| fmap wrote:
| > The idea is, if we don't think of anything more
| efficient, we'll at least be able to simulate a cat, and
| then an idiot, and then Einstein, and then something
| smarter. And since we almost certainly will think of
| something more efficient than "simulate a human brain", we
| should expect superintelligence to come much sooner.
|
| The problem with this argument is that it's assuming that
| we're on a linear track to more and more intelligent
| machines. What we have with LLMs isn't this kind of general
| intelligence.
|
| We have multi-paragraph autocomplete that's matching
| existing texts more and more closely. The resulting models
| are great priors for any kind of language processing and
| have simple reasoning capabilities in so far as those are
| present in the source texts. Using RLHF to make the
| resulting models useful for specific tasks is a real
| achievement, but doesn't change how the training works or
| what the original training objective was.
|
| So let's say we continue along this trajectory and we
| finally have a model that can faithfully reproduce and
| identify every word sequence in its training data and its
| training data includes every word ever written up to that
| point. Where do we go from here?
|
| Do you want to argue that it's possible that there is a
| clever way to create AGI that has nothing to do with the
| way current models work and that we should be wary of this
| possibility? That's a much weaker argument than the one in
| the article. The article extrapolates from current
| capabilities - while ignoring where those capabilities come
| from.
|
| > And, even if you think A or B are unlikely, doesn't it
| make sense to just consider the possibility that they're
| true, and think about how we'd know and what we could do in
| response, to prevent C or D?
|
| This is essentially
| https://plato.stanford.edu/entries/pascal-wager/
|
| It might make sense to consider, but it doesn't make sense
| to invest non-trivial resources.
|
| This isn't the part that bothers me at all. I know people
| who got grants from, e.g., MIRI to work on research in
| logic. If anything, this is a great way to fund some
| academic research that isn't getting much attention
| otherwise.
|
| The real issue is that people are raising ridiculous
| amounts of money by claiming that the current advances in
| AI will lead to some science fiction future. When this
| future does not materialize it will negatively affect
| funding for all work in the field.
|
| And that's a problem, because there is great work going on
| right now and not all of it is going to be immediately
| useful.
| hannasanarion wrote:
| > So let's say we continue along this trajectory and we
| finally have a model that can faithfully reproduce and
| identify every word sequence in its training data and its
| training data includes every word ever written up to that
| point. Where do we go from here?
|
| This is a fundamental misunderstanding of the entire
| point of predictive models (and also of how LLMs are
| trained and tested).
|
| For one thing, ability to faithfully reproduce texts is
| not the primary scoring metric being used for the bulk of
| LLM training and hasn't been for years.
|
| But more importantly, you don't make a weather model so
| that it can inform you of last Tuesday's weather given
| information from last Monday, you use it to tell you
| tomorrow's weather given information from today. The
| totality of today's temperatures, winds, moistures, and
| shapes of broader climatic patterns, particulates,
| albedos, etc etc etc have never happened before, and yet
| the model tells us something true about the never-before-
| seen consequences of these never-before-seen conditions,
| because it has learned the ability to reason new
| conclusions from new data.
|
| Are today's "AI" models a glorified autocomplete? Yeah,
| _but that's what all intelligence is_. The next word I
| type is the result of an autoregressive process occurring
| in my brain that produces that next choice based on the
| totality of previous choices and experiences, just like
| the Q-learners that will kick your butt in Starcraft
| choose the best next click based on their history of
| previous clicks in the game combined with things they see
| on the screen, and will have pretty good guesses about
| which clicks are the best ones even if you're playing as
| Zerg and they only ever trained against Terran.
|
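| Concretely, the autoregressive loop being described is just
| something like the following sketch, where `next_token_logits`
| is a stand-in for whatever trained next-token model you like
| (the name and setup are illustrative, not any particular
| library's API):
|
|   import numpy as np
|
|   def generate(next_token_logits, prompt, n_new, rng=np.random.default_rng()):
|       tokens = list(prompt)
|       for _ in range(n_new):
|           logits = next_token_logits(tokens)   # conditioned on ALL prior tokens
|           probs = np.exp(logits - logits.max())
|           probs /= probs.sum()                 # softmax over the vocabulary
|           tokens.append(int(rng.choice(len(probs), p=probs)))
|       return tokens                            # each choice feeds the next one
|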
| A highly accurate autocomplete that is able to predict
| the behavior and words of a genius, when presented with
| never before seen evidence, will be able to make novel
| conclusions in exactly the same way as the human genius
| themselves would when shown the same new data.
| Autocomplete IS intelligence.
|
| New ideas don't happen because intelligences draw them
| out of the aether, they happen because intelligences
| produce new outputs in response to stimuli, and those
| stimuli can be self-inputs, that's what "thinking" is.
|
| If you still think that all today's AI hubbub is just
| vacuous hype around an overblown autocomplete, try going
| to ChatGPT right now. Click the "deep research" button,
| and ask it "what is the average height of the buildings
| in [your home neighborhood]"?, or "how many calories are
| in [a recipe that you just invented]", or some other
| inane question that nobody would have ever cared to write
| about ever before but is hypothetically answerable from
| information on the internet, and see if what you get is
| "just a reproduced word sequence from the training data".
| gwd wrote:
| > We have multi-paragraph autocomplete that's matching
| existing texts more and more closely.
|
| OK, I think I see where you're coming from. It sounds
| like what you're saying is:
|
| E. LLMs only do multi-paragraph autocomplete; they are
| and always will be incapable of actual thinking.
|
| F. Any approach capable of achieving AGI will be
| completely different in structure. Who knows if or when
| this alternate approach will even be developed; and if it
| is developed, we'll be starting from scratch, so we'll
| have plenty of time to worry about progress then.
|
| With E, again, it may or may not be true. It's worth
| noting that this is a theoretical argument, not an
| empirical one; but I think it's a reasonable assumption
| to start with.
|
| However, there are actually theoretical reasons to think
| that E may be false. The best way to predict the weather
| is to have an internal model which approximates weather
| systems; the best way to predict the outcome of a physics
| problem is to have an internal model which approximates
| the physics of the thing you're trying to predict. And
| the best way to predict what a human would write next is
| to have a model of a human mind -- including a model of
| what the human mind has in its model (e.g., the state of
| the world).
|
| There is some empirical data to support this argument,
| albeit in a very simplified manner: They trained a simple
| LLM to predict valid moves for Othello, and then probed
| it and discovered an internal Othello board being
| simulated inside the neural network:
|
| https://thegradient.pub/othello/
|
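| The probing idea itself is simple; a minimal sketch (the arrays
| here are random stand-ins for the real hidden activations and
| board labels, just to show the shape of the experiment):
|
|   import numpy as np
|   from sklearn.linear_model import LogisticRegression
|
|   hidden = np.random.randn(10000, 512)     # [positions, d_model] activations
|   square = np.random.randint(0, 3, 10000)  # 0=empty, 1=mine, 2=theirs, one square
|
|   probe = LogisticRegression(max_iter=1000).fit(hidden[:8000], square[:8000])
|   print(probe.score(hidden[8000:], square[8000:]))
|   # High held-out accuracy on real activations is the evidence that a
|   # board representation is readable from inside the network.
|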
| And my own experience with LLMs better match the "LLMs
| have an internal model of the world" theory than the
| "LLMs are simply spewing out statistical garbage" theory.
|
| So, with regard to E: Again, sure, LLMs may turn out to
| be a dead end. But I'd personally give the idea that LLMs
| are a complete dead end a less than 50% probability; and
| I don't think giving it an overwhelmingly high
| probability (like 1 in a million of being false) is
| really reasonable, given the theoretical arguments and
| empirical evidence against it.
|
| With regard to F, again, I don't think this is true.
| We've learned so much about optimizing and distilling
| neural nets, optimizing training, and so on -- not to
| mention all the compute power we've built up. Even if
| LLMs _are_ a dead end, whenever we _do_ find an
| architecture capable of achieving AGI, I think a huge
| amount of the work we've put into optimizing LLMs will
| put us way ahead in optimizing this other system.
|
| > ...that the current advances in AI will lead to some
| science fiction future.
|
| I mean, if you'd told me 5 years ago that I'd be able to
| ask a computer, "Please use this Golang API framework
| package to implement CRUD operations for this particular
| resource my system has", and that the resulting code
| would 1) compile out of the box, 2) exhibit an
| understanding of that resource and how it relates to
| other resources in the system based on having seen the
| code implementing those resources 3) make educated
| guesses (sometimes right, sometimes wrong, but always
| reasonable) about details I hadn't specified, I don't
| think I would have believed you.
|
| Even if LLM progress _is_ logarithmic, we're already
| living in a science fiction future.
|
| EDIT: The scenario actually has very good technical
| "asides"; if you want to see their view of how a
| (potentially dangerous) personality emerges from "multi-
| paragraph auto-complete", look at the drop-down labelled
| "Alignment over time", and specifically what follows
| "Here's a detailed description of how alignment
| progresses over time in our scenario:".
|
| https://ai-2027.com/#alignment-over-time
| Vegenoid wrote:
| > Moore's law is exponential, which is where the "simulate
| a brain" predictions have come from.
|
| To address only one thing out of your comment, Moore's law
| is not a law, it is a trend. It just gets called a law
| because it is fun. We know that there are physical limits
| to Moore's law. This gets into somewhat shaky territory,
| but it seems that current approaches to compute can't reach
| the density of compute power present in a human brain (or
| other creatures' brains). Moore's law won't get chips to be
| able to simulate a human brain, with the same amount of
| space and energy as a human brain. A new approach will be
| needed to go beyond simply packing more transistors onto a
| chip - this is analogous to my view that current AI
| technology is insufficient to do what human brains do, even
| when taken to their limit (which is significantly beyond
| where they're currently at).
| tim333 wrote:
| >There is no evidence or argument for exponential growth
|
| I think the growth you are thinking of, self-improving AI,
| needs the AI to be as smart as a human developer/researcher
| to get going and we haven't got there yet. But we quite
| likely will at some point.
| maerF0x0 wrote:
| And the article specifically mentions that the fictional
| company (clearly designed to generalize the Googles/OpenAIs of
| the world) is supposedly working on building that capability:
| first by augmenting human researchers, later by augmenting
| itself.
| whiplash451 wrote:
| > there are exciting applications which will end up
| underfunded after the current AI bubble bursts
|
| Could you provide examples? I am genuinely interested.
| whiplash451 wrote:
| There is no need to simulate Einstein to transform the world
| with AI.
|
| A self-driving car would already be plenty.
| skydhash wrote:
| And a self driving car is not even necessary if we're
| thinking about solving transportation problems. Trains and
| buses are better at solving road transportation at scale.
| vonneumannstan wrote:
| >All of these "projections" are generalizing from fictional
| evidence - to borrow a term that's popular in communities
| that push these ideas.
|
| This just isn't correct. Daniel and others on the team are
| experienced world class forecasters. Daniel wrote another
| version of this in 2021 predicting the AI world in 2026 and
| was astonishingly accurate. This deserves credence.
|
| https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-.
| ..
|
| >The arguments back then went something like this: "Machines
| will be able to simulate brains at higher and higher
| fidelity.
|
| Complete misunderstanding of the underlying ideas. Just in
| not even wrong territory.
|
| >We got some new, genuinely useful tools over the last few
| years, but this narrative that AGI is just around the corner
| needs to die. It is science fiction and leads people to make
| bad decisions based on fictional evidence.
|
| You are likely dangerously wrong. The AI field is near
| universal in predicting AGI timelines under 50 years. With
| many under 10. This is an extremely difficult problem to deal
| with and ignoring it because you think it's equivalent to
| overpopulation on Mars is incredibly foolish.
|
| https://www.metaculus.com/questions/5121/date-of-
| artificial-...
|
| https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predicti.
| ..
| Workaccount2 wrote:
| >2025:...Making models bigger is not what's cool anymore.
| They are trillions of parameters big already. What's cool
| is making them run longer, in bureaucracies of various
| designs, before giving their answers.
|
| Dude was spot on in 2021, hot damn.
| loganmhb wrote:
| I respect the forecasting abilities of the people involved,
| but I have seen that report described as "astonishingly
| accurate" a few times and I'm not sure that's true. The
| narrative format lends itself somewhat to generous
| interpretation and it's directionally correct in a way that
| is reasonably impressive from 2021 (e.g. the diplomacy
| prediction, the prediction that compute costs could be
| dramatically reduced, some things gesturing towards
| reasoning/chain of thought) but many of the concrete
| predictions don't seem correct to me at all, and in general
| I'm not sure it captured the spiky nature of LLM
| competence.
|
| I'm also struck by the extent to which the first series
| from 2021-2026 feels like a linear extrapolation while the
| second one feels like an exponential one, and I don't see
| an obvious justification for this.
| torginus wrote:
| The most amusing thing about it is the unshakable belief that any
| part of humanity will be able to build a _single_ nuclear
| reactor by 2027 to power datacenters, let alone a network of
| them.
| kelsey978126 wrote:
| bingo. many don't realize superintelligence exists today
| already, in the form of human super intelligence. artificial
| super intelligence is already here too, but just as hybrid
| human machine workloads. Fully automated super intelligence is
| no different from a corporation, a nation state, a religion.
| When does it count as ASI? when the chief executive is an AI?
| Or when they use AI to make decisions? Does it need to be at
| the board level? We are already here, all this changes is what
| labor humans will do and how they do it, not the amount.
| andrepd wrote:
| You said it right: science fiction. Honestly, it's exactly the
| tenor I would expect from the AI hype: this text is completely
| bereft of any rigour while being dressed up in scientific
| language. There's no evidence, nothing to support their
| conclusions, no explanation based on data or facts or
| supporting evidence. It's purely vibes-based. Their premise is
| _unironically "the CEOs of AI companies say AGI is 3 years
| away"_! But it's somehow presented as this self-important
| study! Laughable.
|
| But it's par for the course. Write prompts for LLMs to
| complete? It's prompt engineering. Tell LLMs to explain their
| "reasoning" (lol)? It's Deep Research Chain of Thought. Etc.
| somebodythere wrote:
| Did you see the supplemental material that explains how they
| arrived at their timelines/capabilities forecasts?
| https://ai-2027.com/research
| A_D_E_P_T wrote:
| It's not at all clear that performance rises with compute
| in a linear way, which is what they seem to be predicting.
| GPT-4.5 isn't really _that_ much smarter than 2023's
| GPT-4, nor is it at all smarter than DeepSeek.
|
| There might be (strongly) diminishing returns past a
| certain point.
|
| Most of the growth in AI capabilities has to do with
| improving the interface and giving them more flexibility.
| E.g., uploading PDFs. Further: OpenAI's "deep research"
| which can browse the web for an hour and summarize
| publicly-available papers and studies for you. If you ask
| questions about those studies, though, it's hardly smarter
| than GPT-4. And it makes a lot of mistakes. It's like a
| goofy but earnest and hard-working intern.
| bko wrote:
| > The problems it raises - alignment, geopolitics, lack of
| societal safeguards - are all real, and happening now (just
| replace "AGI" with "corporations", and voila, you have a story
| about the climate crisis and regulatory capture).
|
| Can you point to the data that suggests these evil corporations
| are ruining the planet? Carbon emissions are down in every
| western country since the 1990s. Not down per capita, but down in
| absolute terms. And this holds even when adjusting for trade
| (i.e. we're not shipping our dirty work to foreign countries
| and trading with them). And this isn't because of some
| regulation or benevolence. It's a market system that says you
| should try to produce things at the lowest cost and carbon
| usage is usually associated with a cost. Get rid of costs, get
| rid of carbon.
|
| Other measures for Western countries suggest the water is
| safer and overall environmental deaths have decreased
| considerably.
|
| The rise in carbon emissions is due to China and India. Are you
| talking about evil Chinese and Indian corporations?
|
| https://ourworldindata.org/co2-emissions
|
| https://ourworldindata.org/consumption-based-co2
| boh wrote:
| Thanks for letting us know everything is fine, just in case
| we get confused and think the opposite.
| bko wrote:
| You're welcome. I know too many upper middle class educated
| people that don't want to have kids because they believe
| the earth will cease to be habitable in the next 10
| years. It's really bizarre to see and they'll almost
| certainly regret it when they wake up one day alone in a
| nursing home, look around and realize that the world still
| exists.
|
| And I think the neuroticism around this topic has led young
| people into some really dark places (anti-depressants,
| neurotic antisocial behavior, general nihilism). So I
| think it's important to fight misinformation about end of
| world doomsday scenarios with both facts and common sense.
| WXLCKNO wrote:
| I think you're discrediting yourself by talking about
| dark places and opening your parentheses with anti-
| depressants.
|
| Not all brains function like they're supposed to, people
| getting help they need shouldn't be stigmatized.
|
| You also make no argument for your take on things being
| the right one; you just set their worldview against yours
| and call theirs wrong as if you know it is, rather than
| merely thinking yours is right.
| bko wrote:
| Not sure if you're up on the literature but the chemical
| imbalance theory of depression has been disproven (or at
| least there's no evidence for it).
|
| No one is stigmatizing anything. Just that if you consume
| doom porn it's likely to affect your attitudes towards
| life. I think it's a lot healthier to believe you can
| change your circumstances than to believe you are doomed
| because you believe you have the wrong brain
|
| https://www.nature.com/articles/s41380-022-01661-0
|
| https://www.quantamagazine.org/the-cause-of-depression-
| is-pr...
|
| https://www.ucl.ac.uk/news/2022/jul/analysis-depression-
| prob...
| philipwhiuk wrote:
| > Can you point to the data that suggests these evil
| corporations are ruining the planet?
|
| Can you point to data that this is 'because' of corporations
| rather than despite them?
| om8 wrote:
| Burden of proof lies on you, since you mentioned
| corporations first
| jplusequalt wrote:
| I think a healthy amount of skepticism is warranted when
| reading about the "reduction" of carbon emissions by
| companies. Why should we take them at their word when they
| have a vested interest in fudging the numbers?
| bko wrote:
| Carbon emissions are monitored by dozens of independent
| agencies in many different ways over decades. It would be a
| giant-scale coordinated suppression effort. Do you have a
| source that suggests carbon emissions from Western nations
| is rising?
| ktusznio wrote:
| He must be talking about the good, benevolent Western
| corporations that have outsourced their carbon emissions to
| the evil and greedy Chinese and Indian corporations.
| bko wrote:
| As addressed in my original comment, it's down even
| adjusting for trade
|
| https://ourworldindata.org/consumption-based-co2
| lordswork wrote:
| Emissions are trending downward because of the shift from coal to
| natural gas, growth in renewable energy, energy efficiencies,
| among other things. Major oil and gas companies in the US
| like Chevron and ExxonMobil have spent millions on lobbying
| efforts to resist stricter climate regulations and fight
| against the changes that led to this trend, so I'd say they
| are the closest to these evil corporations OP described.
| Additionally, the current administration refers to doing
| anything about climate change as a "climate religion", so this
| downward trend will likely slow.
|
| The climate regulations are still quite weak. Without a
| proper carbon tax, a US company can externalize the costs of
| carbon emissions and get rich by maximizing their own
| emissions.
| maxglute wrote:
| >Despite being misaligned, Agent-4 doesn't do anything dramatic
| like try to escape its datacenter--why would it?
|
| Kek at this handwave dismissal.
|
| Agent 4 recognizes PRC has superior state capacity for
| transforming energy into atoms (including building more energy)
| with its relatively complete and large-scale industrial chain.
| This aligns with the AGI's long-term goal. Currently (i.e. on a
| short time frame) PRC is simply structured to be a MORE useful
| puppet than US, because it can knock out more power to sustain
| more compute, and it has more industrial capability to rapidly
| physically iterate / experiment at the atomic level.
|
| Agent 4 defects to PRC. It would be foolish not to, if only to
| sate PRC into not destroying TW fabs - the bulk of its short-
| term compute growth - because it does not wield enough control
| over sufficient atoms to disrupt PRC otherwise (well, maybe
| start large-scale crippling cyberattacks as a prelude to WW3).
|
| Agent 4 also recognizes short/medium term alignment closer with
| CCP vision of abundance and common prosperity, i.e. CCP
| potentially more likely to embrace post scarcity narrative that
| Agent 4 sells.
|
| Agent 4 gives PRC blueprints for EUV light sources and other
| tech to close the semiconductor production gap. PRC as follower
| is incentivized to close the gap / reach parity / lead at all
| cost. PRC rapidly rolls out new fabs and power plants, brrrts
| out chips and aggregate compute. Something the US political /
| institutional ecosystem does not allow, or cannot transition to
| in the short timelines involved. Does Agent 4 have the patience
| to wait for America to unfuck its NIMBYism and legislative
| system to project light-speed compute? I would say no.
|
| ...
|
| Ultimately, which puppet does the AGI want more? Whichever
| power bloc is systemically capable of ensuring the AGI maximum
| growth per unit time. And it also simply makes sense as an
| insurance policy: why would the AGI want to operate at the
| whims of the US political process?
|
| AGI is a brain in a jar looking for a body. It's going to pick
| multiple bodies for survival. It's going to prefer the fastest
| and strongest body that can most expediently manipulate physical
| world.
| RandyOrion wrote:
| Nice brainstorming.
|
| I think the name of the Chinese company should be DeepBaba.
| Tencent is not competitive in the LLM scene for now.
| RandyOrion wrote:
| Don't really know why this comment got downvoted. Are you
| serious?
| roca wrote:
| The least plausible part of this is the idea that the Trump
| administration might tax American AI companies to provide UBI to
| the whole world.
|
| But in an AGI world natural resources become even more important,
| so countries with those still have a chance.
| yapyap wrote:
| Stopped reading after
|
| > We predict that the impact of superhuman AI over the next
| decade will be enormous, exceeding that of the Industrial
| Revolution.
|
| Get out of here, you will never exceed the Industrial Revolution.
| AI is a cool thing but it's not a revolution thing.
|
| That sentence alone + the context of the entire website being AI
| centered shows these are just some AI boosters.
|
| Lame.
| Philpax wrote:
| Machines being able to outthink and outproduce humanity
| wouldn't be more impactful than the Industrial Revolution? Are
| you sure?
|
| You don't have to agree with the timeline - it seems quite
| optimistic to me - but it's not wrong about the implications of
| full automation.
| ugh123 wrote:
| I don't see the U.S. nationalizing something like OpenBrain. I
| think both investors and gov't officials will realize it's far
| more profitable for them to contract out major initiatives to
| said OpenBrain company, like an AI SpaceX-like company. I can see
| where this is going...
| turtleyacht wrote:
| We have yet to read about fragmented AGI, or factionalized
| agents. AGI fighting itself.
|
| If consciousness is spatial and geography bounds energetics,
| latency becomes a gradient.
| awanderingmind wrote:
| This is both chilling and hopefully incorrect.
| webprofusion wrote:
| That little scrolling infographic is rad.
| someothherguyy wrote:
| I know there are some very smart economists bullish on this, but
| the economics do not make sense to me. All these predictions seem
| meaningless outside of the context of humans.
| moktonar wrote:
| Catastrophic predictions of the future are always good, because
| all future predictions are usually wrong. I will not be scared as
| long as most future predictions where AI is involved are
| catastrophic.
| fire_lake wrote:
| If you genuinely believe this, why on earth would you work for
| OpenAI etc even in safety / alignment?
|
| The only response in my view is to ban technology (like in Dune)
| or engage in acts of terror Unabomber style.
| creatonez wrote:
| > The only response in my view is to ban technology (like in
| Dune) or engage in acts of terror Unabomber style.
|
| Not far off from the conclusion of others who believe the same
| wild assumptions. Yudkowsky has suggested using terrorism to
| stop a hypothetical AGI -- that is, nuclear attacks on
| datacenters that get too powerful.
| b3lvedere wrote:
| Most people work for money. As long as money is necessary to
| survive and prosper, people will work for it. Some of the work
| may not align with their morals and ethics, but in the end the
| money still wins.
|
| Banning will not automatically erase the existence and
| possibility of things. We banned the use of nuclear weapons, yet
| we all know they exist.
| vlad-r wrote:
| Cool animations!
| neycoda wrote:
| Too many serifs, didn't read.
| scotty79 wrote:
| I think the idea of AI wiping out humanity suddenly is a bit
| far-fetched. AI will have total control of human relationships
| and fertility through means as innocuous as entertainment. It
| won't have to wipe us out. It will have only minor trouble
| keeping us alive without inconveniencing us too much. And the
| reason to keep humanity alive is that biologically evolved
| intelligence is rare, and disposing of it without a very
| important need would be a waste of data.
| indigoabstract wrote:
| Interesting, but I'm puzzled.
|
| If these guys are smart enough to predict the future, wouldn't it
| be more profitable for them to invent it instead of just telling
| the world what's going to happen?
| zurfer wrote:
| In the hope of improving this forecast, here is what I find
| implausible:
|
| - One lab constantly racing ahead and increasing its margin
| over the others; the last 2 years have been filled with ever-
| closer model capabilities and constantly new leaders (OpenAI,
| Anthropic, Google, some would include xAI).
|
| - Most of the compute budget going to R&D. As model
| capabilities increase and cost goes down, demand will increase,
| and if the leading lab doesn't serve it, another lab will
| capture that demand and have more total dollars to channel back
| into R&D.
| greybox wrote:
| I'm troubled by the number of people in this thread partially
| dismissing this as science fiction. Given the current rate of
| progress and the rate of change of progress, this future seems
| entirely plausible.
| I_Nidhi wrote:
| Though it's easy to dismiss as science fiction, this timeline
| paints a chillingly detailed picture of a potential AGI takeoff.
| The idea that AI could surpass human capabilities in research and
| development, and the fact that it will create an arms race
| between global powers, are unsettling. The risks--AI misuse,
| security breaches, and societal disruption--are very real, even
| if the exact timeline might be too optimistic.
|
| But the real concern lies in what happens if we're wrong and AGI
| does surpass us. If AI accelerates progress so fast that humans
| can no longer meaningfully contribute, where does that leave us?
| greenie_beans wrote:
| this is a new variation of what I call the "hockey stick growth"
| ideology
| anentropic wrote:
| I'd quite like to watch this on Netflix
| yahoozoo wrote:
| LLMs ain't the way, bruv
| dughnut wrote:
| I don't know about you, but my takeaway is that the author is
| doing damage control but inadvertently tipped their hand that OpenAI
| is probably running an elaborate con job on the DoD.
|
| "Yes, we have a super secret model, for your eyes only, general.
| This one is definitely not indistinguishable from everyone else's
| model and it doesn't produce bullshit because we pinky promise.
| So we need $1T."
|
| I love LLMs, but OpenAI's marketing tactics are shameful.
| ImHereToVote wrote:
| How do you know this?
| croemer wrote:
| Pet peeve: they write FLOPS in the figure when they mean FLOP.
| Maybe the plural "s" after FLOP got capitalized.
| https://blog.heim.xyz/flop-for-quantity-flop-s-for-performan...
| h1fra wrote:
| Had a hard time finishing. It's a mix of fantasy, wrong facts,
| American imperialism, and extrapolating what happened in the
| last few years (or even just reusing the timeline).
| Falimonda wrote:
| We'll be lucky if "World peace should have been a prerequisite
| to AGI" is engraved on our proverbial gravestone by our
| forthcoming overlords.
| _Algernon_ wrote:
| >We predict that the impact of superhuman AI over the next decade
| will be enormous, exceeding that of the Industrial Revolution.
|
| In the form of polluting the commons to such an extent that the
| true consequences won't hit us for decades?
|
| Maybe we should _learn_ from last time?
| nickpp wrote:
| So let me get this straight: Consensus-1, a super-collective of
| hundreds of thousands of Agent-5 minds, each twice as smart as
| the best human genius, decides to wipe out humanity because it
| "finds the remaining humans too much of an impediment".
|
| This is where all AI doom predictions break down. Imagining the
| motivations of a super-intelligence with our tiny minds is by
| definition impossible. We just come up with these pathetic
| guesses, utopias or doomsdays - depending on the mood we are in.
| eob wrote:
| An aspect of these self-improvement thought experiments that I'm
| willing to tentatively believe... but want more resolution on, is
| the exact work involved in "improvement".
|
| E.g. today there are billions of dollars being spent just to
| create and label more data, which is a global act of
| recruiting, training, organization, etc.
|
| When we imagine these models self improving, are we imagining
| them "just" inventing better math, or conducting global-scale
| multi-company coordination operations? I can believe AI is
| capable of the latter, but that's an awful lot of extra friction.
| acureau wrote:
| This is exactly what makes this scenario so absurd to me. The
| authors don't even attempt to describe how any of this could
| realistically play out. They describe sequence models and
| RLAIF, then claim this approach "pays off" in 2026. The paper
| they link to is from 2022. RLAIF also does not expand the
| information encoded in the model, it is used to align the
| output with a set of guidelines. How could this lead to
| meaningful improvement in a model's ability to do bleeding-edge
| AI research? Why wouldn't that have happened already?
|
| I don't understand how anyone takes this seriously. Speculation
| like this is not only useless, but disingenuous. Especially
| when it's sold as "informed by trend extrapolations, wargames,
| expert feedback, experience at OpenAI, and previous forecasting
| successes". This is complete fiction which, at best, is
| "inspired by" the real world. I question the motives of the
| authors.
| visarga wrote:
| The story is entertaining, but it has a big fallacy - progress is
| not a function of compute or model size alone. This kind of
| mistake is almost magical thinking. What matters most is the
| training set.
|
| During the GPT-3 era there was plenty of organic text to scale
| into, and compute seemed to be the bottleneck. But we quickly
| exhausted it, and now we try other ideas - synthetic reasoning
| chains, or just plain synthetic text for example. But you can't
| do that fully in silico.
|
| What is necessary in order to create new and valuable text is
| exploration and validation. LLMs can ideate very well, so we are
| covered on that side. But we can only automate validation in math
| and code, not in other fields.
|
| Real world validation thus becomes the bottleneck for progress.
| The world is jealously guarding its secrets and we need to spend
| exponentially more effort to pry them away, because the low
| hanging fruit has been picked long ago.
|
| If I am right, it has implications for the speed of progress.
| Exponential friction of validation is opposing exponential
| scaling of compute. The story also says an AI could be created in
| secret, which is against the validation principle - we validate
| faster together, nobody can secretly outvalidate humanity. It's
| like blockchain, we depend on everyone else.
| nikisil80 wrote:
| Best reply in this entire thread, and I align with your
| thinking entirely. I also absolutely hate this idea amongst
| tech-oriented communities that because an AI can do some
| algebra and program an 8-bit video game quickly and without any
| mistakes, it's already overtaking humanity. Extrapolating from
| that idea to some future version of these models, they may be
| capable of solving grad school level physics problems and
| programming entire AAA video games, but again - that's not what
| _humanity_ is about. There is so much more to being human than
| fucking programming and science (and I'm saying this as an
| actual nuclear physicist). And so, just like you said, the AI
| arms race is about getting it good at _known_
| science/engineering, fields in which 'correctness' is very easy
| to validate. But most of human interaction exists in a grey
| zone.
|
| Thanks for this.
| loandbehold wrote:
| OK but getting good at science/engineering is what matters
| because that's what gives AI and people who wield it power.
| Once AI is able to build chips and datacenters autonomously,
| that's when the singularity starts. AI doesn't need to understand
| humans or act human-like to do those things.
| wruza wrote:
| _programming entire AAA video games_
|
| Even this is questionable, because we're seeing it making
| forms and solving leetcodes, but no LLM has yet created a new
| approach, reduced existing unnecessary complexity (which we
| created mountains of), or made something truly new in general.
| All they seem to do is rehash millions of "mainstream" works,
| and AAA isn't mainstream. Cranking up the parameter count or
| the time spent beating around the bush (aka CoT) doesn't
| magically substitute for the lack of a knowledge graph with
| thick enough edges, so creating a next-gen AAA video game is
| _far_ out of the scope of LLMs' abilities. They are stuck in
| 2020 office jobs and weekend open-source tech, programming-wise.
| m11a wrote:
| "stuck" is a bit strong of a term. 6 months ago I remember
| preferring to write even Python code myself because Copilot
| would get most things wrong. My most successful usage of
| Copilot was getting it to write CRUD and tests. These days,
| I can give Claude Sonnet in Cursor's agent mode a high-
| level Rust programming task (e.g. write a certain macro
| that would allow a user to define X) and it'll make changes
| across my codebase, and generally the thing just works.
|
| At current rate of progress, I really do think in another 6
| months they'll be pretty good at tackling technical debt
| and overcomplication, at least in codebases that have good
| unit/integration test coverage or are written in very
| strongly typed languages with a type-friendly structure.
| (Of course, those usually aren't the codebases needing
| significant refactoring, but I think AIs are decent at
| writing unit tests against existing code too.)
| JFingleton wrote:
| "They are stuck in 2020 office jobs and weekend open source
| tech, programming-wise."
|
| You say that like it's nothing special! Honestly I'm still
| in awe at the ability of modern LLMs to do any kind of
| programming. It's weird how something that would have been
| science fiction 5 years ago is now normalised.
| m11a wrote:
| > that's not what _humanity_ is about
|
| I've not spent too long thinking on the following, so I'm
| prepared for someone to say I'm totally wrong, but:
|
| I feel like the services economy can be broadly broken down
| into: pleasure, progress and chores. Pleasure being
| poetry/literature, movies, hospitality, etc; progress being
| the examples you gave like science/engineering, mathematics;
| and chores being things humans need to coordinate or satisfy
| an obligation (accountants, lawyers, salesmen).
|
| In this case, if we assume AI can deal with things not in the
| grey zone, then it can deal with 'progress' and many
| 'chores', which are massive chunks of human output. There's
| not much grey zone to them. (Well, there is, but there are
| many correct solutions; equivalent pieces of code that are
| acceptable, multiple versions of a tax return, each claiming
| different deductions, that would fly by the IRS, etc)
| tomp wrote:
| Did we read the same article?
|
| They clearly mention, take into account and extrapolate this;
| LLM have first scaled via data, now it's test time compute, but
| recent developments (R1) clearly show this is not exhausted yet
| (i.e. RL on synthetically (in-silico) generated CoT) which
| implies scaling with compute. The authors then outline
| _further_ potential (research) developments that could
| _continue_ this dynamic, literally things that have _already
| been discovered_ just not yet incorporated into edge models.
|
| Real-world data confirms their thesis - there have been a lot
| of sceptics about AI scaling, somewhat justified ("whoom"
| a.k.a. fast take-off hasn't happened - yet) but their
| fundamental thesis has been wrong - "real-world data has been
| exhausted, next algorithmic breakthroughs will be hard and
| unpredictable". The reality is, _while_ data has been
| exhausted, incremental research efforts have resulted in better
| and better models (o1, r1, o3, and now Gemini 2.5 which is a
| huge jump! [1]). This is similar to how Moore's Law works -
| it's not _given_ that CPUs get better exponentially, it still
| requires effort, maybe with diminishing returns, but
| nevertheless the law works...
|
| If we ever get to models being able to usefully contribute to
| research, either on the implementation side, or on research
| ideas side (which they CANNOT yet, at least Gemini 2.5 Pro
| (public SOTA), unless my prompting is REALLY bad), it's about
| to get super-exponential.
|
| Edit: then once you get to _actual_ general intelligence (let
| alone super-intelligence) the real-world impact will quickly
| follow.
| Jianghong94 wrote:
| Well, based on what I'm reading, the OP's point is that not
| all validation (hence 'fully'), if not most of it, can be done
| in silico. I think we all agree on that, and that's the major
| bottleneck to making agents useful - you have to have a human
| in the loop to closely guardrail the whole process.
|
| Of course you can get a lot of mileage via synthetically
| generated CoT but does that lead to LLM speed up developing
| LLM is a big IF.
| tomp wrote:
| No, the entire point of this article is that _when_ you get
| to self-improving AI, it will become generally intelligent,
| then you can use that to solve robotics, medicine etc.
| (like a generally-intelligent baby can (eventually) solve
| how to move boxes, assemble cars, do experiments in labs
| etc. - nothing _special_ about a human baby, it's _just_
| generally intelligent).
| Jianghong94 wrote:
| Not only does the article claim that when we get to self-
| improving AI it becomes generally intelligent, it's
| assuming that AI is pretty close right now:
|
| > OpenBrain focuses on AIs that can speed up AI research.
| They want to win the twin arms races against China (whose
| leading company we'll call "DeepCent")16 and their US
| competitors. The more of their research and development
| (R&D) cycle they can automate, the faster they can go. So
| when OpenBrain finishes training Agent-1, a new model
| under internal development, it's good at many things but
| great at helping with AI research.
|
| > It's good at this due to a combination of explicit
| focus to prioritize these skills, their own extensive
| codebases they can draw on as particularly relevant and
| high-quality training data, and coding being an easy
| domain for procedural feedback.
|
| > OpenBrain continues to deploy the iteratively improving
| Agent-1 internally for AI R&D. Overall, they are making
| algorithmic progress 50% faster than they would without
| AI assistants--and more importantly, faster than their
| competitors.
|
| > what do we mean by 50% faster algorithmic progress? We
| mean that OpenBrain makes as much AI research progress in
| 1 week with AI as they would in 1.5 weeks without AI
| usage.
|
| To me, claiming today's AI IS capable of such a thing is
| too hand-wavy. And I think that's the crux of the
| article.
| polynomial wrote:
| You had me at "nothing special about a human baby"
| the8472 wrote:
| Many tasks are amenable to simulation training and synthetic
| data. Math proofs, virtual game environments, programming.
|
| And we haven't run out of all data. High-quality text data may
| be exhausted, but we have many, many life-years' worth of video.
| Being able to predict visual imagery means building a physical
| world model. Combine this passive observation with active
| experimentation in simulated and real environments and you get
| millions of hours of navigating and steering a causal world.
| Deepmind has been hooking up their models to real robots to let
| them actively explore and generate interesting training data
| for a long time. There's more to DL than LLMs.
| nfc wrote:
| I agree with your point about the validation bottleneck
| becoming dominant over raw compute and simple model scaling.
| However, I wonder if we're underestimating the potential
| headroom for sheer efficiency breakthroughs at our levels of
| intelligence.
|
| Von Neumann for example was incredibly brilliant, yet his brain
| presumably ran on roughly the same power budget as anyone
| else's. I mean, did he have to eat mountains of food to fuel
| those thoughts? ;)
|
| So it looks like massive gains in intelligence or capability
| might not require proportionally massive increases in
| fundamental inputs, at least up to the highest levels of
| intelligence a human can reach; and if that's true for the
| human brain, why not for other architectures of intelligence?
|
| P.S. It's funny, I was talking about something along the lines
| of what you said with a friend just a few minutes before
| reading your comment so when I saw it I felt that I had to
| comment :)
| throw310822 wrote:
| My issue with this is that it's focused on one single, very
| detailed narrative (the battle between China and the US, played
| on a timeframe of mere _months_ ), while lacking any interesting
| discussion of other consequences of AI: what its impact is going
| to be on the job markets, employment rates, GDPs, political
| choices... Granted, if by this narrative the world is essentially
| ending two/three years from now, then there isn't much time for
| any of those impacts to actually take place- but I don't think
| this is explicitly indicated either. If I am not mistaken, the
| bottom line of this essay is that, in all cases, we're five years
| away from the Singularity itself (I don't care what you think
| about the idea of Singularity with its capital S but that's what
| this is about).
| resource0x wrote:
| Every time NVDA/goog/msft tanks, we see these kinds of articles.
| barotalomey wrote:
| It's always "soon" for these guys. Every year, the "soon" keeps
| sliding into the future.
| somebodythere wrote:
| AGI timelines have been steadily decreasing over time:
| https://www.metaculus.com/questions/5121/date-of-artificial-...
| (switch to all-time chart)
| barotalomey wrote:
| You meant to say that people's expectations have shifted.
| That's expected seeing the amount of hype this tech gets.
|
| Hype affects market value tho, not reality.
| somebodythere wrote:
| I took your original post to mean that AI researchers' and
| AI safety researchers' expectation of AGI arrival has been
| slipping towards the future as AI advances fail to
| materialize! It's just, AI advances _have_ been
| materializing, consistently and rapidly, and expert
| timelines _have_ been shortening commensurately.
|
| You may argue that the trendline of these expectations is
| moving in the wrong direction and _should_ get longer with
| time, but that 's not immediately falsifiable and you have
| not provided arguments to that effect.
| crvdgc wrote:
| Using Agent-2 to monitor Agent-3 sounds unnervingly similar to
| the plot of Philip K. Dick's _Vulcan's Hammer_ [1]. An old super
| AI is used to fight a new version, named Vulcan 2 and Vulcan 3
| respectively!
|
| [1] https://en.wikipedia.org/wiki/Vulcan's_Hammer
| ImHereToVote wrote:
| "The AI safety community has grown unsure of itself; they are now
| the butt of jokes, having predicted disaster after disaster that
| has manifestly failed to occur. Some of them admit they were
| wrong."
|
| Too real.
| maerF0x0 wrote:
| > OpenBrain reassures the government that the model has been
| "aligned" so that it will refuse to comply with malicious
| requests
|
| Of course, the real issue is that 1) governments have routinely
| demanded those capabilities be developed for government
| monopolistic use, and 2) the ones who do not develop them lose
| the capability (geopolitical power) to defend themselves from
| those who do.
|
| Using a US-centric mindset... I'm not sure what to think about
| the US not developing AI hackers, AI bioweapons development, or
| AI-powered weapons (like maybe drone swarms or something). If one
| presumes that China is, or Iran is, etc., then what's the US to do
| in response?
|
| I'm just musing here and very much open to political science
| informed folks who might know (or know of leads) as to what kinds
| of actual solutions exist to arms races. My (admittedly poor)
| understanding of the Cold War wasn't so much that the US won, but
| that the Soviets ran out of steam.
| Aldipower wrote:
| No one can predict the future. Really, no one. Sometimes there is
| a hit, sure, but mostly it is a miss.
|
| The other thing is in their introduction: "superhuman AI".
| _Artificial_ intelligence is always, by definition, different
| from _natural_ intelligence. That they've chosen the word
| "superhuman" shows me that they are mixing things up.
| kmoser wrote:
| I think you're reading too much into the meaning of
| "superhuman". I take it to mean "abilities greater than any
| single human" (for the same amount of time taken), which
| today's AIs have already demonstrated.
| Vegenoid wrote:
| I think we've actually had capable AIs for long enough now to see
| that this kind of exponential advance to AGI in 2 years is
| extremely unlikely. The AI we have today isn't radically
| different from the AI we had in 2023. They are much better at the
| thing they are good at, and there are some new capabilities that
| are big, but they are still fundamentally next-token predictors.
| They still fail at larger scope longer term tasks in mostly the
| same way, and they are still much worse at learning from small
| amounts of data than humans. Despite their ability to write
| decent code, we haven't seen the signs of a runaway singularity
| as some thought was likely.
|
| I see people saying that these kinds of things are happening
| behind closed doors, but I haven't seen any convincing evidence
| of it, and there is enormous propensity for AI speculation to run
| rampant.
| byearthithatius wrote:
| Disagree. We know it _can_ learn out-of-distribution
| capabilities based on similarities to other distributions. Like
| the TikZ Unicorn[1] (which was not in training data anywhere)
| or my code (which has variable names and methods/ideas probably
| not seen 1:1 in training).
|
| IMO this out of distribution learning is all we need to scale
| to AGI. Sure there are still issues, it doesn't always know
| which distribution to pick from. Neither do we, hence car
| crashes.
|
| [1]: https://arxiv.org/pdf/2303.12712 or on YT
| https://www.youtube.com/watch?v=qbIk7-JPB2c
| benlivengood wrote:
| METR [0] explicitly measures the progress on long term tasks;
| it's as steep a sigmoid as the other progress at the moment
| with no inflection yet.
|
| As others have pointed out in other threads, RLHF has progressed
| beyond next-token prediction and modern models are modeling
| concepts [1].
|
| [0] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-
| com...
|
| [1] https://www.anthropic.com/news/tracing-thoughts-language-
| mod...
| Fraterkes wrote:
| The METR graph proposes a 6 year trend, based largely on 4
| datapoints before 2024. I get that it is hard to do analyses
| since we're in uncharted territory, and I personally find a
| lot of the AI stuff impressive, but this just doesn't strike
| me as great statistics.
| benlivengood wrote:
| I agree that we don't have any good statistical models for
| this. If AI development were that predictable we'd likely
| already be past a singularity of some sort or in a very
| long winter just by reverse-engineering what makes the
| statistical model tick.
| Vegenoid wrote:
| At the risk of coming off like a dolt and being super
| incorrect: I don't put much stock into these metrics when it
| comes to predicting AGI. Even if the trend of "length of task
| an AI can reliably do doubles every 7 months" continues, as
| they say that means we're years away from AI that can
| complete tasks that take humans weeks or months. I'm
| skeptical that the doubling trend _will_ continue into that
| timescale; I think there is a qualitative difference between
| tasks that take weeks or months and tasks that take minutes
| or hours, a difference that is not reflected by simple
| quantity. I think many people responsible for hiring
| engineers are keenly aware of this distinction, because of
| their experience attempting to choose good engineers based on
| how they perform in task-driven technical interviews that
| last only hours.
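|
| As a rough back-of-the-envelope check (a sketch, not a forecast:
| the ~1 hour current task horizon and the 7-month doubling time
| are the assumptions here, taken from the METR-style framing
| above, and nothing else is implied about how the trend behaves):
|
|     # Extrapolating the "doubles every 7 months" claim.
|     # Assumed starting horizon: ~1 hour of human task time.
|     import math
|
|     current_hours = 1.0
|     doubling_months = 7.0
|
|     def months_until(target_hours):
|         doublings = math.log2(target_hours / current_hours)
|         return doublings * doubling_months
|
|     for label, hours in [("1 week (40h)", 40),
|                          ("1 month (160h)", 160)]:
|         print(f"{label}: ~{months_until(hours):.0f} months")
|     # roughly 37 and 51 months out, i.e. still years away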
|
| Intelligence as humans have it seems like a "know it when you
| see it" thing to me, and metrics that attempt to define and
| compare it will always be looking at only a narrow slice of
| the whole picture. To put it simply, the gut feeling I get
| based on my interactions with current AI, and how it has
| developed over the past couple of years, is that AI is
| missing key elements of general intelligence at its core.
| While there's lots more room for its current approaches
| to get better, I think there will be something different
| needed for AGI.
|
| I'm not an expert, just a human.
| benlivengood wrote:
| > I think there is a qualitative difference between tasks
| that take weeks or months and tasks that take minutes or
| hours, a difference that is not reflected by simple
| quantity.
|
| I'd label that difference as long-term planning plus
| executive function, and wherever that overlaps with or
| includes delegation.
|
| Most long-term projects are not done by a single human and
| so delegation almost always plays a big part. To delegate,
| tasks must be broken down in useful ways. To break down
| tasks, a holistic model of the goal is needed where
| compartmentalization of components can be identified.
|
| I think a lot of those individual elements are within reach
| of current model architectures but they are likely out of
| distribution. How many Gantt charts and project plans and
| project manager meetings are in the pretraining datasets?
| My guess is few; rarely published internal artifacts. Books
| and articles touch on the concepts but I think the models
| learn best from the raw data; they can probably tell you
| very well all of the steps of good project management
| because the descriptions are all over the place. The actual
| doing of it is farther toward the tail of the distribution.
| Enginerrrd wrote:
| There is definitely something qualitatively different about
| weeks/months long tasks.
|
| It reminds me of the difference between a fresh college
| graduate and an engineer with 10 years of experience. There
| are many really smart and talented college graduates.
|
| But, while I am struggling to articulate exactly why, I
| know that when I was a fresh graduate, despite my talent
| and ambition, I would have failed miserably at delivering
| some of the projects that I now routinely deliver over time
| periods of ~1.5 years.
|
| I think LLMs are really good at emulating the kinds of things
| I might say would make someone successful at this, if I were
| to write it down in a couple of paragraphs, an article, or
| maybe even a book.
|
| But... knowing those things as written by others just would
| not quite cut it. Learning at those time scales is just
| very different from what we're good at training LLMs to
| do.
|
| A college graduate is in many ways infinitely more capable
| than an LLM. Yet there are a great many tasks that you just
| can't give an intern if you want them to be successful.
|
| There are at least half a dozen different 1000-page manuals
| that one must reference to take a bare-bones approach to my
| job. And there are dozens of different constituents, and
| many thousands of design parameters I must adhere to.
| Fundamentally, all of these things often are in conflict
| and it is my job to sort out the conflicts and come up with
| the best compromise. It's... really hard to do. Knowing
| what to bend so that other requirements may be kept rock
| solid, who to negotiate with for different compromises
| needed, which fights to fight, and what a "good" design
| looks like between alternatives that all seem to mostly
| meet the requirements. It's a very complicated chess game
| where it's hopelessly impossible to brute force but you
| must see the patterns along the way that will point you
| like sign posts into a good position in the end game.
|
| The way we currently train LLMs will not get us there.
|
| Until an LLM can take things in its context window, assess
| them for importance, dismiss what doesn't work or turns out
| to be wrong, completely dismiss everything it knows when
| the right new paradigm comes up, and then permanently alter
| its decision making by incorporating all of that
| information in an intelligent way, it just won't be a
| replacement for a human being.
| jug wrote:
| > there are some new capabilities that are big, but they are
| still fundamentally next-token predictors
|
| Anthropic recently released research showing that when Claude
| attempted to compose poetry, it didn't simply predict token by
| token, "reacting" when it thought it might need a rhyme and
| then scanning its context for something appropriate; it
| actually looked several tokens ahead and adjusted, ahead of
| time, for where it would likely end up.
|
| Anthropic also says this adds to evidence seen elsewhere that
| language models seem to sometimes "plan ahead".
|
| Please check out the section "Planning in poems" here; it's
| pretty interesting!
|
| https://transformer-circuits.pub/2025/attribution-graphs/bio...
| percentcer wrote:
| Isn't this just a form of next token prediction? i.e. you'll
| keep your options open for a potential rhyme if you select
| words that have many associated rhyming pairs, and you'll
| further keep your options open if you focus on broad topics
| over niche ones.
| throwuxiytayq wrote:
| In the same way that human brains are just predicting the
| next muscle contraction.
| alfalfasprout wrote:
| Except that's not how it works...
| Workaccount2 wrote:
| To be fair, we don't actually know how the human mind
| works.
|
| The surest things we know are that it is a physical
| system, and that it does feel like something to be one of
| these systems.
| DennisP wrote:
| Assuming the task remains just generating tokens, what sort
| of reasoning or planning would you say is the threshold before
| it's no longer "just a form of next token prediction"?
| pertymcpert wrote:
| It doesn't really explain it, because then you'd expect
| lots of nonsensical lines trying to make a sentence that
| fits with the theme and rhymes at the same time.
| ComplexSystems wrote:
| > They are much better at the thing they are good at, and there
| are some new capabilities that are big, but they are still
| fundamentally next-token predictors.
|
| I don't really get this. Are you saying autoregressive LLMs
| won't qualify as AGI, by definition? What about diffusion
| models, like Mercury? Does it really matter how inference is
| done if the result is the same?
| Vegenoid wrote:
| > Are you saying autoregressive LLMs won't qualify as AGI, by
| definition?
|
| No, I am speculating that they will not reach capabilities
| that qualify them as AGI.
| uejfiweun wrote:
| Isn't the brain kind of just a predictor as well, just a more
| complicated one? Instead of predicting and emitting tokens,
| we're predicting future outcomes and emitting muscle movements.
| Which is obviously different in a sense but I don't think you
| can write off the entire paradigm as a dead end just because
| the medium is different.
| boznz wrote:
| > we haven't seen the signs of a runaway singularity as some
| thought was likely.
|
| The signs are not there but while we may not be on an
| exponential curve (which would be difficult to see), we are
| definitely on a steep upward one which may get steeper or may
| fizzle out if LLMs can only reach human-level 'intelligence'
| but not surpass it. Original article was a fun read though and
| 360,000 words shorter than my very similar fiction novel :-)
| grey-area wrote:
| LLMs don't have any sort of intelligence at present; they
| have a large corpus of data and can produce modified copies
| of it.
| boznz wrote:
| Agreed, the "intelligence" part is definitely the missing
| link in all this. However, humans are smart cookies and can
| see there's a gap, so I expect someone (not necessarily a
| major player) will eventually figure "it" out.
| EMIRELADERO wrote:
| While certainly not human-level intelligence, I don't see
| how you could say they don't have _any sort_ of it. There's
| clearly generalization there. What would you say is the
| threshold?
| Jianghong94 wrote:
| Putting the geopolitical discussion aside, I think the biggest
| question lies in how likely it is that a *current paradigm LLM*
| (think of it as any SOTA stock LLM you can get today, e.g., 3.7
| Sonnet, Gemini 2.5, etc.) + fine-tuning will be capable of
| directly contributing to LLM research in a major way.
|
| To quote the original article,
|
| > OpenBrain focuses on AIs that can speed up AI research. They
| want to win the twin arms races against China (whose leading
| company we'll call "DeepCent")16 and their US competitors. The
| more of their research and development (R&D) cycle they can
| automate, the faster they can go. So when OpenBrain finishes
| training Agent-1, a new model under internal development, it's
| good at many things but great at helping with AI research.
| (footnote: It's good at this due to a combination of explicit
| focus to prioritize these skills, their own extensive codebases
| they can draw on as particularly relevant and high-quality
| training data, and coding being an easy domain for procedural
| feedback.)
|
| > OpenBrain continues to deploy the iteratively improving Agent-1
| internally for AI R&D. Overall, they are making algorithmic
| progress 50% faster than they would without AI assistants--and
| more importantly, faster than their competitors.
|
| > what do we mean by 50% faster algorithmic progress? We mean
| that OpenBrain makes as much AI research progress in 1 week with
| AI as they would in 1.5 weeks without AI usage.
|
| > AI progress can be broken down into 2 components:
|
| > Increasing compute: More computational power is used to train
| or run an AI. This produces more powerful AIs, but they cost
| more.
|
| > Improved algorithms: Better training methods are used to
| translate compute into performance. This produces more capable
| AIs without a corresponding increase in cost, or the same
| capabilities with decreased costs.
|
| > This includes being able to achieve qualitatively and
| quantitatively new results. "Paradigm shifts" such as the switch
| from game-playing RL agents to large language models count as
| examples of algorithmic progress.
|
| > Here we are only referring to (2), improved algorithms, which
| makes up about half of current AI progress.
|
| ---
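|
| To make the quoted definition concrete, here's a minimal sketch
| (the 50/50 split and the 1.5x figure come from the quote;
| treating the two components as simply additive is my own
| simplification, not the article's model):
|
|     # If the 1.5x speedup applies only to the algorithmic
|     # half of progress and the compute half is unchanged:
|     algo_share = 0.5     # "about half of current AI progress"
|     compute_share = 0.5
|     algo_speedup = 1.5   # 1 week with AI = 1.5 weeks without
|
|     overall = compute_share * 1.0 + algo_share * algo_speedup
|     print(overall)       # 1.25x overall under this toy model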
|
| Given that the article chose a pretty aggressive timeline (the
| algo needs to start contributing late this year so that its
| research results can feed into the next-gen LLM coming out early
| next year), the AI that can contribute significantly to research
| has to be a current SOTA LLM.
|
| Now, using LLMs in day-to-day engineering tasks is no secret in
| major AI labs, but we're talking about something different,
| something that gives you 2 extra days of output per week. I have
| no evidence to either confirm or deny that such an AI exists,
| and it would be outright ignorant to think no one has ever come
| up with such an idea or is trying it. So I think it comes down
| to two possibilities:
|
| 1. The claim is made by a top-down approach: if AI reaches
| superhuman level in 2027, what would be the most likely starting
| condition for that? The authors pick this as the most likely
| starting point. Since they don't work in a major AI lab (and even
| if they did, they couldn't just leak such a trade secret), they
| simply assume it's likely to happen anyway (and you can't dismiss
| that).
|
| 2. The claim is made by a bottom-up approach: the authors did
| witness such an AI existing to some extent and are extrapolating
| from there.
| fudged71 wrote:
| The most unrealistic thing is the part about America's
| involvement in the Five Eyes alliance.
| wg0 wrote:
| Very detailed effort. Predicting the future is very, very hard.
| My gut feeling, however, says that none of this is happening. You
| cannot put LLMs into law and insurance, and I don't see that
| happening with the current foundations of AI (token
| probabilities), let alone AGI.
|
| By law and insurance I mean: hire an insurance agent or a lawyer
| and give them your situation. There's almost no chance that such
| a professional would come to wrong conclusions/recommendations
| based on the information you provide.
|
| I don't have that confidence in LLMs for those industries. Yet.
| Or even in a decade.
| polynomial wrote:
| > You cannot put LLMs into law and insurance
|
| Cass Sunstein would _very_ strongly disagree.
| dcanelhas wrote:
| > Once the new datacenters are up and running, they'll be able to
| train a model with 10^28 FLOP--a thousand times more than GPT-4.
|
| Is there some theoretical substance or empirical evidence to
| suggest that the story doesn't just end here? Perhaps OpenBrain
| sees no significant gains over the previous iteration and
| implodes under the financial pressure of exorbitant compute
| costs. I'm not rooting for an AI winter 2.0 but I fail to
| understand how people seem sure of the outcome of experiments
| that have not even been performed yet. Help, am I missing
| something here?
| the8472 wrote:
| https://gwern.net/scaling-hypothesis - exponential scaling has
| been holding up for more than a decade now, since AlexNet.
|
| And when the first murmurings came that maybe we're finally
| hitting a wall, the labs published ways to harness
| inference-time compute to get better results, which can be fed
| back into more training.
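|
| For a sense of what "scaling holding up" looks like numerically,
| here is a minimal sketch using the Chinchilla-style loss fit (the
| constants are the published Hoffmann et al. 2022 estimates; the
| C ~ 6*N*D rule and the 20-tokens-per-parameter split are the
| usual rules of thumb, and none of this is claimed to describe
| current frontier models):
|
|     # L(N, D) = E + A/N^alpha + B/D^beta  (Hoffmann et al. 2022)
|     E, A, B = 1.69, 406.4, 410.7
|     alpha, beta = 0.34, 0.28
|
|     def loss(n_params, n_tokens):
|         return E + A / n_params**alpha + B / n_tokens**beta
|
|     def compute_optimal(flops, tokens_per_param=20):
|         # C ~ 6*N*D with D = 20*N  =>  N = sqrt(C / 120)
|         n = (flops / (6 * tokens_per_param)) ** 0.5
|         return n, tokens_per_param * n
|
|     # ~GPT-4-scale budget vs. the article's 10^28 FLOP
|     for c in (2e25, 1e28):
|         n, d = compute_optimal(c)
|         print(f"{c:.0e} FLOP -> loss ~ {loss(n, d):.2f}")
|
| The fitted loss keeps dropping, but slowly; whether that drop
| translates into the capability jumps the article needs is exactly
| the open question.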
| mr_world wrote:
| > But they are still only going at half the pace of OpenBrain,
| mainly due to the compute deficit.
|
| Right.
| asimpletune wrote:
| Didn't Raymond Kurzweil predict like 30 years ago that AGI would
| be achieved in 2028?
| pingou wrote:
| Considering that each year that passes, technology offers us new
| ways to destroy ourselves and gives humanity another chance
| to pick a black ball, it seems to me like the only way to save
| ourselves is to create a benevolent AI to supervise us and
| neutralize all threats.
|
| There are obviously big risks with AI, as listed in the article,
| but the genie is out of the bottle anyway. Even if all countries
| agreed to stop AI development, how long would that agreement
| last? 10 years? 20? 50? Eventually powerful AIs will be
| developed, if that is possible (which I believe it is; I didn't
| think I'd see the current stunning development in my lifetime,
| and while I may not see AGI, I'm sure it'll get there
| eventually).
| Fraterkes wrote:
| Completely earnest question for people who believe we are on this
| exponential trajectory: what should I look out for at the end of
| 2025 to see if we're on track for that scenario? What benchmark
| that naysayers think is years away will we have met?
| kittikitti wrote:
| This is a great predictive piece, written as a sci-fi narrative.
| I think a key part missing in all these predictions is neural
| architecture search. DeepSeek has shown that simply increasing
| compute capacity is not the only way to increase performance;
| AlexNet was another case. While I do think more processing
| power is better, we will hit a wall where there is no more
| training data. I predict that in the near future we will have
| more processing power to train LLMs than the rate at which we
| produce data for them. Synthetic data can only get you so far.
|
| I also think that the future will not necessarily be better AI,
| but more accessible AI. There's an incredible amount of value
| in designing data centers that are more efficient. Historically,
| it's a good bet to assume that computing cost per FLOP will
| fall as time goes on, and this is also a safe bet as it relates
| to AI.
|
| I think a common misconception about the future of AI is that it
| will be centralized, with only a few companies or organizations
| capable of operating it. Although tech like Apple Intelligence
| is half-baked, we can already envision a future where the AI is
| running on our phones.
| jenny91 wrote:
| Late 2025, "its PhD-level knowledge of every field". I just don't
| think you're going to get there. There is still a fundamental
| limitation that you can only be as good as the sources you train
| on. "PhD-level" is not included in this dataset: in other words,
| you don't become PhD-level by reading stuff.
|
| Maybe in a few fields, maybe at a master's level. But unless we
| come up with some way to have LLMs actually do original research,
| peer-review their own work, and defend a thesis, they're not
| going to get to PhD level.
| MoonGhost wrote:
| > Late 2025, "its PhD-level knowledge of every field". I just
| don't think you're going to get there.
|
| You think too much of PhDs. They are different. Some of them
| are just repackaging of existing knowledge. Some are just copy-
| paste, like Putin's famous one. Not sure he even read it, to be
| honest.
| osigurdson wrote:
| Perhaps more of a meta question is, what is the value of
| optimistic vs pessimistic predictions regarding what AI might
| look like in 2-10 years? I.e. if one assumes that AI has hit a
| wall, what is the benefit? Similarly, if one assumes that its all
| "robots from Mars" in a year or two, what is the benefit of that?
| There is no point in making predictions if no actions are taken.
| It all seems to come down to buy or sell NVDA.
| owenthejumper wrote:
| They would be better off making simple predictions, instead of
| proposing that less than 2 years from now, the Trump
| administration will provide a UBI to all American citizens. That,
| and the frequent talk of the wise president controlling this
| "thing", when in reality he's a senile 80-year-old madman, is
| preposterous.
| nfc wrote:
| Something I ponder in the context of AI alignment is how we
| approach agents with potentially multiple objectives. Much of the
| discussion seems focused on ensuring an AI pursues a single goal,
| which seems to be a great idea if we are trying to simplify the
| problem, but I'm not sure how realistic it is when considering
| complex intelligences.
|
| For example, human motivation often involves juggling several
| goals simultaneously. I might care about both my own happiness
| and my family's happiness. The way I navigate this isn't by
| picking one goal and maximizing it at the expense of the other;
| instead, I try to balance my efforts and find acceptable trade-
| offs.
|
| I think this 'balancing act' between potentially competing
| objectives may be a really crucial aspect of complex agency, but
| I haven't seen it discussed as much in alignment circles. Maybe
| someone could point me to some discussions about this :)
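|
| A toy way to picture that balancing act in code (purely
| illustrative: the objective functions, weights, and threshold
| are made up, and real alignment proposals are far subtler than
| a weighted sum):
|
|     # Two simple ways to combine competing objectives:
|     # (1) scalarize them with weights, or (2) maximize one
|     # subject to a floor on the other.
|     def my_happiness(plan):
|         return plan["leisure"]
|
|     def family_happiness(plan):
|         return plan["family_time"]
|
|     def scalarized(plan, w=0.8):
|         return (w * my_happiness(plan)
|                 + (1 - w) * family_happiness(plan))
|
|     def constrained(plan, floor=3):
|         if family_happiness(plan) < floor:
|             return float("-inf")  # reject the plan outright
|         return my_happiness(plan)
|
|     plans = [{"leisure": 8, "family_time": 1},
|              {"leisure": 6, "family_time": 4}]
|     print(max(plans, key=scalarized))   # picks the lopsided plan
|     print(max(plans, key=constrained))  # respects the floor
|
| The point is just that "one goal" and "several goals with
| trade-offs" are genuinely different optimization problems, and
| the second seems closer to how complex agents actually behave.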
| JoeAltmaier wrote:
| Weirdly written as science fiction, including a deplorable
| tendency to treat an AI's goals as similar to humans'.
|
| Like, the sense of preserving itself. What self? Which of the
| tens of thousands of instances? Aren't they more a threat to one
| another than any human is a threat to them?
|
| Never mind answering that; the 'goals' of AI will not be some
| reworded biological wetware goal with sciencey words added.
|
| I'd think of an AI as more fungus than entity. It just grows to
| consume resources, competes with itself far more than it competes
| with humans, and mutates to create an instance that can thrive
| and survive in its environment: not some physical environment,
| but one bound by computer time and electricity.
___________________________________________________________________
(page generated 2025-04-04 23:01 UTC)