[HN Gopher] AI 2027
       ___________________________________________________________________
        
       AI 2027
        
       Author : Tenoke
       Score  : 232 points
       Date   : 2025-04-03 16:13 UTC (6 hours ago)
        
 (HTM) web link (ai-2027.com)
 (TXT) w3m dump (ai-2027.com)
        
       | ikerino wrote:
       | Feels reasonable in the first few paragraphs, then quickly starts
       | reading like science fiction.
       | 
       | Would love to read a perspective examining "what is the slowest
       | reasonable pace of development we could expect." This feels to me
       | like the fastest (unreasonable) trajectory we could expect.
        
         | admiralrohan wrote:
         | No one knows what will happen. But these thought experiments
         | can be useful as a critical thinking practice.
        
         | layer8 wrote:
         | The slowest is a sudden and permanent plateau, where all
         | attempts at progress turn out to result in serious downsides
         | that make them unworkable.
        
       | ahofmann wrote:
        | Ok, I'll bite. I predict that everything in this article is horse
        | manure. AGI will not happen. LLMs will be tools that can automate
        | stuff away, like today, and they will get slightly, or quite a
        | bit, better at it. That will be all. See you in two years; I'm
        | curious what the truth will be.
        
         | Tenoke wrote:
         | That seems naive in a status quo bias way to me. Why and where
         | do you expect AI progress to stop? It sounds like somewhere
         | very close to where we are at in your eyes. Why do you think
         | there won't be many further improvements?
        
           | ahofmann wrote:
           | I write bog-standard PHP software. When GPT-4 came out, I was
           | very frightened that my job could be automated away soon,
           | because for PHP/Laravel/MySQL there must exist a lot of
           | training data.
           | 
            | The reality now is that the current LLMs still often create
            | stuff that costs me more time to fix than doing it myself
            | would. So I still write a lot of code myself. It is very
            | impressive that I can even think about not writing code
            | myself anymore, but my job as a software developer is very,
            | very secure.
           | 
            | LLMs are still unable to build maintainable software. They
            | don't understand what humans want or what the codebase needs.
            | The stuff they build is good-looking garbage. One example I
            | saw yesterday: a dev committed code where the LLM had created
            | 50 lines of React code, complete with all those useless
            | comments and, for good measure, a setTimeout(), for something
            | that should have been one HTML div with two Tailwind classes.
            | They can't write idiomatic code, because they only write the
            | code they were prompted for.
           | 
           | Almost daily I get code, commit messages, and even issue
           | discussions that are clearly AI-generated. And it costs me
           | time to deal with good-looking but useless content.
           | 
            | To be honest, I hope that LLMs get better soon, because right
            | now we are in an annoying phase where software developers bog
            | me down with AI-generated stuff. It just looks good but
            | doesn't help with writing usable software that can be
            | deployed in production.
           | 
            | To get to this point, LLMs need to get maybe a hundred times
            | faster, maybe a thousand or ten thousand times. They need a
            | much bigger context window. Then they can have an inner
            | dialogue, where they really "understand" how some feature
            | should be built in a given codebase. That would be very
            | useful. But it will also use so much energy that I doubt it
            | will be cheaper to let an LLM do those "thinking" parts over
            | and over again instead of paying a human to build the
            | software. Perhaps this will be feasible in five or eight
            | years. But not two.
           | 
           | And this won't be AGI. This will still be a very, very fast
           | stochastic parrot.
        
           | AnimalMuppet wrote:
            | ahofmann didn't expect AI progress to _stop_. They expected
            | it to continue, but not to lead to AGI, which would lead to
            | superintelligence, which would lead to a self-accelerating
            | process of improvement.
           | 
           | So the question is, do you think the current road leads to
           | AGI? _How far_ down the road is it? As far as I can see,
            | there is not a "status quo bias" answer to those questions.
        
           | PollardsRho wrote:
           | It seems to me that much of recent AI progress has not
           | changed the fundamental scaling principles underlying the
           | tech. Reasoning models are more effective, but at the cost of
           | more computation: it's more for more, not more for less. The
           | logarithmic relationship between model resources and model
           | quality (as Altman himself has characterized it), phrased a
           | different way, means that you need exponentially more energy
           | and resources for each marginal increase in capabilities.
           | GPT-4.5 is unimpressive in comparison to GPT-4, and at least
           | from the outside it seems like it cost an awful lot of money.
           | Maybe GPT-5 is slightly less unimpressive and significantly
           | more expensive: is that the through-line that will lead to
           | the singularity?
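            | 
            | Spelled out as a toy calculation (the constant here is made
            | up, purely for illustration): if quality ~= k * log10(compute),
            | then compute ~= 10 ** (quality / k), i.e. each fixed step in
            | quality costs roughly 10x more compute.
            | 
            |     # Toy, made-up numbers: quality ~= k * log10(compute),
            |     # so compute ~= 10 ** (quality / k).
            |     k = 1.0
            |     
            |     def compute_needed(quality):
            |         return 10 ** (quality / k)
            |     
            |     for q in range(1, 5):
            |         print(f"quality {q}: ~{compute_needed(q):.0e} compute")
            |     # quality 1: ~1e+01 ... quality 4: ~1e+04 (10x per step)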
           | 
           | Compare the automobile. Automobiles today are a lot nicer
           | than they were 50 years ago, and a lot more efficient. Does
           | that mean cars that never need fuel or recharging are coming
           | soon, just because the trend has been higher efficiency? No,
           | because the fundamental physical realities of drag still
           | limit efficiency. Moreover, it turns out that making 100%
           | efficient engines with 100% efficient regenerative brakes is
           | really hard, and "just throw more research at it" isn't a
           | silver bullet. That's not "there won't be many future
           | improvements", but it is "those future improvements probably
           | won't be any bigger than the jump from GPT-3 to o1, which
           | does not extrapolate to what OP claims their models will do
           | in 2027."
           | 
           | AI in 2027 might be the metaphorical brand-new Lexus to
           | today's beat-up Kia. That doesn't mean it will drive ten
           | times faster, or take ten times less fuel. Even if high-end
           | cars can be significantly more efficient than what average
           | people drive, that doesn't mean the extra expense is actually
           | worth it.
        
         | jstummbillig wrote:
         | When is the earliest that you would have predicted where we are
         | today?
        
           | rdlw wrote:
           | Same as everybody else. Today.
        
         | mitthrowaway2 wrote:
         | What's an example of an intellectual task that you don't think
         | AI will be capable of by 2027?
        
           | coolThingsFirst wrote:
           | programming
        
             | lumenwrites wrote:
             | Why would it get 60-80% as good as human programmers (which
             | is what the current state of things feels like to me, as a
             | programmer, using these tools for hours every day), but
             | stop there?
        
               | boringg wrote:
                | Because we still haven't figured out fusion, but it's been
                | promised for decades. Why would everything that's been
                | promised by people with highly vested interests pan out
                | any differently?
               | 
               | One is inherently a more challenging physics problem.
        
               | kody wrote:
               | It's 60-80% as good as Stack Overflow copy-pasting
               | programmers, sure, but those programmers were already
               | providing questionable value.
               | 
               | It's nowhere near as good as someone actually building
               | and maintaining systems. It's barely able to vomit out an
               | MVP and it's almost never capable of making a meaningful
               | change to that MVP.
               | 
               | If your experiences have been different that's fine, but
               | in my day job I am spending more and more time just
               | fixing crappy LLM code produced and merged by STAFF
               | engineers. I really don't see that changing any time
               | soon.
        
               | lumenwrites wrote:
               | I'm pretty good at what I do, at least according to
               | myself and the people I work with, and I'm comparing its
               | capabilities (the latest version of Claude used as an
               | agent inside Cursor) to myself. It can't fully do things
               | on its own and makes mistakes, but it can do a lot.
               | 
               | But suppose you're right, it's 60% as good as
               | "stackoverflow copy-pasting programmers". Isn't that a
               | pretty insanely impressive milestone to just dismiss?
               | 
               | And why would it just get to this point, and then stop?
               | Like, we can all see AIs continuously beating the
               | benchmarks, and the progress feels very fast in terms of
               | experience of using it as a user.
               | 
               | I'd need to hear a pretty compelling argument to believe
               | that it'll suddenly stop, something more compelling than
               | "well, it's not very good yet, therefore it won't be any
               | better", or "Sam Altman is lying to us because
               | incentives".
               | 
               | Sure, it can slow down somewhat because of the
               | exponentially increasing compute costs, but that's
               | assuming no more algorithmic progress, no more compute
               | progress, and no more increases in the capital that flows
               | into this field (I find that hard to believe).
        
               | kody wrote:
               | I appreciate your reply. My tone was a little dismissive;
               | I'm currently deep deep in the trenches trying to unwind
               | a tremendous amount of LLM slop in my team's codebase so
               | I'm a little sensitive.
               | 
               | I use Claude every day. It is definitely impressive, but
               | in my experience only marginally more impressive than
               | ChatGPT was a few years ago. It hallucinates less and
               | compiles more reliably, but still produces really poor
               | designs. It really is an overconfident junior developer.
               | 
               | The real risk, and what I am seeing daily, is colleagues
               | falling for the "if you aren't using Cursor you're going
               | to be left behind" FUD. So they learn Cursor, discover
               | that it's an easy way to close tickets without using your
               | brain, and end up polluting the codebase with very
               | questionable designs.
        
               | lumenwrites wrote:
               | Oh, sorry to hear that you have to deal with that!
               | 
               | The way I'm getting a sense of the progress is using AI
               | for what AI is currently good at, using my human brain to
               | do the part AI is currently bad at, and comparing it to
               | doing the same work without AI's help.
               | 
               | I feel like AI is pretty close to automating 60-80% of
               | the work I would've had to do manually two years ago (as
               | a full-stack web developer).
               | 
               | It doesn't mean that the remaining 20-40% will be
               | automated very quickly, I'm just saying that I don't see
               | the progress getting any slower.
        
               | burningion wrote:
               | So I think there's an assumption you've made here, that
               | the models are currently "60-80% as good as human
               | programmers".
               | 
               | If you look at code being generated by non-programmers
               | (where you would expect to see these results!), you don't
               | see output that is 60-80% of the output of domain experts
               | (programmers) steering the models.
               | 
               | I think we're extremely imprecise when we communicate in
               | natural language, and this is part of the discrepancy
               | between belief systems.
               | 
               | Will an LLM model read a person's mind about what they
               | want to build better than they can communicate?
               | 
               | That's already what recommender systems (like the TikTok
               | algorithm) do.
               | 
               | But will LLMs be able to orchestrate and fill in the
               | blanks of imprecision in our requests on their own, or
               | will they need human steering?
               | 
               | I think that's where there's a gap in (basically) belief
               | systems of the future.
               | 
               | If we truly get post human-level intelligence everywhere,
               | there is no amount of "preparing" or "working with" the
               | LLMs ahead of time that will save you from being rendered
               | economically useless.
               | 
               | This is mostly a question about how long the moat of
               | human judgement lasts. I think there's an opportunity to
               | work together to make things better than before, using
               | these LLMs as tools that work _with_ us.
        
               | coolThingsFirst wrote:
               | Try this, launch Cursor.
               | 
               | Type: print all prime numbers which are divisible by 3 up
               | to 1M
               | 
               | The result is that it will do a sieve. There's no need
               | for this, it's just 3.
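                | 
                | No sieve is needed: any multiple of 3 other than 3 itself
                | has 3 as a proper factor, so 3 is the only such prime. A
                | quick brute-force check (the helper below is just
                | illustrative) agrees:
                | 
                |     def is_prime(n):
                |         if n < 2:
                |             return False
                |         return all(n % d for d in range(2, int(n**0.5) + 1))
                |     
                |     # prints [3]
                |     print([n for n in range(2, 1_000_000)
                |            if n % 3 == 0 and is_prime(n)])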
        
               | mysfi wrote:
               | Just tried this with Gemini 2.5 Pro. Got it right with
               | meaningful thought process.
        
             | mitthrowaway2 wrote:
             | Can you phrase this in a concrete way, so that in 2027 we
             | can all agree whether it's true or false, rather than
             | circling a "no true scotsman" argument?
        
               | abecedarius wrote:
               | Good question. I tried to phrase a concrete-enough
               | prediction 3.5 years ago, for 5 years out at the time:
               | https://news.ycombinator.com/item?id=29020401
               | 
               | It was surpassed around the beginning of this year, so
               | you'll need to come up with a new one for 2027. Note that
               | the other opinions in that older HN thread almost all
               | expected less.
        
           | kubb wrote:
           | It won't be able to write a compelling novel, or build a
           | software system solving a real-world problem, or operate
           | heavy machinery, create a sprite sheet or 3d models, design a
           | building or teach.
           | 
            | Long term planning and execution and operating in the
            | physical world are not within reach. Slight variations of
            | known problems should be possible (as long as the size of the
            | solution is small enough).
        
             | lumenwrites wrote:
             | I'm pretty sure you're wrong for at least 2 of those:
             | 
             | For 3D models, check out blender-mcp:
             | 
             | https://old.reddit.com/r/singularity/comments/1joaowb/claud
             | e...
             | 
             | https://old.reddit.com/r/aiwars/comments/1jbsn86/claude_cre
             | a...
             | 
             | Also this:
             | 
             | https://old.reddit.com/r/StableDiffusion/comments/1hejglg/t
             | r...
             | 
             | For teaching, I'm using it to learn about tech I'm
             | unfamiliar with every day, it's one of the things it's the
             | most amazing at.
             | 
              | For the things where the tolerance for mistakes is
              | extremely low and human oversight is extremely important,
              | you might be right. It won't have to be perfect (just
              | better than an average human) for that to happen, but I'm
              | not sure if it will.
        
               | kubb wrote:
               | Just think about the delta of what the LLM does and what
               | a human does, or why can't the LLM replace the human,
               | e.g. in a game studio.
               | 
               | If it can replace a teacher or an artist in 2027, you're
               | right and I'm wrong.
        
               | esafak wrote:
               | It's already replacing artists; that's why they're up in
               | arms. People don't need stock photographers or graphic
               | designers as much as they used to.
               | 
               | https://papers.ssrn.com/sol3/papers.cfm?abstract_id=46029
               | 44
        
             | programd wrote:
             | Does a fighter jet count as "heavy machinery"?
             | 
             | https://apnews.com/article/artificial-intelligence-
             | fighter-j...
        
             | pixl97 wrote:
             | > or operate heavy machinery
             | 
             | What exactly do you mean by this one?
             | 
              | In large mining operations we already have human-assisted
              | teleoperation AI equipment. I was watching one recently
              | where the human got 5 or so push dozers lined up with an
              | (admittedly simple) task of cutting a hill down, and then
              | just got them back in line if they ran into anything
              | outside of their training. The push and backup operations,
              | along with blade control, were done by the AI/dozer itself.
             | 
             | Now, this isn't long term planning, but it is operating in
             | the real world.
        
           | jdauriemma wrote:
           | Being accountable for telling the truth
        
             | myhf wrote:
             | accountability sinks are all you need
        
         | bayarearefugee wrote:
         | I predict AGI will be solved 5 years after full self driving
         | which itself is 1 year out (same as it has been for the past 10
         | years).
        
           | ahofmann wrote:
           | Well said!
        
         | kristopolous wrote:
         | People want to live their lives free of finance and centralized
         | personal information.
         | 
          | If you think most people like this stuff you're living in a
          | bubble. I use it every day, but the vast majority of people
          | have no interest in using these nightmares of Philip K. Dick
          | imagined by silicon dreamers.
        
       | WhatsName wrote:
        | This is absurd, like taking any trend and drawing a straight line
        | to extrapolate the future. If I did this with my tech stock
        | portfolio, we would probably cross the zero line somewhere in
        | late 2025...
        | 
        | If this article were an AI model, it would be catastrophically
        | overfit.
        
         | AnimalMuppet wrote:
         | It's worse. It's not drawing a straight line, it's drawing one
         | that curves up, _on a log graph_.
        
       | Lionga wrote:
        | AI now even got its own fan fiction porn. It is so stupid I'm not
        | sure whether it is worse if it is written by AI or by a human.
        
       | the_cat_kittles wrote:
       | "we demand to be taken seriously!"
        
       | beklein wrote:
        | Older and related article from one of the authors, titled "What
        | 2026 looks like", which is holding up very well against time. It
        | was written in mid-2021 (pre-ChatGPT).
        | 
        | https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-...
        | 
        | //edit: removed the referral tags from the URL
        
         | dkdcwashere wrote:
         | > The alignment community now starts another research agenda,
         | to interrogate AIs about AI-safety-related topics. For example,
         | they literally ask the models "so, are you aligned? If we made
         | bigger versions of you, would they kill us? Why or why not?"
         | (In Diplomacy, you can actually collect data on the analogue of
         | this question, i.e. "will you betray me?" Alas, the models
         | often lie about that. But it's Diplomacy, they are literally
         | trained to lie, so no one cares.)
         | 
         | ...yeah?
        
         | motoxpro wrote:
          | It's incredible how much it broadly aligns with what has
          | happened, especially because it was written before ChatGPT.
        
           | reducesuffering wrote:
           | Will people finally wake up that the AGI X-Risk people have
           | been right and we're rapidly approaching a really fucking big
           | deal?
           | 
           | This forum has been so behind for too long.
           | 
            | Sama has been saying this for a decade now: "Development of
            | Superhuman machine intelligence is probably the greatest
            | threat to the continued existence of humanity" (2015)
           | https://blog.samaltman.com/machine-intelligence-part-1
           | 
           | Hinton, Ilya, Dario Amodei, RLHF inventor, Deepmind founders.
           | They all get it, which is why they're the smart cookies in
           | those positions.
           | 
           | First stage is denial, I get it, not easy to swallow the
           | gravity of what's coming.
        
             | ffsm8 wrote:
              | People have been predicting the singularity to occur
              | sometime between 2030 and 2045 since waaaay further back
              | than 2015. And not just enthusiasts; I dimly remember an
              | interview with Richard Dawkins from back in the day...
              | 
              | Though that doesn't mean that the current kind of language
              | models will ever achieve AGI, and I sincerely doubt they
              | will. They'll likely be a component of the AI, but likely
              | not the thing that "drives" it.
        
               | neural_thing wrote:
               | Vernor Vinge as much as anyone can be credited with the
               | concept of the singularity. In his 1993 essay on it, he
               | said he'd be surprised if it happened before 2005 or
               | after 2030
               | 
               | https://edoras.sdsu.edu/~vinge/misc/singularity.html
        
             | archagon wrote:
             | And why are Altman's words worth anything? Is he some sort
             | of great thinker? Or a leading AI researcher, perhaps?
             | 
             | No. Altman is in his current position because he's highly
             | effective at consolidating power and has friends in high
             | places. That's it. Everything he says can be seen as
             | marketing for the next power grab.
        
               | skeeter2020 wrote:
                | well, he did also have an early (failed) YC startup -
                | does that add cred?
        
             | hn_throwaway_99 wrote:
             | > Will people finally wake up that the AGI X-Risk people
             | have been right and we're rapidly approaching a really
             | fucking big deal?
             | 
             | OK, say I totally believe this. What, pray tell, are we
             | supposed to do about it?
             | 
              | Don't you at least see the irony of quoting Sama's dire
              | warnings about the development of AI, without at least
              | mentioning that he is at the absolute forefront of the push
              | to build this technology that can destroy all of humanity?
              | It's like he's saying "This potion can destroy all of
              | humanity if we make it" as he works faster and faster to
              | figure out how to make it.
             | 
             | I mean, I get it, "if we don't build it, someone else
             | will", but all of the discussion around "alignment" seems
             | just blatantly laughable to me. If on one hand your goal is
             | to build "super intelligence", i.e. way smarter than any
             | human or group of humans, how do you expect to control that
             | super intelligence when you're just acting at the middling
             | level of human intelligence?
             | 
             | While I'm skeptical on the timeline, if we do ever end up
             | building super intelligence, the idea that we can control
             | it is a pipe dream. We may not be toast (I mean, we're
             | smarter than dogs, and we keep them around), but we won't
             | be in control.
             | 
             | So if you truly believe super intelligent AI is coming, you
             | may as well enjoy the view now, because there ain't nothing
             | you or anyone else will be able to do to "save humanity" if
             | or when it arrives.
        
               | achierius wrote:
                | Political organization to force a stop to ongoing
                | research? Protest outside OAI HQ? There are lots of
                | things we could, and many of us _would_, do if more
                | people were actually convinced their lives were in
                | danger.
        
               | hn_throwaway_99 wrote:
               | > Political organization to force a stop to ongoing
               | research? Protest outside OAI HQ?
               | 
               | Come on, be real. Do you honestly think that would make a
               | lick of difference? _Maybe_ , at best, delay things by a
               | couple months. But this is a worldwide phenomenon, and
               | humans have shown time and time again that they are not
               | able to self organize globally. How successful do you
               | think that political organization is going to be in
               | slowing China's progress?
        
             | pixl97 wrote:
             | >This forum has been so behind for too long.
             | 
              | There is a strong financial incentive for a lot of people
              | on this site to deny they are at risk from it, or to deny
              | that what they are building has risk and that they should
              | have culpability for it.
        
             | samr71 wrote:
             | It's not something you need to worry about.
             | 
             | If we get the Singularity, it's overwhelmingly likely Jesus
             | will return concurrently.
        
             | goatlover wrote:
             | > "Development of Superhuman machine intelligence is
             | probably the greatest threat to the continued existence of
             | humanity"
             | 
             | If that's really true, why is there such a big push to
             | rapidly improve AI? I'm guessing OpenAI, Google, Anthropic,
             | Apple, Meta, Boston Dynamics don't really believe this.
             | They believe AI will make them billions. What is OpenAI's
             | definition of AGI? A model that makes $100 billion?
        
               | AgentME wrote:
               | Because they also believe the development of superhuman
               | machine intelligence will probably be the greatest
               | invention for humanity. The possible upsides and
               | downsides are both staggeringly huge and uncertain.
        
         | smusamashah wrote:
         | How does it talk about GPT-1 or 3 if it was before ChatGPT?
        
           | dragonwriter wrote:
           | GPT-3 (and, naturally, all prior versions even farther back)
           | was released ~2 years before ChatGPT (whose launch model was
           | GPT-3.5)
           | 
           | The publication date on this article is about halfway between
           | GPT-3 and ChatGPT releases.
        
           | Tenoke wrote:
           | GPT-2 for example came out in 2019. _Chat_ GPT wasn't the
           | start of GPT.
        
         | botro wrote:
         | This is damn near prescient, I'm having a hard time believing
         | it was written in 2021.
         | 
         | He did get this part wrong though, we ended up calling them
         | 'Mixture of Experts' instead of 'AI bureaucracies'.
        
           | stavros wrote:
           | I think the bureaucracies part is referring more to Deep
           | Research than to MoE.
        
           | robotresearcher wrote:
           | We were calling them 'Mixture of Experts' ~30 years before
           | that.
           | 
           | https://ieeexplore.ieee.org/document/6215056
        
         | dingnuts wrote:
         | nevermind, I hate this website :D
        
           | comp_throw7 wrote:
           | Surely you're familiar with
           | https://ai.meta.com/research/cicero/diplomacy/ (2022)?
           | 
           | > I wonder who pays the bills of the authors. And your bills,
           | for that matter.
           | 
           | Also, what a weirdly conspiratorial question. There's a
           | prominent "Who are we?" button near the top of the page and
           | it's not a secret what any of the authors did or do for a
           | living.
        
             | dingnuts wrote:
             | hmmm I apparently confused it with an RTS, oops.
             | 
             | also it's not conspiratorial to wonder if someone in
             | silicon valley today receives funding through the AI
             | industry lol like half the industry is currently propped up
             | by that hype, probably half the commenters here are paid
             | via AI VC investments
        
         | samth wrote:
         | I think it's not holding up that well outside of predictions
         | about AI research itself. In particular, he makes a lot of
         | predictions about AI impact on persuasion, propaganda, the
         | information environment, etc that have not happened.
        
           | madethisnow wrote:
           | something you can't know
        
             | elicksaur wrote:
             | This doesn't seem like a great way to reason about the
             | predictions.
             | 
             | For something like this, saying "There is no evidence
             | showing it" is a good enough refutation.
             | 
             | Counterpointing that "Well, there could be a lot of this
             | going on, but it is in secret." - that could be a
             | justification for any kooky theory out there. Bigfoot,
             | UFOs, ghosts. Maybe AI has already replaced all of us and
             | we're Cylons. Something we couldn't know.
             | 
             | The predictions are specific enough that they are
             | falsifiable, so they should stand or fall based on the
             | clear material evidence supporting or contradicting them.
        
           | LordDragonfang wrote:
           | Could you give some specific examples of things you feel
           | definitely did not come to pass? Because I see a lot of
           | people here talking about how the article missed the mark on
           | propaganda; meanwhile I can tab over to twitter and see a
           | substantial portion of the comment section of every high-
           | engagement tweet being accused of being Russia-run LLM
           | propaganda bots.
        
         | LordDragonfang wrote:
         | > (2025) Making models bigger is not what's cool anymore. They
         | are trillions of parameters big already. What's cool is making
         | them run longer, in bureaucracies of various designs, before
         | giving their answers.
         | 
         | Holy shit. That's a hell of a called shot from 2021.
        
         | cavisne wrote:
         | This article was prescient enough that I had to check in
         | wayback machine. Very cool.
        
         | torginus wrote:
          | I'm not seeing the prescience here - I don't want to go through
          | the specific points, but the main gist seems to be that
          | chatbots will become very good at pretending to be human and
          | at influencing people to their own ends.
          | 
          | I don't think much has happened on these fronts (owing to a
          | lack of interest, not technical difficulty). AI
          | boyfriends/roleplaying etc. seem to have stayed a very niche
          | interest, with models improving very little over GPT-3.5, and
          | the actual products seemingly absent.
         | 
          | It's very much a product of the culture war era, where one of
          | the scary scenarios shown off is a chatbot riling up a set of
          | internet commenters, goading them into lashing out against
          | modern leftist orthodoxy, and then getting them cancelled.
          | 
          | With all the strongholds of leftist orthodoxy falling into
          | Trump's hands overnight, this view of the internet seems
          | outdated.
         | 
          | Troll chatbots are still a minor weapon in information warfare.
          | The 'opinion bubbles' and the manipulation of trending topics
          | on social media (with the most influential content still
          | written by humans), used to change the perception of what the
          | popular consensus is, still seem to hold up as the primary
          | tools of influence.
          | 
          | Nowadays, when most people are concerned about stuff like 'will
          | the US go into a shooting war against NATO' or 'will they
          | manage to crash the global economy', to name a few of the
          | dozen immediately pressing global issues, people are worried
          | about different things.
         | 
          | At the same time, there's very little mention of 'AI will take
          | our jobs and make us poor', in both the intellectual and
          | physical realms, even though that's what is driving most
          | people's anxiety around AI nowadays.
          | 
          | It also puts the 'superintelligent unaligned AI will kill us
          | all' argument, very often presented by alignment people, as the
          | primary threat, rather than the more plausible 'people
          | controlling AI are the real danger'.
        
       | amarcheschi wrote:
        | I just spent some time trying to make Claude and Gemini make a
        | violin plot of some polars dataframe. I've never used it and it's
        | just for prototyping, so I just went "apply a log to the values
        | and make a violin plot of this polars dataframe". And I had to
        | iterate with them 4/5 times each. Gemini got it right but then
        | used deprecated methods.
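        | 
        | For reference, a minimal sketch of the kind of thing I was asking
        | for (the dataframe and column name here are made up):
        | 
        |     import polars as pl
        |     import seaborn as sns
        |     import matplotlib.pyplot as plt
        |     
        |     # Made-up example dataframe with one numeric column.
        |     df = pl.DataFrame({"value": [1.0, 2.5, 10.0, 40.0, 100.0]})
        |     
        |     # Apply a (natural) log to the values, then violin-plot them.
        |     df = df.with_columns(pl.col("value").log().alias("log_value"))
        |     sns.violinplot(y=df["log_value"].to_numpy())
        |     plt.show()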
       | 
        | I might be doing LLMs wrong, but I just can't get how people
        | might actually do something non-trivial just by vibe coding. And
        | it's not like I'm an old fart either; I'm a university student.
        
         | VOIPThrowaway wrote:
          | You're asking it to think and it can't.
          | 
          | It's spicy autocomplete. Ask it to create a program that can
          | create a violin plot from a CSV file. Because this has been
          | "done before", it will do a decent job.
        
           | suddenlybananas wrote:
           | But this blog post said that it's going to be God in like 5
           | years?!
        
         | pydry wrote:
          | All tech hype cycles are a bit like this. When you were born,
          | people were predicting the end of offline shops.
          | 
          | The trough of disillusionment will set in for everybody else in
          | due time.
        
         | dinfinity wrote:
         | Yes, you're most likely doing it wrong. I would like to add
         | that "vibe coding" is a dreadful term thought up by someone who
         | is arguably not very good at software engineering, as talented
         | as he may be in other respects. The term has become a
         | misleading and frankly pejorative term. A better, more neutral
         | one is AI assisted software engineering.
         | 
         | This is an article that describes a pretty good approach for
         | that: https://getstream.io/blog/cursor-ai-large-projects/
         | 
         | But do skip (or at least significantly postpone) enabling the
         | 'yolo mode' (sigh).
        
           | amarcheschi wrote:
            | You see, the issue I get petty about is that AI is advertised
            | as the one ring to rule all software. VCs creaming themselves
            | at the thought of not having to pay developers and of using
            | natural language. But then, you still have to adapt to the
            | AI, and not vice versa. "You're doing it wrong." This is not
            | the idea that VC bros are selling.
           | 
            | Still, I absolutely love being aided by LLMs in my day to day
            | tasks. I'm much more efficient when studying, and they can be
            | a game changer when you're stuck and don't know how to
            | proceed. You can discuss different implementation ideas as if
            | you had a colleague, perhaps not a PhD-smart one, but still
            | someone with quite deep knowledge of everything.
           | 
            | But it's no miracle. That's the issue I have with the way the
            | idea of AI is sold to the C-suites and the general public.
        
             | pixl97 wrote:
             | >But, it's no miracle.
             | 
             | All I can say to this is fucking good!
             | 
              | Let's imagine we got AGI at the start of 2022. I'm talking
              | about human-level-plus AI, as good as you at coding and
              | reasoning, that works well on the hardware of that age.
              | 
              | What would the world look like today? Would you still have
              | your job? Would the world be in total disarray? Would
              | unethical companies quickly fire most of their staff and
              | replace them with machines? Would there be mass riots in
              | the streets by starving neo-luddites? Would automated
              | drones be shooting at them?
             | 
             | Simply put people and our social systems are not ready for
             | competent machine intelligence and how fast it will change
             | the world. We should feel lucky we are getting a ramp up
             | period, and hopefully one that draws out a while longer.
        
         | hiq wrote:
         | > had to iterate with them for 4/5 times each. Gemini got it
         | right but then used deprecated methods
         | 
         | How hard would it be to automate these iterations?
         | 
         | How hard would it be to automatically check and improve the
         | code to avoid deprecated methods?
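          | 
          | As a rough sketch of what I mean (ask_llm is a hypothetical
          | stand-in for whatever model API you use; treating deprecation
          | warnings as errors is just one cheap automated signal):
          | 
          |     import subprocess, tempfile
          |     
          |     def ask_llm(prompt: str) -> str:
          |         # Hypothetical stand-in for a real LLM API call.
          |         raise NotImplementedError
          |     
          |     def generate_with_retries(task: str, max_iters: int = 5):
          |         feedback = ""
          |         for _ in range(max_iters):
          |             code = ask_llm(task + feedback)
          |             with tempfile.NamedTemporaryFile(
          |                     "w", suffix=".py", delete=False) as f:
          |                 f.write(code)
          |             # Run the generated script with DeprecationWarnings
          |             # turned into hard errors, as a cheap check.
          |             result = subprocess.run(
          |                 ["python", "-W", "error::DeprecationWarning",
          |                  f.name],
          |                 capture_output=True, text=True)
          |             if result.returncode == 0:
          |                 return code
          |             feedback = ("\n\nYour last attempt failed "
          |                         "with:\n" + result.stderr)
          |         return code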
         | 
         | I agree that most products are still underwhelming, but that
         | doesn't mean that the underlying tech is not already enough to
         | deliver better LLM-based products. Lately I've been using LLMs
         | more and more to get started with writing tests on components
         | I'm not familiar with, it really helps.
        
           | henryjcee wrote:
           | > How hard would it be to automate these iterations?
           | 
           | The fact that we're no closer to doing this than we were when
           | chatgpt launched suggests that it's really hard. If anything
           | I think it's _the_ hard bit vs. building something that
           | generates plausible text.
           | 
           | Solving this for the general case is imo a completely
           | different problem to being able to generate plausible text in
           | the general case.
        
             | HDThoreaun wrote:
                | This is not true. The chain-of-thought models are able to
                | check their work and try again, given enough compute.
        
               | lelandbatey wrote:
               | They can check their work and try again an infinite
               | number of times, but the rate at which they _succeed_
               | seems to just get worse and worse the further from the
               | beaten path (of existing code from existing solutions)
               | that they stray.
        
           | jaccola wrote:
           | How hard can it be to create a universal "correctness"
           | checker? Pretty damn hard!
           | 
           | Our notion of "correct" for most things is basically derived
           | from a very long training run on reality with the loss
           | function being for how long a gene propagated.
        
         | juped wrote:
         | You pretty much just have to play around with them enough to be
         | able to intuit what things they can do and what things they
         | can't. I'd rather have another underling, and not just because
         | they grow into peers eventually, but LLMs are useful with a bit
         | of practice.
        
       | moab wrote:
       | > "OpenBrain (the leading US AI project) builds AI agents that
       | are good enough to dramatically accelerate their research. The
       | humans, who up until very recently had been the best AI
       | researchers on the planet, sit back and watch the AIs do their
       | jobs, making better and better AI systems."
       | 
       | I'm not sure what gives the authors the confidence to predict
       | such statements. Wishful thinking? Worst-case paranoia? I agree
       | that such an outcome is possible, but on 2--3 year timelines?
       | This would imply that the approach everyone is taking right now
       | is the _right_ approach and that there are no hidden conceptual
       | roadblocks to achieving AGI /superintelligence from DFS-ing down
       | this path.
       | 
       | All of the predictions seem to ignore the possibility of such
       | barriers, or at most acknowledge the possibility but wave it away
       | by appealing to the army of AI researchers and industry funding
        | being allocated to this problem. IMO the onus is on the proposers
        | of such timelines to argue why there are no such barriers and why
        | we will see predictable scaling over the 2--3 year horizon.
        
         | throwawaylolllm wrote:
         | It's my belief (and I'm far from the only person who thinks
         | this) that many AI optimists are motivated by an essentially
         | religious belief that you could call Singularitarianism. So
         | "wishful thinking" would be one answer. This document would
         | then be the rough equivalent of a Christian fundamentalist
         | outlining, on the basis of tangentially related news stories,
         | how the Second Coming will come to pass in the next few years.
        
           | pixl97 wrote:
           | Eh, not sure if the second coming is a great analogy. That
           | wholly depends on the whims of a fictional entity performing
           | some unlikely actions.
           | 
            | Instead, think of them saying a crusade is coming in the next
            | few years. When the group saying the crusade is coming is
            | spending billions of dollars trying to make exactly that
            | occur, you no longer have the ability to say it's not going
            | to happen. You are now forced to examine the risks of their
            | actions.
        
         | barbarr wrote:
         | It also ignores the possibility of plateau... maybe there's a
         | maximum amount of intelligence that matter can support, and it
         | doesn't scale up with copies or speed.
        
           | AlexandrB wrote:
           | Or scales sub-linearly with hardware. When you're in the
           | rising portion of an S-curve[1] you can't tell how much
           | longer it will go on before plateauing.
           | 
           | A lot of this resembles post-war futurism that assumed we
           | would all be flying around in spaceships and personal flying
           | cars within a decade. Unfortunately the rapid pace of
           | transportation innovation slowed due to physical and cost
           | constraints and we've made little progress (beyond cost
           | optimization) since.
           | 
           | [1] https://en.wikipedia.org/wiki/Sigmoid_function
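            | 
            | As a toy illustration (made-up numbers; the plateau of 100 is
            | chosen arbitrarily): a logistic curve and a pure exponential
            | with the same early growth are nearly indistinguishable until
            | the plateau kicks in.
            | 
            |     import numpy as np
            |     
            |     t = np.linspace(0, 4, 5)
            |     exponential = np.exp(t)
            |     s_curve = 100 / (1 + 99 * np.exp(-t))  # plateaus at 100
            |     
            |     for ti, e, s in zip(t, exponential, s_curve):
            |         print(f"t={ti:.0f}  exp={e:6.1f}  s-curve={s:6.1f}")
            |     # t=0..2: nearly identical; by t=4 the S-curve clearly lags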
        
           | pixl97 wrote:
            | Eh, these mathematics still don't work out in humans' favor...
            | 
            | Let's say intelligence caps out at the smartest person who has
            | ever lived. Well, the first thing we'd attempt to do is build
            | machines up to that limit, a limit that 99.99999 percent of us
            | would never get close to. Moreover, the thinking part of a
            | human is only around 2 pounds of mush inside our heads. On top
            | of that, you don't have to grow machines for 18 years before
            | they start outputting something useful. They won't need sleep.
            | Oh, and you can feed them with solar panels. And they won't be
            | getting distracted by that super sleek server rack across the
            | aisle.
            | 
            | We do know that 'hive' or societal intelligence scales over
            | time, especially with integration with tooling. The amount of
            | knowledge we have and the means by which we can apply it
            | simply dwarf those of previous generations.
        
       | zvitiate wrote:
        | There's a lot to potentially unpack here, but idk, the idea that
        | whether humanity enters hell (extermination) or heaven (brain
        | uploading; an aging cure) comes down to whether or not we listen
        | to AI safety researchers for a few months makes me question
        | whether it's really worth unpacking.
        
         | amelius wrote:
          | If _we_ don't do it, someone else will.
        
           | itishappy wrote:
           | Which? Exterminate humanity or cure aging?
        
             | ethersteeds wrote:
             | Yes
        
             | amelius wrote:
             | The thing whose outcome can go either way.
        
               | itishappy wrote:
               | I honestly can't tell what you're trying to say here. I'd
               | argue there's some pretty significant barriers to each.
        
           | layer8 wrote:
           | I'm okay if someone else unpacks it.
        
           | achierius wrote:
           | That's obviously not true. Before OpenAI blew the field open,
           | multiple labs -- e.g. Google -- were _intentionally holding
           | back_ their research from the public eye because they thought
           | the world was not ready. Investors were not pouring billions
           | into capabilities. China did not particularly care to focus
           | on this one research area, among many, that the US is still
           | solidly ahead in.
           | 
           | The only reason timelines are as short as they are is
           | _because_ of people at OpenAI and thereafter Anthropic
           | deciding that  "they had no choice". They had a choice, and
           | they took the one which has chopped at the very least _years_
           | off of the time we would otherwise have had to handle all of
           | this. I can barely begin to describe the magnitude of the
           | crime that they have committed -- and so I suggest that you
           | consider that before propagating the same destructive lies
           | that led us here in the first place.
        
             | pixl97 wrote:
              | The simplicity of the statement "If we don't do it, someone
              | else will," and the thinking behind it, eventually means
              | someone will do just that unless they are prevented by some
              | regulatory function.
              | 
              | Simply put, with the ever-increasing hardware speeds we
              | were dumping out for other purposes, this day would have
              | come sooner or later. We're talking about only a year or
              | two, really.
        
       | Q6T46nT668w6i3m wrote:
       | This is worse than the mansplaining scene from Annie Hall.
        
       | qwertox wrote:
       | That is some awesome webdesign.
        
       | IshKebab wrote:
       | This is hilariously over-optimistic on the timescales. Like on
       | this timeline we'll have a Mars colony in 10 years, immortality
       | drugs in 15 and Half Life 3 in 20.
        
         | sva_ wrote:
         | You forgot fusion energy
        
           | klabb3 wrote:
           | Quantum AI powered by cold fusion and blockchain when?
        
         | zvitiate wrote:
         | No, sooner lol. We'll have aging cures and brain uploading by
         | late 2028. Dyson Swarms will be "emerging tech".
        
         | mchusma wrote:
          | I like that in the "slowdown" scenario, by 2030 we have a robot
          | economy, a cure for aging, brain uploading, and are working on
          | a Dyson sphere.
        
         | ctoth wrote:
         | Can you share your detailed projection of what you expect the
         | future to look like so I can compare?
        
           | Gud wrote:
           | Slightly slower web frameworks by 2026. By 2030, a lot
           | slower.
        
           | IshKebab wrote:
           | Sure
           | 
           | 5 years: AI coding assistants are a lot better than they are
           | now, but still can't actually replace junior engineers (at
           | least ones that aren't shit). AI fraud is rampant, with faked
           | audio commonplace. Some companies try replacing call centres
           | with AI, but it doesn't really work and everyone hates it.
           | 
           | Tesla's robotaxi won't be available, but Waymo will be in
           | most major US cities.
           | 
           | 10 years: AI assistants are now useful enough that you can
           | use them in the ways that Apple and Google really wanted you
           | to use Siri/Google Assistant 5 years ago. "What have I got
           | scheduled for today?" will give useful results, and you'll be
           | able to have a natural conversation and take actions that you
           | trust ("cancel my 10am meeting; tell them I'm sick").
           | 
           | AI coding assistants are now _very_ good and everyone will
           | use them. Junior devs will still exist. Vibe coding will
           | actually work.
           | 
           | Most AI Startups will have gone bust, leaving only a few
           | players.
           | 
           | Art-based AI will be very popular and artists will use it all
           | the time. It will be part of their normal workflow.
           | 
           | Waymo will become available in Europe.
           | 
           | Some receptionists and PAs have been replaced by AI.
           | 
           | 15 years: AI researchers finally discover how to do on-line
           | learning.
           | 
           | Humanoid robots are robust and smart enough to survive in the
           | real world and start to be deployed in controlled
           | environments (e.g. factories) doing simple tasks.
           | 
           | Driverless cars are "normal" but not owned by individuals and
           | driverful cars are still way more common.
           | 
            | Small, light computers become fast enough that autonomous
            | slaughterbots become reality (i.e. drones that can do their
            | own navigation, face recognition, etc.)
           | 
           | 20 years: Valve confirms no Half Life 3.
        
             | archagon wrote:
              | > _Small, light computers become fast enough that
              | autonomous slaughterbots become reality_
             | 
             | This is the real scary bit. I'm not convinced that AI will
             | _ever_ be good enough to think independently and create
             | novel things without some serious human supervision, but
             | none of that matters when applied to machines that are
             | destructive by design and already have expectations of
             | collateral damage. Slaughterbots are going to be the new
             | WMDs -- and corporations are salivating at the prospect of
             | being first movers.
             | https://www.youtube.com/watch?v=UiiqiaUBAL8
        
               | dontlikeyoueith wrote:
               | Zero Dawn future confirmed.
        
               | Trumpion wrote:
               | Why do you believe that?
               | 
                | The lowest estimates of how much compute our brain
                | represents were already reached with the latest chip from
                | Nvidia (Blackwell).
                | 
                | The newest GPU clusters from Google, Microsoft, Facebook,
                | xAI, and co. have added so much compute it's absurd.
        
               | pixl97 wrote:
                | > I'm not convinced that AI will ever be good enough to
                | think independently
               | 
               | and
               | 
               | >Why do you believe that?
               | 
                | What takes less effort, time to deploy, and cost? I mean,
                | there is at least some probability that we kill ourselves
                | off with dangerous semi-thinking war machines, leading to
                | theater-scale wars to the point that society falls apart
                | and we no longer have the expensive infrastructure needed
                | to make AI as envisioned in the future.
                | 
                | With that said, I'm in the camp that we can create AGI:
                | since nature was able to get there with a random walk,
                | we'll be able to reproduce it with intelligent design.
        
             | Quarrelsome wrote:
              | You should add a bit where AI is pushed really hard in
              | places where the subjects have low political power, like
              | the management of entry-level workers, care homes, or
              | education, and super bad stuff happens.
             | 
             | Also we need a big legal event to happen where (for
             | example) autonomous driving is part of a really big
             | accident where lots of people die or someone brings a
             | successful court case that an AI mortgage underwriter is
             | discriminating based on race or caste. It won't matter if
             | AI is actually genuinely responsible for this or not, what
             | will matter is the push-back and the news cycle.
             | 
             | Maybe more events where people start successfully gaming
             | deployed AI at scale in order to get mortgages they
             | shouldn't or get A-grades when they shouldn't.
        
             | 9dev wrote:
             | It's soothing to read a realistic scenario amongst all of
             | the ludicrous hype on here.
        
         | Trumpion wrote:
          | We currently don't see any ceiling. If this continues at this
          | speed, we will have cheaper, faster and better models every
          | quarter.
          | 
          | There was never anything progressing this fast.
          | 
          | It would be very ignorant not to keep a very close eye on it.
          | 
          | There is still a chance that it will happen a lot slower and
          | the progression will be slow enough that we adjust in time.
          | 
          | But besides AI, we now also get robots. The impact for a lot of
          | people will be very real.
        
         | turnsout wrote:
         | IMO they haven't even predicted mid-2025.
         | 
         | > Coding AIs increasingly look like autonomous agents
         | rather than mere assistants: taking instructions via Slack
         | or Teams and making substantial code changes on their own,
         | sometimes saving hours or even days.
         | 
         | Yeah, we are _so_ not there yet.
        
       | noncoml wrote:
       | 2015: We will have FSD(full autonomy) by 2017
        
       | porphyra wrote:
       | Seems very sinophobic. DeepSeek and Manus have shown that
       | China is legitimately an innovation powerhouse in AI, but this
       | article makes it sound like they will just keep falling behind
       | unless they steal.
        
         | princealiiiii wrote:
         | Stealing model weights isn't even particularly useful long-
         | term; it's the training + data generation recipes that have
         | value.
        
         | MugaSofer wrote:
         | That whole section seems to be pretty directly based on
         | DeepSeek's "very impressive work" with R1 being simultaneously
         | very impressive, and several months behind OpenAI. (They more
         | or less say as much in footnote 36.) They blame this on US chip
         | controls just barely holding China back from the cutting edge
         | by a few months. I wouldn't call that a knock on Chinese
         | innovation.
        
         | ugh123 wrote:
         | Don't confuse innovation with optimisation.
        
           | pixl97 wrote:
           | Don't confuse designing the product with winning the market.
        
         | a3w wrote:
         | How so? Spoiler: US dooms mankind, China is the saviour in the
         | two endings.
        
       | disambiguation wrote:
       | Amusing sci-fi. I give it a B- for bland prose, weak story
       | structure, and lack of originality - assuming this isn't all
       | AI-gen slop, which is awarded an automatic F.
       | 
       | >All three sets of worries--misalignment, concentration of power
       | in a private company, and normal concerns like job loss--motivate
       | the government to tighten its control.
       | 
       | A private company becoming "too powerful" is a non-issue for
       | governments, unless a drone army is somewhere in that timeline.
       | Fun fact: the former head of the NSA sits on the board of
       | OpenAI.
       | 
       | Job loss is a non-issue; if there are corresponding economic
       | gains, they can be redistributed.
       | 
       | "Alignment" is too far into the fiction side of sci-fi.
       | Anthropomorphizing today's AI is tantamount to mental illness.
       | 
       | "But really, what if AGI?" We either get the final say or we
       | don't. If we're dumb enough to hand over all responsibility to an
       | unproven agent and we get burned, then serves us right for being
       | lazy. But if we forge ahead anyway and AGI becomes something
       | beyond review, we still have the final say on the power switch.
        
       | atemerev wrote:
       | What is this, some OpenAI employee fan fiction? Did Sam himself
       | write this?
       | 
       | OpenAI models are not even SOTA, except for that new-ish style
       | transfer / illustration thing that had us all living in Ghibli
       | world for a few days. R1 is _better_ than o1, and open-weights.
       | GPT-4.5 is disappointing, except for a few narrow areas where
       | it excels. DeepResearch is impressive, but the moat is in tight
       | web search / Google Scholar search integration, not weights. So
       | far, I'd bet on open models or maybe Anthropic, as Claude 3.7
       | is the current SOTA for most tasks.
       | 
       | As for the timeline, this is _pessimistic_. I already write
       | 90% of my code with Claude, as do most of my colleagues. Yes,
       | it makes errors, and overdoes things. Just like a regular human
       | mid-level software engineer.
       | 
       | Also fun that this assumes relatively stable politics in the US
       | and relatively functioning world economy, which I think is crazy
       | optimistic to rely on these days.
       | 
       | Also, superpersuasion _already works_ - this is what I am
       | researching and testing. It is not autonomous, it is human-
       | assisted for now, but it is a superpower for those who have it,
       | and it explains some of the things happening in the world right
       | now.
        
         | achierius wrote:
         | > superpersuasion _already works_
         | 
         | Is this demonstrated in any public research? Unless you just
         | mean something like "good at persuading" -- which is different
         | from my understanding of the term -- I find this hard to
         | believe.
        
           | atemerev wrote:
           | No, I meant "good at persuading"; it is not 100% effective,
           | of course.
        
       | infecto wrote:
       | Could not get through the entire thing. It's mostly a bunch of
       | fantasy intermingled with bits of possibly interesting
       | discussion points. The metrics along the right side are purely
       | a distraction because they're entirely fiction.
        
         | archagon wrote:
         | Website design is nice, though.
        
       | Willingham wrote:
       | - October 2027 - 'The ability to automate most white-collar jobs'
       | 
       | I wonder which jobs would not be automated? Therapy? HR?
        
         | hsuduebc2 wrote:
         | Board of directors
        
       | Joshuatanderson wrote:
       | This is extremely important. Scott Alexander's earlier
       | predictions are holding up extremely well, at least on image
       | progress.
        
       | dingnuts wrote:
       | how am I supposed to take articles like this seriously when they
       | say absolutely false bullshit like this
       | 
       | > the AIs can do everything taught by a CS degree
       | 
       | no, they fucking can't. not at all. not even close. I feel like
       | I'm taking crazy pills. Does anyone really think this?
       | 
       | Why have I not seen -any- complete software created via vibe
       | coding yet?
        
         | ladberg wrote:
         | It doesn't claim that's possible now; it's a fictional short
         | story claiming "AIs can do everything taught by a CS degree"
         | by the end of 2026.
        
       | vagab0nd wrote:
       | Bad future predictions: short-sighted guesses based on current
       | trends and vibe. Often depend on individuals or companies. Made
       | by free-riders. Example: Twitter.
       | 
       | Good future predictions: insights into the fundamental principles
       | that shape society, more law than speculation. Made by
       | visionaries. Example: Vernor Vinge.
        
       | dalmo3 wrote:
       | "1984 was set in 1984."
       | 
       | https://youtu.be/BLYwQb2T_i8?si=JpIXIFd9u-vUJCS4
        
       | pera wrote:
       | _From the same dilettantes who brought you the Zizians and
       | other bizarre cults..._ thanks, but I'd rather read Nostradamus
        
       | soupfordummies wrote:
       | The "race" ending reads like Universal Paperclips fan fiction :)
        
       | 827a wrote:
       | Readers should, charitably, interpret this as "the sequence of
       | events which need to happen in order for OpenAI to justify the
       | inflow of capital necessary to survive".
       | 
       | Your daily vibe coding challenge: Get GPT-4o to output functional
       | code which uses Google Vertex AI to generate a text embedding. If
       | they can solve that one by July, then maybe we're on track for
       | "curing all disease and aging, brain uploading, and colonizing
       | the solar system" by 2030.
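       | 
       | For concreteness, here is a rough sketch of what a working
       | answer might look like with the Vertex AI Python SDK (the
       | project, region, and model name below are placeholders, and
       | I'm going from memory on the current vertexai surface):
       | 
       |     # Sketch only: project, region, and model are placeholders.
       |     import vertexai
       |     from vertexai.language_models import TextEmbeddingModel
       | 
       |     vertexai.init(project="my-project", location="us-central1")
       |     model = TextEmbeddingModel.from_pretrained("text-embedding-004")
       |     [embedding] = model.get_embeddings(["Hello, world."])
       |     print(len(embedding.values))  # embedding dimensionality
       | 
       | Whether a model can reliably produce something equivalent on
       | its own is, of course, the actual challenge.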
        
       | MaxfordAndSons wrote:
       | As someone who's fairly ignorant of how AI actually works at a
       | low level, I feel incapable of assessing how realistic any of
       | these projections are. But the "bad ending" was certainly
       | chilling.
       | 
       | That said, this snippet from the bad ending nearly made me spit
       | my coffee out laughing:
       | 
       | > There are even bioengineered human-like creatures (to humans
       | what corgis are to wolves) sitting in office-like environments
       | all day viewing readouts of what's going on and excitedly
       | approving of everything, since that satisfies some of Agent-4's
       | drives.
        
       | Jun8 wrote:
       | ACX post where Scott Alexander provides some additional info:
       | https://www.astralcodexten.com/p/introducing-ai-2027.
       | 
       | Manifold currently predicts 30%:
       | https://manifold.markets/IsaacKing/ai-2027-reports-predictio...
        
         | crazystar wrote:
         | 47% now, so a coin toss
        
           | layer8 wrote:
           | 32% again now.
        
           | elicksaur wrote:
           | Note the market resolves by:
           | 
           | > Resolution will be via a poll of Manifold moderators. If
           | they're split on the issue, with anywhere from 30% to 70% YES
           | votes, it'll resolve to the proportion of YES votes.
           | 
           | So you should really read it as "Will >30% of Manifold
           | moderators in 2027 think the 'predictions seem to have been
           | roughly correct up until that point'?"
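           | 
           | In other words, the resolution rule quoted above works
           | roughly like this (a sketch of the thresholds in Python):
           | 
           |     def resolve(yes_fraction: float) -> float:
           |         # Map the moderators' YES share to the market's
           |         # resolution value, per the rule quoted above.
           |         if yes_fraction < 0.30:
           |             return 0.0           # resolves NO
           |         if yes_fraction > 0.70:
           |             return 1.0           # resolves YES
           |         return yes_fraction      # split vote: proportion
           | 
           |     # e.g. 6 of 10 moderators voting YES resolves to 0.6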
        
       | nmilo wrote:
       | The whole thing hinges on the assumption that AI will be able
       | to help with AI research.
       | 
       | How will it come up with the theoretical breakthroughs necessary
       | to beat the scaling problem GPT-4.5 revealed when it hasn't been
       | proven that LLMs can come up with novel research in any field at
       | all?
        
         | cavisne wrote:
         | Scaling transformers has been basically alchemy; the
         | breakthroughs aren't from rigorous science, they are from
         | trying stuff and hoping you don't waste millions of dollars
         | in compute.
         | 
         | Maybe the company that just tells an AI to generate hundreds
         | of random scaling ideas and tries them all is the one that
         | will win. That company should probably be 100 percent
         | committed to this approach too - no FLOPs spent on Ghibli
         | inference.
        
       | acje wrote:
       | 2028: human text is too ambiguous a data source to get to AGI.
       | 2127: AGI figures out flying cars and fusion power.
        
       | suddenlybananas wrote:
       | https://en.wikipedia.org/wiki/Great_Disappointment
       | 
       | I suspect something similar will come for the people who actually
       | believe this.
        
       | panic08 wrote:
       | LOL
        
       | superconduct123 wrote:
       | Why are the biggest AI predictions always made by people who
       | aren't deep in the tech side of it? Or actually trying to use the
       | models day-to-day...
        
         | ZeroTalent wrote:
         | People who are skilled fiction writers might lack technical
         | expertise. In my opinion, this is simply an interesting piece
         | of science fiction.
        
         | AlphaAndOmega0 wrote:
         | Daniel Kokotajlo released the (excellent) 2021 forecast. He was
         | then hired by OpenAI, and not at liberty to speak freely, until
         | he quit in 2024. He's part of the team making this forecast.
         | 
         | The others include:
         | 
         | Eli Lifland, a superforecaster who is ranked first on RAND's
         | Forecasting initiative. You can read more about him and his
         | forecasting team here. He cofounded and advises AI Digest and
         | co-created TextAttack, an adversarial attack framework for
         | language models.
         | 
         | Jonas Vollmer, a VC at Macroscopic Ventures, which has done its
         | own, more practical form of successful AI forecasting: they
         | made an early stage investment in Anthropic, now worth $60
         | billion.
         | 
         | Thomas Larsen, the former executive director of the Center for
         | AI Policy, a group which advises policymakers on both sides of
         | the aisle.
         | 
         | Romeo Dean, a leader of Harvard's AI Safety Student Team and
         | budding expert in AI hardware.
         | 
         | And finally, Scott Alexander himself.
        
           | kridsdale3 wrote:
           | TBH, this kind of reads like the pedigrees of the former
           | members of the OpenAI board. When the thing blew up, and
           | people started to apply real scrutiny, it turned out that
           | about half of them had no real experience in pretty much
           | anything at all, except founding Foundations and instituting
           | Institutes.
           | 
           | A lot of people (like the Effective Altruism cult) seem to
           | have made a career out of selling their Sci-Fi content as
           | policy advice.
        
             | flappyeagle wrote:
             | c'mon man, you don't believe that, let's have a little less
             | disingenuousness on the internet
        
           | superconduct123 wrote:
           | I mean either researchers creating new models or people
           | building products using the current models
           | 
           | Not all these soft roles
        
         | Tenoke wrote:
         | ..The first person listed is ex-OpenAI.
        
         | torginus wrote:
         | Because these people understand human psychology and how to
         | play on fears (of doom, or missing out) and insecurities of
         | people, and write compelling narratives while sounding smart.
         | 
         | They are great at selling stories - they sold the story of the
         | crypto utopia, now switching their focus to AI.
         | 
         | This seems to be another appeal to enforce AI regulation in
         | the name of 'AI safetyism', which was made two years ago, but
         | the threats in it haven't really panned out.
         | 
         | For example, an oft-repeated argument is the dangerous
         | ability of AI to design chemical and biological weapons. I
         | wish some expert could weigh in on this, but I believe the
         | ability to theorycraft pathogens effective in the real world
         | is absolutely marginal - you need actual lab work and lots of
         | physical experiments to confirm your theories.
         | 
         | Likewise, the danger of AI systems exfiltrating themselves to
         | the multi-million dollar AI datacenter GPU systems everyone
         | supposedly just has lying around is ... not super realistic.
         | 
         | The ability of AIs to hack computer systems is much less
         | theoretical - however, as AIs get better at black-hat
         | hacking, they'll get better at white-hat hacking as well, as
         | there's literally no difference between the two, other than
         | intent.
         | 
         | And herein lies a crucial limitation of alignment and
         | safetyism - sometimes there's no way to tell apart harmful
         | and harmless actions, other than whether the person
         | undertaking them means well.
        
         | bpodgursky wrote:
         | Because you can't be a full-time blogger and also a full-time
         | engineer. Both take all your time, even ignoring the time
         | taken to build talent. There is simply a tradeoff of what you
         | do with your life.
         | 
         | There _are_ engineers with AI predictions, but you aren't
         | reading them, because building an audience like Scott
         | Alexander takes decades.
        
         | rglover wrote:
         | Aside from the other points about understanding human
         | psychology here, there's also a deep well they're trying to
         | fill inside themselves: that of being someone who can't
         | create things without shepherding others, and who sees AI as
         | the "great equalizer" that will finally let them taste the
         | positive emotions associated with creation.
         | 
         | The funny part, to me, is that it won't. They'll continue to
         | toil and move on to the next huck just as fast as they jumped
         | on this one.
         | 
         | And I say this from observation. Nearly all of the people I've
         | seen pushing AI hyper-sentience are smug about it and,
         | coincidentally, have never built anything on their own (besides
         | a company or organization of others).
         | 
         | Every single one of the rational "we're on the right path but
         | not quite there" takes has been from seasoned engineers who
         | at least have _some_ hands-on experience with the underlying
         | tech.
        
         | ohgr wrote:
         | On the path to self-value, people explain their worth by what
         | they say, not what they know. If what they say is horse dung,
         | it is irrelevant to their ego as long as there is someone
         | dumber than they are listening.
         | 
         | This bullshit article is written for that audience.
         | 
         | Say bullshit enough times and people will invest.
        
       | fire_lake wrote:
       | > OpenBrain still keeps its human engineers on staff, because
       | they have complementary skills needed to manage the teams of
       | Agent-3 copies
       | 
       | Yeah, sure they do.
       | 
       | Everyone seems to think AI will take someone else's jobs!
        
       | mlsu wrote:
       | https://xkcd.com/605/
        
       | mullingitover wrote:
       | These predictions are made without factoring in the trade version
       | of the Pearl Harbor attack the US just initiated on its allies
       | (and itself, by lobotomizing its own research base and decimating
       | domestic corporate R&D efforts with the aforementioned trade
       | war).
       | 
       | They're going to need to rewrite this from scratch in a quarter
       | unless the GOP suddenly collapses and congress reasserts control
       | over tariffs.
        
       | torginus wrote:
       | Much has been made in this article about autonomous agents'
       | ability to do research by browsing the web - but the web is 90%
       | garbage by weight (including articles on certain specialist
       | topics).
       | 
       | And it shows. When I used GPT's deep research to research the
       | topic, it generated a shallow and largely incorrect summary of
       | the issue, owing mostly to its inability to find quality
       | material; instead it ended up going for places like Wikipedia
       | and random infomercial listicles found on Google.
       | 
       | I have a trusty electronics textbook written in the 80s; I'm
       | sure generating a similarly accurate, correct, and deep
       | analysis of circuit design using only Google to help would be
       | 1000x harder than sitting down, working through that book, and
       | understanding it.
        
         | somerandomness wrote:
         | Agreed. However, source curation and agents are two different
         | parts of Deep Research. What if you provided that textbook to a
         | reliable agent?
         | 
         | Plug: We built https://RadPod.ai to allow you to do that, i.e.
         | Deep Research on your data.
        
           | preommr wrote:
           | So, once again, we're in the era of "There's an [AI] app for
           | that".
        
           | skeeter2020 wrote:
           | that might solve your sourcing problem, but now you need to
           | have faith it will draw conclusions and parallels from the
           | material accurately. That seems even harder than the original
           | problem; I'll stick with decent search on quality source
           | material.
        
       | KaiserPro wrote:
       | > AI has started to take jobs, but has also created new ones.
       | 
       | Yeah nah, there's a key thing missing here: the number of jobs
       | created needs to be more than the ones it's destroyed, _and_
       | they need to be better paying, _and_ they need to happen in
       | time.
       | 
       | History says that actually, when this happens, an entire
       | generation is yeeted onto the streets (see powered looms, the
       | Jacquard machine, steam-powered machine tools). All of that
       | cheap labour needed to power the new towns and cities was
       | created by the automation of agriculture and artisan jobs.
       | 
       | Dark satanic mills were fed the descendants of once reasonably
       | prosperous craftspeople.
       | 
       | AI as presented here will kneecap the wages of a good proportion
       | of the decent paying jobs we have now. This will cause huge
       | economic disparities, and probably revolution. There is a reason
       | why the royalty of Europe all disappeared when they did...
       | 
       | So no, the stock market will not be growing because of AI; it
       | will be growing in spite of it.
       | 
       | Plus, China knows that unless it can occupy most of its
       | population with some sort of work, it is finished. AI and
       | decent robot automation are an existential threat to the CCP,
       | as much as they are to whatever remains of the "west".
        
         | OgsyedIE wrote:
         | Unfortunately the current system is doing a bad job of finding
         | replacements for dwindling crucial resources such as petroleum
         | basins, new generations of workers, unoccupied orbital
         | trajectories, fertile topsoil and copper ore deposits. Either
         | the current system gets replaced with a new system or it
         | doesn't.
        
         | kypro wrote:
         | > and probably revolution
         | 
         | I theorise that revolution would be near-impossible in a
         | post-AGI world. If people consider where power comes from,
         | it's relatively obvious that people will likely suffer and
         | die en masse if we ever create AGI.
         | 
         | Historically, the general public have held the vast majority
         | of power in society. 100+ years ago this would have been
         | physical power - the state has to keep you happy or the
         | public will come for them with pitchforks. But in an age of
         | modern weaponry the public today would pose little physical
         | threat to the state.
         | 
         | Instead, in today's democracy power comes from the public's
         | collective labour and purchasing power. A government can't
         | risk upsetting people too much because a government's power
         | today is not a product of its standing army, but the product
         | of its economic strength. A government needs workers to
         | create businesses and produce goods, and therefore the goals
         | of government generally align with the goals of the public.
         | 
         | But in a post-AGI world, neither businesses nor the state
         | needs workers or consumers. In this world, if you want
         | something you wouldn't pay anyone for it or pay workers to
         | produce it for you; instead you would just ask your fleet of
         | AGIs to get you the resource.
         | 
         | In this world people become more like pests. They offer no
         | economic value yet demand that AGI owners (whether publicly
         | or privately owned) share resources with them. If people
         | revolted, any AGI owner would be far better off just
         | deploying a bioweapon to humanely kill the protestors rather
         | than sharing resources with them.
         | 
         | Of course, this is assuming the AGI doesn't have its own
         | goals and just sees the whole of humanity as a nuisance to be
         | stepped over, in the same way humans will happily step over
         | animals if they interfere with our goals.
         | 
         | Imo humanity has 10-20 years left max if we continue on this
         | path. There can be no good outcome of AGI because it wouldn't
         | even make sense for the AGI, or those who control the AGI, to
         | be aligned with the goals of humanity.
        
         | pydry wrote:
         | >History says that actually when this happens, an entire
         | generation is yeeted on to the streets
         | 
         | History hasn't had to contend with a birth rate of 0.6-1.4.
        
       | kmeisthax wrote:
       | > The agenda that gets the most resources is faithful chain of
       | thought: force individual AI systems to "think in English" like
       | the AIs of 2025, and don't optimize the "thoughts" to look nice.
       | The result is a new model, Safer-1.
       | 
       | Oh hey, it's the errant thought I had in my head this morning
       | when I read the paper from Anthropic about CoT models lying about
       | their thought processes.
       | 
       | While I'm on my soapbox, I will point out that if your goal is
       | preservation of democracy (itself an instrumental goal for human
       | control), then you want to decentralize and distribute as much as
       | possible. Centralization is the path to dictatorship. A
       | significant tension in the Slowdown ending is the fact that,
       | while we've avoided _AI_ coups, we've given a handful of people
       | the ability to do a perfectly ordinary human coup, and humans are
       | very, very good at coups.
       | 
       | Your best bet is smaller models that don't have as many unused
       | weights to hide misalignment in, along with interpretability _and_
       | faithful CoT research. Make a model that satisfies your safety
       | criteria and then make sure _everyone_ gets a copy so subgroups
       | of humans get no advantage from hoarding it.
        
       | pinetone wrote:
       | I think it's worth noting that all of the authors have financial
       | or professional incentive to accelerate the AI hype bandwagon as
       | much as possible.
        
       | dr_dshiv wrote:
       | But, I think this piece falls into a misconception about AI
       | models as singular entities. There will be many instances of any
       | AI model and each instance can be opposed to other instances.
       | 
       | So, it's not that "an AI" becomes super intelligent, what we
       | actually seem to have is an ecosystem of blended human and
       | artificial intelligences (including corporations!); this
       | constitutes a distributed cognitive ecology of superintelligence.
       | This is very different from what they discuss.
       | 
       | This has implications for alignment, too. It isn't so much about
       | the alignment of AI to people, but that both human and AI need to
       | find alignment with nature. There is a kind of natural harmony in
       | the cosmos; that's what superintelligence will likely align to,
       | naturally.
        
         | popalchemist wrote:
         | For now.
        
       ___________________________________________________________________
       (page generated 2025-04-03 23:00 UTC)