[HN Gopher] Why I'm Betting Against the AGI Hype
       ___________________________________________________________________
        
       Why I'm Betting Against the AGI Hype
        
       Author : flail
       Score  : 27 points
       Date   : 2025-12-01 17:09 UTC (5 hours ago)
        
 (HTM) web link (www.notesfromthecircus.com)
 (TXT) w3m dump (www.notesfromthecircus.com)
        
       | FrankWilhoit wrote:
       | "...a philosophical confusion about the nature of intelligence
       | itself...."
       | 
       | That is how it is done today. One asks one's philosophical priors
       | what one's experiments _must_ find.
        
       | JuniperMesos wrote:
        | Interesting that in an article entitled "Why I'm betting against
        | AGI hype", the author doesn't actually say what bet he is making
        | - i.e. what specific decisions he is making based on his
        | prediction that AGI is much less likely to arise from LLMs than
        | the market's implied probability suggests.
       | What assets is he investing in or shorting? What life decisions
       | is he making differently than he otherwise would?
       | 
       | I say this not because I think his prediction as stated here is
       | necessarily wrong or unreasonable, but because I myself might
       | want to make investment decisions based upon this prediction, and
        | translating a prediction about the future into the correct
        | actions today is not trivial.
       | 
       | Without addressing his argument about AGI-from-LLMs - because I
       | don't have any better information myself than listening to
       | Sutskever on Dwarkesh's podcast - I am somewhat skeptical that
       | the current market price of AI-related assets is actually pricing
       | in a "60-80%" chance of AGI from LLMs specifically, rather than
       | all the useful applications of LLMs that are not AGI. But this
       | isn't a prediction I'm very confident in myself.
        
         | karmakaze wrote:
         | Armchair commentary.
         | 
         | > I've listened to the optimists--the researchers and
         | executives claiming [...]
         | 
          | Actually, researchers close to the problem are the first ones
          | to give farther-out target dates. And Yann LeCun is very vocal
         | about LLMs being a dead end.
        
           | klysm wrote:
           | He is starting a business that depends on them being a dead
            | end.
        
             | techblueberry wrote:
             | Sounds like he's putting his money where his mouth is.
        
           | nomel wrote:
           | > farther out target dates
           | 
            | And that's why there's so much investment. It's more of a
           | "when" question, not an "if" question (although I have seen
           | people claim that only meat can think).
        
           | arisAlexis wrote:
            | Same guy who predicted LLMs couldn't do something in 5000
            | years, and they did it the next year? (Google this, seriously)
        
       | m463 wrote:
       | I don't think there's a lot of "AGI hype".
       | 
        | I think all the hype is more about AI replacing human effort in
        | more ambiguous tasks than computers have helped with before.
       | 
        | A more interesting question would be: what would the world do
        | with AGI anyway?
        
         | fragmede wrote:
         | Hire digital employees rather than human ones. When all your
          | interaction is digital, replacing the human on the other end
          | with an AI that is, in theory, just as capable is one
          | possibility.
         | Then, have the AI write docs for your AI employee, spin up
         | additional employees like EC2 instances on AWS. Spin up 30 to
         | clear out your Trello/Monday.com/Jira board, then spin them
         | back down as soon as they've finished, with no remorse, because
         | they're just AI robots. That's what you could do with such a
         | technology anyway.
         | 
          | That's for regular human-level AGI. The issue becomes more
          | stark for ASI, artificial superintelligence. If the AI
          | employee is smarter than most, if not all, humans, why hire
          | humans at all?
         | 
         | Of course, this is all theoretical. We don't have the
         | technology yet, and have no idea what it would even cost
         | if/when we reach that.
        
         | arisAlexis wrote:
          | Can't you imagine what a world with a species smarter than
          | humans would be like? Yeah, it's difficult.
        
       | drpixie wrote:
       | Summary of the current situation...
       | 
       | LLMs have shown us just how easily we are fooled.
       | 
       | AGI has shown us just how little we understand about
       | "intelligence".
       | 
        | Stand by for more of the same.
        
       | arisAlexis wrote:
       | Contrarianism as a mental property of humans
        
       ___________________________________________________________________
       (page generated 2025-12-01 23:02 UTC)