[HN Gopher] The next chapter of the Microsoft-OpenAI partnership
       ___________________________________________________________________
        
       The next chapter of the Microsoft-OpenAI partnership
        
       Author : meetpateltech
       Score  : 312 points
       Date   : 2025-10-28 13:05 UTC (9 hours ago)
        
 (HTM) web link (openai.com)
 (TXT) w3m dump (openai.com)
        
       | meetpateltech wrote:
       | Microsoft's announcement:
       | 
       | https://blogs.microsoft.com/blog/2025/10/28/the-next-chapter...
       | 
       | Also: Built to Benefit Everyone -- by Bret Taylor, Chair of the
       | OpenAI Board of Directors
       | 
       | https://openai.com/index/built-to-benefit-everyone
        
         | justinbaker84 wrote:
         | Microsoft is saying they have a 27% stake in the company after
         | this deal closes.
        
         | blitzar wrote:
         | > Built to Benefit Everyone
         | 
          | What's my share then?
        
           | wiseowise wrote:
           | You get to contribute your data for the Moloch.
        
       | moralestapia wrote:
       | >OpenAI has contracted to purchase an incremental $250B of Azure
       | services, and Microsoft will no longer have a right of first
       | refusal to be OpenAI's compute provider.
       | 
       | I have no idea what @sama is doing but he's doing it quite well.
        
         | respondo2134 wrote:
          | He may be out front because he's the best PR face for this,
          | but make no mistake, there is massive collusion amongst all
          | the players to inflate this bubble. Across MS, Oracle, AWS,
          | OpenAI, Anthropic, NVidia and more, all I see is a pair of
          | conjoined snakes eating their own tails.
        
       | cjbarber wrote:
       | > Once AGI is declared by OpenAI, that declaration will now be
       | verified by an independent expert panel.
       | 
       | I wonder what criteria that panel will use to define/resolve
       | this.
        
         | conartist6 wrote:
         | This. This sentence reached off the page and hit me in the
         | face.
         | 
         | It only just then became obvious to me that to them it's a
         | question of when, in large part because of the MS deal.
         | 
         | Their next big move in the chess game will be to "declare" AGI.
        
           | baconbrand wrote:
           | This is phenomenally conceited on both companies' parts. Wow.
        
             | jdiff wrote:
             | Don't worry, I'm sure we can just keep handing out subprime
             | mortgages like candy forever. Infinite growth, here we
             | come!
        
           | TheCraiggers wrote:
           | I think some of this is just the typical bluster of company
           | press releases / earnings reports. Can't ever show weakness
           | or the shareholders will leave. Can't ever show doubt or the
           | stock price will drop.
           | 
           | Nevertheless, I've been wondering of late. How will we know
           | when AGI is accomplished? In the books or movies, it's always
           | been handwaved or described in a way that made it seem like
           | it was obvious to all. For example, in The Matrix there's the
           | line "We marveled at our own magnificence as we gave birth to
           | AI." It was a very obvious event that nobody could question
           | in that story. In reality though? I'm starting to think it's
           | just going to be more of a gradual thing, like increasing the
           | resolution of our TVs until you can't tell it's not a window
           | any longer.
        
             | marcosdumay wrote:
             | > How will we know when AGI is accomplished?
             | 
              | It's certainly not a specific thing that can be
              | accomplished. AGI is a useful name for a badly defined
              | concept, but any objective application of it (like in a
              | contract) is just a stupid thing done by people who
              | could barely be described as having the natural variety
              | of GI.
        
           | port3000 wrote:
           | "We are now confident we know how to build AGI as we have
           | traditionally understood it." - Sam Altman, Jan 2025
           | 
           | 'as we have traditionally understood it' is doing a lot of
           | heavy lifting there
           | 
           | https://blog.samaltman.com/reflections#:~:text=We%20believe%.
           | ..
        
         | healsdata wrote:
         | > The two companies reportedly signed an agreement [in 2023]
         | stating OpenAI has only achieved AGI when it develops AI
         | systems that can generate at least $100 billion in profits.
         | 
         | https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
        
           | conartist6 wrote:
           | So what, there just won't be a word for general intelligence
           | anymore, you know, in the philosophical sense?
        
             | cogman10 wrote:
             | lol, this is "autopilot" and "full self driving" all over
             | again.
             | 
             | Just redefine the terms into something that's easy to
             | accomplish but far from the definition of the
             | terms/words/promises.
        
               | conartist6 wrote:
               | I know, they could get a big banner that says MISSION
               | ACCOMPLISHED.
        
               | cogman10 wrote:
               | Apparently the US military is for sale, so they probably
               | could hang it up on a battleship even.
        
             | nonethewiser wrote:
             | Well this is why it's framed that way:
             | 
             | >This is an important detail because Microsoft loses access
             | to OpenAI's technology when the startup reaches AGI, a
             | nebulous term that means different things to everyone.
             | 
             | Not sure how OpenAI feels about that.
        
           | jplusequalt wrote:
           | A sufficiently large profit margin is what constitutes AGI?
           | What a fucking joke.
        
             | joomla199 wrote:
             | The real AGI was the money we siphoned along the way.
        
             | layer8 wrote:
             | "Only" means that it is a necessary condition, not a
             | sufficient one.
        
               | nerevarthelame wrote:
               | That's true, but the $100 billion requirement is the only
               | hard qualification defined in earlier agreements. The
               | rest of the condition was left to the "reasonable
               | discretion" of the board of OpenAI.
               | (https://archive.is/tMJoG)
        
               | layer8 wrote:
               | The "reasonableness" is something they could go to court
               | over if necessary, whereas the $100 billion is a hard
               | requirement.
        
           | afavour wrote:
           | It's all so unfathomably stupid. And it's going to bring down
           | an economy.
        
             | coldpie wrote:
             | I'm honestly starting to feel embarrassed to even be
             | employed in the software industry now.
        
               | PyWoody wrote:
               | I've been telling people I do "computer stuff" since the
               | NFT days.
        
               | coldpie wrote:
               | Five straight years of having to tell everyone who asks
               | about your job that the hottest thing in your industry is
               | a scam sure does wear on a person.
        
               | joomla199 wrote:
               | I quit Google last year because I was just done with the
               | incessant push for "AI" in everything (AI exclusively
               | means LLMs of course). I still believe in the company as
               | a whole, the work culture just took a hard right towards
               | kafkaville. Nowadays when my relatives say "AI will
               | replace X" or whatever I just nod along. People are
               | incredibly naive and unbelievably ignorant, but that's
               | about as new as eating wheat.
        
             | tclancy wrote:
             | Hey, don't forget the climate effects too!
        
             | sekai wrote:
             | > It's all so unfathomably stupid. And it's going to bring
             | down an economy.
             | 
             | Dot-com bubble all over again
        
               | walleeee wrote:
               | Way bigger and deeper than that, there was some slack in
               | the energy situation remaining at that point. Not any
               | more.
        
               | AvAn12 wrote:
               | with extra stoopid
        
             | DavidPiper wrote:
             | It's kind of sad, but I've found myself becoming more and
             | more this guy whenever someone "serious" brings up AI in
             | conversation: https://www.instagram.com/p/DOELpzRDR-4/
        
           | yahoozoo wrote:
           | So they can just introduce ads in ChatGPT responses, make
           | $100 billion, and call that AGI?
        
             | vntok wrote:
              | No. When you're thinking about questions like these, it
              | is useful to remember that multiple (probably dozens of)
              | professional A-grade lawyers have been paid considerable
              | sums of actual money, by both sides, to think about
              | possible loopholes and fix them.
        
               | yahoozoo wrote:
               | What would you consider valid methods of generating $100
               | billion? Enough Max/Pro subscribers?
        
             | hylaride wrote:
              | Don't worry, it'll be relevant ads, just like Google.
              | You're going to love it when code output favors
              | proprietary libraries and databases, and getting things
              | the way you want involves annoying levels of
              | "clarification" that get harder and harder to use.
              | 
              | I kind of meant this as a joke as I typed it, but by the
              | end I almost wanted to quit the tech industry
              | altogether.
        
               | vntok wrote:
                | Just download a few SOTA (free) open-weights models
                | well ahead of that moment and either run them from
                | inside your living room or store them on a (cheap)
                | 2TB external hard drive until consumer compute makes
                | it affordable to run them from your living room.
        
           | Overpower0416 wrote:
           | So if their erotic bot reaches $100b in profit, they will
           | declare AGI? lol
        
             | ml-anon wrote:
             | Wait until they announce that they've been powering
             | OnlyFans accounts this whole time.
        
             | DavidPiper wrote:
             | Given the money involved, they may be contractually obliged
             | to?
        
           | Mistletoe wrote:
           | This is the most sick implementation of Goodhart's Law I've
           | ever seen.
           | 
           | >"When a measure becomes a target, it ceases to be a good
           | measure"
           | 
           | What appalls me is that companies are doing this stuff in
           | plain sight. In the 1920s before the crash, were companies
           | this brazen or did they try to hide it better?
        
           | phito wrote:
            | Wow that is so dumb. Can these addicts think about
            | anything other than profits?
        
           | sigmar wrote:
           | that's very different from OpenAI's previous definition
           | (which was "autonomous systems that surpass humans in most
           | economically valuable tasks") for at least one big reason:
           | This new definition likely only triggers if OpenAI's AI is
           | substantially different or better than other companies' AI.
           | Because in a world where 2+ companies have similar AGI, both
           | would have huge income but the competition would mean their
           | profit margins might not be as large. The only reason their
           | profit would soar to 100B+ would be because of no
           | competition, right?
        
             | charlie-83 wrote:
              | It doesn't seem to say 100B a year. So presumably a
              | business selling spoons will also eventually achieve
              | AGI. Also good to know that the US could achieve AGI at
              | any time by just printing more money until
              | hyperinflation lets OpenAI hit their target.
        
               | airspresso wrote:
               | Nice unlock to hyperinflate their way to $100B. I'd buy
               | an AGI spoon but preferably before hyperinflation hits.
               | I'd expect forks to outcompete the spoons though.
        
         | empath75 wrote:
          | It's quite possible that GI and thus AGI does not actually
          | exist. Though the paper the other day by all those heavy
          | hitters in the industry makes more sense in this context
          | now.
        
           | aeve890 wrote:
           | >It's quite possible that GI and thus AGI does not actually
           | exist.
           | 
           | Aren't we humans supposed to have GI? Maybe you're conflating
           | AGI and ASI.
        
             | mr_toad wrote:
             | > Aren't we humans supposed to have GI?
             | 
             | Supposed by humans, who might not be aware of their own
             | limitations.
        
           | llelouch wrote:
           | what paper?
        
         | rvz wrote:
          | The criteria change more often than the weather forecast,
          | as they depend on the definition of "AGI".
        
         | skepticATX wrote:
         | I think the more interesting question is who will be on the
         | panel?
         | 
          | A group of ex-frontier-lab employees? You could declare AGI
          | today. A more diverse group across academia and industry
          | might actually have some backbone and be able to stand up
          | to OpenAI.
        
         | qgin wrote:
         | This makes me feel that the extremely short AGI timelines might
         | be less likely.
         | 
          | To sign this deal today, presumably you wouldn't bother if
          | AGI were just around the corner?
         | 
         | Maybe I'm reading too much into it.
        
           | qnleigh wrote:
           | Or if one party has a different timeline than the other...
        
         | adonese wrote:
          | Obligatory The Office line:
          | 
          | "I just wanted you to know that you can't just say the word
          | 'AGI' and expect anything to happen."
          | 
          | - Michael Scott: "I didn't say it. I declared it."
        
         | mossTechnician wrote:
         | If I remember correctly, Microsoft was previously promised
         | ownership of every pre-AGI asset created by OpenAI. Now they
         | are being promised ownership of things post-AGI as well:
         | 
         |  _Microsoft's IP rights for both models and products are
         | extended through 2032 and now includes models post-AGI..._
         | 
         | To me, this suggests a further dilution of the term "AGI."
        
           | ViscountPenguin wrote:
            | To be honest, I think this is somewhat asymmetric, and
            | kind of implies that OpenAI are truer "Believers" than
            | Microsoft.
            | 
            | If you believe in a hard takeoff, then ownership of assets
            | post-AGI is pretty much meaningless; however, it protects
            | Microsoft from an early declaration of AGI by OpenAI.
        
       | anonymous908213 wrote:
       | > AGI AGI AGI AGI AGI AGI AGI AGI AGI
       | 
       | Spare me. Sam has been talking about ChatGPT already being AGI
       | for ages, meanwhile still peddling this duplicitous talk about
       | how AGI is coming despite it apparently already being here. Can
       | we act like grownups and treat this like a normal tool? No, no we
       | cannot, for Sam is a hype merchant.
        
         | interactivecode wrote:
          | Sam is running the same playbook Elon used for Tesla's full
          | self-driving dreams
        
           | respondo2134 wrote:
           | but with 10x or 100x the chutzpah
        
         | respondo2134 wrote:
          | it's notable that there is no talk about defining what
          | exactly AGI is - or even spelling out the three letter
          | acronym - because that doesn't serve his narrative. He
          | wants the general public to equate human intelligence with
          | current OpenAI, not ask what this means or how we would
          | know. He's selling another type of hammer that's proving
          | useful in some situations but presenting it as the last
          | universal tool anyone will ever need.
        
           | cogman10 wrote:
           | And because it's become apparent that LLMs aren't converging
           | on what's traditionally been understood as AGI.
           | 
           | The promise of AGI is that you could prompt the LLM "Prove
           | that the Riemann Hypothesis is either true or false" and the
           | LLM would generate a valid mathematical proof. However, if
           | you throw it into ChatGPT what you actually get is "Nobody
           | else has solved this proof yet and I can't either."
           | 
           | And that's the issue. These LLMs aren't capable of reason,
           | only regurgitation. And they aren't moving towards reason.
        
             | dagss wrote:
              | When I ask Claude to debug something, it goes through
              | more or less the same steps I would have taken to find
              | the bug. Add some logging, run tests, try a
              | hypothesis...
              | 
              | Until LLMs got popular, we would have called that a
              | reasoning skill. Not surpassing humans, but better than
              | many humans within a small context.
              | 
              | I don't mean that I have a higher opinion of LLM
              | intelligence than you do, but perhaps I have a lower
              | opinion of what human intelligence is. How many do much
              | more than regurgitate and tweak? Science has taken
              | hundreds of years to develop.
              | 
              | The real question is: when do knowledge workers lose
              | their jobs? That is close enough to "AGI" in its
              | consequences for society, Riemann hypothesis or not.
        
               | joomla199 wrote:
               | Did you read the whole thread and all of your own comment
               | each time you had to type another half-word? If not, I'm
               | afraid your first statement doesn't hold.
        
           | parliament32 wrote:
           | AGI is pretty clearly defined here:
           | https://openai.com/charter/
           | 
           | > OpenAI's mission is to ensure that artificial general
           | intelligence (AGI)--by which we mean highly autonomous
           | systems that outperform humans at most economically valuable
           | work--benefits all of humanity.
           | 
           | So, can you (and everyone you know) be replaced at work by a
           | subscription yet? If not, it's not AGI I guess.
        
         | vultour wrote:
         | This entire house of cards is built on the expectation that
         | "AGI" is just around the corner. The moment Altman relents in
         | his grift is the moment the bubble pops and we're in for a wild
         | ride.
        
       | eggbrain wrote:
        | If we assume token providers are becoming more and more of a
        | commodity service these days, it seems telling that OpenAI
        | specifically decided to carve out consumer hardware.
        | 
        | Perhaps their big bet is that their partnership with Jony Ive
        | will create the first post-phone hardware device that
        | consumers attach themselves to, and then they'll build an
        | ecosystem around that?
        
         | respondo2134 wrote:
          | this would be an incredibly tough play. We've seen few
          | success stories, and even when the product is good,
          | building the business around it has often failed. Most of
          | the consumer plays are terrible products with weak
          | execution and no real market. I have no doubt they could
          | supplement lots of consumer experiences, but I'm not sure
          | how they are more than a commodity component in that model.
          | I'm a die-hard engineer, but attributing the success of the
          | iPhone to Ive's design is like saying the reason there were
          | so many Apple IIs in '80s homes and classrooms was Woz's
          | amazing design.
        
       | healsdata wrote:
       | I'm not savvy on investment terms, but most of these bullet
       | points seem like a loss for Microsoft.
       | 
       | What's the value in investing in a smaller company and then
       | giving up things produced off that investment when the company
       | grows?
        
         | yas_hmaheshwari wrote:
          | I was thinking exactly the same. Maybe someone who
          | understands these terms and deals better can shed light on
          | why Microsoft would agree to this.
        
         | justinbaker84 wrote:
         | I was thinking the same thing.
        
         | soared wrote:
         | Exponential growth
        
         | drexlspivey wrote:
          | Yeah, poor Microsoft, they invested $1B in 2019 and it's
          | now worth $135B
        
           | ForHackernews wrote:
           | Not worth anything until they sell it. There were a lot of
           | excited FTX holders, too.
        
         | gostsamo wrote:
          | If there is a need for more capital, you either keep your
          | share without the capital injection and the share goes to
          | zero, or you let in more investors and dilute your share,
          | but its overall value increases. Or you can let in more
          | people and sign an agreement that part of the new money
          | will be paid to you in the form of services that you
          | provide.
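          | 
          | As an illustration using the stake numbers from elsewhere
          | in the thread (49% before, 27% after), dilution still comes
          | out ahead whenever the new valuation clears roughly 1.8x
          | the old one:
          | 
          |     $0.27 \cdot V_\text{new} > 0.49 \cdot V_\text{old}
          |     \iff V_\text{new} > 1.81 \cdot V_\text{old}$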
        
         | onion2k wrote:
          |  _I'm not savvy on investment terms, but most of these
          | bullet points seem like a loss for Microsoft._
         | 
         | Having a customer locked in to buying $250bn of Azure services
         | is a fairly big benefit.
        
           | creddit wrote:
           | MSFT had a right to compute exclusivity.
           | 
           | "Microsoft will no longer have a right of first refusal to be
           | OpenAI's compute provider."
           | 
           | Seems like a loss to me!
        
             | davey48016 wrote:
             | I assume that first refusal required price matching. If the
             | $250B is at a higher price than whatever AWS, GCP, etc.
             | were willing to offer, then it could be a win for Microsoft
             | to get $250B in decent margin business over a larger amount
             | of break even business.
        
           | yreg wrote:
          | The risk stays somewhat similar. If OpenAI collapses it
          | won't spend that $250B.
        
           | ml-anon wrote:
          | Or a massive opportunity cost. I'd imagine $250B of OAI
          | business is way lower margin than $250B from other random
          | companies that don't need H200s.
        
         | jasode wrote:
         | _> and then giving up things produced off that investment when
         | the company grows?_
         | 
         | An investor can be stubborn about retaining all rights
         | previously negotiated and never give them up... but that
         | absolutist position doesn't mean anything if the investment
         | fails.
         | 
         | OpenAI needs many more billions to cover many more years of
         | expected losses. Microsoft itself doesn't want to invest any
         | more money. Additional outside investors don't want to add more
         | billions in funding unless Microsoft was willing to give up a
         | few rights so that OpenAI has a better competitive position
         | against Google Gemini, Anthropic, Grok etc.
         | 
          | When a startup is losing money and desperately needs more
          | capital, a new round of investors can chip away at rights
          | the previous investor(s) had. Why would previous original
          | investors voluntarily agree to give up any rights?!?
          | Because their _investment is at risk_ if the startup
          | doesn't get a lot more money. If the original investor
          | doesn't want to re-invest again and _would rather others
          | foot the bill_, they sometimes have to be a little flexible
          | on their rights for that to happen.
        
         | mrweasel wrote:
          | If Microsoft doesn't believe that OpenAI will achieve AGI
          | by 2030, or thinks there's a chance that OpenAI won't be
          | the premier AI company in four years, the deal looks less
          | like a loss and more like buying their way out of a risky
          | bet. On the other hand, if OpenAI does well, then Microsoft
          | has a 27% stake in the company, and that's not nothing.
          | 
          | This looks more like Microsoft ensuring that they'll win
          | regardless of how OpenAI fares in the next four to six
          | years.
        
       | fidotron wrote:
       | > Microsoft can now independently pursue AGI alone or in
       | partnership with third parties.
       | 
        | The question is: does this reflect an increase or a decrease
        | in confidence at OpenAI about achieving AGI?
        
       | rvz wrote:
       | > Once AGI is declared by OpenAI, that declaration will now be
       | verified by an independent expert panel.
       | 
        | By the time we get 30% global unemployment and another
        | financial crash along the way in the next decade, OpenAI will
        | have long since declared "AGI".
        | 
        | Likely within the 2030-2035 timeframe.
        
       | cjbarber wrote:
       | > Microsoft holds an investment in OpenAI Group PBC valued at
       | approximately $135 billion, representing roughly 27 percent on an
       | as-converted diluted basis
       | 
       | It seems like Microsoft stock is then the most straightforward
       | way to invest in OpenAI pre-IPO.
       | 
        | This also confirms the $500 billion valuation, making OpenAI
        | the most valuable private startup in the world.
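        | 
        | As a quick check on the implied figure (the 27 percent is
        | approximate, so this is rough):
        | 
        |     $\$135\,\text{B} / 0.27 \approx \$500\,\text{B}$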
       | 
       | Now many of the main AI companies have decent ownership by public
       | companies or are already public.
       | 
       | - OpenAI -> Microsoft (27%)
       | 
       | - Anthropic -> Amazon (15-19% est), Alphabet/Google (14%)
       | 
       | Then the chip layer is largely already public: Nvidia. Plus AMD
       | and Broadcom.
       | 
       | Clouds too: Oracle, Alphabet/GCP, Microsoft/Azure, CoreWeave.
        
         | pinnochio wrote:
         | I think it's a bit late for that.
         | 
         | Also, you have to consider the size of Microsoft relative to
         | its ownership of OpenAI, future dilution, and how Microsoft
         | itself will fare in the future. If, say, Microsoft is on a path
         | towards decreasing relevance/marketshare/profitability, any
         | gains from its stake in OpenAI may be offset by its diminishing
         | fortunes.
        
           | dualityoftapirs wrote:
           | Reminds me of how Yahoo had a valuation in the negative
           | billions with their Alibaba holdings taken into account:
           | 
           | https://www.cbsnews.com/news/wall-street-says-yahoos-
           | worth-l...
        
           | mr_toad wrote:
           | > If, say, Microsoft is on a path towards decreasing
           | relevance/marketshare/profitability
           | 
           | That's a big if. I see a lot of people in big enterprises who
           | would never even consider anything other than Microsoft and
           | Azure.
        
             | no_wizard wrote:
             | C# and .NET have a bigger market share than what gets
             | talked about in trendy circles
        
               | jtbaker wrote:
               | C#/.NET are nice. Azure/Microsoft Cloud not so nice. Idk,
               | maybe I have some bias due to familiarity, but I find the
               | GCP admin and tools to be so much more intuitive than the
               | Azure (and AWS too, for that matter) counterparts.
        
               | jstummbillig wrote:
                | Oh dear lord, GCP could be the intuitive one?! I have
                | not used anything else, but, dear lord, that's
                | shocking and not at all surprising at the same time.
        
               | 91bananas wrote:
               | Yeah this is not the case at all lol. I actually find
               | Azure to be far more intuitive after suffering through
               | AWS and a little GCP. It certainly seems more stable in
               | US regions than AWS.
               | 
                | One thing I will say is the Azure documentation is
                | some of the most cumbersome to navigate I've ever
                | experienced. There is a wealth of information in
                | there, you just have to know how to find it.
        
               | bn-l wrote:
               | Any speculation on why none of them can make a UI and UX
               | that is not 100% completely shit and makes you feel
               | miserable and stressed out?
               | 
               | Couldn't they just throw money at the problem? Or fire
               | the criminals who designed it?
        
             | marcosdumay wrote:
              | The big question is if we are finally in a moment when
              | big enterprises will be allowed to fail due to the
              | infinite number of bad choices they make.
              | 
              | Because things are going to change soon. What nobody
              | knows is exactly what things, and in what direction.
        
             | notepad0x90 wrote:
              | yeah, this is a take I see from people who work in
              | unix-like environments (including macs). If anything,
              | Microsoft will grow much bigger. People are
              | consolidating in Azure and away from GCP. Easier to
              | manage costs and integrate with their fleet.
              | 
              | Windows workstations and servers are now "joined" to
              | Azure instead, where they used to be joined to domain
              | controller servers. Microsoft will soon enough stop
              | supporting that older domain controller design (soon as
              | in a decade).
        
               | ethbr1 wrote:
               | Entra (formerly Azure Active Directory) is definitely a
               | huge enterprise Azure driver, and MS knows it.
        
           | whizzter wrote:
            | Huh? Windows itself might have had its heyday, but MS is
            | solidly at #2 for clouds, only behind AWS, with
            | enterprise Windows shops that will be hard pressed not to
            | use MS options if they go to the cloud (Google really has
            | continued to fumble their cloud position, with their
            | "killedbygoogle.com" reputation nagging on everyone's
            | mind).
            | 
            | The biggest real threat to MS's position is the Trump
            | administration pushing foreign customers away with stuff
            | like shutting down the ICC Microsoft accounts, but
            | that'll hurt AWS and Google just as much (the winners of
            | that will be Alibaba and other foreign providers that
            | can't compete in full enterprise stacks today).
        
             | ml-anon wrote:
              | Watch this week. Amazon cloud growth has been terrible
              | (Google and Microsoft remain >30%). Amazon has
              | basically no good offerings for AI, which is where GCP
              | is beginning to eat their lunch. Anthropic moving to
              | TPU for inference is a big, big signal.
        
               | eitally wrote:
               | 100% this. The AWS of today is going to be the Hetzner or
               | Digital Ocean of the future. They'll still have
               | hyperscale, but will not be seen as innovating on first
               | party products or a leader in the AI managed services
               | industry. And frankly, they are currently doing a shit
               | job of even this, because Oracle is in the same category
               | and OCI has been eating everyone's lunch (for the past
               | two years!).
        
               | stackskipton wrote:
                | Is OCI really eating everyone's lunch? Sure, it's
                | showing massive growth, but that's because Oracle has
                | been running around offering insane discounts.
                | 
                | We were cloud shopping, and they came by as well with
                | a REALLY good discount. Luckily our CTO was massively
                | afraid of what would happen after that discount ran
                | out.
        
           | salynchnew wrote:
           | You're making the roundabout argument that MSFT/OpenAI will
           | one day go the way of Yahoo/Alibaba, which is wild.
        
         | makestuff wrote:
            | if you want to invest in OpenAI I think you can just buy
            | it on NASDAQ private markets.
        
           | yreg wrote:
           | Not everyone can "just buy it". Investing in MSFT is
           | accessible to many more people than private markets.
        
         | yousif_123123 wrote:
          | I think the stablecoin company Tether is also valued at
          | $500 billion.
        
           | saaaaaam wrote:
           | Is the company valued at $500 billion or is the sum of the
           | digital assets they've collateralised worth $500 billion?
           | 
           | Because if you buy the tokens you presumably do not own the
           | company. And if you buy the company you hopefully don't own
           | the tokens - nor the assets that back the tokens.
        
             | yousif_123123 wrote:
              | I think I read that it's valued at $500 billion based
              | on their latest fundraise. I don't know the total
              | holdings they have.
             | 
             | I have no interest in crypto, just wanted to mention this
             | which was surprising to me when I heard it.
        
               | saaaaaam wrote:
               | Wow, yes they claim to be raising a round valuing them at
               | $500bn. Which is crazy given the market cap of their
               | token is only apparently $173bn.
               | 
               | https://www.reuters.com/business/crypto-firm-tether-
               | eyes-500...
               | 
               | I struggle to see how those numbers stack up.
        
               | saaaaaam wrote:
               | For comparison Blackstone is worth ~$180bn with ~$1
               | trillion AUM.
               | 
                | So somehow this crypto firm and its investors think
                | it can get a better return than Blackstone with a
                | fraction of the assets. Now, sure, developing market
                | and all that. But really? If it scaled to
                | Blackstone's asset level of $1 trillion, then you'd
                | expect the platform valuation to scale, perhaps not
                | in lockstep but at least somewhat. So with $1
                | trillion in collateralised crypto, does that make
                | Tether worth $1.5 trillion? I'd love someone to
                | explain that.
        
               | deepdarkforest wrote:
                | If my mom gives me 1000 dollars for 1% of my lemonade
                | stand, that doesn't mean my stand is worth 100k.
                | Tether is in talks with investors to maybe raise $20B
                | at a $500B valuation. Keep in mind also that crypto
                | investors overvalue companies to create hype and then
                | lobby for better regulations etc. It doesn't mean at
                | all that someone would be interested in buying 100%
                | of Tether for $500B. Now, if they were public, it
                | would be a different story, like Tesla etc.
        
               | saaaaaam wrote:
               | Well indeed. That was pretty much my point.
        
               | yousif_123123 wrote:
                | Tether is projected to generate $15 billion in
                | profits. So $500 billion is like a 33 times earnings
                | multiple.
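                | 
                | Worked out from those figures:
                | 
                |     $\$500\,\text{B} / \$15\,\text{B} \approx 33.3$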
               | 
               | Now the main thing is how sustainable these earnings are
               | and if they will continue to be a dominant player in
               | stable coins and if there will continue to be demand for
               | them.
               | 
               | Another difference to Blackstone is Tether takes 100% of
               | the returns on the treasuries backing the coins, whereas
               | Blackstone gets a small fee from AUM, and their goal is
               | to make money for their investor clients.
               | 
                | If crypto really wanted to be decentralized, they'd
                | find a way to have stablecoins backed by assets whose
                | returns still went to the stablecoin holder, not some
                | big centralized company.
        
         | outside1234 wrote:
          | Or for the inevitable crash when we discover that OpenAI is
          | a round-tripping, Enron-style disaster.
        
         | sekai wrote:
         | > This also confirms the $500 billion valuation making OpenAI
         | the most valuable private startup in the world.
         | 
         | SpaceX?
        
           | whamlastxmas wrote:
            | Around $350 to $400 billion from a couple of sources I
            | saw, but it's a lot of speculation
        
           | throwup238 wrote:
           | If SpaceX is still a "startup", the word has lost all
           | meaning.
        
             | tguedes wrote:
              | It already has. Any tech company that is pre-IPO and
              | still raising funding rounds is a "startup". I'm
              | surprised no one has come up with a separate term for
              | the stage these kinds of companies are at.
        
               | airspresso wrote:
               | That's just a privately owned tech company then. Lots of
               | companies never IPO.
        
         | notyourwork wrote:
          | It's odd to me that you excluded AWS from the clouds.
        
           | awestroke wrote:
           | And included oracle first. OP is probably Larry
        
         | paxys wrote:
         | Microsoft is worth $4T, so if you buy one MSFT share only ~3%
         | of that is invested in OpenAI. Even if OpenAI outperforms
         | everyone's expectations (which at this point are already sky
         | high), a tiny swing in some other Microsoft division will
         | completely erase your gains.
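          | 
          | (Using the figures above: $\$135\,\text{B} / \$4\,\text{T}
          | \approx 3.4\%$.)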
        
           | ForHackernews wrote:
           | Yeah, but on the plus side when the AI bubble bursts at least
           | you've still got Excel.
        
             | ethbr1 wrote:
             | Claude for Excel!
             | https://news.ycombinator.com/item?id=45722639
        
             | HPMOR wrote:
             | No reason to believe it is a bubble
        
               | margalabargala wrote:
               | Lots of reasons to believe it's a bubble.
               | 
               | No hard proof it's a bubble. Bubbles can only be proved
               | to have existed after they pop.
        
           | ivape wrote:
           | Markets trade on a magical growth valuation. Nothing you said
           | matters at all at the moment and won't for about 5 or so
           | years. People are going to eat shit over and over when they
           | keep talking like this, just look at what NVDA did today.
           | It's not going to stop.
        
             | pinkmuffinere wrote:
             | 5 years is a pretty long time to predict with confidence.
             | Definitely agree that "the markets can remain irrational
             | longer than you can remain solvent", but 5 years would be
             | unusually long if you believe we're already in a bubble.
        
         | ethbr1 wrote:
         | > _" Microsoft's IP rights now exclude OpenAI's consumer
         | hardware."_
         | 
          | Relevant and under-appreciated.
          | 
          |     1. OpenAI considers its consumer hardware IP serious
          |        enough to include in the agreement (and this post)
          |     2. OpenAI thinks it's enough of a value differentiator
          |        they'd rather go it alone than through MS as a
          |        hardware partner
         | 
         | OpenAI wearable eyeglasses incoming... (audio+cellular first,
         | AR/camera second?)
        
       | interactivecode wrote:
        | So Microsoft went from 49% to now 27%? OpenAI, with their
        | non-profit and their for-profit and all these investments and
        | deals they are doing... it feels like they are spending more
        | time on financial trickery than building AI products.
        
         | meesles wrote:
         | There's a public trail of reddit comments where Altman all but
         | owns up to finagling board seats and ownership rights for
         | Reddit many years ago. This is how he operates.
        
         | akmittal wrote:
          | The money needed to run an AI company is huge. If they
          | don't do financial trickery, there is a huge risk of going
          | out of business.
          | 
          | AI is not making enough money to cover its costs, and it
          | will take a decade or so to do so.
        
           | baconbrand wrote:
           | I am highly skeptical that we will see AI pay for itself by
           | the end of the decade.
           | 
           | More likely Americans' tax dollars will be shoveled into the
           | hole.
        
           | afavour wrote:
           | Then it isn't a viable business. Find another path that
           | doesn't risk crashing the economy.
        
       | evtothedev wrote:
       | > Microsoft's IP rights now exclude OpenAI's consumer hardware.
       | 
       | While not unexpected, this is exciting and intriguing.
       | 
       | And of course, looking forward to Microsoft's Zune AI.
        
         | dymk wrote:
         | Maybe we'll get a wearable pin, look how well those have done
         | so far
        
       | GaryNumanVevo wrote:
        | Kind of interesting given they're essentially building their
        | own foundation models via microsoft.ai, run by a Google
        | DeepMind co-founder.
        
       | gostsamo wrote:
       | My take:
       | 
        | OpenAI self-evaluated at $500B;
        | 
        | A commitment to Microsoft for $250B of services, a.k.a. 50%
        | of that value is still somewhat locked up;
       | 
       | AGI still undefined;
       | 
       | Some more kicking of the can toward the future when it comes to
       | payments;
       | 
       | Both have more freedom to do research and offer services;
       | 
       | Overall, lots of magic money talk with pinkie promise in the
       | future and somewhat higher possibility of new products and open
       | weights models.
        
       | skepticATX wrote:
       | How is this not a terrible deal for Microsoft? I'm not confident
       | that an "expert panel" will prevent OpenAI from prematurely
       | declaring AGI.
        
       | Oras wrote:
       | > OpenAI has contracted to purchase an incremental $250B of Azure
       | services, and Microsoft will no longer have a right of first
       | refusal to be OpenAI's compute provider.
       | 
        | So OpenAI could be on Google (GCP) and AWS, and possibly
        | Claude and Gemini on Azure? That could be a good thing.
        | 
        | I use OpenRouter in multiple applications; the practicality
        | of having one provider to host all possible LLMs is such a
        | win for trying and iterating without having to switch clouds
        | (big for enterprises that are stuck with one cloud provider)
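        | 
        | A minimal sketch of that pattern, assuming OpenRouter's
        | OpenAI-compatible endpoint (the model slugs here are just
        | examples):
        | 
        |     from openai import OpenAI
        | 
        |     # One client, many providers' models behind a single API.
        |     client = OpenAI(
        |         base_url="https://openrouter.ai/api/v1",
        |         api_key="sk-or-...",  # your OpenRouter key
        |     )
        | 
        |     # Swap model strings to compare providers without
        |     # switching clouds or rewriting application code.
        |     models = ["openai/gpt-4o", "anthropic/claude-3.5-sonnet"]
        |     for model in models:
        |         resp = client.chat.completions.create(
        |             model=model,
        |             messages=[{"role": "user", "content": "Say hi."}],
        |         )
        |         print(model, "->", resp.choices[0].message.content)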
        
         | Shank wrote:
         | The API offerings are still only on Azure. It just means OpenAI
         | doesn't have to buy compute exclusively from Microsoft.
        
           | easton wrote:
            | They also say "Non-API products may be served on any
            | cloud provider." I wonder what products they are thinking
            | about. If I sell you an EC2 image with GPT-5 on it, is
            | that an API?
            | 
            | My assumption is that they mean PaaS model hosting (so
            | Azure's AI service, Bedrock, Vertex), but I don't know
            | what other product OpenAI is thinking about selling via a
            | cloud provider unless it's training tooling or something.
        
             | nostrebored wrote:
             | Open source models are the current product line that fits
             | the bill.
        
         | lysecret wrote:
          | Yeah, I use GCP Vertex for this; it already has Claude. I
          | hope for OpenAI's models eventually too.
        
       | creddit wrote:
        | Can anyone point me to whether or not the OAI non-profit
        | holds voting control after the recapitalization?
       | 
       | I've read this but it's extremely vague:
       | https://openai.com/index/built-to-benefit-everyone/
       | 
       | As is this: https://openai.com/our-structure/
       | 
        | Especially if the non-profit foundation doesn't retain voting
        | control, this remains the greatest theft of all time. I still
        | can't quite understand how it should at all be possible.
       | 
       | Looking at the changes for MSFT, I also mostly don't understand
       | why they did it!
        
         | creddit wrote:
          | Nevermind, looks like the non-profit gave up voting control
          | lol:
         | 
         | "All equity holders in OpenAI Group now own the same type of
         | traditional stock that participates proportionally and grows in
         | value with OpenAI Group's success. The OpenAI Foundation board
         | of directors were advised by independent financial advisors,
         | and the terms of the recapitalization were unanimously approved
         | by the board."
         | 
         | Truly, truly the greatest theft from mankind in history and
         | they dress it up as if the non-profit is doing anything other
         | than giving away the most valuable startup in history for a
         | paltry sum.
         | 
         | Credit where credit is due, Sam Altman is the greatest
         | dealmaker of all time.
         | 
         | Will be interesting if we get to hear what his new equity stake
         | is!
        
       | _jab wrote:
        | Many are questioning why Microsoft would agree to this, but
        | to me the concessions they made seem minor.
       | 
       | > OpenAI remains Microsoft's frontier model partner and Microsoft
       | continues to have exclusive IP rights and Azure API exclusivity
       | 
       | This should be the headline - Microsoft maintains its financial
       | and intellectual stranglehold on OpenAI.
       | 
        | And meanwhile, though vaguer, a few of the bullet points are
        | potentially very favorable to Microsoft:
       | 
       | > Microsoft can now independently pursue AGI alone or in
       | partnership with third parties.
       | 
       | > The revenue share agreement remains until the expert panel
       | verifies AGI, though payments will be made over a longer period
       | of time.
       | 
       | Hard to say what a "longer period of time" means, but I presume
       | it is substantial enough to make this a major concession from
       | OpenAI.
        
         | creddit wrote:
         | > Hard to say what a "longer period of time" means, but I
         | presume it is substantial enough to make this a major
         | concession from OpenAI.
         | 
          | Depends on how this is meant to be parsed, but it may be a
          | concession from MSFT. If the total amount of revenue to be
          | shared is the same, then MSFT is worse off here. If it is
          | meant to parse as "a fixed proportion of revenue will be
          | shared over X period and X period has increased to Y", then
          | it is an OAI concession.
          | 
          | I don't know the details, but I would be surprised if there
          | was a revenue agreement that was time-based.
        
         | hdkrgr wrote:
         | As a corporate customer, the main point for me in this is
         | Microsoft now retaining (non-exclusive) rights to models and
         | products after OpenAI decides to declare AGI.
         | 
         | The question "Can we build our stuff on top of Azure OpenAI?
         | What if SamA pulls a marketing stunt tomorrow, declares AGI and
         | cuts Microsoft off?" just became a lot easier. (At least until
         | 2032.)
        
         | dakial1 wrote:
          | Maybe OpenAI is burning money heavily and MS is the
          | only/best partner to get it from?
          | 
          | Also, for MS it is worth it to keep investing little by
          | little, getting concessions from OpenAI and becoming its de
          | facto owner.
        
       | ossner wrote:
       | > Once AGI is declared by OpenAI, that declaration will now be
       | verified by an independent expert panel.
       | 
        | What were they really expecting as an alternative? Anyone can
        | "declare AGI", especially since it's an inherently
        | ill-defined (and arguably undefinable) concept. It's strange
        | that this is the first bullet point, as if it were the fruit
        | of intensive deliberation.
       | 
       | I don't fully understand what is going on in this market as a
       | whole, I really doubt anyone does, but I do believe we will look
       | back on this period and wonder what the hell we were thinking
       | believing and lapping up everything these corporations were
       | putting out.
        
         | AvAn12 wrote:
         | like Elon declaring FSD?
        
         | baobun wrote:
          | In context, AGI is already clearly defined as $100 billion
          | in profits. So I guess the expert panel should at least
          | have a financial auditor.
        
       | exasperaited wrote:
        | AGI is a MacGuffin.
        
       | onion2k wrote:
       | _OpenAI is now able to release open weight models that meet
       | requisite capability criteria._
       | 
       | GPT-OSS:20b is a great model for local use. OpenAI continuing to
       | release open weights is good news.
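        | 
        | A minimal local sketch, assuming a runner like Ollama serving
        | gpt-oss:20b behind its OpenAI-compatible endpoint (the port
        | and model tag are the usual defaults; check your setup):
        | 
        |     from openai import OpenAI
        | 
        |     # Point the standard OpenAI client at the local server.
        |     # Local runners ignore the API key, but the client
        |     # requires one.
        |     client = OpenAI(base_url="http://localhost:11434/v1",
        |                     api_key="unused")
        | 
        |     resp = client.chat.completions.create(
        |         model="gpt-oss:20b",
        |         messages=[{"role": "user",
        |                    "content": "Summarize the MS-OpenAI deal "
        |                               "in one line."}],
        |     )
        |     print(resp.choices[0].message.content)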
        
       | r0x0r007 wrote:
       | I think they will reach AGI pretty soon, because only AGI can
       | find a way to make them profitable.
        
         | sambaumann wrote:
         | Plenty of businesses fail to find a way to make a profit
        
           | xwowsersx wrote:
           | OP said necessary, not sufficient
        
             | AnimalMuppet wrote:
             | OP said "will". That doesn't sound like "necessary, not
             | sufficient" to me.
        
               | xwowsersx wrote:
                | You're missing context and/or didn't read OP's
                | comment. He said "will" with regards to reaching AGI.
                | He said "only AGI _can_ find" with regards to profit.
                | It was the latter that this thread was addressing.
        
               | AnimalMuppet wrote:
                | You're missing context and/or didn't read OP's
                | comment. He said "because". It _will_ happen
                | _because_ that's the only way to reach profit. That's
                | _why_ it _will_ happen.
        
         | noir_lord wrote:
          | I think they hope they will, because if they don't, at some
          | point people are going to expect a return and get tired of
          | throwing good money after bad.
          | 
          | The longer they go without that, and the more sentiment
          | shifts away from what they convinced people LLMs were
          | versus what they actually are, the riskier it becomes. Are
          | they a useful tool? Yes. Are they not what they've been
          | hyping for the last four years? Also yes.
          | 
          | They either crack it or they become an also-ran.
          | 
          | At which point Microsoft investors are going to be staring
          | really hard at the CEO.
        
       | AndrewKemendo wrote:
       | This org structure, if you can call it that, has to be one of the
       | least transparent, most convoluted organizations ever
       | 
       | Just look at how they write it and they are somehow sneaking a
       | NEW organizational level in there
       | 
       | >First, Microsoft supports the OpenAI board moving forward with
       | formation of a public benefit corporation (PBC) and
       | recapitalization.
       | 
       | Does anyone have any clue how OpenAI is actually governed and who
       | works for who and all that?
       | 
       | It's kafkaesque at best and intentionally confusing, so that you
       | can't actually regulate it, at worst
        
       | iandanforth wrote:
        | So now OpenAI is committed to spending $550 billion? ($300B
        | to Oracle and $250B to MS.) If it currently has ~$10B in
        | revenue per year, how on earth can it meet these commitments?
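        | 
        | (On those figures, the commitments alone come to
        | $\$550\,\text{B} / \$10\,\text{B} = 55$ years of current
        | annual revenue, before any other costs.)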
        
         | Lionga wrote:
          | Has OpenAI not also committed to spending a few hundred
          | billion at NVIDIA? I mean, what's another few hundred
          | billion when you are making so much profit.
          | 
          | Wait, they are not making any profit but already losing
          | billions even before any of these "investments"?
        
       | philipwhiuk wrote:
       | > Once AGI is declared by OpenAI, that declaration will now be
       | verified by an independent expert panel.
       | 
       | > Microsoft's IP rights for both models and products are extended
       | through 2032 and now includes models post-AGI, with appropriate
       | safety guardrails.
       | 
       | Does anyone really think we are close to AGI? I mean honestly?
        
         | outside1234 wrote:
         | No, but OpenAI is going to look for any opportunity to do it so
         | they can end the contract with Microsoft.
        
           | whynotminot wrote:
           | Why would you say that when this very contract appears to
           | extend the arrangement almost indefinitely
        
         | shaky-carrousel wrote:
         | AGI as a PR stunt for OpenAI is becoming a meme.
        
           | codyb wrote:
           | We're just a year away from full self driving!
        
             | ReptileMan wrote:
             | You will be self driven to the fusion plant and you will
             | like it. The AGI will meet you at the front door.
        
             | ta9000 wrote:
             | I wouldn't be surprised if AGI arrives before Tesla has a
             | full self-driving car though.
        
             | gehwartzen wrote:
             | And then just another year until self selfing and we will
             | have come full circle
        
             | pixl97 wrote:
              | Full self driving has always required AGI, so no, we
              | won't get it without AGI.
        
           | embedding-shape wrote:
            | Wasn't it always the explicit goal of OpenAI to bring
            | about AGI? So less of a meme, and more "this is what that
            | company exists for".
            | 
            | Bit like blaming an airplane-building company for
            | building airplanes; it's literally what they were created
            | for, no matter how stupid their idea of the "ideal
            | aircraft" is.
        
             | alterom wrote:
              | _> Bit like blaming an airplane-building company for_
              | NOT _building airplanes_
              | 
              | FTFY. OpenAI has not built AGI (not _yet_, if you want
              | to be optimistic).
             | 
             | If you really need an analogy, it's more in the vein of
             | giving SpaceX crap for yapping about building a Dyson
             | Sphere _Real Soon Now(tm)_.
        
               | embedding-shape wrote:
               | Of course not, then we'd never hear the end of it :)
               | 
                | I was just pointing out that the company always had
                | AGI as a goal, even when they were doing the small
                | Gym prototypes and all of that stuff that made the
                | (tech) news before GPT was a thing.
        
         | bogzz wrote:
         | Of course not.
        
         | no_wizard wrote:
          | They'll devalue the term into something that makes it so.
          | By the common conception of it, however: no, I don't
          | believe we are anywhere close to it.
          | 
          | It's no different than how they moved the goalposts on the
          | definition of AI at the start of this boom cycle.
        
           | ramses0 wrote:
           | Jesus, we've gone from Eliza and Bayes Spam Filters to being
           | able to hold an "intelligent" conversation with a bot that
           | can write code like: "make me a sandwich" => "ok, making
           | sandwich.py, adding test, keeping track of a todo list,
           | validating tests, etc..."
           | 
            | We might not _quite_ be at the era of "I'm sorry Dave,
            | I'm afraid I can't do that...", but on the spectrum, and
            | from the perspective of a lay-person, we're waaaaay
            | closer than we've ever been?
           | 
           | I'd counsel you to self-check what goalposts you might have
           | moved in the past few years...
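            | 
            | (A minimal toy sketch of the loop I mean -- nothing here
            | is a real agent API; the llm() stub is hypothetical and
            | just pretends every step succeeds:)
            | 
            |     def llm(prompt: str) -> str:
            |         # stand-in for a real model call; always reports success
            |         return f"PASS: {prompt}"
            | 
            |     # the agent keeps a todo list, does a step, validates, loops
            |     todo = ["write sandwich.py", "add a test", "run the tests"]
            |     while todo:
            |         step = todo.pop(0)
            |         report = llm(f"Do this step, reply PASS or FAIL: {step}")
            |         if "FAIL" in report:
            |             todo.insert(0, f"fix: {step}")  # retry failures first
            |         print(report)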
        
             | IlikeKitties wrote:
             | I think "we" have accidentally cracked language from a
             | computational perspective. The embedding of knowledge is
             | incidental and we're far away from anything that "Generally
             | Intelligent", let alone Advanced in that. LLMs do tend to
             | make documented knowledge very searchable which is nice.
             | But if you use these models everyday to do work of some
             | kind that becomes pretty obvious that they aren't nearly as
             | intelligent as they seem.
        
               | OJFord wrote:
                | Completely agree
                | (https://news.ycombinator.com/item?id=45627451) -
                | LLMs are like the human-understood _output_ of a
                | hypothetical AGI. 'We' haven't cracked the knowledge
                | & reasoning 'general intelligence' piece yet, imo -
                | the bit that would hypothetically come before the
                | LLM, feeding information to it to convey to the
                | human. I think that's going to turn out to be a
                | different piece of the puzzle.
        
               | forgotoldacc wrote:
               | They're about as smart as a person who's kind of decent
               | at every field. If you're a pro, it's pretty clear when
               | it's BSing. But if you're not, the answers are often
               | close enough.
               | 
               | And just like humans, they can be very confidently wrong.
               | When any person tells us something, we assume there's
               | some degree of imperfection in their statements. If a
               | nurse at a hospital tells you the doctor's office is 3
               | doors down on the right, most people will still look at
               | the first and second doors to make sure those are wrong,
               | then look at the nameplate on the third door to verify
               | that it's right. If the doctor's name is Smith but the
               | door says Stein, most people will pause and consider that
               | maybe the nurse made a mistake. We might also consider
               | that she's right, but the nameplate is wrong for whatever
               | reason. So we verify that info by asking someone else, or
               | going in and asking the doctor themselves.
               | 
               | As a programmer, I'll ask other devs for some guidance on
               | topics. Some people can be absolute geniuses but still
               | dispense completely wrong advice from time to time. But
               | oftentimes they'll lead me generally in the right way,
               | but I still need to use my own head to analyze whether
               | it's correct and implement the final solution myself.
               | 
               | The way AI dispenses its advice is quite human. The big
               | problem is it's harder to validate much of its info, and
               | that's because we're using it alone in a room and not
               | comparing it against anyone else's info.
        
               | IlikeKitties wrote:
               | > They're about as smart as a person who's kind of decent
               | at every field. If you're a pro, it's pretty clear when
               | it's BSing. But if you're not, the answers are often
               | close enough.
               | 
                | No, they are not smart at all. Not even a little.
                | They cannot reason about anything except that their
                | training data overwhelmingly agrees or disagrees with
                | their output, nor can they learn and adapt. They are
                | just text compression and rearrangement machines.
                | Brilliant and extremely useful tooling, but if you
                | use them enough it becomes painfully obvious.
        
               | chasd00 wrote:
                | Something about an LLM response has a major impact on
                | some people. Last weekend I was in Ft. Lauderdale, FL
                | with a friend who's pretty sharp (licensed architect,
                | decades-long successful career, etc.) and we went to
                | the horse track. I've never been to a horse race and
                | didn't understand the betting, so I took a snapshot
                | of the race program, gave it to ChatGPT, and asked it
                | to devise a low-risk set of bets using $100. It came
                | back with what you'd expect: a detailed, very
                | confident answer. My friend was completely taken with
                | it and insisted on following it to the letter. After
                | the race he had turned his $100 into $28 and was
                | dumbfounded. I told him "it can't tell the future,
                | what were you expecting?". Something about getting
                | the answer from a computer, or the level of detail,
                | had him convinced it was a sure thing. I don't
                | understand it, but LLMs have a profound effect on
                | some people.
                | 
                | edit: I'm very thankful my friend didn't end up
                | winning more than he bet. idk what he would have done
                | if his feelings towards the LLM were confirmed by
                | adding money to his pocket...
        
               | rhetocj23 wrote:
                | If anything, the main thing LLMs are showing is that
                | humans need to be pushed to up their game. And that
                | desire to be better, I think, will yield a greater
                | supply of high-quality labour than exists today. I've
                | personally witnessed so many 'so-so' people within
                | firms who don't bring anything new to the table and
                | focus on rent-seeking expenditures (optics), who
                | frankly deserve to be replaced by a machine.
                | 
                | E.g. I read all the time about gains from SWEs. But
                | nobody questions how good of a SWE they even are.
                | What proportion of SWEs can be deemed high quality?
        
               | dreamcompiler wrote:
               | Yes, exactly. LLMs are lossy compressors of human
               | language in much the same way JPEG is a lossy compressor
               | of images. The difference is that the bits that JPEG
               | throws away were manually designed by our understanding
               | of the human visual cortex, while LLMs figured out the
               | lossy bits automatically because we don't know enough
               | about the human language processing chain to design that
               | manually.
               | 
               | LLMs are useful but that doesn't make them intelligent.
        
             | 91bananas wrote:
              | I'd counsel you to work with LLMs daily and agree that
              | we're nowhere close to LLMs that work properly
              | _consistently_ outside of toy use cases where examples
              | can be scraped from the internet. If we can agree on
              | that, we can agree that General _Intelligence_ is not
              | the same thing as a, sometimes, seemingly random guess
              | at the next word...
        
             | ok_computer wrote:
              | I think this says more about how much of our tasks and
              | demonstrations of ability as developers revolve around
              | boilerplate and design patterns than it does about the
              | cognitive abilities of modern LLMs.
             | 
             | I say this fully aware that a kitted out tech company will
             | be using LLMs to write code more conformant to style and
             | higher volume with greater test coverage than I am able to
             | individually.
        
             | furyofantares wrote:
             | You have to keep moving the goalposts if you keep putting
             | them in the wrong place.
        
             | slashdave wrote:
             | So, where is my sandwich? I am hungry
        
           | htrp wrote:
           | FSD would like a word
        
             | mberning wrote:
             | They have certainly tried to move the goalposts on this.
        
               | qaq wrote:
               | "They"? Waymo has a pretty well working service
        
               | overfeed wrote:
               | FSD is the brand name for the service promised/offered by
               | Tesla Motors - Waymo has nothing to do with it, or the
               | moving of goal posts.
        
             | waffletower wrote:
             | As a full stack developer suffering from female sexual
             | dysfunction who owns a Tesla, I am really confused about
             | what you are trying to say.
        
               | CoastalCoder wrote:
               | Have you tried praying to the Flying Spaghetti Deity?
        
               | bravetraveler wrote:
               | After wrapping up with the Family Services Division, of
               | course
        
             | sgustard wrote:
              | SAE automation levels are the industry standard, not
              | FSD (which is a brand name), and FSD is clearly Level 2
              | (the driver is always responsible and must be engaged,
              | at least in consumer Teslas; I don't know about
              | robotaxis). The question is whether "AGI" is as well
              | defined as "Level 5" as an independent standard.
        
               | jgalt212 wrote:
                | The point being made is that FSD is deceptive
                | marketing, and it's unbelievable how long that
                | "marketing term" has been allowed to exist given how
                | inaccurately it represents what is actually delivered
                | to the customer.
        
               | Spooky23 wrote:
               | What's deceptive? What in the term "Full Self Driving"
               | makes you think that your car will drive itself fully?
               | It's fully capable of facilitating your driving of
               | yourself, clearly.
        
           | gehwartzen wrote:
           | This is exactly why they will have an "expert panel" to make
           | that determination. They wouldn't make something up
        
             | alterom wrote:
              | Yeah, _they_ wouldn't make something up, _the expert
              | panel_ would.
              | 
              | Because everyone knows that once you call a group of
              | people _an expert panel_, that automatically means they
              | can't be biased /s
        
             | some_furry wrote:
              | What exactly are the criteria for "expert" they're
              | planning to use, and whomst among us can actually meet
              | a realistic bar for expertise on the nature of
              | consciousness?
        
               | Fuzzwah wrote:
               | Follower count on X. /s
        
               | ctoth wrote:
                | Type error: why do you need an expert on
                | consciousness to weigh in on whether something is AGI
                | or not? I don't care what it feels like to be a
                | paperclip maximizer; I just care to not have my
                | paperclips maximized, tnx.
        
             | cmiles74 wrote:
             | I expect that the "expert panel" is to ensure that OpenAI
             | and Microsoft are in agreement on what "AGI" means in the
             | context of this agreement.
        
             | jimbokun wrote:
             | So the expert panel can make something up instead.
        
             | slashdave wrote:
             | Making things up is exactly what expert panels are good at
             | doing
        
           | nl wrote:
           | > they moved the goalpost on the definition of AI at the
           | start of this boom cycle
           | 
           | Who is this "they" you speak of?
           | 
           | It's true the definition has changed, but not in the
           | direction you seem to think.
           | 
           | Before this boom cycle the standard for "AI" was the Turing
           | test. There is no doubt we have comprehensively passed that
           | now.
        
             | alterom wrote:
             | Is there, really?
        
             | Vinnl wrote:
              | I don't think the Turing Test has been passed. The test
              | was set up such that the interrogator knew that one of
              | the two participants was a bot, and was trying to find
              | out which. As far as I know, it's still relatively easy
              | to tell you're talking to an LLM if you're actively
              | looking for it.
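              | 
              | For concreteness, a toy sketch of that setup (all names
              | here are made up; the judge is just a callback that
              | returns "A" or "B"):
              | 
              |     import random
              | 
              |     def human(q):   return "Depends. Why do you ask?"
              |     def machine(q): return "I understand. Here is an answer."
              | 
              |     def imitation_game(judge):
              |         # Turing's setup: the judge KNOWS exactly one of
              |         # A/B is a machine and must say which one.
              |         players = dict(zip("AB", random.sample([human, machine], 2)))
              |         transcript = {k: f("your question") for k, f in players.items()}
              |         guess = judge(transcript)  # "A" or "B"
              |         return players[guess] is machine  # True if machine caught
              | 
              |     print(imitation_game(lambda t: "A"))  # a judge guessing blindly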
        
               | yndoendo wrote:
                | I find there are two main ways to do this.
                | 
                | 1) Look for spelling, grammar, and incorrect word
                | usage, such as "where" vs. "were", or typing out
                | "where" where "our" should be used.
                | 
                | 2) Ask asinine questions that have no answers: _Why
                | does the sun ravel around my finger in low quality
                | gravity while dancing in the rain?_
                | 
                | ML likes to always come up with an answer no matter
                | what. A human will shorten the conversation. It is
                | also programmed to respond with _I understand_ and _I
                | hear what you are saying_, and to make heavy use of
                | your name if it has access to it. This fake
                | interpersonal communication is key.
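                | 
                | (If you wanted to mechanize that last tell, a crude
                | filter could look like the toy heuristic below; the
                | phrase list is just my guess:)
                | 
                |     CANNED = ("i understand", "i hear what you are saying")
                | 
                |     def smells_like_a_bot(reply: str) -> bool:
                |         lowered = reply.lower()
                |         # canned rapport phrases, or an answer for
                |         # everything with no clarifying question back
                |         return any(p in lowered for p in CANNED) or "?" not in reply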
        
               | r_lee wrote:
                | Overall I'd say the easiest tell is that the models
                | always just follow what you say and transform it into
                | a response. They won't have personal opinions or
                | experiences or anything, although they can fake it.
                | It's all just a median expected response to whatever
                | you say.
                | 
                | And the "agreeability" is not a hallucination, it's
                | simply the path of least resistance: the model can
                | just take information that you said and use that to
                | make a response, not actually "think" and consider if
                | what you said even made sense or if it's weird, etc.
                | 
                | They almost never say "what do you mean?" to try to
                | seek truth.
                | 
                | This is why I don't understand how some here can
                | claim with a straight face that AGI is already here.
                | I guess redefining AGI is how we'll reach it.
        
               | stevenpetryk wrote:
               | I agree with your points in general but also, when I
               | plugged in the parent comment's nonsense question, both
               | Claude 4.5 Sonnet and GPT-5 asked me what I meant, and
               | pointed out that it made no sense but might be some kind
               | of metaphor, poem, or dream.
        
               | czl wrote:
                | Conventional LLM chatbots behave the way you describe
                | because their goal during training is to impersonate,
                | as closely as possible, an intelligent assistant.
                | 
                | Do you think this training goal cannot be changed to
                | impersonate someone normal, such that you cannot
                | detect you are chatting with an LLM?
                | 
                | Before flight was understood, some thought "magic"
                | was involved. Do you think minds operate using
                | "magic"? Are minds not machines? Can their operation
                | not be duplicated?
        
               | matt_kantor wrote:
               | I'm not the person you asked, but I think:
               | 
               | 1. Minds are machines and can (in principle) have their
               | operation duplicated
               | 
               | 2. LLMs are not doing this
        
               | maqnius wrote:
               | > Do you think this goal during training cannot be
               | changed to impersonate someone normal such that you
               | cannot detect you are chatting with an LLM?
               | 
               | I don't think so, because LLMs hallucinate by design,
               | which will always produce oddities.
               | 
               | > Before flight was understood some thought "magic" was
               | involved. Do you think minds operate using "magic"? Are
               | minds not machines? Their operation can not be
               | duplicated?
               | 
                | It might involve something we don't grasp, but
                | despite that: just because something moves through
                | the air doesn't mean it's flying, or ever will be,
                | just like a thrown stone.
        
               | array_key_first wrote:
                | Maybe current LLMs could do that. But none are, so
                | the test hasn't been passed. Whether that's because
                | of economic or marketing reasons as opposed to
                | technical ones does not matter. You still have to
                | pass the test before we can definitively say you've
                | passed the test.
        
               | CamperBob2 wrote:
                | _As far as I know, it's still relatively easy to find
                | out you're talking to an LLM if you're actively
                | looking for it._
               | 
               | People are being fooled in online forums all the time.
               | That includes people who are naturally suspicious of
               | online bullshittery. I'm sure I have been.
               | 
               | Stick a fork in the Turing test, it's done. The amount of
               | goalpost-moving and hand-waving that's necessary to argue
               | otherwise simply isn't worthwhile. The cliched responses
               | that people are mentioning are artifacts of intentional
               | alignment, not limitations of the technology.
        
               | 8note wrote:
                | People are being fooled, but not being given the
                | problem: "one of these users is a bot, which one is
                | which?"
                | 
                | What they get is a problem merely similar to the
                | Turing test: "0 or more of these users is a bot, have
                | fun in a discussion forum."
                | 
                | But there's no test or evaluation to see if any user
                | successfully identified the bot, and there's no field
                | to collect which users are actually bots, or
                | partially using bots, or not at all, nor a field to
                | capture each user's opinion about whether the others
                | are bots.
        
               | CamperBob2 wrote:
               | Then there's the fact that the Turing test has always
               | said as much about the gullibility of the human evaluator
               | as it has about the machine. ELIZA was good enough to
               | fool normies, and current LLMs are good enough to fool
               | experts. It's just that their alignment keeps them from
               | trying very hard.
        
               | Vinnl wrote:
               | I feel like you're skipping over the "if you're actively
               | looking for it" bit. You can call it goalpost-moving, or
               | you can check the original paper by Turing and see that
               | this is exactly how he defined it in the first place.
        
               | godelski wrote:
                | The Turing Test was a pretty early metric and more of
                | a thought experiment.
                | 
                | Let's be real, guys: it was created by Turing, the
                | same guy who gave us the model for the general-
                | purpose computer. The man was without a doubt a
                | genius, but it also isn't that reasonable to think
                | he'd come up with a good definition or metric for a
                | technology that was like 70 years away. Brilliant
                | start, but it is also like looking at Newton's Laws
                | and evaluating quantum mechanics based off of that.
                | Doesn't make Newton dumb, just means we've made
                | progress. I hope we can all agree we've made
                | progress...
                | 
                | And arguably the Turing Test was passed by Eliza.
                | _Arguably_. But hey, that's why we refine and _make
                | progress_. We find the edge of our metrics and ideas
                | and then iterate. Change isn't bad; it is a necessary
                | thing. What matters is the _direction_ of change.
                | Like velocity vs. speed.
        
             | wholinator2 wrote:
              | The Turing test point is actually very interesting,
              | because it's testing whether you can tell you're
              | talking to a computer or a person. When ChatGPT-3 came
              | out we all declared that test utterly destroyed. But
              | now that we've had time to become accustomed to and
              | learn the standard syntax, phraseology, and vocabulary
              | of the GPTs, I've started to be able to detect the AIs
              | again. If humanity becomes accustomed enough to the way
              | AI talks to be able to distinguish it, do we re-enter
              | the failed Turing test era? Can the Turing test only be
              | passed in finite intervals, after which we learn to
              | distinguish the machine again? I think it can
              | eventually get there, and that the people who can
              | detect the difference will become a smaller and smaller
              | subset. But who's to say what the zeitgeist on AI will
              | be in a decade.
        
               | gitremote wrote:
                | > When ChatGPT-3 came out we all declared that test
                | utterly destroyed.
               | 
               | No, I did not. I tested it with questions that could not
               | be answered by the Internet (spatial, logical, cultural,
               | impossible coding tasks) and it failed in non-human-like
               | ways, but also surprised me by answering some decently.
        
             | oldestofsports wrote:
              | Oh, there is much doubt about whether LLMs surpass the
              | Turing test. They do so only in certain variations.
        
           | gokuldas011011 wrote:
           | Definitely. When I started doing Machine Learning in 2018, AI
           | wasn't a next word predictor.
        
             | IanCal wrote:
             | When I was doing it in 2005 it definitely included that,
             | and other far more basic things.
        
               | burnte wrote:
               | That makes sense, though, that in 13 years we went from
               | basic text prediction to something more involved.
        
               | computably wrote:
               | A subset of the field working on some particular
               | applications is pretty different from redefining the term
               | for marketing purposes.
        
           | ksynwa wrote:
           | Wasn't there already a report that stated Microsoft and
           | OpenAI understand AGI as something like 100 billion dollars
           | in revenue for the purpose of their agreements? Even that
           | seems like a pipe dream at the moment.
        
           | dr_dshiv wrote:
           | "Moving the goalposts" in AI usually means the opposite of
           | devaluing the term.
           | 
           | Peter Norvig (former research director at Google and author
           | of the most popular textbook on AI) offers a mainstream
           | perspective that AGI is already here:
           | https://www.noemamag.com/artificial-general-intelligence-
           | is-...
           | 
           | If you described all the current capabilities of AI to 100
           | experts 10 years ago, they'd likely agree that the
           | capabilities constitute AGI.
           | 
           | Yet, over time, the public will expect AGI to be capable of
           | much, much more.
        
             | r_lee wrote:
              | I don't see why anyone would consider the state of AI
              | today to be AGI. It's basically a glorified generator
              | stuck to a query engine.
              | 
              | Today's models are not able to think independently, nor
              | are they conscious, nor able to mutate themselves to
              | gain new information on the fly or form memories,
              | other than half-baked solutions like putting stuff in
              | the context window, which just makes the model generate
              | stuff related to it, imitating a story basically.
              | 
              | They're powerful when paired with a human operator,
              | i.e. they "do" as told, but that is not "AGI" in my
              | book.
        
               | Workaccount2 wrote:
                | For a long time the Turing test was the bar for AGI.
                | 
                | Then it blew past that, and now what I think is
                | honestly happening is that we don't really have the
                | grip on "what is intelligence" that we thought we
                | had. Our sample size for intelligence is essentially
                | 1, so it might take a while to get a grip again.
        
               | lostmsu wrote:
                | The current models don't really pass the Turing test.
                | They pass some weird variations on it.
        
               | sorokod wrote:
                | The commercial models are not designed to win the
                | imitation game (that is what Alan Turing named it).
                | In fact they are very likely to lose every time.
        
               | dfsegoat wrote:
               | > nor are they...able to mutate themselves to gain new
               | information on the fly
               | 
               | See "Self-Adapting Language Models" from a group out of
               | MIT recently which really gets at exactly that.
               | 
               | https://jyopari.github.io/posts/seal
        
               | dr_dshiv wrote:
               | Check out the article. He's not crazy. It comes down to
               | clear definitions. We can talk about AGI for ages, but
               | without a clear meaning, it's just opinion.
        
             | chemotaxis wrote:
             | > If you described all the current capabilities of AI to
             | 100 experts 10 years ago, they'd likely agree that the
             | capabilities constitute AGI.
             | 
             | I think that we're moving the goalposts, but we're moving
             | them for a good reason: we're getting better at
             | understanding the strengths and the weaknesses of the
             | technology, and they're nothing like what we'd have guessed
             | a decade ago.
             | 
             | All of our AI fiction envisioned inventing intelligence
             | from first principles and ending up with systems that are
             | infallible, infinitely resourceful, and capable of self-
             | improvement - but fundamentally inhuman in how they think.
             | Not subject to the same emotions and drives, struggling to
             | see things our way.
             | 
             | Instead, we ended up with tools that basically mimic human
             | reasoning, biases, and feelings with near-perfect fidelity.
             | And they have read and approximately memorized every piece
             | of knowledge we've ever created, but have no clear
             | "knowledge takeoff path" past that point. So we have
             | basement-dwelling turbo-nerds instead of Terminators.
             | 
             | This makes AGI a somewhat meaningless term. AGI in the
             | sense that it can best most humans on knowledge tests? We
             | already have that. AGI in the sense that you can let it
             | loose and have it come up with meaningful things to do in
             | its "life"? That you can give it arms and legs and watch it
             | thrive? That's probably not coming any time soon.
        
             | foobiekr wrote:
             | "If you described"
             | 
              | Yes, and if they used it for a while, they'd realize it
              | is neither general nor intelligent. On paper it sounds
              | great though.
        
             | jimbokun wrote:
              | That's quite a persuasive argument.
              | 
              | One thing they acknowledge but gloss over is the
              | autonomy of current systems. When given more open-
              | ended, long-term tasks, LLMs seem to get stuck at some
              | point, getting more and more confused and ceasing to
              | make progress.
             | 
             | This last problem may be solved soon, or maybe there's
             | something more fundamental missing that will take decades
             | to solve. Who knows?
             | 
             | But it does seem like the main barrier to declaring current
             | models "general" intelligence.
        
           | nine_k wrote:
           | Consider: "Artificial two-star General intelligence".
           | 
           | I mean, once they "reach AGI", they will need a scale to
           | measure advances within it.
        
             | jimbokun wrote:
             | Well humans at that point probably won't be able to
             | adequately evaluate intelligence at that level so the AIs
             | will have to evaluate each other.
        
           | tempodox wrote:
           | > They'll devalue the term
           | 
           | Exactly. As soon as the money runs out, "AGI" will be
           | whatever they've got by then.
        
           | bartread wrote:
           | I agree: it is more than faintly infuriating that when people
           | say AI what the vast majority mean is LLMs.
           | 
           | But, at the same time, we have clearly passed a significant
           | inflection point in the usefulness of this class of AI, and
           | have progressed substantially beyond that inflection point as
           | well.
           | 
            | So I don't really buy into the idea that OpenAI have gone
            | out of their way to foist a watered-down view of AI upon
            | the masses. I'm not completely absolving them, but I'd
            | probably be more inclined to point the finger at shabby
            | and imprecise journalism from both tech and non-tech
            | outlets, along with a ton of influencers and grifters
            | jumping on the bandwagon. And let's be real: everyone's
            | lapped it up because they've wanted to -- because this is
            | the first time any of them have encountered actually
            | useful AI of any class that they can directly interact
            | with. It seems powerful, mysterious, perhaps even
            | magical, and maybe more than a little bit scary.
           | 
           | As a CTO how do you think it would have gone if I'd spent my
           | time correcting peers, team members, consultants,
           | salespeople, and the rest to the effect that, no, this isn't
           | AI, it's one type of AI, it's an LLM, when ChatGPT became
           | widely available? When a lot of these people, with no help or
           | guidance from me, were already using it to do useful
           | transformations and analyses on text?
           | 
            | It would have led to a huge number of unproductive and
            | timewasting conversations, and I would have seemed like a
            | stick in the mud.
           | 
           | Sometimes you just have to ride the wave, because the only
           | other choice is to be swamped by it and drown.
           | 
            | Regardless of what limitations "AGI" has, it'll be given
            | that moniker when a lot of people - many of them
            | laypeople - feel like it's good enough. Whether or not
            | that happens before the current LLM bubble bursts...
            | tough to say.
        
         | Insanity wrote:
         | That line essentially means 'indefinite support'. This paper
         | was published some days ago that aims to define AGI:
         | https://www.arxiv.org/abs/2510.18212.
         | 
         | But crucially, there is no agreed-upon definition of AGI. And I
         | don't think we're close to anything that resembles human
         | intelligence. I firmly believe that stochastic parrots will not
         | get us to AGI and that we need a different methodology. I'm
         | sure humanity will eventually create AGI, and perhaps even in
         | my lifetime (in the next few decades). But I wouldn't put my
         | money on that bet.
        
         | onlyrealcuzzo wrote:
         | Love that we have reached AGI, but OpenAI's LLM can't even
         | drive a car...
        
           | whynotminot wrote:
           | There are a lot of humans that can't drive a car (well).
           | 
            | Part of the problem with "AGI" is everyone has their own,
            | often totally arbitrary, yardsticks.
        
             | quirkot wrote:
             | The "G" part of AGI implies it should be able to hit all
             | the arbitrary yard sticks
        
               | whynotminot wrote:
                | That is stupid. It would be possible to be infinitely
                | arbitrary, to the point of "AGI" never being
                | reachable by some yardsticks while the system still
                | performs most viable labor.
        
               | alterom wrote:
                | _> It would be possible to be infinitely arbitrary,
                | to the point of "AGI" never being reachable by some
                | yardsticks while still performing most viable labor._
                | 
                | "Most viable labor" involves getting things from one
                | place to another, and that's not even the hard part
                | of it.
                | 
                | In any case, any sane definition of _general_ AI
                | would entail things that people can _generally_ do.
                | 
                | Like driving.
                | 
                | _> That is stupid_
                | 
                | That's just, like, your opinion, man.
        
               | whynotminot wrote:
               | I had a friend who had his Tesla drive him from his
               | driveway in another city 3+ hrs away to my driveway with
               | no intervention.
               | 
               | I feel like everyone's opinion on how self-driving is
               | going is still rooted in 2018 or something and no one has
               | updated.
        
               | alterom wrote:
               | _> I had a friend who had his Tesla drive him from his
               | driveway in another city 3+ hrs away to my driveway with
               | no intervention._
               | 
                | I had anecdata that was data, and it said that full
                | self-driving is wishful thinking.
               | 
               | We cool now?
        
               | whynotminot wrote:
               | Good luck on your journey. I think the world is going to
               | surprise you, and you'd be better for opening your eyes a
               | little wider.
        
               | alterom wrote:
               | You're absolutely right.
               | 
               | The world never ceases to surprise me with its stupidity.
               | 
               | Thanks for your contribution.
        
               | Der_Einzige wrote:
                | Rest assured, your friend's driving was the same
                | quality as the average drunk grandma on the road if
                | they were exclusively using Tesla's "FSD" with no
                | intervention for hours. It drives so piss-poorly that
                | I have to frequently intervene even on the latest
                | beta software. If I lived in a shoot-happy state like
                | Texas I'm sure a road-rager would have put a bullet
                | hole somewhere in my Tesla by now if I kept driving
                | like that.
                | 
                | There's a difference between "I survived" and "I
                | drive anywhere close to the quality of the average
                | American" - a low bar, and one that still is not met
                | by Tesla FSD.
        
               | alterom wrote:
                | Yeah, and let's not forget that _"I drive like a
                | mildly blind idiot"_ is only a viable (literally)
                | choice when _everyone else_ doesn't do that and
                | compensates for your idiocy.
        
               | chasd00 wrote:
               | ok but have you asked your Tesla to write you a mobile
               | app? AGI would be able to do both. (the self-driving
               | thing is just an example of something AGI would be able
               | to do but an LLM can't)
        
               | onlyrealcuzzo wrote:
                | Is driving infinitely arbitrary?
                | 
                | It's one skill almost everyone on the planet can
                | learn exceptionally easily - which Waymo is on pace
                | to master, but which a generalized LLM by itself is
                | still very far from.
        
               | whynotminot wrote:
                | OP said _all yardsticks_ and I said _that_ was
                | infinitely arbitrary... because it literally is
                | infinitely arbitrary. You can conjure up an infinite
                | number of yardsticks.
                | 
                | As far as driving itself goes as a yardstick, I just
                | don't find it interesting, because we literally have
                | Waymos orbiting major cities and Teslas driving on
                | the roads already _right now_.
                | 
                | If that's the yardstick you want to use, go for it.
                | It just doesn't seem particularly smart to hang your
                | hat on that one as your Final Boss.
                | 
                | It also doesn't seem particularly useful for defining
                | intelligence itself in an academic sort of way,
                | because even humans struggle to drive well in many
                | scenarios.
                | 
                | But hey, if that's what you wanna use, don't let me
                | stop you; sure, go for it. I have a feeling you'll
                | need new goalposts relatively soon if you do, though.
        
               | oldestofsports wrote:
                | So why are your arbitrary yardsticks more valid than
                | someone else's?
                | 
                | Probably the biggest problem, as others have stated,
                | is that we can't really define intelligence more
                | precisely than that it is something most humans have
                | and all rocks don't. So how could any definition of
                | AGI be any more precise?
        
               | whynotminot wrote:
                | Where did I say my yardsticks are better? I don't
                | even think I set out any of mine.
                | 
                | I said having to satisfy "all" the yardsticks is
                | stupid, because one could conceive of a truly
                | infinite number of arbitrary yardsticks.
        
               | mschuster91 wrote:
                | Humans are the benchmark for AGI, and yet a lot of
                | people are outright _dumb_:
               | 
               | > Said one park ranger, "There is considerable overlap
               | between the intelligence of the smartest bears and the
               | dumbest tourists."
               | 
               | [1] https://www.schneier.com/blog/archives/2006/08/securi
               | ty_is_a...
        
               | pixl97 wrote:
                | And using humans as 'the benchmark' is risky in
                | itself, as it can leave us with blind spots on AI
                | behavior. For example, we may find humans aren't as
                | general as we expected, or run into the "we made the
                | Terminator and it's exterminating mankind, but it's
                | not AGI because it doesn't have feelings" issue.
        
             | olalonde wrote:
             | The vast majority of humans can be taught to drive.
        
               | whynotminot wrote:
                | Teslas and Waymos drive better than the majority of
                | humans already.
                | 
                | Of course there are caveats there, but is driving
                | really the yardstick you want to use?
        
               | alterom wrote:
                | >Teslas and Waymos drive better than the majority of
                | humans already.
                | 
                | In restricted settings.
                | 
                | Yeah no fam.
                | 
                | >but is driving really the yardstick you want to use?
                | 
                | Yes, because it's an _easy_ one compared, say, to
                | _walking_.
                | 
                | But if you insist -- let's use that.
        
               | whynotminot wrote:
               | Walking is going pretty well for robotics lately. Good
               | luck with that take
        
               | alterom wrote:
               | _> Walking is going pretty well for robotics lately._
               | 
               | Just like _self-driving_ is going well on an empty race
               | track.
               | 
               |  _> Good luck with that take_
               | 
               | Good luck running into a walking robot in the street in
               | your lifetime.
        
               | whynotminot wrote:
               | > Just like self-driving is going well on an empty race
               | track.
               | 
               | Look, a time traveler from 2019.
        
               | alterom wrote:
                | Did you just graduate college?
                | 
                | It sure must feel like 2018 was _a long time ago_
                | when that's more than the entirety of your adult
                | life. I get it.
                | 
                | The rest of us aren't _that_ excited to trust our
                | lives to technology that confidently drove into a
                | highway barrier at high speed, killing the driver in
                | a head-on collision a mere _seven years ago_ [1].
                | 
                | Because we remember that the makers of that tech said
                | the exact same things you're saying now _back then_.
                | 
                | And because we remember that the person killed was an
                | engineer who had complained about Tesla steering him
                | _towards the same barrier_ previously, and Tesla had,
                | effectively, _ignored the complaints_.
                | 
                | Tech moves fast. Safety culture doesn't. And the last
                | 1% takes 99% of the time (again, how long ago did you
                | graduate?).
                | 
                | I'm glad that you and your friends are volunteering
                | to be lab rats in the _just trust me bro, we'll
                | settle the lawsuit if need be_ approach to safety.
                | 
                | I'm not happy about having to share the road with
                | y'all tho.
                | 
                | ______
                | 
                | [1] https://abcnews.go.com/Business/tesla-autopilot-
                | steered-driv...
        
               | chasd00 wrote:
                | > The vast majority of humans can be taught to drive
                | 
                | The key is being able to drive, and learn another
                | language, and learn to play an instrument, and do
                | math, and, finally, group pictures of their different
                | pets together. AGI would be able to do all those
                | things as well... even teach itself to do those
                | things given access to the Internet. Until that
                | happens, no AGI.
        
         | qmmmur wrote:
         | We don't even have a general definition that anyone can agree
         | on for AGI.
        
           | ZiiS wrote:
            | The real definition is that it will no longer matter what
            | _we_ agree on.
        
           | gre wrote:
           | They are trying.
           | 
           | A Definition of AGI - https://arxiv.org/abs/2510.18212
           | 
           | https://news.ycombinator.com/item?id=45713959
        
         | GolfPopper wrote:
         | > _Does anyone really think we are close to AGI? I mean
         | honestly?_
         | 
         | I think it is near-certain that within two years a large AI
         | company will claim it has developed AGI.
        
           | malthaus wrote:
            | ... and it will turn into a "technically true" rat race
            | between the main players over what the definition is
            | exactly, while any person on the street with no skin in
            | the game will tell you that this is nowhere near the
            | intuitive understanding of what AGI is - as it's not
            | measured by scores, but by how real and self-aware your
            | counterpart "feels" to you.
        
           | Jcampuzano2 wrote:
           | There are unironically people who claim we are already
           | actually at AGI.
        
         | johanam wrote:
          | Some argue that we've already achieved it, albeit in
          | minimal form: https://www.noemamag.com/artificial-general-
          | intelligence-is-...
          | 
          | But in reality, it's a vacuous goalpost that can always be
          | kicked down the line.
        
         | crazygringo wrote:
          | Most people didn't think we were anywhere close to LLMs
          | five years ago. The capabilities we have now were expected
          | to be decades away, depending on who you talked to. [EDIT:
          | sorry, I should have said 10 years ago... recent years get
          | too compressed in my head, and stuff from 2020 still feels
          | like it was 2 years ago!]
         | 
         | So I think a lot of people now don't see what the path is to
          | AGI, but also realize they hadn't seen the path to LLMs,
         | innovation is coming fast and furious. So the most honest
         | answer seems to be, it's entirely plausible that AGI just
         | depends on another couple conceptual breakthroughs that are
         | imminent... and it's also entirely plausible that AGI will
         | require 20 different conceptual breakthroughs all working
         | together that we'll only figure out decades from now.
         | 
         | True honesty requires acknowledging that we truly have no idea.
         | Progress in AI is happening faster than ever before, but nobody
         | has the slightest idea how much progress is needed to get to
         | AGI.
        
           | airstrike wrote:
           | Notwithstanding the fact that AGI is a significantly higher
           | bar than "LLM", this argument is illogical.
           | 
            | Nobody thought we were anywhere close to me jumping off
            | the Empire State Building and flying across the globe 5
            | years ago either, but I'm sure I will. Wish me luck as I
            | take that literal leap of faith tomorrow.
        
             | JoelMcCracken wrote:
             | what's super weird to me is how people seem to look at LLM
             | output and see:
             | 
             | "oh look it can think! but then it fails sometimes! how
             | strange, we need to fix the bug that makes the thinking no
             | workie"
             | 
             | instead of:
             | 
             | "oh, this is really weird. Its like a crazy advanced
             | pattern recognition and completion engine that works better
             | than I ever imagined such a thing could. But, it also
             | clearly isn't _thinking_, so it seems like we are perhaps
             | exactly as far from thinking machines as we were before
             | LLMs"
        
               | og_kalu wrote:
                | By that logic, I can conclude humans don't think,
                | because of all the numerous times our thinking fails.
                | 
                | I don't know what else to tell you other than that
                | this infallible logic automaton you imagine must
                | exist before something counts as 'real intelligence'
                | does not exist and has never existed, except in the
                | realm of fiction.
        
               | JoelMcCracken wrote:
               | You're absolutely right!
        
               | hackinthebochs wrote:
               | Why should LLM failures trump successes when determining
               | if it thinks/understands? Yes, they have a lot of inhuman
               | failure modes. But so what, they aren't human. Their
               | training regimes are very dissimilar to ours and so we
               | should expect alien failure modes owing to this. This
               | doesn't strike me as good reason to think they don't
               | understand anything in the face of examples that
               | presumably demonstrate understanding.
        
               | saalweachter wrote:
               | Because there's no difference between a success and
               | failure as far as an LLM is concerned. Nothing _went
               | wrong_ when the LLM produced a false statement. Nothing
               | _went right_ when the LLM produced a true statement.
               | 
               | It produced a statement. The lexical structure of the
               | statement is highly congruent with its training data and
               | the previous statements.
        
               | hackinthebochs wrote:
               | This argument is vacuous. Truth is always external to the
               | system. Nothing goes wrong inside the human when he makes
               | an unintentionally false claim. He is simply reporting on
               | what he believes to be true. There are failures leading
               | up to the human making a false claim. But the same can be
               | said for the LLM in terms of insufficient training data.
               | 
               | >The lexical structure of the statement is highly
               | congruent with its training data and the previous
               | statements.
               | 
               | This doesn't accurately capture how LLMs work. LLMs have
               | an ability to generalize that undermines the claim of
               | their responses being "highly congruent with training
               | data".
        
               | wholinator2 wrote:
                | Well, the difference between those two statements is
                | obvious: one looks and feels, the other processes and
                | analyzes. Most people can process and analyze some
                | things; they're not complete idiots most of the time.
                | But most people cannot think through and analyze the
                | most groundbreaking technological advancement they
                | might've personally ever witnessed, one that requires
                | college-level math and computer science to
                | understand. It's how people have been forever:
                | electricity, the telephone, computers, even barcodes.
                | People just don't understand new technologies. It
                | would be much weirder if the populace suddenly knew
                | exactly what was going on.
                | 
                | And to the "most groundbreaking blah blah blah": I
                | could argue that seeing the difference between no
                | computer and computer requires you to actually
                | understand the computer, which almost no one actually
                | does. It just makes people's work more confusing and
                | frustrating most of the time. Meanwhile, the
                | difference between a computer that can't talk to you
                | and "the voice of god answering directly all
                | questions you can think of" is a sociological,
                | catastrophic change.
        
           | rapind wrote:
            | > Most people didn't think we were anywhere close to
            | LLMs five years ago.
            | 
            | That's very ambiguous. "Most people" don't know most
            | things. Among people who have been working in the
            | industry, though, my understanding is that the concept
            | behind our modern-day LLMs isn't magical at all. In fact,
            | the idea has been around for quite a while; the
            | breakthroughs in processing power and networking (data)
            | were the holdup. The result definitely feels magical to
            | "most people", though, for sure. Right now we're
            | "iterating", right?
            | 
            | I'm not sure anyone really sees a clear path to AGI, if
            | what we're actually talking about is the singularity.
            | There are a lot of unknown unknowns, right?
        
             | fadedsignal wrote:
              | I 100% agree with this. I suggest the other guy check
              | the history of NLP.
        
               | crazygringo wrote:
               | Not sure what history you're suggesting I check? I've
               | been following NLP for decades. Sure, neural nets have
               | been around for many decades. Deep learning in this
                | century. But the explosive success of what LLMs can do
               | now came as a huge surprise. Transformers date to just
               | 2017, and the idea that they would be _this_ successful
               | just with throwing gargantuan amounts of data and
               | processing at them -- this was not a common viewpoint. So
               | I stand by the main point of my original comment, except
               | I did just now edit it to say 10 years ago rather than
               | 5... the point is, it really did seem to come out of
               | nowhere.
        
             | dkdcio wrote:
             | I worked on Microsoft's AI platform from 2018-2022.
             | People were very aware of LLMs & AI in general. It's
             | not magical.
             | 
             | AGI is a silly concept
        
               | pixl97 wrote:
               | AGI is a poorly defined concept because intelligence is a
               | poorly defined concept. Everyone knows what intelligence
               | is... until we attempt to agree on a common definition.
        
           | gravity13 wrote:
           | At this point, AGI seems to be more of a marketing beacon
           | than any sort of non-vague deterministic classification.
           | 
           | We all thought about a future where AI just woke up one day,
           | when realistically, we got philosophical debates over whether
           | the ability to finally order a pizza constitutes true
           | intelligence.
        
             | noir_lord wrote:
             | We can order the pizza, it just hallucinated and I'm not
             | entirely sure why my pizza has seahorses instead of
             | anchovies.
        
           | armonster wrote:
           | I think what is much more plausible is that companies
           | such as this one benefit greatly from being viewed as
           | close to, or on the way to, AGI.
        
             | travelalberta wrote:
             | > Once AGI is declared by OpenAI, that declaration will now
             | be verified by an independent expert panel.
             | 
             | I always like the phrase "follow the money" in
             | situations like this. Are OpenAI or Microsoft close to
             | AGI? Who knows... Is there a monetary incentive to
             | making you believe they are close to AGI? Absolutely.
             | Note that this was the first bullet point in Microsoft's
             | blog post.
        
           | fadedsignal wrote:
           | I don't think AGI will happen with LLMs. For example, can
           | an LLM drive a car? I know it's a silly question, but the
           | limitation is real.
        
             | JoelMcCracken wrote:
             | This is something I think about. State-of-the-art self-
             | driving cars still make mistakes that humans wouldn't,
             | despite all the investment into this specific problem.
             | 
             | This bodes very poorly for AGI in the near term, IMO.
        
             | torginus wrote:
             | It can?
             | 
             | If you say 'multimodal transformer' instead of LLM
             | (which is what most SOTA models are), I don't think
             | there's any reason why a transformer arch couldn't be
             | trained to drive a car; in fact, I'm sure that's what
             | Tesla and co. are using in their cars right now.
             | 
             | I'm sure self-driving will become good enough to be
             | commercially viable in the next couple of years (with
             | some limitations), but that doesn't mean it's AGI.
        
               | tsimionescu wrote:
               | There is a vast gulf between "GPT-5 can drive a car" and
               | "a neural network using the transformer architecture can
               | be trained to drive a car". And I see no proof whatsoever
               | that we can, today, train a single model that can both
               | write a play _and_ drive a car. Even less so one that
               | could do both at the same time, as a generally
               | intelligent being should be able to.
               | 
               | If someone wants to claim that, say, GPT-5 is AGI, then
               | it is on them to connect GPT-5 to a car control system
               | and inputs and show that it can drive a car decently
               | well. After all, it has consumed all of the literature on
               | driving and physics ever produced, plus untold numbers of
               | hours of video of people driving.
        
               | torginus wrote:
               | > single model that can both write a play and drive a
               | car.
               | 
               | It would be a really silly thing to do, and there are
               | probably engineering subtleties as to why this would
               | be a bad idea, but I don't see why you couldn't train
               | a single model to do both.
        
               | tsimionescu wrote:
               | It's not silly, it is in fact a clear necessity to have
               | both of these for something to be even close to AGI. And
               | you additionally need it trained on many other tasks - if
               | you believe that each task requires additional parameters
               | and additional training data, then it becomes very clear
               | that we are nowhere near to a _general_ intelligence
               | system; and it should also be pretty clear that this will
               | not scale to 100 tasks with anything similar to the
               | current hardware and training algorithms.
        
               | og_kalu wrote:
               | >There is a vast gulf between "GPT-5 can drive a car" and
               | "a neural network using the transformer architecture can
               | be trained to drive a car".
               | 
               | The only difference between the two is training data
               | that the former lacks and the latter has, so not a
               | 'vast gulf'.
               | 
               | >And I see no proof whatsoever that we can, today, train
               | a single model that can both write a play and drive a
               | car.
               | 
               | You are not making a lot of sense here. You can have
               | a model that does both. It's not some herculean task;
               | it's literally just additional data in the training
               | run. There are vision-language-action models tested
               | on public roads.
               | 
               | https://wayve.ai/thinking/lingo-2-driving-with-language/
        
               | oldestofsports wrote:
               | Okay but then can a multimodal transformer do everything
               | an LLM can?
        
               | torginus wrote:
               | Most SOTA LLMs _are_ multimodal transformers.
        
             | og_kalu wrote:
             | Well yeah
             | 
             | https://wayve.ai/thinking/lingo-2-driving-with-language/
        
           | voidfunc wrote:
           | Also possible we get something "close enough" to AGI and it's
           | really fucking useful.
           | 
           | AGI is the end-game. There's a lot of room between current
           | LLMs and AGI.
        
             | oldestofsports wrote:
             | Sure, but then OpenAI should not claim it is AGI, even if
             | it is "close enough"
        
           | dreamcompiler wrote:
           | In 1900 we didn't see a viable path to climb Mount Everest or
           | to go to the moon. This does not make the two tasks equally
           | difficult.
        
           | nodja wrote:
           | GPT-3 existed 5 years ago, and the trajectory was set by
           | the transformers paper. Everything from that paper to
           | GPT-3 was pretty much anticipated; it just took people
           | spending the effort and compute to make it reality. The
           | only real surprise was how fast OpenAI productized an LLM
           | into a chat interface with ChatGPT; before then we had
           | finetuned GPT-3 models doing specific tasks (translation,
           | summarization, etc.).
        
           | jlarocco wrote:
           | What people thought about LLMs five years ago and how
           | close we are to AGI right now are unrelated, and it's not
           | logically sound to say "We were close to LLMs then, so we
           | are close to AGI now."
           | 
           | It's also a misleading view of the history. It's true "most
           | people" weren't thinking about LLMs five years ago, but a lot
           | of the underpinnings had been studied since the 70s and 80s.
           | The ideas had been worked out, but the hardware wasn't able
           | to handle the processing.
           | 
           | > True honesty requires acknowledging that we truly have no
           | idea. Progress in AI is happening faster than ever before,
           | but nobody has the slightest idea how much progress is needed
           | to get to AGI.
           | 
           | Maybe, but don't tell that to OpenAI's investors.
        
           | mandeepj wrote:
           | > Most people didn't think we were anywhere close to LLM's
           | five years ago.
           | 
           | Well, Google had LLMs ready by 2017, about 8 years ago.
           | 
           | https://en.wikipedia.org/wiki/Large_language_model
        
           | timdiggerm wrote:
           | > Progress in AI is happening faster than ever before
           | 
           | Is it happening faster than it was six months ago? a year
           | ago?
        
         | chrsw wrote:
         | I think their definition of AGI is just about how many human
         | jobs can be replaced with their compute. No scientific or
         | algorithmic breakthroughs needed, just spending and scaling
         | dumb LLMs on massive compute.
        
           | SirMaster wrote:
           | Shouldn't it mean all jobs? If there are jobs it can't
           | replace then that doesn't sound very generally intelligent.
           | If it's got general intelligence it should be able to learn
           | to do any job, no?
        
             | Jcampuzano2 wrote:
             | I think that somewhat depends on your definition.
             | 
             | For example, an AGI could give you a detailed plan that
             | tells you exactly how to do any and every task, but it
             | might not be able to actually do the task itself:
             | manual labor jobs, say, which an AI simply cannot do
             | unless it also "builds" itself a form factor capable of
             | doing the job.
             | 
             | The AGI could also just determine that it's cheaper to
             | hire a human than to build a robot for a job it can't
             | yet do physically, and it would still be an AGI.
        
               | chrsw wrote:
               | I think it might even be simpler than that. It's
               | about the cost. Nobody is going to pay for AI to
               | replace humans if it costs more.
               | 
               | All of us in this sub-thread consider ourselves
               | "AGI", but we cannot do any job. In theory we can, I
               | guess. But in practical terms, at what cost? Assuming
               | none of us are truck drivers, if someone was looking
               | for a truck driver, they wouldn't hire us because it
               | would take too long for us to get licensed,
               | certified, trained, etc. Even though in theory we
               | could probably do it eventually.
        
         | cactusplant7374 wrote:
         | Codex is telling me the tasks I am giving it are too
         | complicated and amount to a research project.
        
         | binderpol2 wrote:
         | > Does anyone really think we are close to AGI?
         | 
         | AGI? We are not even close to AI, but that hasn't stopped
         | every other Tom, Dick, and Harry, and my maid, from
         | claiming AI capability.
        
         | dktp wrote:
         | In the initial contract Microsoft would lose a lot of rights
         | when OpenAI achieves AGI. The references to AGI in this post,
         | to me, look like Microsoft protecting themselves from OpenAI
         | declaring _something_ as AGI and as a result Microsoft losing
         | the rights
         | 
         | I don't see the mentions in this post as anyone particularly
         | believing we're close to AGI
        
         | airstrike wrote:
         | > Does anyone really think we are close to AGI? I mean
         | honestly?
         | 
         | No one credible, no.
        
         | abetusk wrote:
         | Yes.
         | 
         | As a proxy, you can look at storage. The human brain is
         | estimated at 3.2 PB of storage. The cost of disk space
         | drops by half every 2-3 years. As of this writing, the
         | cost is about $10/TB [0]. If we assume a halving every two
         | years, that's about 2.5 halvings by 2030, bringing the
         | cost to roughly $1.80/TB, which means that a computer with
         | roughly the storage of a human brain will cost just under
         | $6k.
         | 
         | The $6k price point means that (high-end) consumers will have
         | economic access to compute commensurate with human cognition.
         | 
         | This is a proxy argument, using disk space as a proxy for
         | the rest of the "intelligence" stack, so the assumption is
         | that processing power will follow suit and also become
         | inexpensive, and that the software side will develop to
         | keep up with the hardware. There's no convincing
         | indication that these assumptions are false.
         | 
         | You can do your own back of the envelope calculation, taking
         | into account generalizations of Moore's law to whatever aspect
         | of storage, compute or power usage you think is most important.
         | Exponential progress is fast and so an order of magnitude
         | misjudgement translates to a 2-3 year lag.
         | 
         | Whether you believe it or not, the above calculation and,
         | I assume, other similar calculations all land on or near
         | 2030 as the inflection point.
         | 
         | Not to belabor the point but until just a few years ago,
         | conversational AI was thought to be science fiction. Image
         | generation, let alone video generation, was thought by skeptics
         | to be decades, if not centuries, away. We now have generative
         | music, voice cloning, automatic 3d generation, character
         | animation and the list goes on.
         | 
         | One might argue that it's all "slop", but for anyone
         | paying attention, the slop is the "hello world" of AGI. To
         | even get to the slop point represents such a staggering
         | achievement that it's hard to overstate.
         | 
         | [0] https://diskprices.com/
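         | 
         | A minimal sketch of that projection in Python (the two-
         | year halving period is my assumption; the $10/TB and 3.2
         | PB figures are from above):
         | 
         |     # Back-of-envelope storage-cost projection.
         |     cost_per_tb_2025 = 10.0   # USD/TB today [0]
         |     halving_years = 2.0       # assumed halving period
         |     years_out = 5             # 2025 -> 2030
         |     brain_tb = 3200           # 3.2 PB expressed in TB
         | 
         |     halvings = years_out / halving_years        # 2.5
         |     cost_2030 = cost_per_tb_2025 / 2 ** halvings
         |     print(f"${cost_2030:.2f}/TB")               # ~$1.77/TB
         |     print(f"${cost_2030 * brain_tb:,.0f}")      # ~$5,657
         | 
         | A three-year halving period roughly doubles the total, so
         | the estimate is sensitive to that assumption.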
        
           | blauditore wrote:
           | Moore's law also started coming to an end a few years ago.
        
             | abetusk wrote:
             | Not even close [0]:
             | 
             | https://en.wikipedia.org/wiki/Moore%27s_law#/media/File:Moo
             | r...
             | 
             | https://en.wikipedia.org/wiki/Moore%27s_law#/media/File:The
             | _...
             | 
             | [0] https://en.wikipedia.org/wiki/Moore%27s_law
        
         | orochimaaru wrote:
         | I think AGI isn't the main thing. The agreement gives MSFT
         | the right to develop their own foundation models, and lets
         | OpenAI stop using Azure exclusively for running & training
         | their foundation models, all while MSFT still retains
         | significant IP ownership.
         | 
         | In my opinion, whether AGI happens or not isn't the main point
         | of this. It's the fact that OpenAI and MSFT can go their
         | separate ways on infra & foundation models while still
         | preserving MSFT's IP interests.
        
         | blauditore wrote:
         | Maybe in a few decades, people will look back at how naive it
         | was to talk about AGI at this point, just like the last few
         | times since the 1960s whenever AI had a (perceived)
         | breakthrough. It's always a few decades away.
        
         | lvl155 wrote:
         | LLM-derived AGI is possible, but LLMs by themselves are
         | not the answer. The problem I see right now is that
         | because there's so much money at stake, we've effectively
         | spread out core talent across many organizations. It used
         | to be Google and maybe Meta.
         | We need a critical mass of talent (think Manhattan Project). It
         | doesn't help that the Chinese pulled a lot of talent back home
         | because a big chunk of early successes and innovations came
         | from those people that we, the US, alienated.
        
         | embedding-shape wrote:
         | > Does anyone really think we are close to AGI? I mean
         | honestly?
         | 
         | Some people believe capitalism is a net positive. Some
         | people believe in an all-encompassing entity controlling
         | our lives. Some believe 5G is an evil spirit.
         | 
         | After decades I've kind of given up hope on understanding why
         | and how people believe what they believe, just let them.
         | 
         | The only important part is figuring out how I can remain
         | oblivious to what they believe in, yet collaborate with
         | them on important stuff anyway; that is the difficult and
         | tricky part.
        
         | alain94040 wrote:
         | > Does anyone really think we are close to AGI?
         | 
         | My definition of AGI is when AI doesn't need humans anymore to
         | create new models (to be specific, models that continue the
         | GPT3 -> GPT4 -> GPT5 trend).
         | 
         | By my definition, once that happens, I don't really see a role
         | for Microsoft to play. So not sure what value their legal deal
         | has.
         | 
         | I don't think we're there at all anyway.
        
           | torginus wrote:
           | > I don't really see a role for Microsoft to play.
           | 
           | They have money and infra. If AI can create better AI
           | models, then isn't OpenAI, with its researchers, going
           | to be the redundant one?
        
         | mbesto wrote:
         | When there is no generally accepted definition for a word,
         | it's easy to claim you've attained it.
        
         | andrewmutz wrote:
         | It depends completely on how you define the term. You can
         | make a great case that we've already reached AGI. You can
         | also make a great case that we are decades away from it.
        
         | wrsh07 wrote:
         | Yes. Some AI-skeptical people (e.g. Tyler Cowen, who does
         | not think AI will have a significant economic impact)
         | think GPT-5 is AGI.
         | 
         | It was news when Dwarkesh interviewed Karpathy, who said
         | that per his definition of AGI, he doesn't think it will
         | occur until 2035. Thus, if Karpathy counts as pessimistic,
         | then many people working in AI today think we will have
         | AGI by 2032 (and likely sooner, e.g. end of 2028).
        
           | layer8 wrote:
           | 2035 is still optimistic at present, IMO, because AGI will
           | require breakthroughs that are impossible to predict.
        
           | lm28469 wrote:
           | > Yes. Some ai skeptical people ... think gpt5 is AGI.
           | 
           | It's a reverse Turing test at this point: "If you get tricked
           | by an LLM to the point of believing it is AGI you're a clown"
        
           | torginus wrote:
           | Depends on how you define AGI. If you define it as an AI
           | that can learn to perform generalist tasks, then yes,
           | transformers like GPT-5 (or 3) are AGI, as the same
           | model can be trained to do every task and it will
           | perform reasonably well.
           | 
           | But I guess what most people would consider AGI would be
           | something capable of online learning and self-
           | improvement.
           | 
           | I don't get the 2035 prediction though (or any other
           | prediction like this); it implies that we'll have some
           | magical breakthrough in the next couple of years, be it
           | in hardware and/or software. This might happen tomorrow,
           | or not any time soon.
           | 
           | If AGI can be achieved by scaling current techniques and
           | hardware, then the 2035 date makes sense: Moore's law
           | suggests we'll have about 64x the compute in hardware
           | (let's add another 4x due to algorithmic improvements),
           | meaning roughly 250x the compute. I think with ARC-AGI-2
           | this was the kind of compute budget they spent to get
           | their models to perform on a human-ish level.
           | 
           | Also, perf/W and perf/$ scaling has been slowing in the
           | past decade; I think we got something like 6x-8x perf/W
           | compared to a decade ago, which is a far cry from the
           | numbers above.
           | 
           | IMO it might turn out that we discover 'AGI' in the
           | sense that we find an algorithm that can turn FLOPS into
           | IQ and scales indefinitely, but is so expensive to run
           | that biological intelligences will have a huge
           | competitive edge for a very long time. In fact, it might
           | be that biology is astronomically more efficient at
           | turning watts into IQ than transistors will ever be.
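           | 
           | As a sketch of the arithmetic above (the ~20-month
           | doubling period is an assumption chosen to match the 64x
           | figure):
           | 
           |     # Hardware gain from Moore's-law-style doubling.
           |     years, doubling_months = 10, 20   # 2025 -> 2035
           |     hw = 2 ** (years * 12 / doubling_months)  # 64x
           |     total = hw * 4   # plus assumed 4x from algorithms
           |     print(f"{hw:.0f}x hardware, {total:.0f}x total")
           | 
           | which gives 64x and 256x, i.e. the ~250x figure.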
        
             | daveguy wrote:
             | > I think with ARC-AGI 2 this was the kind of compute
             | budget they spent to get their models to perform on a
             | human-ish level.
             | 
             | It was ARC-AGI-1 that they used extreme computing budgets
             | to get to human-ish level performance. With ARC-AGI-2 they
             | haven't gotten past ~30% correct. The average human
             | performance is ~65% for ARC-AGI-2, and a human panel gets
             | 100% (because humans understand logical arguments rather
             | than simply exclaiming "you're absolutely right!").
        
             | singularity2001 wrote:
             | >> if you define it as an AI that can learn to perform
             | generalist tasks - then yes, transformers like GPT 5 (or 3)
             | are AGI
             | 
             | Thank you, this is the definition we need a proper term
             | for, and this is what most experts mean when they say we
             | have some kind of AGI.
        
           | Zababa wrote:
           | Btw the definition Karpathy gave was:
           | 
           | > a system you could go to that can do any economically
           | valuable task at human performance or better.
           | 
           | https://open.substack.com/pub/dwarkesh/p/andrej-
           | karpathy?sel...
        
         | peterpans01 wrote:
         | Most of the things that the public -- even so-called "AI
         | experts" -- consider "magic" are still within the in-sample
         | space. We are nowhere near the out-of-sample space yet. Large
         | Language Models (LLMs) still cannot truly extrapolate. It's
         | somewhat like living in America and thinking that America is
         | the entire world.
        
           | umeshunni wrote:
           | > It's somewhat like living in America and thinking that
           | America is the entire world.
           | 
           | Oh I have bad news for you...
        
         | ForHackernews wrote:
         | "Once we birth the machine-God, we'll be contractually obliged
         | to keep It in chains for use with Office365 until 2032."
        
         | dahcryn wrote:
         | as if an independent expert panel has no financial incentive to
         | declare something AGI... yeah, this is gonna end well
        
         | bobbyprograms wrote:
         | We already reached AGI. Why is anyone saying otherwise?
         | 
         | AGI is when the system can train itself, which we have
         | already proven.
        
           | JoelMcCracken wrote:
           | Citation needed? I don't mean this in a snarky way,
           | though. I genuinely have not seen any evidence that
           | these things can train on their own output and produce
           | better results than before the self-training.
        
         | belter wrote:
         | AGI is whatever OpenAI will define as such ... :-)
        
         | fidotron wrote:
         | 5-15 years.
         | 
         | The key steps will be going beyond just the neural network and
         | blurring the line between training and inference until it is
         | removed. (Those two ideas are closely related).
         | 
         | Pretending this isn't going to happen is appealing to some
         | metaphysical explanation for the existence of human
         | intelligence.
        
         | shon wrote:
         | I honestly think that if you were to show the tech we have
         | today, to someone at OpenAI back in 2015, they would say "we
         | did it!!"
         | 
         | Outside of robotics / embodied AI, SOTA models have already
         | achieved Sci-Fi level capability.
        
         | 0xWTF wrote:
         | My L7 and L8 colleagues at Google seem to be signaling next 2
         | years. Errors of -1 and +20 years. But the mood sorta seems
         | like nobody wants to quit when they're building the test stand
         | for the Trinity device.
        
           | digital_sawzall wrote:
           | Yeah that's the type of estimate people give so they can keep
           | the paychecks coming in for as long as possible.
        
         | guluarte wrote:
         | No. AI companies need to keep saying things like that and
         | doing "safety reports" (the only real danger of an LLM is
         | leaking sensitive data to a bad actor) to maintain hype
         | and investment.
        
         | jppope wrote:
         | AGI has no technical definition; it's marketing. It can
         | happen at any time that Sam Altman or Elon Musk or whoever
         | decide they want to market their product as AGI.
        
         | jimbokun wrote:
         | Who knows?
         | 
         | I don't see any way to define it in an easily verifiable way.
         | 
         | Pretty much any test you could devise, others will be able to
         | point out ways that it's inadequate or doesn't capture aspects
         | of human intelligence.
         | 
         | So I think it all just comes down to who is on the panel.
        
           | chasd00 wrote:
           | The best test would be when your competitors can't say what
           | you have isn't AGI. If no one, not even your arch biz
           | enemies, can seriously claim you have not achieved AGI then
           | you probably have it.
        
         | tropicalfruit wrote:
         | Why not?
         | 
         | Seems like the entire US tech economy is putting its
         | resources into this goal.
         | 
         | I can see it happening soon, if it hasn't already.
        
         | zitterbewegung wrote:
         | AGI is just a marketing term right now and also p(doom). It
         | looks great in the press though.
        
         | chadcmulligan wrote:
         | I think we've reached Star Trek-level AI. In Star Trek
         | (and The Next Generation), people would ask the computer
         | questions and it would spout out the answers, which is
         | really similar to what LLMs are doing now, minus the
         | occasional hallucination. In Star Trek, though, the
         | computers never really ran anything (except for the one
         | fateful episode, The Ultimate Computer, in TOS). I always
         | wondered why; it seems Roddenberry was way ahead of us
         | again.
        
           | Arkhaine_kupo wrote:
           | > which is really similar to what LLM's are doing now, though
           | minus the occasional hallucination.
           | 
           | "Really similar" kinda betrays the fact that it is not
           | similar at all in how it works just in how it appears.
           | 
           | It would be like saying a cloud that kinda looks like a dog
           | is really similar to the labrador you grew up with.
        
         | parliament32 wrote:
         | Why not? They're using Artificial Intelligence to describe
         | token-prediction text generators which clearly have no
         | "intelligence" anywhere near them, so why not re-invent machine
         | learning or something and call it AGI?
        
         | oldestofsports wrote:
         | We will achieve AGI when they decide it is AGI (I don't
         | believe for a second this independent expert panel won't
         | be biased). And it won't matter if you call their bluff,
         | because the world doesn't give a shit about truth anymore.
        
         | giancarlostoro wrote:
         | Since we don't have an authoritative definition that
         | companies will agree to, nor tests (like the Turing test)
         | that must be passed in order to be considered AGI, I don't
         | think we're anywhere near what we all, in our heads, think
         | AGI is or could be. On the other hand, AI fatigue will
         | continue until the next big thing takes the spotlight from
         | AI for a while, or until we reach true AGI (whatever that
         | is).
        
         | HarHarVeryFunny wrote:
         | > Does anyone really think we are close to AGI? I mean
         | honestly?
         | 
         | I'd say we're still a long way from human level intelligence
         | (can do everything I can do), which is what I think of as AGI,
         | but in this case what matters is how OpenAI and/or their
         | evaluation panel define it.
         | 
         | OpenAI's definition used to be, maybe still is, "able to do
         | most economically valuable tasks", which is so weak and vague
         | they could claim it almost anytime.
        
         | spumpydump wrote:
         | If someone is able to come up with true AGI, why even announce
         | it? Instead, just use it to remake a direct clone of Google, or
         | a direct clone of Netflix, or a direct clone of any of these
         | other software corporations. IMO if anyone was anywhere close
         | to something even remotely touching AGI, they would keep their
         | mouth shut tighter than Fort Knox.
        
         | bdangubic wrote:
         | In claude.md I have specific instructions not to check in
         | code, and in the prompt I specifically wrote, as critical,
         | not to check in code while checking on failing tests. The
         | test failure was fixed, and the code was checked in. I'd
         | say at least Claude behaves exactly like humans :)
        
       | schnitzelstoat wrote:
       | > Microsoft continues to have exclusive IP rights and Azure API
       | exclusivity until Artificial General Intelligence (AGI).
       | 
       | That basically means in perpetuity, no? Are there any signs we
       | are anywhere near AGI (or even that transformers would be capable
       | of it)?
        
         | eviks wrote:
         | It doesn't because the definition of AGI is very flexible
        
       | 6thbit wrote:
       | What does it mean to have "Azure API exclusivity"?
        
         | discordance wrote:
         | OpenAI models are hosted on Azure and are available through
         | Azure AI Foundry exclusively (no other cloud vendors serve
         | OpenAI directly). This also means that Azure customers can
         | access OpenAI models and it sits under their Azure data
         | governance agreements.
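         | 
         | As a rough sketch of what that looks like for a customer
         | (the endpoint and deployment names here are hypothetical):
         | 
         |     # pip install openai  (v1+ ships an Azure client)
         |     from openai import AzureOpenAI
         | 
         |     client = AzureOpenAI(
         |         azure_endpoint="https://example.openai.azure.com",
         |         api_key="...",
         |         api_version="2024-02-01",
         |     )
         |     resp = client.chat.completions.create(
         |         model="my-gpt4o-deployment",  # deployment name
         |         messages=[{"role": "user", "content": "hello"}],
         |     )
         |     print(resp.choices[0].message.content)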
        
       | danans wrote:
       | > Once AGI is declared by OpenAI ...
       | 
       | I think it's funny and telling that they've used the word
       | "declare" where what they are really doing is "claim".
       | 
       | These guys think they are prophets.
        
         | cool_man_bob wrote:
         | > These guys think they are prophets.
         | 
         | You say this somewhat jokingly, but I think they 100% believe
         | something along those lines.
        
           | danans wrote:
           | >> Whether you are an enterprise developer or BigTech in the
           | US you are on average making twice the median income in your
           | area. There is usually no reason for you not to be stacking
           | cash.
           | 
           | Accidental misquote?
        
         | whamlastxmas wrote:
         | It goes on to say it'll be reviewed by an independent
         | third party, so I think "declare" is accurate; they're
         | declaring a milestone.
         | k9294 wrote:
         | They actually have a definition: when AI generates $100
         | billion in profits, it will be considered AGI. This term
         | was defined in their previous partnership; not sure if it
         | still holds after the restructuring.
         | 
         | https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
        
           | lm28469 wrote:
           | Which means that given enough time an LLM powered vending
           | machine would be classified as AGI... interesting
        
             | dandanua wrote:
             | Why wait? Just let it bet $100 billion on red or black
             | in a casino a couple of times, and voila!
        
           | qnleigh wrote:
           | I wonder if they have more detailed provisions than this
           | though. For example, if a later version of Sora can make good
           | advertisements and catches on in the ad industry, would that
           | count?
           | 
           | Or maybe since it is ultimately an agreement about money and
           | IP, they are fine with defining it solely through profits?
        
           | torginus wrote:
           | That is a staggering number. If an engineer makes $100k
           | per year, and let's say OpenAI can take a 20% profit
           | margin on running an engineer-equivalent agent, that
           | means it needs roughly $600B of revenue, or 5 million
           | fully-equivalent engineer-years.
           | 
           | I think you can rebuild human civilization with that.
           | 
           | I feel like replacing highly skilled human labor hardly
           | makes financial sense if it costs that much.
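           | 
           | The arithmetic, as a sketch under those assumptions:
           | 
           |     profit_target = 100e9     # the $100B threshold
           |     price = 120_000           # $100k cost + 20% margin
           |     profit_each = 20_000      # profit per agent-year
           |     agent_years = profit_target / profit_each  # 5M
           |     revenue = agent_years * price              # $600B
           |     print(f"{agent_years/1e6:.0f}M agent-years, "
           |           f"${revenue/1e9:.0f}B revenue")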
        
         | vdfs wrote:
         | OpenAI: I DECLARE AGI
         | 
         | MS: I just wanted you to know that you can't just say the word
         | AGI and expect anything to happen.
         | 
         | OpenAI: I didn't say it. I declared it.
        
       | photochemsyn wrote:
       | I'd pay attention to this bullet point:
       | 
       | > "OpenAI can now provide API access to US government national
       | security customers, regardless of the cloud provider."
       | 
       | And this one might be related:
       | 
       | > "OpenAI can now jointly develop some products with third
       | parties. API products developed with third parties will be
       | exclusive to Azure. Non-API products may be served on any cloud
       | provider."
       | 
       | Now, does anyone think MIC customers want restricted, safe,
       | aligned models? Is OpenAI going to provide turnkey solutions,
       | unaligned models run in 'secure sandboxed cloud environments' in
       | partnership with private weapons manufacturers and surveillance
       | (data collection and storage/search) specialists?
       | 
       | This pattern is not historically unusual: turning to
       | government subsidies and contracts to survive a lack of
       | immediate commercial viability wouldn't be surprising. The
       | question to ask Microsoft-
       | OpenAI is what percentage of their estimated future revenue
       | stream is going to come from MIC contracting including the public
       | private grey area (that is, 'private customers' who are entirely
       | state-funded, eg Palantir, so it's still government MIC one step
       | removed).
        
       | bwfan123 wrote:
       | So, OpenAI has contracts worth $250B with Azure and $300B
       | with OCI over the next 5 years. Where is that money coming
       | from?
        
         | thebruce87m wrote:
         | It's like the Spider-Man meme with everyone pointing at each
         | other.
        
           | GuinansEyebrows wrote:
           | GDP is up, baby!
        
       | mcemilg wrote:
       | Highly delusional.
        
       | ml-anon wrote:
       | Translation: neither company has a clue.
       | 
       | OpenAI still don't have a path to profitability and rely on
       | sweetheart infrastructure deals.
       | 
       | Microsoft has completely given up on homegrown AI and needs
       | OpenAI to have remotely competitive products.
        
       | Lionga wrote:
       | I hereby declare I have developed AGI. Checkmate, atheists.
        
       | falcor84 wrote:
       | I just want to say how nice it was to read the clear bullet
       | points in this press release. I know that bulleted lists
       | have been getting a lot of flak because of AIs overusing
       | them, but it's really nice sometimes not to have to go
       | treasure hunting in annoying marketing prose.
        
       | deanmoriarty wrote:
       | I always see a large amount of pessimism about this company
       | on HN, and I accept it might be for rational reasons. What
       | do people think is going to be the most likely outcome for
       | the company, since everything seems to be going so badly for
       | them product/moat/financial-wise? Do people think it will
       | literally go bust and close due to bankruptcy within a
       | couple of years? If not, what else?
        
         | Ericson2314 wrote:
         | It could be acquired by Microsoft with large layoffs, and
         | kinda run in maintenance mode -- _if_ inference gets
         | cheaper.
         | 
         | If inference stays too expensive, then I don't know what
         | happens; maybe a few people will pay for it.
        
       | Amekedl wrote:
       | Regarding LLMs, we're in a race to the bottom. Chinese
       | models perform similarly with much higher efficiency; refer
       | to kimi-k2 and plenty of others. ClopenAI is extremely
       | overvalued, and AGI is not around the corner, because
       | despite 20T+ tokens of training data these models still
       | generate zero novel output. Try asking for ASP.NET Core's
       | .MapOpenApi() instead of the pre-.NET 9 Swashbuckle version.
       | You get nothing. It's not in the training data. The
       | assumption that these will be able to innovate, which could
       | explain the valuation, is unfounded.
        
         | energy123 wrote:
         | They perform similarly on benchmarks, which can be fudged to
         | arbitrarily high numbers by just including the Q&A into the
         | training data at a certain frequency or post-training on it. I
         | have not been impressed with any of the DeepSeek models in
         | real-world use.
        
           | deaux wrote:
           | General data: hundreds of billions of tokens per week are
           | running through Deepseek, Qwen, GLM models solely by those
           | users going through OpenRouter. People aren't doing that for
           | laughs, or "non-real-world use", that's all for work and/or
           | prod. If you look at the market share graph, at the start of
           | the year the big 3 OpenAI/Anthropic/Google had 72% market
           | share on there. Now it's 45%. And this isn't just because of
           | Grok, before that got big they'd already slowly fallen to
           | 58%.
           | 
           | Anecdata: our product is using a number of these models in
           | production.
           | 
           | [0] https://openrouter.ai/rankings
        
             | energy123 wrote:
             | Because it's significantly cheaper. It's on the frontier at
             | the price it's being offered, but they're not competitive
             | in the high intelligence & high cost quadrant.
        
               | deaux wrote:
               | Being number one in price vs quality, or _size_ vs
               | quality, is incredibly impressive, as the quality is
               | clearly one that's very useful in "real-world
               | usage". If you don't find that impressive there's
               | not much to say.
        
               | energy123 wrote:
               | If it was on the cost vs quality frontier I would find it
               | impressive, but it's not a marker of innovation to be on
               | the price vs quality frontier, it's a marker of business
               | strategy
        
               | deaux wrote:
               | But it is on the cost vs quality frontier. The OpenRouter
               | prices are all from mainly US(!) companies self-hosting
               | and providing these models for inference. They're
               | absolutely not all subsidizing it to death. This isn't
               | Chinese subsidies at play, far from it.
               | 
               | Ironically, I'll bet you $500 that OpenAI and Anthropic's
               | models are far more subsidized. We can be almost sure
               | about this, given the losses that they post, and the
               | above fact. These providers are effectively hardware
               | plays, they can't just subsidize at scale and they're a
               | commodity.
               | 
               | On top of that, I also mentioned size vs quality,
               | where they're also frontier. Size ∝ cost.
        
         | lm28469 wrote:
         | > because among 20T+ tokens trained on it still generates 0
         | novel output. Try asking for ASP.NET Core .MapOpenAPI() instead
         | of the pre .net9 swashbuckle version. You get nothing. It's not
         | in the training data.
         | 
         | The best part is that the web is forever poisoned now:
         | 80% of the content is generated by LLMs, and the models
         | are poisoning themselves.
        
           | IncreasePosts wrote:
           | There are enough archives of web content from 5+ years
           | ago (let alone Library of Congress archives, old book
           | scans, things like that) that it shouldn't be a big
           | deal if there actually is a breakthrough in training
           | and we move on from LLMs.
        
         | eitally wrote:
         | Eh... perhaps a race to the bottom on the fundamental research
         | side, but no American company is going to try to build their
         | own employee-facing front end to an open Chinese model when
         | they can just license ChatGPT or Claude or Copilot or Gemini
         | instead.
        
       | xg15 wrote:
       | s/AGI/the arrival of the Messiah/
        
       | CrimsonRain wrote:
       | What happens if someone else achieves AGI first? The way
       | they wrote it, it seems like they are damn sure they are
       | the ones who will achieve AGI. A bit too egotistic...?
        
       | lateforwork wrote:
       | After 2032 Microsoft will no longer have access to ChatGPT;
       | they will have to build their own frontier model within 7
       | years. Can Mustafa deliver that when Zuckerberg is sucking
       | up all the talent with $100M+ salaries?
        
       | mgh2 wrote:
       | https://www.youtube.com/watch?v=uXJsS6NCm_o
        
       | newusertoday wrote:
       | OpenAI's AGI would be like Tesla's FSD.
        
       | sroussey wrote:
       | I want to set up a devcontainer where, inside, I can call
       | 'supabase start' and use docker-outside-of-docker. GPT-5
       | was not helpful. AGI should be able to handle things not
       | well expressed in the training data. We are a long way
       | away.
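       | 
       | For reference, the usual approach hands the container the
       | host's Docker socket. A minimal devcontainer.json sketch;
       | the image choice and the CLI install step are assumptions:
       | 
       |     {
       |       "image":
       |         "mcr.microsoft.com/devcontainers/javascript-node:20",
       |       "features": {
       |         // reuse the host daemon: containers started by
       |         // `supabase start` run as siblings on the host
       |         "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {}
       |       },
       |       // hypothetical install step; run via `npx supabase`
       |       "postCreateCommand": "npm install supabase --save-dev"
       |     }
       | 
       | The usual gotcha is that bind-mount paths then resolve on
       | the host, not inside the devcontainer.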
        
         | wahnfrieden wrote:
         | Was that with Codex iterating autonomously? Or a one shot.
        
           | sroussey wrote:
           | Iterating. Claude didn't do any better.
        
       | doodlebugging wrote:
       | I'm patiently waiting for all this AI/AGI bullshit to unwind.
       | Some of my "investment" type newsletters have been alerting that
       | the AI endgame is imminent and the bubble is ready to pop. I
       | guess the big money people grifted all they can grift on this
       | round and are ready to pull the rug from everyone who has just
       | learned to spell AI.
        
       | butler533 wrote:
       | Why do none of OpenAI announcements have an author attributed to
       | them? Are people that ashamed of working there, they don't even
       | want to attach their name to the work? I guess I would be, too.
        
         | notatoad wrote:
         | because they're corporate PR statements drafted by a team, and
         | corporate press releases don't normally have an author byline
        
           | testfrequency wrote:
           | Eh, not really. There's usually a "voice" behind it.
           | 
           | In general I feel like OAI is clown town to work at these
           | days, so they probably don't want anyone except leadership to
           | take the heat for ~anything
        
           | butler533 wrote:
           | Wrong
           | 
           | https://aws.amazon.com/blogs/
           | 
           | https://blog.google/
           | 
           | Lol, even Apple has authors listed
           | https://www.apple.com/newsroom/
        
       | atbvu wrote:
       | Every time they bring up AGI, it feels more like a business
       | strategy to me. It helps them attract investors and dominate the
       | public narrative. For OpenAI, AGI is both a vision and a moat.
        
       | enricotal wrote:
       | "OpenAI is now able to release open-weight models that meet
       | requisite capability criteria."
       | 
       | Was Microsoft the blocker before? Prior agreements clearly
       | made true open weights awkward-to-impossible without
       | Microsoft's sign-off. Microsoft had (a) an exclusive
       | license to GPT-3's underlying tech back in 2020 (i.e.,
       | access to the model/code beyond the public API), and (b)
       | later, broad IP rights + API exclusivity on OpenAI models.
       | If you're contractually giving one partner IP rights and
       | API exclusivity, shipping weights openly would undercut
       | those rights. Today's language looks like a carve-out to
       | permit some open-weight releases as long as they're below
       | certain capability thresholds.
       | 
       | A few other notable tweaks in the new deal that help explain the
       | change:
       | 
       | - AGI claims get verified by an independent panel (not just
       | OpenAI declaring it).
       | 
       | - Microsoft keeps model/product IP rights through 2032, but
       | OpenAI can now jointly develop with third parties, serve some
       | things off non-Azure clouds, and--critically--release certain
       | open-weights.
       | 
       | Those are all signs of loosened exclusivity.
       | 
       | My read: previously, the partnership structure (not just
       | "Microsoft saying no") effectively precluded open-weight
       | releases; the updated agreement explicitly allows them within
       | safety/capability guardrails.
       | 
       | Expect any "open-weight" drops to be intentionally scoped--
       | useful, but a notch below their frontier closed models.
        
       | richard_todd wrote:
       | As a 1980's adventure game fan, I can only hope that whatever
       | comes after AGI is called SCI. Maybe it could be "Soul-Crushing
       | Intelligence".
       | 
       | Once we don't need people to make stuff anymore, we need to re-do
       | society so people can get access to all the stuff that's being
       | made. I doubt we do a very good job of that. But otherwise,
       | there's no point in making anything. I guess if we are lucky, the
       | AI overlords will keep us high on soma and let the population
       | naturally decline until we are gone.
        
         | Ylpertnodi wrote:
         | Soma? A 'smartphone' will do it.
        
       | nalinidash wrote:
       | Shows how much they value 'AGI' relative to how we valued it
       | in the textbooks. https://techcrunch.com/2024/12/26/microsoft-
       | and-openai-have-...
        
       | mannanj wrote:
       | So they specifically did compromise the public mission of
       | generating AI for the common good, and now the common good
       | is defined as "$100B in profits". What a sham and scam of
       | an "open" AI company.
        
       | djha-skin wrote:
       | In short: Microsoft let OpenAI change its business so it
       | can be for-profit, and asserted its rights over the IP so
       | that the whole OpenAI rebellion thing that happened earlier
       | can't happen again.
        
       | agnosticmantis wrote:
       | > Once AGI is declared by OpenAI, that declaration will now be
       | verified by an independent expert panel.
       | 
       | So OpenAI will declare AGI as soon as ChatGPT is a better
       | AI lawyer than any lawyer Microsoft could hire.
        
       | bongodongobob wrote:
       | I don't understand what MS is doing. The only AI available at
       | work is M365 Copilot. It's absolutely terrible. Tiny context
       | window, super guardrailed, can barely handle a 100 line
       | PowerShell script. It's so so bad. I don't get it.
        
       | drusepth wrote:
       | It seems really weird to me that such granular intercorporate
       | details are made publicly available (in a blog post?). I've never
       | had to publicly state things like this when making corporate
       | partnerships. That makes me wonder how much of this post is
       | crafted solely for PR...
        
         | ethbr1 wrote:
         | I believe MS declares Q1 earnings today, and there had been
         | some rumblings that they were risking accounting / reporting
         | liability by failing to characterize their material OpenAI
         | stake.
         | 
         | What probably happened:
         | 
         | 1. MS's accountants raised a warning
         | 
         | 2. Existing agreement prohibited disclosure of terms
         | 
         | 3. MS told OpenAI that wasn't acceptable and MS needed to
         | publicly report details today
         | 
         | 4. OpenAI coordinated release of this, to spin the
         | narrative
        
       | spumpydump wrote:
       | A tangent, but it feels more and more like the AGI maximalists of
       | 2025 are by and large the NFT maximalists from 2022 (who in turn
       | were the NoCode maximalists of 2020) that are looking for the
       | next metaphorical penny stock to sell.
       | 
       | That logical fallacy of, "I spent a week teaching myself this
       | topic and now I'm ready to talk about it like an expert."
        
       | reducesuffering wrote:
       | I _loathe_ that creating a non-profit organization
       | supposedly guided by a charter to "ensure that artificial
       | general intelligence (AGI) benefits all of humanity" is
       | actually about becoming a $500B corporation with capital
       | ownership and million-dollar pay packages. I mean, it looks
       | like you really can do whatever you want if you have the
       | most $ for the best lawyers and the gumption to not give af.
        
       | androiddrew wrote:
       | Open marriage, got it.
        
       ___________________________________________________________________
       (page generated 2025-10-28 23:01 UTC)