[HN Gopher] Evolving OpenAI's Structure
       ___________________________________________________________________
        
       Evolving OpenAI's Structure
        
       Author : rohitpaulk
       Score  : 321 points
       Date   : 2025-05-05 18:08 UTC (4 hours ago)
        
 (HTM) web link (openai.com)
 (TXT) w3m dump (openai.com)
        
       | ru552 wrote:
       | I wonder if this meets the requirements set by the recent round
       | of outside investors?
        
         | anxman wrote:
         | Not according to Microsoft: https://www.wsj.com/tech/ai/sam-
         | altman-satya-nadella-rift-30...
        
           | babelfish wrote:
           | I don't see any comments about the PBC in that article
           | (archive link: https://archive.is/cPLWd)
        
       | CooCooCaCha wrote:
       | I'm getting really tired of hearing about OpenAI "evolving".
        
         | dang wrote:
         | Ok, but can you please not post unsubstantive comments to HN?
         | We're looking for _curious_ conversation here, and this is not
         | that.
         | 
         | https://news.ycombinator.com/newsguidelines.html
        
       | everybodyknows wrote:
       | > transition to a Public Benefit Corporation
       | 
       | Can some business person give us a summary on PBCs vs.
       | alternative registrations?
        
         | fheisler wrote:
         | A PBC is just a for-profit company that has _some_ sort of
         | specific mandate to benefit the "public good" - however it
         | chooses to define that. It's generally meant to provide some
         | balance toward societal good over the more common, strictly
         | shareholder profit-maximizing alternative.
         | 
         | (IANAL but run a PBC that uses this charter[1] and have written
         | about it here[2] as part of our biennial reporting process.)
         | 
         | [1] https://github.com/OpenCoreVentures/ocv-public-benefit-
         | compa...
         | 
         | [2] https://goauthentik.io/blog/2024-09-25-our-biennial-pbc-
         | repo...
        
         | imkevinxu wrote:
         | you could've just asked this to chatgpt....
        
         | cs702 wrote:
         | The charter of a public-benefit corporation gives the company's
         | board and management a bit of legal cover for making decisions
         | that don't serve to maximize, or may even limit, financial
         | returns to shareholders, when those decisions are made for the
         | benefit of the public.
        
         | blagie wrote:
         | Reality: It is the same as any other for-profit with a better-
         | sounding name. It confuses a lot of people into thinking it's a
         | non-profit without being one.
         | 
         | Theory: It allows the CEO to make decisions motivated not just
         | by maximizing shareholder value but by some other social good.
         | Of course, very few PBC CEOs choose to do that.
        
       | _false wrote:
       | Here's a critical summary:
       | 
       | Key Structure Changes:
       | 
       | - Abandoning the "capped profit" model (which limited investor
       | returns) in favor of traditional equity structure - Converting
       | for-profit LLC to Public Benefit Corporation (PBC) - Nonprofit
       | remains in control but also becomes a major shareholder
       | 
       | Reading Between the Lines:
       | 
       | 1. Power Play: The "nonprofit control" messaging appears to be
       | damage control following previous governance crises. Heavy
       | emphasis on regulator involvement (CA/DE AGs) suggests this was
       | likely not entirely voluntary.
       | 
       | 2. Capital Structure Reality: They need "hundreds of billions to
       | trillions" for compute. The capped-profit structure was clearly
       | limiting their ability to raise capital at scale. This move
       | enables unlimited upside for investors while maintaining the PR
       | benefit of nonprofit oversight.
       | 
       | 3. Governance Complexity: The "nonprofit controls PBC but is also
       | major shareholder" structure creates interesting conflicts. Who
       | controls the nonprofit? Who appoints its board? These details are
       | conspicuously absent.
       | 
       | 4. Competition Positioning: Multiple references to "democratic
       | AI" vs "authoritarian AI" and "many great AGI companies" signal
       | they're positioning against perceived centralized control (likely
       | aimed at competitors).
       | 
       | Red Flags:
       | 
        | - Vague details about actual control mechanisms
        | - No specifics on nonprofit board composition or appointment
        |   process
        | - Heavy reliance on buzzwords ("democratic AI") without concrete
        |   governance details
        | - Unclear what specific powers the nonprofit retains besides
        |   shareholding
       | 
       | This reads like a classic Silicon Valley power consolidation
       | dressed up in altruistic language - enabling massive capital
       | raising while maintaining insider control through a nonprofit
       | structure whose own governance remains opaque.
        
         | JumpCrisscross wrote:
         | Was this AI generated?
        
       | atlasunshrugged wrote:
       | I think this is one of the most interesting lines as it basically
       | directly implies that leadership thinks this won't be a winner
       | take all market:
       | 
       | > Instead of our current complex capped-profit structure--which
       | made sense when it looked like there might be one dominant AGI
       | effort but doesn't in a world of many great AGI companies--we are
       | moving to a normal capital structure where everyone has stock.
       | This is not a sale, but a change of structure to something
       | simpler.
        
         | sz4kerto wrote:
         | Or they consider themselves to have low(er) chance of winning.
         | They could think either, but they obviously can't say the
         | latter.
        
           | bhouston wrote:
           | OpenAI is winning in a similar way that Apple is winning in
           | smartphones.
           | 
           | OpenAI is capturing most of the value in the space (generic
           | LLM models), even though they have competitors who are
           | beating them on price or capabilities.
           | 
            | I think OpenAI may be able to maintain this position, at
            | least for the medium term, because of their name
            | recognition/prominence and because they are still a fast
            | mover.
           | 
           | I also think the US is going to ban all non-US LLM providers
           | from the US market soon for "security reasons."
        
             | screamingninja wrote:
             | > ban all non-US LLM providers
             | 
             | What do you consider an "LLM provider"? Is it a website
             | where you interact with a language model by uploading text
             | or images? That definition might become too broad too
             | quickly. Hard to ban.
        
               | slt2021 wrote:
                | The bulk of the money comes from enterprise users. Just
                | call the 500 CEOs on the S&P 500 list and enforce it via
                | "cyber data safety" rules from the SEC, or something like
                | that.
               | 
               | everyone will roll over if all large public companies
               | roll over (and they will)
        
               | bhouston wrote:
               | I don't have to imagine. There are various US bills
               | trying to achieve this ban. Here is one of them:
               | 
               | https://www.theregister.com/2025/02/03/us_senator_downloa
               | d_c...
               | 
               | One of them will eventually pass given that OpenAI is
               | also pushing for protection:
               | 
               | https://futurism.com/openai-ban-chinese-ai-deepseek
        
               | babelfish wrote:
               | rather than coming up with a thorough definition,
               | legislation will likely target individual companies
               | (DeepSeek, Alibaba Cloud, etc)
        
             | jjani wrote:
              | IE once captured _all_ of the value in browserland, with
              | far higher mindshare and market dominance than OpenAI
             | has ever had. Comparing with Apple (= physical products) is
             | Apples to oranges (heh).
             | 
             | Their relationship with MS breaking down is a bad omen. I'm
             | already seeing non-tech users who use "Copilot" because
             | their spouse uses it at work. Barely knowing it's rebadged
             | GPT. You think they'll switch when MS replaces the backend
             | with e.g. Anthropic? No chance.
             | 
              | MS, Google, Apple, and Meta have gigantic levers to pull
             | and get the whole world to abandon OpenAI. They've barely
             | been pulling them, but it's a matter of time. People didn't
             | use Siri and Bixby because they were crap. Once everyone's
             | Android has a Gemini button that's just as good as GPT
             | (which it already is (it's better) for anything besides
              | image generation), people are going to start pressing it.
             | And good luck to OpenAI fighting that.
        
             | wincy wrote:
             | Companies that are contractors with the US government
              | already aren't allowed to use DeepSeek, even if it's an
              | airgapped R1 model running on our own hardware. Legal
             | told us we can't run any distills of it or anything. I
             | think this is very dumb.
        
             | retrorangular wrote:
             | > I also think the US is going to ban all non-US LLM
             | providers from the US market soon for "security reasons."
             | 
             | Well Trump is interested in tariffing movies and South
             | Korea took DeepSeek off mobile app stores, so they
             | certainly may try. But for high-end tasks, DeepSeek R1 671B
             | is available for download, so any company with a VPN to
             | download it and the necessary GPUs or cloud credits can run
              | it. And for consumers, DeepSeek R1's distilled models are
              | available for download, so anyone with a (~4-year-old or
              | newer) Mac or gaming PC can run them.
             | 
              | If the only thing keeping these companies' valuations so
             | high is banning the competition, that's not a good sign for
             | their long-term value. If you have to ban the competition,
             | you can't be feeling good about what you're making.
             | 
             | For what it's worth, I think GPT o3 and o1, Gemini 2.5 Pro
             | and Claude 3.7 Sonnet _are_ good enough to compete.
             | DeepSeek R1 is often the best option (due to cost) for
             | tasks that it can handle, but there are times where one of
              | the other models can achieve a task that it can't.
             | 
             | But if the US is looking to ban Chinese models, then that
             | could suggest that maybe these models aren't good enough to
             | raise the funding required for newer, significantly better
             | (and more expensive) models. That, or they just want to
             | stop as much money as possible from going to China. Banning
             | the competition actually makes the problem worse though, as
             | now these domestic companies have fewer competitors. But I
             | somewhat doubt there's any coherent strategy as to what
             | they ban, tariff, etc.
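              | 
              | As a rough sketch of how low the bar already is (assuming
              | llama-cpp-python and a community GGUF build of an R1
              | distill; the file name is a placeholder and the memory
              | figure is a ballpark):
              | 
              |     # pip install llama-cpp-python
              |     from llama_cpp import Llama
              | 
              |     # ~14B weights at 4-bit quantization fit in
              |     # roughly 9-10 GB of RAM/VRAM.
              |     MODEL = ("DeepSeek-R1-Distill-Qwen-14B-"
              |              "Q4_K_M.gguf")  # placeholder
              |     llm = Llama(model_path=MODEL, n_ctx=4096)
              | 
              |     out = llm("Briefly explain mixture-of-"
              |               "experts.", max_tokens=256)
              |     print(out["choices"][0]["text"])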
        
             | pphysch wrote:
             | Switching between Apple and Google/Android ecosystems is
             | expensive and painful.
             | 
             | Switching from ChatGPT to the many competitors is neither
             | expensive nor painful.
        
         | dingnuts wrote:
         | to me it sounds like an admission that AGI is bullshit! AGI
         | would be so disruptive to the current economic regime that
         | "winner takes all" barely covers it, I think. Admitting they
         | will be in normal competition with other AI companies implies
         | specializations and niches to compete, which means Artificial
         | Specialized Intelligence, NOT general intelligence!
         | 
         | and that makes complete sense if you don't have a lay person's
         | understanding of the tech. Language models were never going to
         | bring about "AGI."
         | 
         | This is another nail in the coffin
        
           | lenerdenator wrote:
           | That, or they don't care if they get to AGI first, and just
           | want their payday now.
           | 
           | Which sounds pretty in-line with the SV culture of putting
           | profit above all else.
        
             | foobiekr wrote:
             | If they think AGI is imminent the value of that payday is
             | very limited. I think the grandparent is more correct:
              | OpenAI is admitting that near-term AGI - and the only kind
              | anyone really cares about is the one with exponential
              | self-improvement - isn't happening any time
             | soon. But that much is obvious anyway despite the
             | hyperbolic nonsense now common around AI discussions.
        
               | lenerdenator wrote:
               | Define "imminent".
               | 
               | If I were a person like several of the people working on
               | AI right now (or really, just heading up tech companies),
               | I could be the kind to look at a possible world-ending
               | event happening in the next - eh, year, let's say - and
               | just want to have a party at the end of the world.
               | 
               | Five years to ten years? Harder to predict.
        
               | foobiekr wrote:
               | Imminent means "in a timeframe meaningful to the
               | individual equity holders this change is about."
               | 
               | The window there would at _least_ include the next 5
               | years, though obviously not ten.
        
           | the_duke wrote:
            | AGI is a matter of when, not if.
           | 
           | It will likely require research breakthroughs, significant
           | hardware advancement, and anything from a few years to a few
           | decades. But it's coming.
           | 
            | ChatGPT was released 2.5 years ago, and look at all the crazy
            | progress that has been made in that time. That doesn't mean
            | that the progress has to continue; we'll probably see a
            | stall.
            | 
            | But AIs that are on a level with humans for many common tasks
            | are not that far off.
        
             | ascertain_john wrote:
              | I don't think that's a safe foregone conclusion. What we've
              | seen so far is very, very powerful pattern matchers with
              | emergent properties that frankly we don't fully understand.
              | It very well may be the road to AGI, or it may stop at the
              | kind of things we can do in our subconscious--but not what
              | it takes to produce truly novel solutions to never-before-
              | seen problems. I don't think we know.
        
             | runako wrote:
             | Either that, or this AI boom mirrors prior booms. Those
             | booms saw a lot of progress made, a lot of money raised,
             | then collapsed and led to enough financial loss that AI
             | went into hibernation for 10+ years.
             | 
             | There's a lot of literature on this, and if you've been in
             | the industry for any amount of time since the 1950s, you
             | have seen at least one AI winter.
        
             | m_krebs wrote:
             | "X increased exponentially in the past, therefore it will
             | increase exponentially in the same way in the future" is
             | fallacious. There is nothing guaranteeing indefinite
             | uncapped growth in capabilities of LLMs. An exponential
             | curve and a sigmoidal curve look the same until a certain
             | point.
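              | 
              | A minimal illustration in plain Python (constants are
              | arbitrary, nothing LLM-specific): both curves start near
              | 1 and grow the same way early, then one saturates.
              | 
              |     import math
              | 
              |     def exp_curve(t):
              |         return math.exp(0.5 * t)
              | 
              |     def logistic(t):  # same early growth,
              |         # but capped at a ceiling of 100
              |         return 100 / (1 + 99 * math.exp(-0.5 * t))
              | 
              |     for t in range(0, 21, 4):
              |         print(t, round(exp_curve(t), 1),
              |               round(logistic(t), 1))
              |     # t=4:  7.4 vs 6.9  (look alike)
              |     # t=20: 22026.5 vs 99.6  (nothing alike)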
        
               | dragonwriter wrote:
               | Yeah, it is a pretty good bet that any real process that
               | produces something that looks like an exponential curve
               | over time is the early phase of a sigmoid curve, because
               | all real processes have constraints.
        
               | 91bananas wrote:
                | And if we apply the 80/20 rule, it feels like we're at
                | about 50-75% right now. So we're getting close to done
                | with the easy parts. Then come the hard parts.
        
             | blibble wrote:
              | > AGI is a matter of when, not if.
             | 
             | LLMs destroying any sort of capacity (and incentive) for
             | the population to think pushes this further and further out
             | each day
        
               | jwilber wrote:
                | I agree that LLMs are hurting the general population's
                | capacity to think (assuming they use them often; I've
                | certainly noticed a slight trend among students I've
                | taught to put in less effort, and in myself to some
                | extent).
               | 
               | I don't agree that this will affect ML progress much,
               | since the general population isn't contributing to core
               | ML research.
        
               | delecti wrote:
               | On the other hand, dumbing down the population also
               | lowers the bar for AGI. /s
        
             | Kabukks wrote:
             | Could you elaborate on the progress that has been made? To
             | me, it seems only small/incremental changes are made
             | between models with all of them still hallucinating. I can
             | see no clear steps towards AGI.
        
               | streptomycin wrote:
               | https://reddit.com/r/ThatsInsane/comments/1jyja0s/2_years
               | _di...
        
             | foobiekr wrote:
             | I think this is right but also missing a useful
             | perspective.
             | 
             | Most HN people are probably too young to remember that the
             | nanotech post-scarcity singularity was right around the
              | corner - just some research and engineering away - which was
             | the widespread opinion in 1986 (yes, 1986). It was _just as
             | dramatic_ as today's AGI.
             | 
             | That took 4-5 years to fall apart, and maybe a bit longer
             | for the broader "nanotech is going to change everything" to
             | fade. Did nanotech disappear? No, but the notion of general
             | purpose universal constructors absolutely is dead. Will we
             | have them someday? Maybe, if humanity survives a hundred
             | more years or more, but it's not happening any time soon.
             | 
              | There are a ton of similarities between the nanotech
              | singularity and the modern LLM-AGI situation. People
              | point(ed) to "all the stuff happening" - surely the
              | singularity is on the horizon! Similarly, there was the
              | apocalyptic scenario that got a ton of attention and people
              | latching onto "nanotech safety" - instead of runaway AI or
              | paperclip engines, it was Grey Goo (also coined in 1986).
             | 
             | The dynamics of the situation, the prognostications, and
             | aggressive (delusional) timelines, etc. are all almost
             | identical in a 1:1 way with the nanotech era.
             | 
             | I think we will have both AGI and general purpose universal
             | constructors, but they are both no less than 50 years away,
             | and probably more.
             | 
             | So many of the themes are identical that I'm wondering if
             | it's a recurring kind of mass hysteria. Before nanotech, we
             | were on the verge of genetic engineering (not _quite_ the
             | same level of hype, but close, and pretty much the same
             | failure to deliver on the hype as nanotech) and before that
             | the crazy atomic age of nuclear everything.
             | 
              | Yes, yes, I know that this time is different and that AI is
              | different and it won't be another round of "oops, this
              | turned out to be very hard to make progress on and we're
              | going to be in a very slow, multi-decade slow-improvement
              | regime" - but that has been the outcome of every example of
              | this that I can think of.
        
               | quesera wrote:
               | I won't go too far out on this limb, because I kind of
               | agree with you... but to be fair -- 1980s-1990s nanotech
               | did not attract this level of investment, nor was it
               | visible to ordinary people, nor was it useful to anyone
               | except researchers and grant writers.
               | 
               | It seems like nanotech is all around us now, but the term
               | "nanotech" has been redefined to mean something different
               | (larger scale, less amazing) from Drexler's molecular
               | assemblers.
        
               | jonfromsf wrote:
               | Every consumer has very useful AI at their fingertips
               | right now. It's eating the software engineering world
               | rapidly. This is nothing like nanotech in the 80s.
        
               | Yizahi wrote:
                | Sure. But fancy autocomplete for a very limited industry
                | (IT), plus graphics generation and a few more similar
                | items, is indeed useful. Just like "nanotech" coatings
                | on, say, optics, or in precision machinery, or all the
                | other fancy nano films in many industries. Modern
                | transistors are close to nano scale now, etc.
               | 
               | The problem is that the distance between a nano thin film
               | or an interesting but ultimately rigid nano scale
               | transistor and a programmable nano level sized robot is
                | enormous, despite similar sizes. Likewise, the distance
                | between an autocomplete heavily reliant on preexisting
                | external validators (compilers, linters, static code
                | analyzers, etc.) and a real AI capable of thinking is
                | equally enormous.
        
               | tbrownaw wrote:
               | > _Did nanotech disappear? No, but the notion of general
               | purpose universal constructors absolutely is dead. Will
               | we have them someday? Maybe, if humanity survives a
               | hundred more years or more,_
               | 
               | I thought this was a "we know we can't" thing rather than
               | a "not with current technology" thing?
        
             | bdangubic wrote:
              | _AGI is a matter of when, not if_
             | 
              | Probably true, but this statement would also be true if
              | "when" is 2308, which would defeat its purpose. When the
              | first cars started rolling around, some mates around the
              | campfire were saying "not if but when" we'll have flying
              | cars everywhere, and 100 years later (with amazing progress
              | in car manufacturing) we are nowhere near... I think
              | "when, not if" is one of those statements that, while
              | probably indisputable in theory, is easily disputable in
              | practice. Give me a "when" here and I'll put up $1,000 to a
              | charity of your choice if you are right, and agree to do
              | the same if wrong.
        
               | dbacar wrote:
                | It is already here, kinda. I mean, look at how it passes
                | the bar exam, solves math-olympiad-level questions,
                | generates video, art, music. What else are you looking
                | for? It has already penetrated the job market, causing
                | significant disruption in programming. We are not seeing
                | flying cars, but we are witnessing things never even
                | talked about around the campfire. Seriously, even 4 years
                | ago, would you have thought all this would happen?
        
               | bdangubic wrote:
               | AGI is here?????! Damn, me, and every other human, must
               | have missed that news... /s
        
             | manquer wrote:
              | Progress is not just a function of technical possibility
              | (even if it exists); it is also economics.
             | 
              | It has taken tens to hundreds of billions of dollars,
              | without equivalent economic justification (yet), to get
              | here. I am not saying economic justification doesn't exist
              | or won't come in the future, just that the upfront
              | investment and risk are already on the order of magnitude
              | of what the largest tech companies can expend.
             | 
              | If the next generation requires hundreds of billions or
              | trillions [2] upfront, and a very long time to make
              | returns, no one company (or even country) could allocate
              | that kind of resources.
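              | 
              | A back-of-envelope version of this, with made-up round
              | numbers (assumptions, not anyone's real figures): if each
              | generation costs roughly 10x the previous one,
              | 
              |     cost = 100e6  # assume ~$100M for a run today
              |     for gen in range(1, 4):
              |         cost *= 10
              |         print(f"+{gen} gen: ~${cost:,.0f}")
              |     # +1 gen: ~$1,000,000,000
              |     # +2 gen: ~$10,000,000,000
              |     # +3 gen: ~$100,000,000,000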
             | 
              | There are many cases of such economically limited
              | innovations [1]; nuclear fusion is the classic "always 20
              | years away" example. Another close one is anything space
              | related: we cannot replicate in the next 5 years what we
              | already achieved 50 years ago, say landing on the Moon,
              | and so on.
              | 
              | From just an economic perspective it is definitely an
              | "if", without even going into the technology challenges.
              | 
              | [1] Innovations in the cost of key components can reshape
              | the economic equation; it does happen (as with SpaceX) but
              | it is also not guaranteed, as with fusion.
             | 
              | [2] The next gen may not be close enough to AGI. AGI could
              | require 2-3 more generations (and equivalent orders of
              | magnitude of resources), which is something the world is
              | unlikely to expend resources on even if it had them.
        
             | JumpCrisscross wrote:
              | > _AGI is a matter of when, not if_
             | 
             | We have zero evidence for this. (Folks said the same shit
             | in the 80s.)
        
         | ignoramous wrote:
         | > _I think this is one of the most interesting lines as it
          | basically directly implies that leadership thinks this won't
         | be a winner take all market:_
         | 
         | Yeah; and:                 We want to open source very capable
         | models.
         | 
          | Seems like the lack of any daylight between DeepSeek R1, Sonnet
          | 3.5, Gemini 2.5, & Grok 3 really put things in perspective for
          | them!
        
           | kvetching wrote:
           | Not to mention, @Gork, aka Grok 3.5...
        
         | istjohn wrote:
         | I'm not surprised that they found a reason to uncap their
         | profits, but I wouldn't try to infer too much from the
         | justification they cooked up.
        
         | lanthissa wrote:
         | AGI can't really be a winner take all market. The 'reward' for
         | general intelligence is infinite as a monopoly and it
         | accelerates productivity.
         | 
          | Not only is there infinite incentive to compete, but there are
          | decreasing costs to doing so. The only world in which AGI is
          | winner take all is a world in which it is so tightly controlled
          | that the public can't query it.
        
           | Night_Thastus wrote:
           | Nothing OpenAI is doing, or ever has done, has been close to
           | AGI.
        
             | pinkmuffinere wrote:
              | I agree with you, but that's kind of beside the point.
              | OpenAI's thesis is that they will work towards AGI, and
              | eventually succeed. In the context of that premise, OpenAI
              | still doesn't believe AGI would be winner-takes-all. I
              | think that's an interesting discussion whether you believe
              | the premise or not.
        
             | AndrewKemendo wrote:
             | I agree with you
             | 
             | I wonder, do you have a hypothesis as to what would be a
             | measurement that would differentiate AGI vs Not-AGI?
        
             | voidspark wrote:
             | Their multimodal models are a rudimentary form of AGI.
             | 
             | EDIT: There can be levels of AGI. Google DeepMind have
             | proposed a framework that would classify ChatGPT as
             | "Emerging AGI".
             | 
             | https://arxiv.org/abs/2311.02462
        
               | always_imposter wrote:
               | AGI would mean something which doesn't need direction or
               | guidance to do anything. Like us humans, we don't wait
               | for somebody to give us a task and go do it as if that is
               | our sole existence. We live with our thoughts, blank out,
               | watch TV, read books etc. What we currently have and
               | possibly in the next century as well will be nothing
               | close to an actual AGI.
               | 
               | I don't know if it is optimism or delusions of grandeur
               | that drives people to make claims like AGI will be here
               | in the next decade. No, we are not getting that.
               | 
               | And what do you think would happen to us humans if such
               | AGI is achieved? People's ability to put food on the
                | table is dependent on their labor exchanged for money. I
                | can guarantee for a fact that work will still be there,
                | but will it be equitable? Available to everyone?
                | Absolutely not. Even UBI isn't going to cut it, because,
                | as experiments have shown, even with UBI people still
                | want to work. But with that, a majority of the work won't
                | be there, especially paper-pushing mid-level BS like
                | managers on top of managers, etc.
               | 
               | If we actually get AGI, you know what would be the
               | smartest thing for such an advanced thing to do? It would
               | probably kill itself because it would come to the
               | conclusion that living is a sin and a futile effort. If
               | you are that smart, nothing motivates you anymore. You
               | will be just a depressed mass for all your life.
               | 
               | That's just how I feel.
        
               | voidspark wrote:
               | > AGI would mean something which doesn't need direction
               | or guidance to do anything
               | 
               | There can be levels of AGI. Google DeepMind have proposed
               | a framework that would classify ChatGPT as "Emerging
               | AGI".
               | 
               | ChatGPT can solve problems that it was not explicitly
               | trained to solve, across a vast number of problem
               | domains.
               | 
               | https://arxiv.org/pdf/2311.02462
               | 
               | The paper is summarized here
               | https://venturebeat.com/ai/here-is-how-far-we-are-to-
               | achievi...
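                | 
                | Roughly encoded, the performance tiers look like this
                | (my paraphrase of the paper, not its exact wording):
                | 
                |     LEVELS = {
                |         1: "Emerging: >= unskilled humans",
                |         2: "Competent: >= 50th percentile",
                |         3: "Expert: >= 90th percentile",
                |         4: "Virtuoso: >= 99th percentile",
                |         5: "Superhuman: beats all humans",
                |     }
                |     # The paper places ChatGPT at level 1,
                |     # "Emerging AGI", on the generality axis.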
        
               | dom96 wrote:
               | This constant redefinition of what AGI means is really
               | tiring. Until an AI has agency, it is nothing but a fancy
               | search engine/auto completer.
        
               | voidspark wrote:
               | It's not a redefinition, it's a refinement.
               | 
               | Think about it - the original definition of AGI was
               | basically a machine that can do absolutely anything at a
               | human level of intelligence or better.
               | 
               | That kind of technology wouldn't just appear instantly in
               | a step change. There would be incremental progress. How
               | do you describe the intermediate stages?
               | 
               | What about a machine that can do anything better than the
               | 50th percentile of humans? That would be classified as
               | "Competent AGI", but not "Expert AGI" or ASI.
               | 
               | > fancy search engine/auto completer
               | 
               | That's an extreme oversimplification. By the same
               | reasoning, so is a person. They are just auto completing
                | words when they speak. No, that's not how deep learning
                | systems work. It's not autocomplete.
        
               | JumpCrisscross wrote:
                | > _It's not a redefinition, it's a refinement_
               | 
               | It's really not. The Space Shuttle isn't an emerging
               | interstellar spacecraft, it's just a spacecraft. Throwing
               | emerging in front of a qualifier to dilute it is just
               | bullshit.
               | 
               | > _By the same reasoning, so is a person. They are just
               | auto completing words when they speak._
               | 
               | We have no evidence of this. There is a common trope
               | across cultures and history of characterising human
               | intelligence in terms of the era's cutting-edge
               | technology. We did it with steam engines [1]. We did it
               | with computers [2]. We're now doing it with large
               | language models.
               | 
               | [1] http://metaphors.iath.virginia.edu/metaphors/24583
               | 
               | [2] https://www.frontiersin.org/journals/ecology-and-
               | evolution/a...
        
               | voidspark wrote:
               | Technically it is a refinement, as it distinguishes
               | levels of performance.
               | 
               | The _General Intelligence_ part of AGI refers to its
               | ability to solve problems that it was not explicitly
               | trained to solve, across many problem domains. We already
               | have examples of the current systems doing exactly that -
                | zero-shot and few-shot capabilities.
               | 
               | > We have no evidence of this.
               | 
               | That's my point. Humans are _not_ "autocompleting words"
               | when they speak.
        
               | JumpCrisscross wrote:
               | > _Technically it is a refinement, as it distinguishes
               | levels of performance_
               | 
               | No, it's bringing something out of scope into the
               | definition. Gluten-free means free of gluten. Gluten-free
                | bagel versus sliced bread is a refinement--both started
               | out under the definition. Glutinous bread, on the other
               | hand, is not gluten free. As a result, "almost gluten
               | free" is bullshit.
               | 
                | > _That's my point. Humans are not "autocompleting
               | words" when they speak_
               | 
               | Humans are not. LLMs are. It turns out that's incredibly
               | powerful! But it's also limiting in a way that's
               | fundamentally important to the definition of AGI.
               | 
               | LLMs bring us closer to AGI in the way the inventions of
               | writing, computers and the internet probably have.
               | Calling LLMs "emerging AGI" pretends we are on a path to
               | AGI in a way we have zero evidence for.
        
               | voidspark wrote:
               | > Gluten-free means free of gluten.
               | 
               | Bad analogy. That's a binary classification. AGI systems
               | can have degrees of performance and capability.
               | 
               | > Humans are not. LLMs are.
               | 
               | My point is that if you oversimplify LLMs to "word
               | autocompletion" then you can make the same argument for
               | humans. It's such an oversimplification of the
               | transformer / deep learning architecture that it becomes
               | meaningless.
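                | 
                | To make the disagreement concrete: literal "word
                | autocompletion" is something like this toy greedy
                | bigram loop, whereas a transformer replaces the static
                | table with attention over the entire context.
                | 
                |     TABLE = {
                |         "the": {"cat": 0.6, "dog": 0.4},
                |         "cat": {"sat": 0.9, "ran": 0.1},
                |         "sat": {"<end>": 1.0},
                |     }
                | 
                |     def generate(tok):
                |         out = [tok]
                |         while tok in TABLE:
                |             nxt = TABLE[tok]
                |             # greedy: most probable next token
                |             tok = max(nxt, key=nxt.get)
                |             if tok == "<end>":
                |                 break
                |             out.append(tok)
                |         return out
                | 
                |     print(generate("the"))
                |     # ['the', 'cat', 'sat']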
        
               | JumpCrisscross wrote:
                | > _That's a binary classification. AGI systems can have
               | degrees of performance and capability_
               | 
               | The "g" in AGI requires the AI be able to perform "the
               | full spectrum of cognitively demanding tasks with
               | proficiency comparable to, or surpassing, that of humans"
               | [1]. Full and not full are binary.
               | 
               | > _if you oversimplify LLMs to "word autocompletion" then
               | you can make the same argument for humans_
               | 
               | No, you can't, unless you're pre-supposing that LLMs work
               | like human minds. Calling LLMs "emerging AGI" pre-
               | supposes that LLMs are the path to AGI. We simply have no
               | evidence for that, no matter how much OpenAI and Google
               | would like to pretend it's true.
               | 
               | [1] https://en.wikipedia.org/wiki/Artificial_general_inte
               | lligenc...
        
               | voidspark wrote:
               | Then you are simply rejecting any attempts to refine the
               | definition of AGI. I already linked to the Google
               | DeepMind paper. The definition is being debated in the AI
               | research community. I already explained that definition
               | is too limited because it doesn't capture all of the
               | intermediate stages. That definition may be the end goal,
               | but obviously there will be stages in between.
               | 
               | > No, you can't, unless you're pre-supposing that LLMs
               | work like human minds.
               | 
               | You are missing the point. If you reduce LLMs to "word
               | autocompletion" then you completely ignore the the
               | attention mechanism and conceptual internal
               | representations. These systems have deep learning models
               | with hundreds of layers and trillions of weights. If you
               | completely ignore all of that, then by the same reasoning
               | (completely ignoring the complexity of the human brain)
               | we can just say that people are auto-completing words
               | when they speak.
        
               | JumpCrisscross wrote:
               | > _I already linked to the Google DeepMind paper. The
               | definition is being debated in the AI research community_
               | 
               | Sure, Google wants to redefine AGI so it looks like
               | things that aren't AGI can be branded as such. That
               | definition is, correctly in my opinion, being called out
               | as bullshit.
               | 
               | > _obviously there will be stages in between_
               | 
               | We don't know what the stages are. Folks in the 80s were
               | similarly selling their expert systems as a stage to AGI.
               | "Emerging AGI" is a bullshit term.
               | 
               | > _If you reduce LLMs to "word autocompletion" then you
               | completely ignore the the attention mechanism and
               | conceptual internal representations. These systems have
               | deep learning models with hundreds of layers and
               | trillions of weights_
               | 
               | Fair enough, granted.
        
               | latentsea wrote:
               | I agree. AGI is meaningless as a term if it doesn't mean
               | completely autonomous agentic intelligence capable of
               | operating on long-term planning horizons.
               | 
               | Edit: because if "AGI" doesn't mean that... then what
               | means that and only that!?
        
               | ben_w wrote:
               | > Edit: because if "AGI" doesn't mean that... then what
               | means that and only that!?
               | 
               | "Agentic AI" means that.
               | 
               | Well, to some people, anyway. And even then, people are
               | already arguing about what counts as agency.
               | 
               | That's the trouble with new tech, we have to invent words
               | for new stuff that was previously fiction.
               | 
                | I wonder, did people argue over whether "horseless
                | carriages" were really carriages? And for "aeroplane",
                | how many argued that "plane" didn't suit either the Latin
                | or Greek etymology, for various reasons?
               | 
               | We never did rename "atoms" after we split them...
        
               | ben_w wrote:
               | Unless you can define "agency", you're opening yourself
               | to being called nothing more than a fancy chemical
               | reaction.
        
               | henryfjordan wrote:
               | > AGI would mean something which doesn't need direction
               | or guidance to do anything. Like us humans, ...
               | 
               | Name me a human that also doesn't need direction or
               | guidance to do a task, at least one they haven't done
               | before
        
               | JumpCrisscross wrote:
                | > _Name me a human that also doesn't need direction or
                | guidance to do a task, at least one they haven't done
                | before_
               | 
               | Literally everything that's been invented.
        
               | lukan wrote:
               | It seems like you believe AGI won't come for a long time,
               | because you don't want that to happen.
               | 
                | The Turing test was successful. Pre-ChatGPT, I would not
                | have believed that would happen so soon.
                | 
                | LLMs ain't AGI, sure. But they might be an essential
                | part, and the missing parts may already have been found,
                | just not put together.
                | 
                | And there will always be plenty of work. Distributing
                | resources might require new ways, though.
        
               | semi-extrinsic wrote:
               | > The turing test was succesfull.
               | 
               | The very people whose theories about language are now
               | being experimentally verified by LLMs, like Chomsky, have
               | also been discrediting the Turing test as
                | pseudoscientific nonsense since the early 1990s.
               | 
               | It's one of those things like the Kardashev scale, or
               | Level 5 autonomous driving, that's extremely easy to
               | define and sounds very cool and scientific, but actually
               | turns out to have no practical impact on anything
               | whatsoever.
        
               | lukan wrote:
               | "but actually turns out to have no practical impact on
               | anything whatsoever"
               | 
                | Bots that are now almost indistinguishable from humans
                | won't have a practical impact? I am sceptical. And not
                | just because of scammers.
        
               | buu700 wrote:
               | I think there's a useful distinction that's often missed
               | between AGI and artificial consciousness. We could
               | conceivably have some version of AI that reliably
               | performs any task you throw at it consistently with peak
               | human capabilities, given sufficient tools or hardware to
               | complete whatever that task may be, but lacks subjective
               | experience or independent agency; I would call that AGI.
               | 
                | The two concepts have historically been inextricably
                | linked in sci-fi, which will likely make the first AGI
                | harder to recognize as AGI if it lacks consciousness, but
                | I'd argue
               | that simple "unconscious AGI" would be the superior
               | technology for current and foreseeable needs. Unconscious
               | AGI can be employed purely as a tool for massive
               | collective human wealth generation; conscious AGI
               | couldn't be used that way without opening a massive
               | ethical can of worms, and on top of that its existence
               | would represent an inherent existential threat.
               | 
               | Conscious AGI could one day be worthwhile as something we
               | give birth to for its own sake, as a spiritual child of
               | humanity that we send off to colonize distant or
               | environmentally hostile planets in our stead, but isn't
               | something I think we'd be prepared to deal with properly
               | in a pre-post-scarcity society.
               | 
               | It isn't inconceivable that current generative AI
               | capabilities might eventually evolve to such a level that
               | they meet a practical bar to be considered unconscious
               | AGI, even if they aren't there yet. For all the flak this
               | tech catches, it's easy to forget that capabilities which
               | we currently consider mundane were science fiction only
               | 2.5 years ago (as far as most of the population was
               | concerned). Maybe SOTA LLMs fit some reasonable
               | definition of "emerging AGI", or maybe they don't, but
               | we've already shifted the goalposts in one direction
               | given how quickly the Turing test became obsolete.
               | 
               | Personally, I think current genAI is probably a fair
               | distance further from meeting a useful definition of AGI
               | than those with a vested interest in it would admit, but
               | also much closer than those with pessimistic views of the
               | consequences of true AGI tech want to believe.
        
             | abtinf wrote:
             | Agreed and, if anything, you are too generous. They aren't
             | just not "close", they aren't even working in the same
             | category as anything that might be construed as
             | independently intelligent.
        
             | dr_dshiv wrote:
             | https://www.noemamag.com/artificial-general-intelligence-
             | is-...
             | 
              | Here is a mainstream opinion about why AGI is already
              | here, written by one of the authors of the most widely
              | read AI textbook, Artificial Intelligence: A Modern
              | Approach:
              | https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...
        
               | brendoelfrendo wrote:
               | I would argue that this is a fringe opinion that has been
               | adopted by a mainstream scholar, not a mainstream
               | opinion. That or, based on my reading of the article,
               | this person is using a definition of AGI that is very
               | different than the one that most people use when they say
               | AGI.
        
               | henryfjordan wrote:
               | Why does the Author choose to ignore the "General" in
               | AGI?
               | 
               | Can ChatGPT drive a car? No, we have specialized models
               | for driving vs generating text vs image vs video etc etc.
               | Maybe ChatGPT could pass a high school chemistry test but
               | it certainly couldn't complete the lab exercises. What
               | we've built is a really cool "Algorithm for indexing
               | generalized data", so you can train that Driving model
               | very similarly to how you train the Text model without
               | needing to understand the underlying data that well.
               | 
                | The author asserts that because ChatGPT can generate text
                | about so many topics, it's general; but it's really only
                | doing one thing, and that's not very general.
        
               | brookst wrote:
               | There are people who can't drive cars. Are they not
               | general intelligence?
               | 
               | I think we need to separate the thinking part of
               | intelligence from tool usage. Not everyone can use every
               | tool at a high level of expertise.
        
               | root_axis wrote:
               | Generally speaking, anyone can learn to use any tool.
               | This isn't true of generative AI systems which can only
               | learn through specialized training with meticulously
               | curated data sets.
        
               | ben_w wrote:
                | Generality is a continuous value, not a boolean; it
                | turned out that "AGI" was poorly defined, and because of
                | that
               | most people were putting the cut-off threshold in
               | different places.
               | 
               | Likewise for "intelligent", and even "artificial".
               | 
               | So no, ChatGPT can't drive a car*. But it knows more
               | about car repairs, defensive driving, global road
               | features (geoguesser), road signs in every language, and
               | how to design safe roads, than I'm ever likely to.
               | 
               | * It can also run python scripts with machine vision
               | stuff, but sadly that's still not sufficient to drive a
                | car... well, to drive one safely, anyway.
        
               | KHRZ wrote:
                | You can literally today prompt ChatGPT with API
                | instructions to drive a car, then feed it images of the
                | view out a car's windows and have it generate commands
                | for the car (JSON-schema-restricted structured commands
                | if you like). Text can represent any data, thus yes, it
                | is general.
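                | 
                | A minimal sketch of that loop (assuming the openai
                | Python SDK; the model choice and the surrounding car
                | interface are hypothetical):
                | 
                |     import json
                |     from openai import OpenAI
                | 
                |     client = OpenAI()
                |     SYS = ('You drive a car. Reply ONLY with '
                |            'JSON: {"steer": -1..1, '
                |            '"throttle": 0..1}')
                | 
                |     def drive_step(image_url):
                |         # one camera frame in, one command out
                |         r = client.chat.completions.create(
                |             model="gpt-4o",  # assumption
                |             response_format={
                |                 "type": "json_object"},
                |             messages=[
                |                 {"role": "system",
                |                  "content": SYS},
                |                 {"role": "user", "content": [
                |                     {"type": "image_url",
                |                      "image_url": {
                |                          "url": image_url}},
                |                 ]},
                |             ],
                |         )
                |         msg = r.choices[0].message.content
                |         return json.loads(msg)
                | 
                | Whether the latency and reliability are anywhere near
                | good enough for a real car is a separate question.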
        
               | threeseed wrote:
               | > JSON schema restricted structured commands if you like
               | 
               | How about we have ChatGPT start with a simple task like
               | reliably generating JSON schema when asked to.
               | 
               | Hint: it will fail.
        
               | Nuzzerino wrote:
               | Text can be a carrier for any type of signal. The problem
               | gets reduced to that of an interface definition. It's
               | probably not going to be ideal for driving cars, but if
                | the latency, signal quality, and accuracy are within
               | acceptable constraints, what else is stopping it?
               | 
                | This doesn't imply that it's ideal for driving cars, but
                | to say that it's not capable of driving, or not general
                | intelligence, is incorrect in my view.
        
               | semi-extrinsic wrote:
               | ... that was written in mid-2023. So that opinion piece
                | is trying to redefine 2-year-old LLMs like GPT-4 (pre-4o)
               | as AGI. Which can only be described as an absolutely
               | herculean movement of goalposts.
        
               | root_axis wrote:
               | "AGI is already here, just wait 30 more years". Not very
               | convincing.
        
               | lossolo wrote:
               | > AGI is already here
               | 
               | Last time I checked, in an Anthropic paper, they asked
               | the model to count something. They examined the logits
               | and a graph showing how it arrived at the answer. Then
               | they asked the model to explain its reasoning, and it
               | gave a completely different explanation, because that was
               | the most statistically probable response to the question.
               | Does that seem like AGI to you?
        
           | TeMPOraL wrote:
           | AGI could be a winner-take-all market... for the AGI,
           | specifically for the first one that's General and Intelligent
           | enough to ensure its own survival and prevent competing AGI
           | efforts from succeeding...
        
             | pdxandi wrote:
             | How would an AGI prevent others from competing? Sincere
             | question. That seems like something that ASI would be
             | capable of. If another company released an AGI, how would
             | the original stifle it? I get that the original can self-
             | improve to try to stay ahead, but that doesn't necessarily
             | mean it self-improves the best or most efficiently, right?
        
           | JumpCrisscross wrote:
            | > _AGI can't really be a winner take all market. The
           | 'reward' for general intelligence is infinite as a monopoly
           | and it accelerates productivity_
           | 
           | The first-mover advantages of an AGI that can improve itself
            | are theoretically insurmountable.
           | 
           | But OpenAI doesn't have a path to AGI any more than anyone
           | else. (It's increasingly clear LLMs alone don't make the
           | cut.) And the market for LLMs, non-general AI, is very much
           | not winner takes all. In this announcement, OpenAI is
           | basically acknowledging that it's not getting to self-
           | improving AGI.
        
             | tbrownaw wrote:
             | > _The first-mover advantages of an AGI that can improve
              | itself are theoretically insurmountable._
             | 
             | This has some baked assumptions about cycle time and
             | improvement per cycle and whether there's a ceiling.
        
               | JumpCrisscross wrote:
               | > _this has some baked assumptions about cycle time and
                | improvement per cycle and whether there's a ceiling_
               | 
               | To be precise, it assumes a low variability in cycle time
               | and improvement per cycle. If everyone is subjected to
               | the same limits, the first-mover advantage remains
               | insurmountable. I'd also argue that whether there is a
               | ceiling matters less than how high it is. If the first
               | AGI won't hit a ceiling for decades, it will have decades
               | of fratricidal supremacy.
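                | 
                | A toy version of both points (all numbers arbitrary):
                | same growth rate per cycle, a 5-cycle head start, with
                | and without a capability ceiling.
                | 
                |     def gap(r=0.5, head=5, cycles=30, cap=None):
                |         a = (1 + r) ** head  # first mover
                |         b = 1.0              # follower
                |         for _ in range(cycles):
                |             a, b = a * (1 + r), b * (1 + r)
                |             if cap is not None:
                |                 a, b = min(a, cap), min(b, cap)
                |         return a - b
                | 
                |     print(gap())  # gap keeps compounding
                |     print(gap(cap=1e3))  # 0.0: both hit ceiling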
        
           | aeternum wrote:
           | Remember however that their charter specifies: "If a value-
           | aligned, safety-conscious project comes close to building AGI
           | before we do, we commit to stop competing with and start
           | assisting this project"
           | 
            | It does have some weasel words around "value-aligned" and
            | "safety-conscious", which they can always argue, but this
            | could get interesting because they've basically agreed not
            | to compete. A fairly insane thing to do, in retrospect.
        
             | whatshisface wrote:
             | Insane relative to the goals of the current leadership, but
             | they didn't write that.
        
         | phreeza wrote:
          | That is a very obvious thing for them to say, though,
          | regardless of what they truly believe, because (a) it
          | legitimizes removing the cap, making fundraising easier, and
          | (b) it averts antitrust suspicions.
        
         | raincole wrote:
         | Even if they think it will be a winner-take-all market, they
         | won't say it out loud. It would be begging for antitrust
         | lawsuits.
        
       | sampton wrote:
       | OpenAI is busy rearranging the chairs while their competitors
       | surpass them.
        
         | ramesh31 wrote:
         | Yup. Haven't used an OpenAI model for anything in 6+ months
         | now, except to check the latest one and confirm that it is
         | still hilariously behind Google/Anthropic.
        
       | datadrivenangel wrote:
       | "Instead of our current complex capped-profit structure--which
       | made sense when it looked like there might be one dominant AGI
       | effort but doesn't in a world of many great AGI companies--we are
       | moving to a normal capital structure where everyone has stock.
       | This is not a sale, but a change of structure to something
       | simpler."
       | 
       | OpenAI admitting that they're not going to win?
        
       | modeless wrote:
       | Huh, so Elon's lawsuit worked? The nonprofit will retain control?
       | Or is this just spin on a plan that will eventually still
       | sideline the nonprofit?
        
         | j_maffe wrote:
          | It sounds more like the attorneys general won
        
         | blagie wrote:
         | To be specific: The nonprofit currently retains control. It
         | will stop once more dilution sets in.
        
       | ToucanLoucan wrote:
       | > Our mission is to ensure that artificial general intelligence
       | (AGI) benefits all of humanity.
       | 
       | Then why is it paywalled? Why are you making/have made people
       | across the world sift through the worst material on offer by the
       | wide uncensored Internet to train your LLMs? Why do you have a
       | for-profit LLC operating under a non-profit, or for that matter,
       | a "Public Benefit Corporation" that has to answer to shareholders
       | at all?
       | 
       | Related to that:
       | 
       | > or the needs for hundreds of billions of dollars of compute to
       | train models and serve users.
       | 
        | How does that serve humanity? Redirecting billions of dollars
        | to fancy autocomplete whose power demands strain already
        | struggling electrical grids and offset the gains of green
        | energy worldwide?
       | 
       | > A lot of people around OpenAI in the early days thought AI
       | should only be in the hands of a few trusted people who could
       | "handle it".
       | 
       | No, we thought your plagiarism machine was a disgusting abuse of
       | the public square, and to be clear, this criticism would've been
       | easily handled by simply requesting people opt-in to have their
       | material used for AI training. But we all know why you didn't do
        | that, don't we, Sam.
       | 
       | > It will of course not be all used for good, but we trust
       | humanity and think the good will outweigh the bad by orders of
       | magnitude.
       | 
       | Well so far, we've got vulnerable, lonely people being scammed on
       | Facebook, we've got companies charging subscriptions for people
       | to sext their chatbots, we've got various states using it to
       | target their opposition for military intervention, and the White
       | House may have used it to draft the dumbest basis for a trade war
       | in human history. Oh and fake therapists too.
       | 
       | When's the good kick in?
       | 
       | > We believe this is the best path forward--AGI should enable all
       | of humanity^1 to benefit each other.
       | 
       | ^1 who subscribe to our services
        
         | Lalabadie wrote:
         | > Then why is it paywalled? Why are you making/have made people
         | across the world sift through the worst material on offer by
         | the wide uncensored Internet to train your LLMs?
         | 
         | Because they're concerned about AI use the same way Google is
         | concerned about your private data.
        
       | pants2 wrote:
       | It's somewhat odd to me that many companies operating in the
       | public eye are basically stating "We are creating a digital god,
       | an instrument more powerful than any nuclear weapon" and raising
       | billions to do it, and nobody bats an eye...
        
         | esafak wrote:
         | Lots of people in academia and industry are calling for more
         | oversight. It's the US government that's behind. Europe's AI
         | Act bans applications with unacceptable risk:
         | https://en.wikipedia.org/wiki/Artificial_Intelligence_Act
        
           | azinman2 wrote:
            | Unless China handicaps their progress as well (which they
            | won't; see Made in China 2025), all you're doing is handing
            | the future to DeepSeek et al.
        
             | nicce wrote:
              | This thought process is no different than it was with
              | nuclear weapons.
              | 
              | The primary difference is observability - with satellites
              | we had some confidence that other nations respected
              | treaties, or that there was enough reaction time for
              | mutual destruction, but with this AI development we lack
              | all of that.
        
               | lukas099 wrote:
               | Yes, it was the same with nukes, each side had to build
               | them because the other side was building them.
        
               | bpodgursky wrote:
               | Only countries with nuclear weapons had an actual seat at
               | the table when the world banned new nuclear weapon
               | programs.
        
               | nicce wrote:
               | That is why we see the current AI competition and some
               | attempts from companies to regulate it so that "it is
               | safe only in their hands".
               | 
               | https://time.com/6288245/openai-eu-lobbying-ai-act/
        
             | esafak wrote:
             | What kind of a future is that? If China marches towards a
             | dystopia, why should Europe dutifully follow?
             | 
             | We can selectively ban uses without banning the technology
             | wholesale; e.g., nuclear power generation is permitted,
             | while nuclear weapons are strictly controlled.
        
               | alasano wrote:
               | We don't know whether pushing towards AGI is marching
               | towards a dystopia.
               | 
               | If it's winner takes all for the first company/nation to
               | have AGI (presuming we can control it), then slowing down
               | progress of any kind with regulation is a risk.
               | 
               | I don't think there's a good enough analogy to be made,
               | like your nuclear power/weapons example.
               | 
               | The hypothetical benefits of an aligned AGI outweigh
               | those of any other technology by orders of magnitude.
        
               | esafak wrote:
               | As with nuclear weapons, there is non-negligible
               | probability of wiping out the human race. The companies
               | developing AI have not solved the alignment problem, and
               | OpenAI even dismantled what programs it had on it. They
               | are not going to invest in it unless forced to.
               | 
               | We should not be racing ahead because China is, but
               | investing energy in alignment research and international
               | agreements.
        
               | troupo wrote:
               | > We don't know whether pushing towards AGI is marching
               | towards a dystopia.
               | 
               | We do know that. By literally looking at China.
               | 
               | > The hypothetical benefits of an aligned AGI outweigh
               | those of any other technology by orders of magnitude.
               | 
               | AGI aligned with whom?
        
               | BeetleB wrote:
               | > If China marches towards a dystopia, why should Europe
               | dutifully follow?
               | 
               | I think the more relevant question is: Do you want to
               | live in a Chinese dystopia, or a European one?
        
               | esafak wrote:
               | A European dystopia won't be AI borne, so this is a false
               | dilemma.
        
               | BeetleB wrote:
               | What I meant is: Europe can choose to regulate as they
               | do, and end up living in a Chinese dystopia because the
               | Chinese will drastically benefit from non-regulated AI,
               | or they can create their own AI dystopia.
               | 
               | A non-AI dystopia is the least likely scenario.
        
               | esafak wrote:
               | If you are suggesting that China may use AI to attack
               | Europe, they can invest in defense without unleashing AI
               | domestically. And I don't think China will become a
               | utopia with unregulated AI. My impression after having
               | visited it was not one of a utopia, and knowing how they
               | use technology, I don't think AI will usher it in,
               | because our visions of utopia are at odds. They may well
               | enjoy what they have. But if things go sideways they may
               | regret it too.
        
               | Muromec wrote:
                | Not attack, just influence. Destabilize if you want.
                | Advocate regime change, sabotage trust in institutions.
                | Being on defense in a propaganda war doesn't really
                | work.
                | 
                | With the US already having lost the ideological war
                | with Russia and China, Europe is very much next.
        
               | BeetleB wrote:
               | > If you are suggesting that China may use AI to attack
               | Europe
               | 
               | No - I'm suggesting that China will reap the benefits of
               | AI much more than Europe will, and they will eclipse
               | Europe economically. Their dominance will follow, and
               | they'll be able to dictate terms to other countries (just
               | as the US is doing, and has been doing).
               | 
               | > And I don't think China will become a utopia with
               | unregulated AI.
               | 
               | Did you miss all the places I used the word "dystopia"?
               | 
               | > My impression after having visited it was not one of a
               | utopia, and knowing how they use technology, I don't
               | think AI will usher it in, because our visions of utopia
               | are at odds. They may well enjoy what they have.
               | 
               | Comparing China when I was a kid, not that long ago, to
               | what it is now: It is a dystopia, and that dystopia is
               | responsible for much of the improvements they've made.
               | Enjoying what they have doesn't mean it's not a dystopia.
               | Most people don't understand how willing humans are to
               | live in a dystopia if it improves their condition
               | significantly (not worrying too much about food, shelter,
               | etc).
        
               | JumpCrisscross wrote:
               | > _China may use AI to attack Europe_
               | 
               | No, just control. America exerts influence and control
               | over Europe without having had to attack it in
               | generations.
        
           | lenerdenator wrote:
           | The US government probably doesn't think it's behind.
           | 
           | Right now it's operated by a bunch of people who think that
           | you can directly relate the amount of money a venture could
           | make in the next 90 days to its net benefit for society.
           | Government telling them how they can and cannot make that
           | money, in their minds, is government telling them that they
           | cannot bring maximum benefit to society.
           | 
           | Now, is this mindset myopic to everything that most people
           | have in their lived experience? Is it ethically bankrupt and
           | held by people who'd sell their own mothers for a penny if
           | they otherwise couldn't get that penny? Would those people be
           | banished to a place beyond human contact for the rest of
           | their existence by functioning organs of an even somewhat-
           | sane society?
           | 
           | I don't know. I'm just asking questions.
        
           | jimbokun wrote:
            | The US government is behind because the Biden admin was
            | pushing strongly for controls and regulations and told
            | Andreessen and friends exactly that. They then went and did
            | everything in their power to elect Trump, who put those
            | same tech bros in charge of making his AI policy.
        
           | philipwhiuk wrote:
           | > Lots of people in academia and industry
           | 
           | Mostly OpenAI and DeepMind and it stunk of 'pulling up the
           | drawbridge behind them' and pivoting from actual harm to
           | theoretical harm.
           | 
           | For a crowd supposedly entrenched in startups, it's amazing
           | everyone here is so slow to recognise it's all funding
           | pitches and contract bidding.
        
         | saubeidl wrote:
          | The EU does, and it has passed the AI Act to rein in the
          | worst consequences of this nuclear weapon. It has not been
          | received well around here.
         | 
         | The "digital god" angle might explain why. For many, this has
         | become a religious movement, a savior for an otherwise doomed
         | economic system.
        
           | rchaud wrote:
           | Absolutely. It's frankly quite shocking to see how otherwise
           | atheist or agnostic people have so quickly begun worshipping
           | at the altar of "inevitable AGI apocalypse", much in the same
           | way as how extremist Christians await the rapture.
        
             | lenerdenator wrote:
             | Roko's Basilisk is basically Pascal's wager with GPUs.
        
             | Xenoamorphous wrote:
             | I guess they think that the "digital god" has a chance to
             | become real (and soon, even), unlike the non-digital one?
        
               | rchaud wrote:
               | We'll be debating whether or not "AGI is here" in
               | philosophical terms, in the same way people debate if God
               | is real, for years to come. To say nothing of the untaxed
               | "nonprofit" status these institutions share.
               | 
               | Omnipotent deities can never be held responsible for
               | famine and natural disasters ("God has a plan for us
               | all"). AI currently has the same get-out-of-jail free
               | card where mistakes that no literate human would ever
               | make are handwaved away as "hallucinations" that can be
               | exorcised with a more sophisticated training model
               | ("prayers").
        
         | otabdeveloper4 wrote:
         | Well, because it's obviously bullshit and everyone knows it.
         | Just play the game and get rich like everyone else.
        
           | esafak wrote:
           | Are you sure about that? AI-powered robotic soldiers are
           | around the corner. What could go wrong...
        
             | devinprater wrote:
             | Ooo I know, Cybermen! Yay.
        
         | modeless wrote:
         | I don't know what sources you're reading. There's so much eye-
         | batting I'm surprised people can see at all.
        
         | atleastoptimal wrote:
         | Because many people fundamentally don't believe AGI is possible
         | at a basic level, even AI researchers. Humans tend to only
         | understand what materially affects their existence.
        
         | jimbokun wrote:
         | Most of us are batting our eyelashes as rapidly as possible but
         | have no idea how to stop it.
        
         | xandrius wrote:
         | How is an LLM more powerful than any nuclear weapon? Seriously
         | curious.
        
       | granzymes wrote:
       | From least to most speculative:
       | 
       | * The nonprofit is staying the same, and will continue to control
       | the for-profit entity OpenAI created to raise capital
       | 
        | * The for-profit is changing from a capped-profit LLC to a PBC
        | like Anthropic and xAI
       | 
       | * These changes have been at least tacitly agreed to by the
       | attorneys general of California and Delaware
       | 
       | * The non-profit won't be the _largest_ shareholder in the PBC
       | (likely Microsoft) but will retain control (super voting shares?)
       | 
       | * OpenAI thinks there will be multiple labs that achieve AGI,
       | although possibly on different timelines
        
         | foobiekr wrote:
          | Another possibility is that OpenAI thinks _none_ of the labs
         | will achieve AGI in a meaningful timeframe so they are trying
         | to cash out with whatever you want to call the current models.
         | There will only be one or two of those before investors start
         | looking at the incredible losses.
        
           | r00fus wrote:
           | I'm fairly sure that OpenAI has never really believed in AGI
           | - it's like with Uber and "self driving cabs" - it's a lure
           | for the investors.
           | 
           | It's just that this bait has a shelf life and it looks like
           | it's going to expire soon.
        
       | sjtgraham wrote:
       | This restructuring is essentially a sophisticated maneuver toward
       | wealth and power maximization shrouded in altruistic language.
        
       | theoryofx wrote:
       | "We made the decision for the nonprofit to retain control of
       | OpenAI after hearing from..." [CHIEF LAW ENFORCEMENT OFFICERS IN
       | CALIFORNIA AND DELAWARE]
       | 
       | This indicates that they didn't actually want the nonprofit to
       | retain control and they're only doing it because they were forced
       | to by threats of legal action.
        
         | HaZeust wrote:
          | When I read that, I was actually fairly surprised at how
          | brazen they were about who they called on for this action.
          | They just _said it_.
        
       | d--b wrote:
        | Hmm, am I the only one who has been asked to participate in a
        | "comparison between 2 ChatGPT versions"?
       | 
       | The newer version included sponsored products in its response. I
       | thought that was quite effed up.
        
       | lolinder wrote:
        | So the non-profit retains control, but we all know that Altman
        | controls the board of the non-profit, and I'd be shocked if he
        | doesn't end up with significant stock in the new for-profit
        | (from TFA: "we are moving to a normal capital structure where
        | everyone has stock"). Which means that regardless of whether
        | the non-profit has control on paper, OpenAI is now _even
        | better_ structured for Sam Altman's personal enrichment.
       | 
       | No more caps on profit, a simpler structure to sell to investors,
       | and Altman can finally get that 7% equity stake he's been eyeing.
       | Not a bad outcome for him given the constraints apparently
       | imposed on them by "the Attorney General of Delaware and the
       | Attorney General of California".
        
         | elAhmo wrote:
          | We have seen how much power the board has after the firing
          | of Altman - none.
          | 
          | Let's see how this plays out. PBC effectively means nothing -
          | just take a look at xAI and its purchase of Twitter. I would
          | love to hear the reasoning explaining why that ~$33 billion
          | move benefits the public.
        
           | ignoramous wrote:
            | > _We have seen how much power the board has after the
            | firing of Altman - none._
           | 
           | Right; so, "Worker Unions" work.
        
           | wmf wrote:
           | ChatGPT is free. That's the public benefit.
        
             | nativeit wrote:
             | Define "free".
        
               | fooker wrote:
               | free as in free beer
        
               | Nuzzerino wrote:
               | It's like a free beer, but it's Bud Light, lukewarm, and
               | your reaction to tasting the beer goes toward researching
               | ways to make you appreciate the lukewarm Bud Light for
               | its marginal value, rather than making that beer taste
               | better or less unhealthy. They'll try very hard to
               | convince you that they have though. It parallels their
               | approach to AI Alignment.
        
               | Etheryte wrote:
               | This description has no business being as spot on as it
               | is.
        
               | throwanem wrote:
               | Makes me glad I haven't tried the Kool-aid. Uh, crap -
               | 'scuse me, _craft_ - IPA. Uh, beer.
        
               | windsignaling wrote:
               | I don't pay money for it?
        
             | sekai wrote:
             | They don't collect data?
        
               | wmf wrote:
               | If you use it, that means you received more value than
               | you gave up. It's called consumer surplus.
        
               | patmcc wrote:
               | That's effectively every business that isn't a complete
               | rent-seeking monopoly. It's not a very good measure.
               | 
               | edit: to be clear, it's not a bad thing - we should want
               | companies that create consumer surplus. But that's the
               | default state of companies in a healthy market.
        
               | jaccola wrote:
                | If I pay £200,000 for a car, I received more value than
                | I gave up, otherwise I wouldn't have given the owner
                | £200,000 for her car. No reasonable person would say
                | the car was "free"...
        
               | JumpCrisscross wrote:
                | > _If you use it, that means you received more value
                | than you gave up. It's called consumer surplus_
               | 
               | This is true for literally any transaction. Actually,
               | it's true for any rational action. If you're being
               | tortured, and you decide it's not worth it to keep your
               | secrets hidden any longer, you get more than you give up
               | when you stop being tortured.
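                | 
                | (A worked version of this point, as a minimal sketch;
                | the dollar value placed on a free chat answer is a
                | made-up assumption:)
                | 
                |     # consumer surplus = value to buyer - price paid
                |     surplus_chat = 1.00 - 0.00       # "free" answer valued at $1
                |     surplus_car = 201_000 - 200_000  # the £200,000 car example above
                |     print(surplus_chat > 0, surplus_car > 0)  # True True
                | 
                | Both surpluses are positive, so positive surplus is a
                | property of every voluntary trade, not of "free"
                | products in particular.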
        
               | klabb3 wrote:
               | It's only true in theory and over a single transaction,
               | not necessarily over time. The hack that VCs have
               | exploited for decades now is subsidizing products and
               | acquiring competition to eventually enshittify. In this
               | case, when OpenAI dials up the inevitable
               | enshittification, they'll have gotten a ton of data from
               | their users to use for their proprietary closed AI.
        
             | patmcc wrote:
             | Google offers a great many things for free. Should they get
             | beneficial tax treatment for it?
        
               | wmf wrote:
               | PBCs have no beneficial tax treatment and neither does
               | OpenAI.
        
               | moralestapia wrote:
               | So, what's the point of a PBC?
               | 
               | Not being snarky here, like what is the purported thesis
               | behind them?
        
               | nemomarx wrote:
                | Marketing to certain types of philanthropic investors,
                | I think.
        
               | insane_dreamer wrote:
                | Mostly branding, like Google's old "don't be evil" motto.
               | 
               | Some founders truly believe in structuring the company
               | for the benefit of the public, but Altman has already
               | shown he's not one of them.
        
               | patmcc wrote:
               | Huh. Then yah, what the heck? Why not just be a regular
               | corp?
        
               | jampekka wrote:
               | Branding, and perhaps a demand from the judges. In
               | practice it doesn't mean anything if/when they stuff the
               | board with people who want to run it as a normal LLC.
        
             | insane_dreamer wrote:
             | That's like saying AWS is free. ChatGPT has a limited use
             | free tier just like most other SaaS products out there.
        
           | paulddraper wrote:
           | The board had plenty of power.
           | 
            | There was never a coherent explanation for its firing of
            | the CEO.
           | 
           | But they could have stuck with that decision if they believed
           | in it.
        
             | freejazz wrote:
             | The question is not if they could, it is if they would.
        
             | michaelt wrote:
             | The explanation seemed pretty obvious to me: They set up a
             | nonprofit to deliver an AI that was Open.
             | 
             | Then things went unexpectedly well, people were valuing
             | them at billions of dollars, and they suddenly decided they
             | weren't open any more. Suddenly they were all about
             | Altman's Interests Safety (AI Safety for short).
             | 
             | The board tried to fulfil its obligation to get the
             | nonprofit to do the things in its charter, and they were
             | unsuccessful.
        
             | insane_dreamer wrote:
             | The explanation was pretty clear and coherent: The CEO was
             | no longer adhering to the mission of the non-profit (which
             | the board was upholding).
             | 
             | But they found themselves alone in that it turns out the
             | employees (who were employed by the for-profit company) and
             | investors (MSFT in particular) didn't care about the
             | mission and wanted to follow the money instead.
             | 
             | So the board had no choice but to capitulate and leave.
        
         | whynotminot wrote:
         | Isn't Sam already very rich? I mean it wouldn't be the first
         | time a guy wanted to be even richer, but I feel like we need to
         | be more creative when divining his intentions
        
           | sigilis wrote:
           | Why would we need to be more creative? The explanation of him
           | wanting more money is perfectly adequate.
           | 
           | Being rich results in a kind of limitation of scope for
           | ambition. To the sufferer, a person who has everything they
           | could want, there is no other objective worth having. They
           | become eccentric and they pursue more money.
           | 
           | We should have enrichment facilities for these people where
           | they play incremental games and don't ruin the world like the
           | paperclip maximizers they are.
        
             | whynotminot wrote:
             | > Why would we need to be more creative? The explanation of
             | him wanting more money is perfectly adequate. Being rich
             | results in a kind of limitation of scope for ambition.
             | 
             | The dude announces new initiatives from the White House,
             | regularly briefs Senators and senior DoD leaders, and is
             | the top get for interviews around the world for AI topics.
             | 
             | There's a lot more to be ambitious about than just money.
        
               | sigilis wrote:
               | These are all activities he is engaging in to generate
               | money through the company he has a stake in. None of
               | those activities have a purpose other than selling the
               | work of his company and presenting it as a good
               | investment which is how he gets money.
               | 
               | Maybe he wants to use the money in some nebulous future
               | way, subjugating all people in a way that deals with his
               | childhood trauma or whatever. That's also something rich
               | people do when they need a hobby aside from gathering
               | more money. It's not their main goal, except when they
               | run into setbacks.
               | 
               | People are not complicated when they are money hoarders.
               | They might have had hidden depths once, but they are thin
               | furrows in the ground next to the giant piles of money
               | that define them now.
        
           | paulddraper wrote:
           | OpenAI doesn't have the lead anymore.
           | 
            | Google/Anthropic are catching up, or have already
            | surpassed them.
        
             | 6510 wrote:
              | How? The internet says 400M weekly ChatGPT users, 19M
              | weekly for Anthropic, 47.3M monthly for Gemini, 6.7M
              | daily for Grok, and 430M for Baidu.
        
           | senderista wrote:
           | "It's not about the money, it's about winning"
           | 
           | --Gordon Gekko
        
           | Yizahi wrote:
            | It seems a defining feature of nearly every extremely rich
            | person is the belief that they are somehow smarter than the
            | filthy peasants, and so they decide to "educate" them in
            | the sacred knowledge. This may take vastly different forms
            | - genocide, war, trying to create a better government via
            | bribes, creating a city from scratch, creating a new
            | corporate "culture", publicly proselytizing their "do
            | better" faith, writing books, teaching classes, etc.
            | 
            | St. Altman plans to create a corporate god for us dumb
            | schmucks, and he will be its prophet.
        
           | viraptor wrote:
           | Nah, worldcoin is now going to the US. He just wants to get
           | richer. https://archive.is/JTuGE
        
         | richardw wrote:
         | Or, alternatively, it's much harder to fight with one hand
         | behind your back. They need to be able to compete for resources
         | and talent given the market structure, or they fail on the
         | mission.
         | 
         | This is already impossibly hard. Approximately zero people
         | commenting would be able to win this battle in Sam's shoes.
         | What would they need to do to begin to have a chance? Rather
         | than make all the obvious comments "bad evil man wants to get
         | rich", think what it would take to achieve the mission. What
         | would you need to do in his shoes, aside from just give up and
         | close up shop? Probably this, at the very least.
         | 
         | Edit: I don't know the guy and many near YC do. So I accept
         | there may be a lens I don't have. But I'd rather discuss the
         | problem, not the person.
        
           | kadushka wrote:
           | It seems like they lost most of their top talent - probably
           | because of Altman.
        
           | k__ wrote:
            | The moment we stop treating "bad evil man wants to get
            | rich" as a straw man, we can heal.
        
             | thegreatpeter wrote:
             | Extra! Extra! Read all about it! "Bad evil man wants to get
             | rich! We should enrich Google and Microsoft instead!"
        
         | MPSFounder wrote:
          | Never understood his appeal. Lacks charisma. Not technically
          | savvy relative to many engineers at OpenAI (I doubt he would
          | pass their own intern interviews, even less so their FT).
          | Very unlikeable in person (comes off as fake for some reason,
          | like a political plant). Who is vouching for this guy? When I
          | met him, for some reason, he reminded me of Thiel. He is no
          | Jobs.
        
           | gsibble wrote:
           | Altman is a clear sociopath. He's a sales guy and good
           | executive. But he's only out for himself.
        
       | purpleidea wrote:
       | There's really nothing "open" about this company. If they want to
       | be, then:
       | 
       | (1) be transparent about exactly which data was collected for the
       | model
       | 
       | (2) release all the source code
       | 
       | If you want to benefit humanity, then put it under a strong
       | copyleft license with no CLA. Simple.
        
         | smeeth wrote:
         | They would do this if their mission was what you wish it was.
         | But it isn't, so they won't.
        
         | BeetleB wrote:
         | Arguments by semantics are always tiresome.
        
       | Tenoke wrote:
        | For better or worse, OpenAI removing the capped structure and
        | turning the nonprofit away from AGI considerations toward
        | plain philanthropy feels like the shedding of the last
        | remnants of sanctity.
        
       | drewbeck wrote:
       | I see OpenAI's original form as the last gasp of a kind of
       | liberal tech; in a world where "doing good" was seen as very
       | important, the non-profit approach made sense and got a lot of
       | people on board. These days the Altmans and the pmarcas of the
       | world are much more comfortable expressing their authoritarian,
       | self-centered world views; the "evolving" structure of Open AI is
       | fully in line with that. They want to be the kings they always
       | thought of themselves as, and now they get to do so without
       | couching it in "doing good".
        
         | sneak wrote:
         | Is it reasonable to assign the descriptor "authoritarian" to
         | anyone who simply does not subscribe to the common orthodoxy of
         | one faction in the american culture war? That is what it seems
         | to me is happening here, though I would love to be wrong.
         | 
         | I have not seen anything from sama or pmarca that I would
         | classify as "authoritarian".
        
           | blibble wrote:
           | are you aware of worldcoin?
           | 
            | altman building a centralised authority over who will be
            | classed as "human" is about as authoritarian as you could
            | get
        
             | sneak wrote:
             | Worldcoin is opt-in, which is the opposite of
             | authoritarian. Nobody who doesn't like it is required to
             | participate.
        
               | fsndz wrote:
               | it's always opt-in until it isn't
        
               | amdsn wrote:
                | it is opt-in until they manage to convince some
               | government to allow them to be the contracted provider of
               | "humanness verification" that is then made a prerequisite
               | to access services.
        
               | sholladay wrote:
               | Comcast is also opt-in. Except, in many areas there are
               | no real alternatives.
               | 
               | I doubt Worldcoin will actually manage to corner the
               | market. But the point is, if it did, bad things would
               | happen. Though, that's probably true of most products.
        
           | bee_rider wrote:
           | I'm not sure exactly what they meant by "liberal" in this
           | case, but since they put it in contrast with
           | authoritarianism, I assume they meant it in the conventional
           | definition of the word (where it is the polar opposite of
           | authoritarianism). Instead of the American politics-as-sports
           | definition that makes it a synonym for "team blue."
        
             | drewbeck wrote:
             | correct. "liberal" as in the general ideas that ie
             | expanding the franchise is important, press freedoms are
             | good, that government can do good things for people and for
             | capital etc. Wikipedia's intro paragraph does a good job of
             | describing what I was getting at (below). In prior decades
             | Republicans in the US would have been categorized as
             | "liberal" under this definition; in recent years, not so
             | much.
             | 
             | >Liberalism is a political and moral philosophy based on
             | the rights of the individual, liberty, consent of the
             | governed, political equality, the right to private
             | property, and equality before the law. Liberals espouse
             | various and often mutually conflicting views depending on
             | their understanding of these principles but generally
             | support private property, market economies, individual
             | rights (including civil rights and human rights), liberal
             | democracy, secularism, rule of law, economic and political
             | freedom, freedom of speech, freedom of the press, freedom
             | of assembly, and freedom of religion. Liberalism is
             | frequently cited as the dominant ideology of modern
             | history.
        
           | drewbeck wrote:
           | No I don't think it is. I DO think those two people want to
           | be in charge (along with other billionaires) and they want
           | the rest of us to follow along, which is in my book an
           | authoritarian POV. pmarca's recent "VC is the only job that
           | can't be done by AI" is a good example of that; the rest of
           | us are to be managed and controlled by VCs and robots.
        
           | tastyface wrote:
           | Donating millions to a fascist president (in Altman's case)
           | seems pretty authoritarian to me. And he seems happy enough
           | hanging out with Thiel and other Yarvin groupies.
        
             | sidibe wrote:
             | Yup, if Elon hadn't gotten so jealous and spiteful to him
             | I'm sure he'd be one of Elon's leading sycophants.
        
           | sanderjd wrote:
           | No, "authoritarian" is a word with a specific meaning. I'm
           | not sure about applying it to Sam Altman, but Marc Andreessen
           | has expressed views that I consider authoritarian in his
           | victory lap tour since last year's presidential election.
        
         | ignoramous wrote:
         | > _They want to be the kings they always thought of themselves
         | as, and now they get to do so without couching it in "doing
         | good"._
         | 
         | You mean, AGI will benefit all of humanity like _War on Terror_
         | spread democracy?
        
           | nickff wrote:
           | Why are you changing the subject? The "War on Terror" was
           | never intended to spread democracy as far as I know;
           | democracy was a means by which to achieve the objective of
           | safety from terrorism.
        
         | ballooney wrote:
         | Hopelessly over-idealistic premise. Sama and pg have never been
         | anything other than opportunistic muck. This will be my last
         | ever comment on HN.
        
           | byearthithatius wrote:
           | I feel this so hard, I think this may be my last time using
           | the site as well. They don't care about advancement, they
           | only care about money.
        
             | stego-tech wrote:
             | Like everything, it's projection. Those who loudly scream
             | against something are almost always the ones engaging in
             | it.
             | 
             | Google screamed against service revenue and advertising
             | while building the world's largest advertising empire.
             | Facebook screamed against misinformation and surveillance
             | while enabling it on a global scale. Netflix screamed
             | against the overpriced cable TV industry while turning
             | streaming into modern overpriced cable television. Uber
             | screamed against the entrenched taxi industry harming
             | workers and passengers while creating an unregulated
             | monster that harmed workers and passengers.
             | 
             | Altman and OpenAI are no different in this regard, loudly
             | screaming against AI harming humanity while doing
             | everything in their capacity to create AI tools that will
             | knowingly harm humanity while enriching themselves.
             | 
             | If people trust the performance instead of the actions and
             | their outcomes, then we can't convince them otherwise.
        
           | gallerdude wrote:
           | bye
        
           | drewbeck wrote:
            | Oh, I'm not saying they ever believed more than their
            | self-centered views, but that in a world that leaned more
            | liberal there was value in trying to frame their work in
            | those terms. Now there's no need to pretend.
        
             | kmacdough wrote:
              | And to those who say "at least now they're honest", I
              | say "WHY?!" Unconditionally being "good" would be better
              | than disguising selfishness as good. But that's not
              | really a thing. Having to maintain the pretense of doing
              | good puts significant boundaries on what you can get away
              | with, and increases the consequences when people uncover
              | some shit.
              | 
              | Condoning "honest liars" enables a whole other level of
              | open and unrestricted criminality.
        
           | HaZeust wrote:
           | inb4 deleted
        
         | stego-tech wrote:
         | That world _never_ existed. Yes, pockets did - IT professionals
         | with broadband lines and spare kit hosting IRC servers and
         | phpBB forums from their homes free of charge, a few VC-funded
         | companies offering idealistic visions of the net until funding
         | ran dry (RIP CoHost) - but once the web became privatized, it
         | was all in service of the bottom line by companies. Web 2.0
         | onwards was all about centralization, surveillance,
         | advertising, and manipulation of the populace at scale - and
         | that intent was never really a secret to those who bothered to
         | pay attention. While the world was reeling from Cambridge
         | Analytica, us pre-1.0 farts who cut our teeth on Telnet and
          | Mosaic were just kind of flabbergasted that _y'all were
         | surprised by overtly obvious intentions_.
         | 
         | That doesn't mean it has to always be this way, though. Back
         | when I had more trust in the present government and USPS, I
         | mused on how much of a game changer it might be for the USPS to
         | provide free hosting and e-mail to citizens, repurposing the
         | glut of unused real estate into smaller edge compute providers.
         | Everyone gets a web server and 5GB of storage, with 1A
         | Protections letting them say and host whatever they like from
         | their little Post Office Box. Everyone has an e-mail address
         | tied to their real identity, with encryption and security for
         | digital mail just like the law provides for physical mail. I
         | _still_ think the answer is about enabling more people to
         | engage with the internet on their selective terms (including
          | the option of _disengagement_), rather than the present
         | psychological manipulation everyone engages in to keep us glued
         | to our screens, tethered to our phones, and constantly
         | uploading new data to advertisers and surveillance firms alike.
         | 
         | But the nostalgic view that the internet used to be different
         | is just that: rose-tinted memories of a past that never really
         | existed. The first step to fixing this mess is acknowledging
         | its harm.
        
           | dgreensp wrote:
           | I don't think the parent was saying that everyone's
           | intentions were pure until recently, but rather that naked
           | greed wasn't cool before, but now it is.
           | 
           | The Internet has changed a lot over the decades, and it did
           | used to be different, with the differences depending on how
           | many years you go back.
        
             | jon_richards wrote:
             | As recently as the Silicon Valley tv show, the joke was
             | that every startup pitch claimed they were "making the
             | world a better place".
        
           | JumpCrisscross wrote:
           | > _That world never existed_
           | 
           | It absolutely did. Steve Wozniak was real. Silicon Valley
           | wasn't always a hive of liars and sycophants.
        
             | davesque wrote:
             | I have to agree. That's one of the dangers of today's
             | world; the risk of believing that we never had a better
             | one. Yes, the altruism of yesteryear was partially born of
             | convenience, but it still existed. And I remember people
             | actually believing it was important and acting as such.
             | Today's cynicism and selfishness seem a lot more arbitrary
             | to me. There's absolutely no reason things have to be this
             | way. Collectively, we have access to more wealth and power
             | now than we ever did previously. By all accounts, things
             | ought to be great. It seems we just need the current
             | generation of leaders to re-learn a few lessons from
             | history.
        
         | jimbokun wrote:
         | They deeply believe in the Ayn Rand mindset that the system
         | that brings them the most individual wealth is also the best
         | system for humanity as a whole.
        
       | ramesh31 wrote:
        | The explosion of PBC-structured corps recently has me thinking
        | it must just be a tax loophole at this point. I can't possibly
       | imagine there is any meaningful enforcement around any of its
       | restrictions or guidelines.
        
         | bloudermilk wrote:
         | PBCs don't get special tax treatment. As far as I know they're
         | taxed exactly the same as typical C or S corps.
        
         | ralph84 wrote:
         | It's not a tax thing, it's a power thing. PBCs transfer power
         | from shareholders to management as long as management can say
         | they were acting for a public benefit.
        
         | asadotzler wrote:
          | Not a loophole, as they pay taxes (unlike non-profits), but
          | a fig leaf to cover commercial activity with some feel-good
          | label. The real purpose of a PBC is the legal protection it
          | may afford the company from shareholders unhappy with less
          | than maximal profit generation. It gives the board some
          | legal space to do some good if it chooses to, but there is
          | no mandate like real non-profits have; those get a tax break
          | for creating a public good or service, a tax break that can
          | be withdrawn if they do not annually prove that public
          | benefit to the IRS.
        
       | bloppe wrote:
        | Does anybody outside OAI still think of them as anything other
        | than a "normal" for-profit company?
        
       | programjames wrote:
        | Carcinisation in action:
        | 
        |     free (foss) -> non-profit -> capped-profit ->
        |     public benefit corporation -> (you guessed it)
        
         | blagie wrote:
         | No, this only happens if:
         | 
         | 1) You're successful.
         | 
         | 2) You mess up checks-and-balances at the beginning.
         | 
         | OpenAI did both.
         | 
         | Personally, I think at some point, the AGs ought to take over
         | and push it back into a non-profit format. OAI undermines the
         | concept of a non-profit.
        
       | jjani wrote:
       | SamA is in a hurry because he's set to lose the race. We're at
       | peak valuation and he needs to convert something _now_.
       | 
       | If the entrenched giants (Google, Microsoft and Apple) catch up -
       | and Google 100% has, if not surpassed - they have a thousand
       | levers to pull and OpenAI is done for. Microsoft has realized
       | this, hence why they're breaking up with them - Google and
       | Anthropic have shown they don't need OpenAI. Galaxy phones will
       | get a Gemini button, Chrome will get it built into the browser.
        | MS can either develop their own thing, use open-source models,
        | or just ask every frontier model provider (and there are
        | already 3-4 as we speak) how cheaply they're willing to
        | deliver. Then chuck
       | it right in the OS and Office first-class. Which half the white
       | collar world spends their entire day staring at. Apple devices
       | too will get an AI button (or gesture, given it's Apple) and just
       | like MS they'll do it inhouse or have the providers bid against
       | each other.
       | 
       | The only way OpenAI David was ever going to beat the Goliaths GMA
       | in the long run was if it were near-impossible to catch up to
       | them, a la TSMC/ASML. But they did catch up.
        
         | tedivm wrote:
         | Even Alibaba is releasing some amazing models these days. Qwen
         | 3 is pretty remarkable, especially considering the variety of
         | hardware the variants of it can run on.
        
         | pi-err wrote:
         | Sounds a lot like "Google+ will catch Facebook in no time".
         | 
         | OpenAI has been on a winning streak that makes ChatGPT the
         | default chatbot for most of the planet.
         | 
          | Everybody else, like you describe, is trying to add some AI
          | crap behind a button on a congested UI.
         | 
         | B2B market will stay open but OpenAI has certainly not peaked
         | yet.
        
           | kortilla wrote:
           | Most of the planet doesn't use chat bots at all.
        
           | no_wizard wrote:
           | Facebook had immense network effects working for it back
           | then.
           | 
           | What network effect does OpenAI have? Far as I can tell,
           | moving from OpenAI to Gemini or something else is easy. It's
           | not sticky at all. There's no "my friends are primarily using
           | OpenAI so I am too" or anything like that.
           | 
           | So again, I ask, what makes it sticky?
        
             | jwarden wrote:
             | Brand counts for a lot
        
               | fs111 wrote:
                | Google is one of the most valuable brands ever.
                | Everyone knows it; it's even used as a verb for
                | "searching the web". OpenAI is not that strong of a
                | brand.
        
               | schlch wrote:
               | I think for the general public ChatGPT is a much stronger
               | brand than OpenAI itself.
        
               | msabalau wrote:
               | No one has a deep emotional connection with OpenAI that
               | would impede switching.
               | 
                | At best they have a bit of cheap tribalism that might
                | prevent some incurious people, who don't care much
                | about using the best tools, from noticing that they
                | aren't.
        
             | cshimmin wrote:
             | Yep, I mostly interact with these AIs through Cursor. When
             | I want to ask it a question, there's a little dropdown box
             | and I can select openai/anthropic/deepseek whatever model.
             | It's as easy as that to switch.
        
               | bsimpson wrote:
               | Most of my exposure to LLMs has been through GitHub's
               | Copilot, which has that same interface.
        
               | sanderjd wrote:
               | Yeah but I remember when search first started getting
               | integrated with the browser and the "switch search
               | engine" thing was significantly more prominent. Then
               | Google became the default and nobody ever switched it and
               | the rest is history.
               | 
               | So the interesting question is: How did that happen? Why
               | wasn't Google search an easily swapped commodity? Or if
               | it was, how did they win and defend their default status?
               | Why didn't the existing juggernauts at the time
               | (Microsoft) beat them at this game?
               | 
               | I have my own answers for these, and I'm sure all the
               | smart people figuring out strategy at Open AI have
               | thought about similar things.
               | 
               | It's not clear if Open AI will be able to overcome this
               | commodification issue (personally, I think they won't),
               | but I don't think it's impossible, and there is prior art
               | for at least some of the pages in this playbook.
        
               | reasonableklout wrote:
               | Yes, I think people severely underrate the data flywheel
               | effects that distribution gives an ML-based product,
               | which is what Google was and ChatGPT is. It is also an
               | extremely capital-intensive industry to be in, so even if
               | LLMs are commoditized, it will be to the benefit of a few
               | players, and barring a sustained lead by any one company
               | over the others, I suspect the first mover will be very
               | difficult to unseat.
               | 
               | Google is doing well for the moment, but OpenAI just
               | closed a $40 billion round. Neither will be able to rest
               | for a while.
        
               | sanderjd wrote:
               | Yeah, a very interesting metric to know would be how many
               | tokens of prompt data (that is allowed to be used for
               | training) the different products are seeing per day.
        
               | skydhash wrote:
               | > _So the interesting question is: How did that happen?
                | Why wasn't Google search an easily swapped commodity? Or
               | if it was, how did they win and defend their default
               | status? Why didn't the existing juggernauts at the time
               | (Microsoft) beat them at this game?_
               | 
                | Maybe the large amounts of money they've given to
                | Apple, which is their direct competitor in the mobile
                | space. Also the good amount of money given to Firefox,
                | which is their direct competitor in the browser space,
                | alongside Safari from Apple.
                | 
                | Most people don't care about the search engine. The
                | default is what they will use unless said default is
                | bad.
        
               | sanderjd wrote:
               | I don't think my comment implied that the answers to
                | these questions aren't knowable! And indeed, I agree
                | that the deals to pay for default status in different
                | channels are a big part of that answer.
               | 
               | So then apply that to Open AI. What are the distribution
               | channels? Should they be paying Cursor to make them the
               | default model? Or who else? Would that work? If not, why
               | not? What's different?
               | 
               | My intuition is that this wouldn't work for them. I think
               | if this "pay to be default" strategy works for someone,
               | it will be one of their deeper pocketed rivals.
               | 
               | But I also don't think this was the only reason Google
               | won search. In my memory, those deals to pay to be the
               | default came fairly long after they had successfully
               | built the brand image as the best search engine. That's
               | how they had the cash to afford to pay for this.
               | 
               | A couple years ago, I thought it seemed likely that Open
               | AI would win the market in that way, by being known as
               | the clear best model. But that seems pretty unclear now!
               | There are a few different models that are pretty
               | similarly capable at this point.
               | 
               | Essentially, I think the reason Google was able to win
               | search whereas the prospects look less obvious for Open
               | AI is that they just have stronger competition!
               | 
               | To me, it just highlights the extent to which the big
               | players at the time of Google's rise - Microsoft, Yahoo,
               | ... Oracle maybe? - really dropped the ball on putting up
               | strong competition. (Or conversely, Google was just
               | further ahead of its time.)
        
             | rileyphone wrote:
             | From talking to people, the average user relies on memories
              | and chat history, which is not easy to migrate. I
              | imagine that's part of the strategy to keep people from
              | hopping model providers.
        
               | nativeit wrote:
               | That sounds eminently solvable.
        
               | jjani wrote:
               | Google, MS, Apple and Meta are all quite capable of
               | generating such a history for new users, if they'd like
               | to.
        
             | miki123211 wrote:
              | OpenAI (or, more specifically, ChatGPT) is Coca-Cola,
              | not Facebook.
             | 
             | They have the brand recognition and consumer goodwill no
             | other brand in AI has, incredibly so with school students,
             | who will soon go into the professional world and bring that
             | goodwill with them.
             | 
             | I think better models are enough to dethrone OpenAI in API,
             | B2C and internal enterprise use cases, but OpenAI has
             | consumer mindshare, and they're going to be the king of
             | chatbots forever. Unless somebody else figures out
             | something which is better by orders of magnitude and that
             | Open AI can't copy quickly, it's going to stay that way.
             | 
             | Apple had the opportunity to do something really great
             | here. With Siri's deep device integration on one hand and
             | Apple's willingness to force 3rd-party devs to do the right
             | thing for users on the other, they could have had a
             | compelling product that nobody else could copy, but it
             | seems like they're not willing to go that route, mostly for
             | privacy, antitrust and internal competency reasons, in that
             | order. Google is on the right track and might get something
             | similar (although not as polished as typical Apple) done,
             | but Android's mindshare among tech-savvy consumers isn't
             | great enough for it to get traction.
        
               | pphysch wrote:
               | > who will soon go into the professional world and bring
               | that goodwill with them.
               | 
               | ...Until their employer forces them to use Microsoft
               | Copilot, or Google Gemini, or whatever, because that's
               | what they pay for and what integrates into their
               | enterprise stack. And the new employee shrugs and accepts
               | it.
        
               | miki123211 wrote:
                | Just like people are forced to use web Office and
                | Microsoft Teams, and start preferring them over Google
                | Docs and Slack? I don't think so.
        
               | JumpCrisscross wrote:
               | > _Just like people are forced to use web Office and
                | Microsoft Teams, and start preferring them over Google
                | Docs and Slack? I don't think so_
               | 
               | ...yes. Office is the market leader. Slack has between a
               | fifth and a fourth of the market. Coca-Cola's products
               | have like 70% market share in the American carbonated
               | soft-drink market [1].
               | 
               | [1] https://www.investopedia.com/ask/answers/060415/how-
               | much-glo...
        
               | LMYahooTFY wrote:
                | Does Google not have brand recognition and consumer
                | goodwill? We might read all sorts of deep opinions of
                | Google on HN, but I think Search and Chrome market
                | share speak for themselves. For the average consumer,
                | I'm skeptical that OpenAI carries much weight.
        
           | kranke155 wrote:
            | Facebook couldn't be overtaken because of network effects.
            | What network effects are there to a chatbot?
           | 
           | If you look at Gemini, I know people using it daily.
        
           | Analemma_ wrote:
           | That's not at all the same thing: social media has network
           | effects that keep people locked in because their friends are
           | there. Meanwhile, most of the people I know using LLMs cancel
            | and resubscribe to ChatGPT, Claude and Gemini constantly
           | based on whatever has the most buzz that month. There's no
           | lock-in whatsoever in this market, which means they compete
           | on quality, and the general consensus is that Gemini 2.5 is
           | currently winning that war. Of course that won't be true
           | forever, but the point is that OpenAI isn't running away with
           | it anymore.
           | 
           | And nobody's saying OpenAI will go bankrupt, they'll
           | certainly continue to be a huge player in this space. But
           | their astronomical valuation was based on the initial
           | impression that they were the only game in town, and it will
           | come down now that that's no longer true. Hence why Altman
           | wants to cash out ASAP.
        
           | ricardobeat wrote:
           | I know a single person who uses ChatGPT daily, and only
           | because their company has an enterprise subscription.
           | 
           | My impression is that Claude is a lot more popular - and it's
           | the one I use myself, though as someone else said the vast
           | majority of people, even in software engineering, don't use
           | AI often at all.
        
           | NBJack wrote:
            | De facto victory.
           | 
           | Facebook wasn't some startup when Google+ entered the scene;
           | they were already cash flow positive, and had roughly 30% ads
           | market share.
           | 
           | OpenAI is still operating at a loss despite having 50+% of
           | the chatbot "market". There is no easy path to victory for
           | them here.
        
           | _Algernon_ wrote:
           | Social media has the benefit of network effects which is a
           | pretty formidable moat.
           | 
           | This moat is non-existent when it comes to Open AI.
        
             | alganet wrote:
             | That reminds me of the Dictator movie.
             | 
              | All dissidents went into Little Wadiya.
             | 
             | When the Dictator himself visited it, he started to fake
             | his name by copying the signs and names he saw on the
             | walls. Everyone knew what he was.
             | 
             | Internet social networks are like that.
             | 
             | Now, this moat thing. That's hilarious.
        
           | jjani wrote:
            | The comparison of Chrome and IE is much more apt, IMO,
            | because the deciding factor, as others mentioned, for
            | social media is network effects, or next-gen dopamine
            | algorithms (TikTok). And that's unique to them.
           | 
           | For example, I'd never suggest that e.g. MS could take on
           | TikTok, despite all the levers they can pull, and being worth
           | magnitudes more. No chance.
        
           | chrisweekly wrote:
           | IMHO "ChatGPT the default chatbot" is a meaningful but
           | unstable first-mover advantage. The way things are apparently
           | headed, it seems less like Google+ chasing FB, more like
           | Chrome eating IE + NN's lunch.
        
           | jameslk wrote:
           | OpenAI is a relatively unknown company outside of the tech
           | bubble. I told my own mom to install Gemini on her phone
           | because she's heard of Google and is more likely going to
           | trust Google with whatever info she dumps into a chat. I
           | can't think of a reason she would be compelled to use ChatGPT
           | instead.
           | 
            | Consumer brand companies such as Coca-Cola and Pepsi
            | spend millions on brand awareness advertising just to be
            | the "default" in everyone's heads. When there's not much
            | consequence in choosing one option over another, the one
            | you've heard of is all that matters.
        
           | paulddraper wrote:
           | Facebook fundamentally had network effects.
        
           | JumpCrisscross wrote:
           | > _OpenAI has been on a winning streak that makes ChatGPT the
           | default chatbot for most of the planet_
           | 
           | OpenAI has like 10 to 20% market share [1][2]. They're also
           | an American company whose CEO got on stage with an
           | increasingly-hated world leader. There is no universe in
           | which they keep equal access to the world's largest
           | economies.
           | 
           | [1] https://iot-analytics.com/leading-generative-ai-
           | companies/
           | 
           | [2] https://www.enterpriseappstoday.com/stats/openai-
           | statistics....
        
         | grey-area wrote:
         | Well I think you're correct that they know the jig is up, but I
         | would say they know the AI bubble is about to burst so they
         | want to cash out before that happens.
         | 
         | There is little to no money to be made in GAI, it will never
         | turn into AGI, and people like Altman know this, so now they're
         | looking for a greater fool before it is too late.
        
           | Jefro118 wrote:
           | They made $4 billion last year, not really "little to no
           | money". I agree it's not clear they can justify their
           | valuation but it's certainly not a bubble.
        
             | mandevil wrote:
             | But didn't they spend $9 billion? If I have a machine that
             | magically turns $9 billion of investor money into $4
             | billion in revenue, I need to have a pretty awesome story
             | for how in the future I am going to be making enormous
             | piles of money to pay back that investment. If it looks
             | like frontier models are going to be a commodity and it is
             | not going to be winner-take-all... that's a lot harder
             | story to tell.
        
               | BosunoB wrote:
               | Most of that 9 billion was spent on training new models
               | and on staff. If they stopped spending money on R&D, they
               | would already be profitable.
        
               | PeterStuer wrote:
               | But only if everyone else stopped improving models as
               | well.
               | 
                | In this niche you can be irrelevant in months when
                | your models drop behind.
        
               | ezekg wrote:
                | Says literally every startup ever w.r.t.
                | R&D/marketing/ad spend, yet that's rarely reality.
        
               | PUSH_AX wrote:
               | In a space that moves this fast and is defined by
               | research breakthroughs, they'd be profitable for about 5
               | minutes.
        
               | ahtihn wrote:
               | > If they stopped spending money on R&D, they would
               | already be profitable.
               | 
               | The news that they did that would make them lose most of
               | their revenue pretty fast.
        
               | JumpCrisscross wrote:
                | > _if they stopped spending money on R&D, they would
                | already be profitable_
                | 
                | OpenAI has claimed this. But Altman is a pathological
                | liar. There are _lots_ of ways of disguising operating
                | costs as capital costs or R&D.
        
             | nativeit wrote:
             | Cognitive dissonance is a psychological phenomenon that
             | occurs when a person holds two contradictory beliefs at the
             | same time.
        
             | SirensOfTitan wrote:
             | I guarantee you that I could surpass that revenue if I
             | started a business that would give people back $9 if they
             | gave me $4.
             | 
                | OpenAI's models are already among the most expensive,
                | and they don't have a lot of levers to pull.
        
               | tshaddox wrote:
               | > I started a business that would give people back $9 if
               | they gave me $4
               | 
               | I feel like people overuse this criticism. That's not the
               | only way that companies with a lot of revenue lose money.
               | And this isn't at all what OpenAI is doing, at least from
               | their customers' perspective. It's not like customers are
               | subscribing to ChatGPT simply because it gives them
               | something they were going to buy anyway for cheaper.
        
               | edmundsauto wrote:
                | There is a pretty significant difference between "buy
                | $9 for $4" and selling a service that costs $9 to
                | build and run per year for $4 per year. Especially
                | when some people
               | think that service could be an absolute game changer for
               | the species.
               | 
               | It's ok to not buy into the vision or think it's
               | impossible. But it's a shallow dismissal to make the
               | unnuanced comparison, especially when we're talking about
               | a brand new technology - who knows what the cost
               | optimization levers are. Who knows what the market will
               | bear after a few more revs.
               | 
                | When the iPhone first came out, it was too expensive,
                | didn't do enough, and many people thought it was a
                | waste of Apple's time when they should be making music
                | players.
        
               | davidcbc wrote:
                | > When the iPhone first came out, it was too
                | expensive, didn't do enough, and many people thought
                | it was a waste of Apple's time when they should be
                | making music players.
               | 
               | This comparison is always used when people are trying to
               | hype something. For every "iPhone" there are thousands of
               | failures
        
               | SirensOfTitan wrote:
                | It's a commodity technology, and VCs are investing as
                | if this were still a winner-takes-all play. It's
                | obviously not; if there were any doubt about that,
                | Deepseek's R1 release should have removed it.
               | 
               | > But it's a shallow dismissal to make the unnuanced
               | comparison, especially when we're talking about a brand
               | new technology - who knows what the cost optimization
               | levers are. Who knows what the market will bear after a
               | few more revs.
               | 
                | You're acting as if OpenAI is still the only player in
               | this space. OpenAI has plenty of competitors who can
               | deliver similar models for cheaper. Gemini 2.5 is an
               | excellent and affordable model and Google has a
               | substantially better capacity to scale because of a
               | multi-year investment in its TPUs.
               | 
               | Whatever first mover advantage OpenAI had has been
               | quickly eliminated, they've lost a lot of their talent,
               | and the chief hypothesis they used to attract the capital
               | they've raised so far is utterly wrong. VCs would be mad
                | to keep pumping money into OpenAI just to extend their
                | runway -- at $5B in losses per year they need to
                | actually consider cost, especially when their frontier
               | releases are only marginal improvements over competitors.
               | 
               | ... this is a bubble despite the promise of the
               | technology and anyone paying attention can see it. For
               | all of the dumb money employed in this space to make it
               | out alive, we'll have to at least see a fairly strong
               | form of AGI developed, and by that point the tech will be
               | threatening the general economic stability of the US
               | consumer.
        
           | atleastoptimal wrote:
            | AI companies are already automating huge swaths of
            | document analysis and customer service. Doctors are
            | straight up using ChatGPT to diagnose patients. I know
            | it's fun to imagine AI is some big scam like crypto, but
            | you'd have to be ignoring a lot of genuine non-hype
            | economic movement at this point to assume GAI isn't
            | making any money.
           | 
            | Why is the forum of an incubator whose portfolio is now
            | something like 80% AI so routinely bearish on AI? Is it a
            | fear of irrelevance?
        
             | gscott wrote:
              | When the Wright brothers made their plane, they didn't
              | expect that today there would be thousands of planes
              | flying at a time.
              | 
              | When the Internet was developed, nobody imagined the
              | World Wide Web.
              | 
              | When cars started to get popular, people still thought
              | there would be those who would stick with horses.
              | 
              | I think you're right on the AI: we're just on the cusp
              | of it, and it'll be a hundred times bigger than we can
              | imagine.
              | 
              | Back when oil was discovered and started to be used, it
              | was about equal to 500 laborers, now automated. One AI
              | computer with some video cards is now worth x number of
              | knowledge workers that never stop working as long as
              | the electricity keeps flowing.
        
             | paulddraper wrote:
             | Yes. The answer is yes.
             | 
             | The world is changing and that is scary.
        
             | davidcbc wrote:
             | > Doctors are straight up using ChatGPT to diagnose
             | patients
             | 
             | This makes me want to invest in malpractice lawyers, not
             | OpenAI
        
               | plaidfuji wrote:
               | The lawyers will be obsolete far faster than the doctors
        
             | directevolve wrote:
             | Doctors were using Google to diagnose patients before. The
             | thing is, it's still the doctor delivering the diagnosis,
             | the doctor writing the prescription, and the doctor billing
             | insurance. Unless and until patients or hospitals are
             | willing and legally able to use ChatGPT as a replacement
             | for a doctor (unwise), ChatGPT is not about to eat any
             | doctor's lunch.
        
               | twodave wrote:
               | Not OP, but I think this makes the point, not argues
               | against it. Something has come along that can supplant
               | Google for a wide range of things. And it comes without
               | ads (for now). It's an opportunity to try a different
               | business model, and if they succeed at that then it's off
               | to the races indeed.
        
             | JumpCrisscross wrote:
             | > _AI companies are already automating huge swaths of
             | document analysis, customer service. Doctors are straight
             | up using ChatGPT to diagnose patients_
             | 
             | I don't think there is serious argument that LLMs won't
             | generate tremendous value. The question is who will capture
             | it. PCs generated massive value. But other than a handful
             | of manufacturers and designers (namely, Apple, HP, Lenovo,
             | Dell and ASUS), most PC builders went bankrupt. And out of
             | the value generated by PCs in the world, the _vast_
             | majority was captured by other businesses and consumers.
        
             | horhay wrote:
              | Lol, they are not using ChatGPT for the full diagnosis.
              | It's used in steps like double-checking knowledge, such
              | as drug interactions. If you're going to speak on
              | something like this in a vague manner, I'd suggest you
              | google this stuff first. I can tell you for certain
              | that that part in particular is a highly inaccurate
              | statement.
        
             | krainboltgreene wrote:
             | > Doctors are straight up using ChatGPT to diagnose
             | patients.
             | 
             | Oh we know:
             | https://pmc.ncbi.nlm.nih.gov/articles/PMC11006786/
        
               | ninkendo wrote:
                | The article you posted describes a _patient_ using
                | ChatGPT to get a second opinion on what their doctor
                | told them, not the doctor themself using ChatGPT.
               | 
               | The article could just as easily be about "Delayed
               | diagnosis of a transient ischemic attack caused by
               | talking to some rando on Reddit" and it would be just as
               | (non) newsworthy.
        
         | nfRfqX5n wrote:
          | Ask 10 people on the street about ChatGPT or Gemini and see
          | which one they know.
        
           | kranke155 wrote:
            | That's just brand recognition.
            | 
            | The fact that people know Coca-Cola doesn't mean they
            | drink it.
        
             | blueprint wrote:
              | or that they would drink it if a well-designed,
              | delicious alternative with no HFCS or sugar were
              | marketed with funding
        
             | All4All wrote:
             | But whether the competition will emerge as Pepsi or as RC-
             | Cola is still tbd.
        
             | jimbokun wrote:
             | It doesn't?
             | 
             | That name recognition made Coca Cola into a very successful
             | global corporation.
        
           | postalrat wrote:
            | Now switch ChatGPT and Gemini on them and see if they
            | notice.
        
           | jampa wrote:
            | The real money is in enterprise use (via APIs), so public
            | perception is not as crucial as for a consumer product.
        
           | jjani wrote:
            | Ask 10 people on the street in 2009 about IE and Chrome
            | and see which one they knew.
           | 
           | The names don't even matter when everything is baked in.
        
           | jmathai wrote:
           | That's the wrong question. See how many people know Google
           | vs. ChatGPT. As popular as ChatGPT is, Google's the stronger
           | brand.
        
           | TrackerFF wrote:
            | On the other hand... if you had asked 100 people, five to
            | seven years ago, which of the following they used:
            | 
            | Slack? Zoom? Teams?
            | 
            | I'm sure you'd get a somewhat uniform distribution.
            | 
            | Ask the same today, and I'd bet most will say Teams. Why
            | Teams? Because it comes with Office / Windows, so that's
            | what most people will use.
            | 
            | The same logic goes for the AI / language models... which
            | one are people going to use? The ones that are provided
            | as "batteries included" in whatever software or platform
            | they use the most. And for the vast majority of regular
            | people / workers, it is going to be something by
            | Microsoft / Google / whatever.
        
         | crorella wrote:
         | But he said he was doing it just for love!! [1]
         | 
         | 1: https://www.techpolicy.press/transcript-senate-judiciary-
         | sub...
        
         | fooker wrote:
          | Google is pretty far behind. They have random one-off demos
          | and they beat benchmarks, yes, but try to use Google's AI
          | stuff for real work and it falls apart really fast.
        
           | adastra22 wrote:
           | People are using Gemini for real work. I prefer Claude
           | myself, but Gemini is as good (or alternatively: as bad) as
           | OpenAI's models.
           | 
           | The only thing OpenAI has right now is the ChatGPT name,
           | which has become THE word for modern LLMs among lay people.
        
           | Nuzzerino wrote:
           | Define "real work"
        
           | reasonableklout wrote:
           | That's not what early adopter numbers are showing. Even the
           | poll from r/openai a few days ago show Gemini 2.5 with nearly
            | 3x more votes than o3 (and far beyond Claude):
            | https://www.reddit.com/r/OpenAI/comments/1k67bya/what_is_cur...
           | 
           | Anecdotally, I've switched to Gemini as my daily driver for
           | complex coding tasks. I prefer Claude's cleaner code, but it
           | is less capable at difficult problems, and Anthropic's
           | servers are unreliable.
        
         | caseyy wrote:
         | It's doubtful if there even is a race anymore. The last
         | significant AI advancement in the consumer LLM space was fluent
         | human language synthesis around 2020, with its following
         | assistant/chat interface. Since then, everything has been
         | incremental -- larger models, new ways to prompt them, cheaper
         | ways to run them, more human feedback, and gaming evaluations.
         | 
         | The wisest move in the chatbot business might be to wait and
         | see if anyone discovers anything profitable before spending
         | more effort and wasting more money on chat R&D, which includes
         | most agentic stuff. Reliable assistants or something along
         | those lines might be the next big breakthrough (if you ask
         | certain futurologists), but the technology we have seems
         | unsuitable for any provable reliability.
         | 
         | ML can be applied in a thousand ways other than LLMs, and many
         | will positively impact our lives and create their own markets.
         | But OpenAI is not in that business. I think the writing is on
         | the wall, and Sama's vocal fry, "AGI is close," and humanity
         | verification crypto coins are smoke and mirrors.
        
           | roflmaostc wrote:
            | Just to get things right: the big AI LLM hype started at
            | the end of 2022 with the launch of ChatGPT, DALL-E 2, ....
           | 
           | Most people in society connect AI directly to ChatGPT and
           | hence OpenAI. And there has been a lot of progress in image
           | generation, video generation, ...
           | 
           | So I think your timeline and views are slightly off.
        
             | caseyy wrote:
             | > Just to get things right. The big AI LLM hype started end
             | of 2022 with the launch of ChatGPT, DALL-E 2, ....
             | 
             | GPT-2 was released in 2019, GPT-3 in 2020. I'd say 2020 is
             | significant because that's when people seriously considered
             | the Turing test passed reliably for the first time. But for
             | the sake of this argument, it hardly matters what date
             | years back we choose. There's been enough time since then
             | to see the plateau.
             | 
             | > Most people in society connect AI directly to ChatGPT and
             | hence OpenAI.
             | 
             | I'd double-check that assumption. Many people I've spoken
             | to take a moment to remember that "AI" stands for
             | artificial intelligence. Outside of tongue-in-cheek jokes,
             | OpenAI has about 50% market share in LLMs, but you can't
             | forget that Samsung makes AI washing machines, let alone
             | all the purely fraudulent uses of the "AI" label.
             | 
             | > And there has been a lot of progress in image generation,
             | video generation, ...
             | 
             | These are entirely different architectures from LLM/chat
             | though. But you're right that OpenAI does that, too. When I
             | said that they don't stray much from chat, I was thinking
             | more about AlexNet and the broad applications of ML in
             | general. But you're right, OpenAI also did/does diffusion,
             | GANs, transformer vision.
             | 
             | This doesn't change my views much on chat being "not seeing
             | the forest for the trees" though. In the big picture, I
             | think there aren't many hockey sticks/exponentials left in
             | LLMs to discover. That is not true about other AI/ML.
        
               | tomnipotent wrote:
                | ChatGPT was not released to the general public until
                | November 2022, and the mobile apps were not released
                | until May 2023. For most of the world, LLMs did not
                | exist before those dates.
        
               | kmacdough wrote:
               | >In the big picture, I think there aren't many hockey
               | sticks/exponentials left in LLMs to discover. That is not
               | true about other AI/ML.
               | 
               | We do appear to be hitting a cap on the current
               | generation of auto-regressive LLMs, but this isn't a
               | surprise to anyone on the frontier. The leaked
               | conversations between Ilya, Sam and Elon from the early
               | OpenAI days acknowledge they didn't have a clue as to
               | architecture, only that scale was the key to making
               | experiments even possible. No one expected this
               | generation of LLMs to make it nearly this far. There's a
               | general feeling of "quiet before the storm" in the
               | industry, in anticipation of an architecture/training
               | breakthrough, with a focus on more agentic, RL-centric
               | training methods. But it's going to take a while for
               | anyone to prove out an architecture sufficiently, train
                | it at scale to be competitive with SOTA LLMs, and
                | perform enough post-training, validation and
                | red-teaming to be comfortable releasing to the public.
               | 
               | Current LLMs are years and hundreds of millions of
               | dollars of training in. That's a very high bar for a new
               | architecture, even if it significantly improves on LLMs.
        
           | paulddraper wrote:
            | You're saying --- with a straight face --- that post-2020
            | LLM AIs have made only incremental progress?
        
             | ReptileMan wrote:
             | Yes. But they have also improved a lot. Incremental just
             | means that the function is going up without breaking
             | points. We haven't seen anything revolutionary, just
             | evolutionary in the last 3 years. But the models do provide
             | 2 or 3 times more value. So their pace of advancement is
             | not slow.
        
             | caseyy wrote:
             | Yep, compared to beating the Turing test, the progress has
             | been linear with exponentially growing investment. That's
             | diminishing marginal returns.
        
           | orionsbelt wrote:
            | Saying LLMs have only incrementally improved is like
            | saying my 13-year-old has only incrementally improved over
            | the last 5 years. Sure, it's been a set of continuous
            | improvements,
           | but that has taken it from a toy to genuinely insanely
           | useful.
           | 
           | Personally, deep research and o3 have been transformative,
           | taking LLMs from something I have never used to something
           | that I am using daily.
           | 
           | Even if the progress ends up plateauing (which I do not
           | believe will happen in the near term), behaviors are
           | changing; OpenAI is capturing users, and taking them from
           | companies like Google. Google may be able to fight back and
           | win - Gemini 2.5 Pro is great - but any company sitting this
           | out risks being unable to capture users back from Open AI at
           | a later date.
        
             | bigstrat2003 wrote:
              | No, it's still just a toy. Until they can make the
              | models actually _consistently_ good at things, they
              | aren't going to be useful. Right now they still BS you
              | far too much to trust them, and because you have to
              | double-check their work every time, they are worse than
              | no tool at all.
        
             | csours wrote:
             | To extend your illustration, 5 years ago no one could train
             | an LLM with the capabilities of a 13 year old human; now
             | many companies can both train LLMs and integrate them into
             | products.
             | 
             | > taken it from a toy to genuinely insanely useful.
             | 
             | Really?
        
             | devjab wrote:
             | > any company sitting this out risks being unable to
             | capture users back from Open AI at a later date.
             | 
             | Why? I paid for Claude for a while, but with Deepseek,
             | Gemini and the free hits on Mistral, ChatGPT, Claude and
             | Perplexity I'm not sure why I would now. This is anecdotal
             | of course, but I'm very rarely unique in my behaviour. I
             | think the best the subscription companies can hope for is
             | that their subscribers don't realize that Deepseek and
             | Gemini can basically do all you need for free.
        
               | poormathskills wrote:
               | >I'm very rarely unique in my behaviour
               | 
               | I cannot stress this enough: if you know what Deepseek,
               | Claude, Mistral, and Perplexity are, you are not a
               | typical consumer.
               | 
               | Arguably, if you have used a single one of those brands
               | you are not a typical consumer.
               | 
               | The vast majority of people have used ChatGPT and nothing
               | else, except maybe clicking on Gemini or Meta AI by
               | accident.
        
               | qcic wrote:
                | I doubt it. Google is shoving Gemini in everyone's
                | face through Search, and Meta AI is embedded in every
                | Meta product. Heck, Instagram created a bot
                | marketplace.
               | 
               | They might not "know" the brand as well as ChatGPT, but
               | the average consumer has definitely been exposed to those
               | at the very least.
               | 
               | DeepSeek also made a lot of noise, to the point that,
               | anecdotally, I've seen a lot of people outside of tech
               | using it.
        
         | moralestapia wrote:
         | Sorry but perhaps you haven't looked at the actual numbers.
         | 
         | Market share of OpenAI is like 90%+.
        
           | JumpCrisscross wrote:
           | > _Market share of OpenAI is like 90%+_
           | 
           | Source? I've seen 10 to 20% [1][2].
           | 
           | [1] https://iot-analytics.com/leading-generative-ai-
           | companies/
           | 
           | [2] https://www.enterpriseappstoday.com/stats/openai-
           | statistics....
        
             | moralestapia wrote:
             | Hmm ...
             | 
             | I probably need to clarify what I'm talking about, so that
             | peeps like @JumpCrisscross can get a better grasp of it.
             | 
             | I do not mean the total market share of the category of
             | businesses that could be labeled as "AI companies", like
             | Microsoft or NVIDIA, on your first link.
             | 
             | I will not talk about your second link because it does not
             | seem to make sense within the context of this conversation
             | (zero mentions or references to market share).
             | 
             | What I mean is:
             | 
             | * The main product that OpenAI sells is AI models (GPT-4o,
             | etc...)
             | 
             | * OpenAI does not make hardware. OpenAI is not in the
             | business of cloud infrastructure. OpenAI is not in the
             | business of selling smartphones. A comparison between
             | OpenAI and any of those companies would only make sense for
             | someone with a very casual understanding of this topic. I
                | can think of someone, perhaps, who only used ChatGPT a
                | couple of times and inferred it was made by Apple
                | because it was there on their phone. This discussion
                | calls for a deeper
             | understanding of what OpenAI is.
             | 
             | * Other examples of companies that sell their own AI
             | models, and thus compete directly with OpenAI _in the same
             | market that OpenAI operates by taking a look at their
             | products and services_ , are Anthropic (w/ Claude), Google
             | (w/ Gemini) and some others ones like Meta and Mistral with
             | open models.
             | 
             | * All those companies/models, together, make up some market
             | that you can put any name you want to it (The AI Model
             | Market TM)
             | 
             | That is the market I'm talking about, and that is the one
             | that I estimated to be 90%+ which was pretty much on point,
             | as usual :).
             | 
             | 1: https://gs.statcounter.com/ai-chatbot-market-share
             | 
             | 2: https://www.ctol.digital/news/latest-llm-market-share-
             | mar-20...
        
               | JumpCrisscross wrote:
                | > _that is the market that I'm talking about, and that
               | is the one that I (correctly, as usual) estimated to be
               | around 90% [1][2]_
               | 
               | Your second source doesn't say what it's measuring and
               | disclaims itself as from its "'experimental era' -- a
               | beautiful mess of enthusiasm, caffeine, and user-
               | submitted chaos." Your first link only measures chatbots.
               | 
               | ChatGPT is a chatbot. OpenAI sells AI models, including
               | via ChatGPT. Among chatbots, sure, 84% per your source.
               | (Not "90%+," as you stated.) But OpenAI makes more than
               | chatbots, and in the broader AI model market, its lead is
               | far from 80+ percent.
               | 
               | TL; DR It is entirely wrong to say the "market share of
               | OpenAI is like 90%+."
               | 
               | [1] https://firstpagesage.com/reports/top-generative-ai-
               | chatbots...
        
               | moralestapia wrote:
               | Sorry, I was off by 6% and you're right, I'm usually
               | _way_ more precise in my estimates.
               | 
               | >10%-20%
               | 
               | Lmao, not even in Puchal wildest dreams.
        
         | charlieyu1 wrote:
         | at least 6-9 months too late
        
         | parliament32 wrote:
          | Agreed on Google dominance. Gemini models from this year
          | are significantly more helpful than anything from OAI...
          | and they're being handed out for free to anyone with a
          | Google account.
        
       | simonw wrote:
       | Matt Levine on OpenAI's weird capped return structure in November
       | 2023:
       | 
       |  _And the investors wailed and gnashed their teeth but it's true,
       | that is what they agreed to, and they had no legal recourse. And
       | OpenAI's new CEO, and its nonprofit board, cut them a check for
       | their capped return and said "bye" and went back to running
       | OpenAI for the benefit of humanity. It turned out that a benign,
       | carefully governed artificial superintelligence is really good
       | for humanity, and OpenAI quickly solved all of humanity's
       | problems and ushered in an age of peace and abundance in which
       | nobody wanted for anything or needed any Microsoft products. And
       | capitalism came to an end._
       | 
       | https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...
        
       | byearthithatius wrote:
       | [removed]
        
         | languagehacker wrote:
         | amen brother
        
         | sho_hn wrote:
         | No, it's good that you feel this. Don't give up on tech,
         | protest.
         | 
         | I've been feeling for some time now that we're sort of in the
         | Vietnam War era of the tech industry.
         | 
         | I feel a strong urge to have more "ok, so where do we go from
         | here?" and "what does a tech industry that promotes net good
         | actually look like?" internal discourse in the community of
         | practice, and some sort of ethical social contract for software
         | engineering.
         | 
         | The open source movement has been fabulous and sometimes
         | adjacent to or one aspect of these concerns, but really we need
         | a movement for socially conscious and responsible software.
         | 
         | We need a tech counter-culture. We had one once, but now we
         | _need_ one.
        
         | cjpearson wrote:
         | Not all non-profits are doomed. It's natural that the biggest
         | companies will be the ones who have growth and profit as their
         | primary goal.
         | 
         | But there are still plenty of mission-focused technology non-
         | profits out there. Many of which have lasted decades. For
         | example: Linux Foundation, Internet Archive, Mozilla,
         | Wikimedia, Free Software Foundation, and Python Software
         | Foundation.
         | 
         | Don't get me wrong, I'm also disappointed in the direction and
         | actions of big tech, but I don't think it's fair to dismiss the
         | non-profit foundations. They aren't worth a trillion dollars,
         | however they are still doing good and important work.
        
       | etruong42 wrote:
       | The intro sounds awfully familiar...
       | 
       | > Sam's Letter to Employees.
       | 
       | > OpenAI is not a normal company and never will be.
       | 
       | Where did I hear something like that before...
       | 
       | > Founders' IPO Letter
       | 
       | > Google is not a conventional company. We do not intend to
       | become one.
       | 
       | I wonder if it's intentional or perhaps some AI-assisted
       | regurgitation prompted by "write me a successful letter to
       | introduce a new corporate structure of a tech company".
        
       | photochemsyn wrote:
       | The recent flap over ChatGPT's fluffery/flattery/glazing of users
       | doesn't bode well for the direction that OpenAI is headed in.
       | Someone at the outfit appeared to think that giving users a
       | dopamine hit would increase time-spent-on-app or some other
       | metric - and that smells like contempt for the intelligence of
       | the user base and a manipulative approach designed not to improve
       | the quality of the output, but to addict the user population to
       | the ChatGPT experience. Your own personal yes-person to praise
       | everything you do, how wonderful. Perfect for writing the scripts
        | for government cabinet ministers to recite when the grand
       | poobah-in-chief comes calling, I suppose.
       | 
        | What it really says is that if a user wants to control the
        | interaction and get useful responses, direct programmatic
        | calls to the API that control the system prompt are going to
        | be needed. And who knows how much longer even that will be
        | allowed? As ChatGPT reports,
       | 
       | > "OpenAI has updated the ChatGPT UI (especially in GPT-4-turbo
       | and ChatGPT Plus environments) to no longer expose the full
       | system prompt or baseline prompt directly."
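        | 
        | For what it's worth, a minimal sketch of that API route,
        | using the official openai Python SDK (the model name and
        | prompt text are illustrative, not a claim about what the UI
        | exposes):
        | 
        |   from openai import OpenAI
        | 
        |   client = OpenAI()  # reads OPENAI_API_KEY from the env
        |   resp = client.chat.completions.create(
        |       model="gpt-4o",  # illustrative model name
        |       messages=[
        |           # The caller, not OpenAI's UI, sets the system
        |           # prompt here.
        |           {"role": "system",
        |            "content": "Be blunt. Never flatter the user."},
        |           {"role": "user", "content": "Critique my plan."},
        |       ],
        |   )
        |   print(resp.choices[0].message.content)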
        
       | martinohansen wrote:
        | Imagine having a mission of "ensur[ing] that artificial
        | general intelligence (AGI) benefits all of humanity" while
        | also believing that it can only be trusted in the hands of
        | the few.
       | 
       | > A lot of people around OpenAI in the early days thought AI
       | should only be in the hands of a few trusted people who could
       | "handle it".
        
         | TZubiri wrote:
         | To the benefit of OpenAI. I think LLMs would still exist, but
         | we wouldn't have access to them.
         | 
         | Whether they are a net positive or a net negative is arguable.
         | If it's a net negative, then unleashing them to the masses was
         | maybe the danger itself.
        
         | jb_rad wrote:
          | He's very clearly stating that trusting AI to a few hands
          | was an old, naive idea that they have evolved from. Which
         | establishes their need to keep evolving as the technology
         | matures.
         | 
         | There is a lot to criticize about OpenAI and Sama, but this
         | isn't it.
        
       | TZubiri wrote:
        | I'm not gonna get caught up in the details, I'm just going to
        | assume this is legalese cognitive dissonance to avoid saying
        | "we want this to stop being an NFP because we want the
        | profits."
        
       | bjacobso wrote:
       | I think the main issue is they accidentally created an incredible
       | consumer brand with ChatGPT. They should sell that asset to
       | World.
        
       | jethronethro wrote:
       | Ed Zitron's going to have a field day with this ...
        
       | eximius wrote:
       | Again?
        
       | bluelightning2k wrote:
        | Turns out the non-profit structure wasn't very profitable.
        
       | I_am_tiberius wrote:
       | Still waiting for o3-Pro.
        
       | jampekka wrote:
       | > We are committed to this path of democratic AI.
       | 
        | So where do I vote? How do I become a candidate to be a
        | representative or a delegate of voters? I assume every single
        | human is eligible for both, as OpenAI serves humanity?
        
         | softwaredoug wrote:
         | Democratic AI but we don't want it regulated by any democratic
         | process
        
           | jampekka wrote:
           | I wonder if democracy is some kind of corporate speech
           | homonym of some totally different concept I'm familiar with.
           | Perhaps it's even an interesting linguistic case where a word
           | is a homonym of its antonym?
           | 
           | Edit: also apparently known as contronym.
        
             | JumpCrisscross wrote:
             | > _wonder if democracy is some kind of corporate speech_
             | 
             | It generally means broadening access to something. Finance
             | loves democratising access to stupid things, for example.
             | 
             | > _word is a homonym of its antonym?_
             | 
             | Inflammable in common use.
        
           | rchaud wrote:
           | Democratic People's Republic of AI
        
         | m3kw9 wrote:
         | Path of, so it's getting there
        
           | jampekka wrote:
           | Via a temporary vanguard board composed of the most conscious
           | and disciplined profit maximizers.
        
         | moffkalast wrote:
         | They are committed, they didn't say they pushed yet. Or will
         | ever.
        
         | insane_dreamer wrote:
         | Lenin and the Bolsheviks were also committed to the path of
         | fully democratic government. As soon as the people are ready.
         | In the interim we'll make all the decisions.
        
       | SCAQTony wrote:
       | Does anyone truly believe Musk had benevolent intentions? But
       | before we even evaluate the substance of that claim, we must ask
       | whether he has standing to make it. In his court filing, Musk
       | uses the word "nonprofit" 111 times, yet fails to explain how
       | reverting OpenAI to a nonprofit structure would save humanity,
       | elevate the public interest, or mitigate AI's risks. The legal
       | brief offers no humanitarian roadmap, no governance proposal, and
       | no evidence that Musk has the authority to dictate the trajectory
        | of an organization he holds no equity in. It reads like a bait
        | and switch -- full of virtue-signaling, devoid of actionable
        | virtue. And he never had a contract or an agreement with
        | OpenAI to keep it a non-profit.
       | 
        | Musk claimed fraud, but never asked for his money back in the
        | brief. Could it be his intentions were to limit OpenAI to
        | donations, thereby sucking the oxygen out of the venture
        | capital space to fund xAI's Grok?
       | 
        | Musk claimed he donated $100M; later, in a CNBC interview, he
        | said $50M. TechCrunch suggests it was way less.
       | 
        | Speaking of humanitarian, how about this 600-lb oxymoron in
        | the room: a Boston University mathematician has now tracked
        | an estimated 10,000 deaths linked to Musk's destruction of
        | USAID programs, many of which provided basic health services
        | to vulnerable populations. He may have a death count on his
        | resume in the coming year.
       | 
        | Non-profits face far less regulation than publicly traded
        | companies. Each quarterly filing is like a colonoscopy, with
        | Sarbanes-Oxley rules, etc. Non-profits just file a tax
        | statement. Did you know the Church of Scientology is a
        | non-profit?
        
         | timewizard wrote:
         | Replace Musk with "any billionaire."
         | 
         | He's a symptom of a problem. He's not actually the problem.
        
       | alganet wrote:
       | Can you commit to a "swords into ploughshares" goal?
       | 
       | We know it's a sword. And there's war, yadda yadda. However,
       | let's do the cultivating thing instead.
       | 
        | What other AI players do we need to convince?
        
       | mrandish wrote:
       | I agree that this is simply Altman extending his ability to
       | control, shape and benefit from OpenAI. Yes, this is clearly
       | (further) subverting the original intent under which the org was
       | created - and that's unfortunate. But in terms of impact on the
       | world, or even just AI safety, I'm not sure the governance of
       | OpenAI matters all that much anymore. The "governance" wasn't
       | that great after the first couple years and OpenAI hasn't been
       | "open" since long before the board spat.
       | 
       | More crucially, since OpenAI's founding and especially over the
       | past 18 months, it's grown increasingly clear that AI leadership
       | probably won't be dominated by one company, progress of "frontier
       | models" is stalling while costs are spiraling, and 'Foom' AGI
       | scenarios are highly unlikely anytime soon. It looks like this is
       | going to be a much longer, slower slog than some hoped and others
       | feared.
        
       | m3kw9 wrote:
       | This sounds like a good middle ground between going full
       | capitalism and non-profit. This way they can still raise money
       | and also have the same mission, but a weakened one. You can't
       | have everything.
        
       | A_Duck wrote:
       | This is the moment where we fumble the opportunity to avoid a
       | repeat of Web 1.0's ad-driven race to the bottom
       | 
       | Look forward to re-living that shift from life-changing community
       | resource to scammy and user-hostile
        
       | LetsGetTechnicl wrote:
       | Can't wait to hear Ed Zitron's take on this
        
       | nova22033 wrote:
       | >current complex capped-profit structure
       | 
       | Is OpenAI making a profit?
        
       ___________________________________________________________________
       (page generated 2025-05-05 23:00 UTC)