[HN Gopher] We have reached an agreement in principle for Sam to...
       ___________________________________________________________________
        
       We have reached an agreement in principle for Sam to return to
       OpenAI as CEO
        
       Author : staranjeet
       Score  : 1953 points
       Date   : 2023-11-22 06:01 UTC (1 day ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | jahsome wrote:
       | That is one hot potato.
        
       | siva7 wrote:
       | Interesting that Adam is still on the board. Does this hint at
       | Helen being the main perpetrator of the drama?
        
         | avereveard wrote:
         | Well, I don't see Greg back on the list, and he was a
         | loyalist, so there may be a few adjustments moving forward.
        
         | fatbird wrote:
         | Or it was recognized that Adam was the instigator and the real
         | power player, and the force that Sam needed to come to an
         | accommodation with. From everything I've heard about Toner,
         | she's a very principled person who lent academic credibility to
         | the board, and was a great figurehead for the non-profit's
         | conscience. Once the veneer was ripped from the non-profit's
         | "controlling" role, she was deadweight and useful only as a
         | scapegoat.
         | 
         | It looks to me like the real victim here is the "for humanity"
         | corporate structure. At some point, the money decided it needed
         | to be free.
        
           | notfed wrote:
           | Nah, anyone who voted Sam out is in timeout.
        
             | fatbird wrote:
             | Adam D'Angelo is on the new board with Bret Taylor and
             | Larry Summers. Tasha, Ilya and Helen are out.
             | 
             | Still think D'Angelo wasn't the power player in the room?
        
         | GreedClarifies wrote:
         | Yes, quite clearly.
        
       | AaronNewcomer wrote:
       | What a wild ride. I have used X more the past few days than in a
       | long time; that's for sure!
        
         | stingraycharles wrote:
         | Yes. I, too, read a whole three tweets in the past few days,
         | which is more than I did the entire year before that.
        
       | adlpz wrote:
       | Good, was out of popcorn already.
       | 
       | Somebody make a Netflix documentary please.
        
       | colmvp wrote:
       | So, is he still going to lead some team at Microsoft?
        
         | wilg wrote:
         | No https://twitter.com/sama/status/1727207458324848883
        
       | turndown wrote:
       | From the outside none of this makes much sense. So the old board
       | just disliked him enough to oust him but apparently didn't have a
       | good pulse on the company and overplayed their hand?
        
         | fruit2020 wrote:
         | It's about money and power. Not AI safety or people disliking
         | each other.
        
           | jychang wrote:
           | What money? None of them had equity
        
             | consp wrote:
             | Not having money while everyone becomes filthy rich is also
             | a money motivator.
        
             | MVissers wrote:
             | They'll all be filthy rich if they can keep doing this.
             | Altman was already side-hustling to get funding for other
             | AI companies.
             | 
             | Same with employees and their stock comp. Same with
             | Microsoft.
        
             | ravst3s wrote:
             | They had some equity after 2019.
             | 
             | Thrive was about to buy employee shares at an $86B
             | valuation. The Information said that those units had gone
             | up 12x since 2021.
             | 
             | https://www.theinformation.com/articles/thrive-capital-to-
             | le...
        
           | Davidzheng wrote:
           | There's no proof on either side. Just as likely to be
           | ideological disputes from Helen and Ilya.
        
         | yosame wrote:
         | As far as I can tell, Sam did something (what, exactly, is
         | unclear) to get fired by the board, who are meant to be driven
         | by non-profit ideals instead of corporate profits (probably
         | Sam pushing profit over safety, but there's no real way to
         | know). From that, basically
         | the whole company threatened to quit and move to Microsoft,
         | showing the board that their power is purely ornamental. To
         | retain any sort of power or say over decision making
         | whatsoever, the board made concessions and got Sam back.
         | 
         | Really it just shows the whole non-profit arm of the company
         | was even more of a lie than it appeared.
        
           | maxdoop wrote:
           | Where is this blind trust for the board coming from? The
           | board provided zero rationale for firing Sam.
        
             | evantbyrne wrote:
             | They did give reasons; they were just vague. Reading between
             | the lines, it seems the board was implying that Sam was
             | trying to manipulate the board members individually. Was it
             | true? Who knows. And as an outside observer, who cares?
             | This is a fight between rich people about who gets to be
             | richer. AI is so much larger than one cultish startup.
        
         | nbanks wrote:
         | They wanted a new CEO and didn't expect Sam to take 95% of the
         | company with him when he left.
         | 
         | Sam also played his hand extremely well; he's likely learned
         | from watching hundreds of founder blowups over the years. He
         | never really seemed angry publicly as he gained support from
         | all the staff including Ilya & Mira. I had little doubt Emmett
         | Shear would also welcome Sam's return since they were both in
         | the first YC batch together.
        
       | eganist wrote:
       | Any analysis on how Satya Nadella comes out of all of this? Or
       | what impact this might have at all within Microsoft?
        
         | nothrowaways wrote:
         | Satya is still a winner, grabs less now though.
        
           | polyomino wrote:
           | Satya wants to be able to book the OpenAI money as revenue.
           | This is better for him.
        
         | Racing0461 wrote:
         | Satya's pay is about 100 million dollars. I'd say he has
         | earned every penny for protecting MSFT's $10B investment in
         | OpenAI. A 1% insurance policy is great value.
        
       | Finbarr wrote:
       | Hopefully Sam and Greg get restored to the board also.
        
         | robbiet480 wrote:
         | The most recent reporting I saw from Bloomberg said sama would
         | return as CEO only.
        
           | ssnistfajen wrote:
           | The new board only has 3 people to start with, but it should
           | hopefully be easier to add more members soon. Tonight's NYT
           | story
           | mentioned the board member attrition and the prolonged
           | gridlock in adding new ones, which probably led to the
           | current saga.
        
           | adastra22 wrote:
           | Why would he agree to this? He holds all the cards now.
        
       | Uhhrrr wrote:
       | Sure, why not.
        
       | ssnistfajen wrote:
       | "In principle" has me less than 100% assured. Hopefully no more
       | plot twists in this. Everyone, inside and outside, has probably
       | had enough.
        
         | laweijfmvo wrote:
         | I'm down for the next season of this hot drama.
        
       | koito17 wrote:
       | What a wild ride these past few days have been. Friday already
       | feels like a very long time ago given all of the information and
       | controversy that's come out.
        
       | rvz wrote:
       | Once again the source is directly from Twitter / X and the news
       | was announced from there.
       | 
       | Dispelling the complete nonsense that the platform is 'dying'.
        
         | ssnistfajen wrote:
         | The problem is none of the alternatives offered a smooth UX
         | transition. Mastodon is fragmented by design and Bluesky is
         | gated to this day. There was never a true Digg-like event that
         | caused user migration to reach critical mass. So people simply
         | trickled back once the most volatile periods of post-Elon
         | Twitter passed.
         | 
         | That doesn't change the fact post-Elon Twitter has severely
         | degraded in terms of user experience (rate limits, blue check
         | spam, API pay-wall, etc.) and Elon isn't doing the platform any
         | favours by continuing to participate in detrimental ways (seen
         | in the recent advertiser exodus).
        
         | mastazi wrote:
         | FWIW I've just read the tweet on Nitter, haven't had a Twitter
         | account in more than 2 years.
        
           | wilg wrote:
           | Well that's got very little to do with their point (which
           | isn't very relevant anyway)
        
             | mastazi wrote:
             | Their point is that whether or not the platform is "dying"
             | would depend on whether or not Twitter is still the best
             | way to "get news".
             | 
             | But the most common metrics for whether or not a social
             | media platform is dying, are things like ad revenue and
             | MAU.
             | 
             | I contribute to neither, since I'm not a user nor an ad
             | viewer, and yet I'm still able to "get the news".
             | 
             | So my point is this: the fact that important news is still
             | there won't guarantee that the platform stays successful.
        
         | metabagel wrote:
         | Twitter isn't dying. It's just finding its core audience of
         | white supremacists.
        
           | 0xpgm wrote:
           | Your comment sounds very weird to people outside
           | America/Europe
        
       | mlazos wrote:
       | Looks like Satya will have all of the leverage after this. He
       | kind of always did though, but the board has almost entirely been
       | replaced.
       | 
       | I don't see any point to the non profit umbrella now.
        
         | baking wrote:
         | Sure, you can dissolve it if you hand over all the assets to
         | another 501(c)(3) organization. Otherwise, you are stuck with
         | it.
        
       | xyst wrote:
       | This ordeal reminds me of the Silicon Valley episode where
       | Richard is replaced by an empty chair, temporarily.
        
       | tomohelix wrote:
       | So, Ilya is out of the board, but Adam is still on it. I know
       | this will raise some eyebrows but whatever.
       | 
       | Still though, this isn't something that will just go away with
       | Sam back. OAI will undergo serious changes now that Sam has shown
       | himself to be irreplaceable. Time will tell, but in the long
       | term, I doubt we will see OAI become one of the megacorps like
       | Facebook or Uber. They lost the trust.
        
         | wilg wrote:
         | I mean he's not irreplaceable so much as booting him suddenly
         | for no good reason creates problems.
        
         | ayakang31415 wrote:
         | "I doubt we will see OAI as one of the megacorps like Facebook
         | or Uber. They lost the trust." How is this the case?
        
           | quickthrower2 wrote:
           | Scandal a minute Uber lol
        
         | jatins wrote:
         | > I doubt we will see OAI as one of the megacorps like Facebook
         | or Uber. They lost the trust
         | 
         | Whose trust?
        
         | sverhagen wrote:
         | Ah, yes, Facebook and Uber, brands known for consistent
         | trustworthiness throughout their existences /s
        
         | ilikehurdles wrote:
         | OAI looks stronger than ever. The untrustworthy bits that
         | caused all this instability over the last 5 days have been
         | ditched into the sea. Care to expand on your claim?
        
           | neta1337 wrote:
           | Please explain your claim as well. I don't see how this
           | company looks stronger than ever, more like a clown company
        
             | TapWaterBandit wrote:
             | They got rid of the clowns though. They went from having a
             | board with lightweights and insiders to what is, at least
             | initially, a strong 3.
        
             | ilikehurdles wrote:
             | I may have been overly eager in my comment because the big
             | bad downside of the new board is none of the founders are
             | on it. I hope the current membership sees reason and fixes
             | this issue.
             | 
             | But I said this because: They've retained the entire
             | company, reinstated its founder as CEO, and replaced an
             | activist clown board with a professional, experienced, and
             | possibly* unified one. Still remains to be seen how the
             | board membership and overall org structure changes, but I
             | have much more trust in the current 3 members steering
             | OpenAI toward long-term success.
        
               | MVissers wrote:
               | If by "long-term-success" you mean a capitalistic lap-dog
               | of microsoft, I'll agree.
               | 
               | It seems that the safety team within OpenAI lost. My
               | biggest fear with this whole AI thing is hostile
               | takeover, and openAI was best positioned to at least do
               | an effort to prevent that. Now, I'm not so sure anymore.
        
             | GreedClarifies wrote:
             | It was a clown board running an awesome company.
             | 
             | They fixed the glitch.
        
           | 6gvONxR4sf7o wrote:
           | > The untrustworthy bits that caused all this instability
           | over the last 5 days have been ditched into the sea
           | 
           | This whole thing started with Altman pushing a safety
           | oriented non-profit into a tense contradiction (edit: I mean
           | the 2019-2022 gpt3/chatgpt for-profit stuff that led to all
           | the Anthropic people leaving). The most recent timeline was
           | 
           | - Altman tries to push out another board member
           | 
           | - That board member escalates by pushing Altman out (and
           | Brockman off the board)
           | 
           | - Altman's side escalates by saying they'll nuke the company
           | 
           | Altman's side won, but how can we say that his side didn't
           | cause any of this instability?
        
             | ilikehurdles wrote:
             | > Altman tries to push out another board member
             | 
             | That event wasn't some unprovoked start of this history.
             | 
             | > That board member escalates by pushing Altman out (and
             | Brockman off the board)
             | 
             | and the entire company retaliated. Then this board member
             | tried to sell the company to a competitor who refused. In
             | the meantime the board went through two interim CEOs who
             | refused to play along with this scheme. In the meantime one
             | of the people who voted to fire the CEO regretted it
             | publicly within 24 hours. That's a clown car of a board. It
             | reflects the quality of most non-profit boards but not of
             | organizations that actually execute well.
        
               | emptysongglass wrote:
               | Something that's been fairly consistent here on HN
               | throughout the debacle has been an almost fanatical
               | defense of the board's actions as justified.
               | 
               | The board was incompetent. It will go down in the history
               | books as one of the biggest blunders of a board in
               | history.
               | 
               | If you want to take drastic action, you _consult with
               | your biggest partner_ keeping the lights on before you do
               | so. Helen Toner and Tasha McCauley had no business being
               | on this board. Even if you had safety concerns in mind,
               | you don't bypass everyone else with a stake in the
               | future of your business because you're feeling petulant.
        
             | WendyTheWillow wrote:
             | By recognizing that it didn't "start" with Altman trying to
             | push out another board member, it started when that board
             | member published a paper trashing the company she's on the
             | board of, without speaking to the CEO of that company
             | first, or trying in any way to effect change first.
        
               | 6gvONxR4sf7o wrote:
               | I edited my comment to clarify what I meant. The start
               | was him pushing to move fast and break things in the
               | classic YC kind of way. And it's BS to say that she
               | didn't speak to the CEO or try to effect change first.
               | The safety camp inside openai has been unsuccessfully
               | trying to push him to slow down for years.
               | 
               | See this article for all that context
               | (https://news.ycombinator.com/item?id=38341399) because
               | it sure didn't start with the paper you referred to
               | either.
        
               | WendyTheWillow wrote:
               | Your "most recent" timeline is still wrong, and while yes
               | the entire history of OpenAI did not begin with the paper
               | I'm referencing, it _is_ what started this specific
               | fracas, the one where the board voted to oust Sam Altman.
               | 
               | It was a classic antisocial academic move; all she needed
               | to do was _talk_ to Altman, both before _and_ after
               | writing the paper. It's incredibly easy to do that, and
               | her not doing it is what began the insanity.
               | 
               | She's gone now, and Altman remains, substantially because
               | she didn't know how to pick up a phone and interact with
               | another human being. Who knows, she might have even been
               | successful at her stated goal, of protecting AI, had she
               | done even the most basic amount of problem solving first.
               | She should not have been on this board, and I hope she's
               | learned literally anything from this about interacting
               | with people, though frankly I doubt it.
        
               | 6gvONxR4sf7o wrote:
               | Honestly, I just don't believe that she didn't talk to
               | Altman about her concerns. I'd believe that she didn't
               | say "I'm publishing a paper about it now" but I can't
               | believe she didn't talk to him about her concerns during
               | the last 4+ years that it's been a core tension at the
               | company.
        
               | WendyTheWillow wrote:
               | That's what I mean; she should have discussed the paper
               | and its contents specifically with Altman, and easily
               | could have. It's a hugely damaging thing to have your
               | _own_ board member come out critically against your
               | company. It's doubly so when it blindsides the CEO.
               | 
               | She had many, many other options available to her that
               | she did not take. That was a grave mistake and she paid
               | for it.
               | 
               | "But what about academic integrity?" Yes! That's why this
               | whole idea was problematic from the beginning. She can't
               | be objective and fulfill her role as board member. Her
               | role at Georgetown was in _direct_ conflict with her role
               | on the OpenAI board.
        
               | croes wrote:
               | >trashing the company
               | 
               | So pointing out risks is trashing the company.
        
         | gordon_freeman wrote:
         | Facebook has lost trust so many times that I can't even count
         | but it's still a Megacorp, isn't it?
        
         | TerrifiedMouse wrote:
         | The OpenAI of the past, that dabbled in random AI stuff
         | (remember their DotA 2 bot?), is gone.
         | 
         | OpenAI is now just a vehicle to commercialize their LLM - and
         | everything is subservient to that goal. Discover a major flaw
         | in GPT4? You shut your mouth. Doesn't matter if society at
         | large suffers for it.
         | 
         | Altman's/Microsoft's takeover of the former non-profit is now
         | complete.
         | 
         | Edit: Let this be a lesson to us all. Just because something
         | claims to be non-profit doesn't mean it will always remain that
         | way. With enough political maneuvering and money, a megacorp
           | can take over almost any organization. Non-profit status and
         | whatever the organization's charter says is temporary.
        
           | karmasimida wrote:
           | > now just a vehicle to commercialize their LLM
           | 
           | I mean, it is what they want, isn't it? They did some random
           | stuff like playing Dota 2, or robot arms, even the DALL-E
           | stuff. Now they finally found that one golden goose; of
           | course they are going to keep it.
           | 
           | I don't think the company has changed at all. It succeeded
           | after all.
        
             | nextaccountic wrote:
             | But it's not exactly a company. It's a nonprofit structured
             | in a way to wholly own a company. In that sense it's like
             | Mozilla.
        
               | karmasimida wrote:
               | Nonprofit is just a facade; it was convenient for them
               | to appear ethical under that disguise, but they got rid
               | of it within a week when it became inconvenient. 95% of
               | them would rather join MSFT than be in a non-profit.
               | 
               | Did the company change? I am not convinced.
        
               | ravst3s wrote:
               | Agree that it's a facade.
               | 
               | Iirc, the NP structure was implemented to attract top AI
               | talent from FAANG. Then they needed investors to fund the
               | infrastructure and hence gave the employees shares or
               | profit units (whatever the hell that is). The NP now
               | shields MSFT from regulatory issues.
               | 
               | I do wonder how many of those employees would actually go
               | to MSFT. It feels more like a gambit to get Altman back
               | in since they were about to cash out with the tender
               | offer.
        
               | dizzydes wrote:
               | Does it actually prevent regulators going after them?
        
             | hadlock wrote:
             | There's no moat in giant LLMs. Anyone on a long enough
             | timeline can scrape/digitize 99.9X% of all human knowledge
             | and build an LLM or LXX from it. Monetizing that idea and
             | staying the market leader over a period longer than 10
             | years will take a herculean amount of effort. Facebook
             | releasing similar models for free definitely took the wind
             | out of their sails, even a tiny bit; right now the moat is
             | access to A100 boards. That will change as eventually even
               | the Raspberry Pi 9 will have LLM capabilities.
        
               | moralestapia wrote:
               | OpenAI (ChatGPT) is already a HUGE brand all around the
               | world. No doubt they're the most valuable startup in the
               | AI space. That's their moat.
               | 
               | Unfortunately, in the past few days, the only thing
               | they've accomplished is significantly damaging their
               | brand.
        
               | hadlock wrote:
               | Branding counts for a lot, but LLMs are already a
               | commodity. As soon as someone releases an LLM equivalent
               | to GPT4 or GPT5, most cloud providers will offer it
               | locally for a fraction of what openAI is charging, and
               | the heaviest users will simply self-host. Go look at the
               | company Docker. I can build a container on almost any
               | device with a prompt these days using open source
               | tooling. The company (or brand, at this point?) offers
               | "professional services" I suppose but who is paying for
               | it? Or go look at Redis or Elasti-anything. Or memcached.
               | Or postgres. Or whatever. Industrial-grade underpinnings
               | of the internet, but it's all just commodity stuff you
               | can lease from any cloud provider.
               | 
               | It doesn't matter if OpenAI or AWS or GCP encoded the
               | entire works of Shakespeare in their LLM, they can all
               | write/complete a valid limerick about "There once was a
               | man from Nantucket".
               | 
               | I seriously doubt AWS is going to license OpenAI's
               | technology when they can just copy the functionality,
               | royalty free, and charge users for it. Maybe they will?
               | But I doubt it. To the end user it's just another locally
               | hosted API. Like DNS.
        
               | cyanydeez wrote:
               | I think you're assuming that OpenAI is charging a
               | $/compute price equal to what it costs them.
               | 
               | More likely, they're a loss-leader and generating
               | publicity by making it as cheap as possible.
               | 
               | _Everything_ we've seen come out of silicon valley does
               | this, so why would they suddenly be charging the right
               | price?
        
               | worldsayshi wrote:
               | > offer it locally for a fraction of what openAI is
               | charging
               | 
               | I thought there was a somewhat clear consensus that
               | OpenAI is currently running inference at a loss?
        
               | hadlock wrote:
               | Moore's law seems to have failed on CPUs finally, but
               | we've seen the pattern over and over. LLM-specific
               | hardware will undoubtedly bring down the cost. The
               | $10,000 A100 GPU will not be the last GPU Nvidia ever
               | makes, nor
               | will their competitors stand by and let them hold the
               | market hostage.
               | 
               | Quake and Counter-Strike in the 1990s ran like garbage in
               | software-rendering mode. I remember having to run
               | Counter-Strike on my Pentium 90 at the lowest resolution,
               | and then disable upscaling to get 15fps, and even then
               | smoke grenades and other effects would drop the framerate
               | into the single digits. It was almost two years after
               | Quake's release that dedicated 3D video cards (Voodoo 1
               | and 2 were accelerators that depended on a separate 2D
               | VGA graphics card to feed them) began to hit the market.
               | 
               | Nowadays you can run those games (and their sequels) in
               | the thousands (tens of thousands?) of frames per second
               | on a top end modern card. I would imagine similar events
               | with hardware will transpire with LLM. OpenAI is already
               | prototyping their own hardware to train and run LLMs. I
               | would imagine NVidia hasn't been sitting on their hands
               | either.
        
               | iLoveOncall wrote:
               | > I seriously doubt AWS is going to license OpenAI's
               | technology when they can just copy the functionality,
               | royalty free, and charge users for it. Maybe they will?
               | But I doubt it.
               | 
               | You mean like they already do on Amazon Bedrock?
        
               | hadlock wrote:
               | Yeah, and it looks like they're going to offer Llama as
               | well. They offer Red Hat Linux EC2 instances at a
               | premium, and other paid-per-hour AMIs. I can't imagine
               | why they wouldn't offer various LLMs at a premium, while
               | also offering a home-grown LLM at a lower rate once it's
               | ready.
        
               | rolisz wrote:
               | Why do you think cloud providers can undercut OpenAI?
               | From what I know, Llama 70b is more expensive to run than
               | GPT-3.5, unless you can get 70+% utilization rate for
               | your GPUs, which is hard to do.
               | 
               | So far we don't have any open source models that are
               | close to GPT4, so we don't know what it takes to run them
               | for similar speeds.
        
               | karmasimida wrote:
               | The damage remains to be seen
               | 
               | They still have GPT-4 and the rumored GPT-4.5 to offer,
               | so people have no choice but to use them. The internet
               | has such a short attention span that this news will be
               | forgotten in 2 months.
        
               | denlekke wrote:
               | I don't think there's really any brand loyalty to OpenAI.
               | People will use whatever is cheapest and best. In the
               | longer run people will use whatever has the best access
               | and integration.
               | 
               | What's keeping people with OpenAI for now is that ChatGPT
               | is free and GPT-3.5 and GPT-4 are the best. Over time I
               | expect the gap in performance to get smaller and the cost
               | to run these to get cheaper.
               | 
               | If Google gives me something close to as good as OpenAI's
               | offering for the same price and it can pull data from my
               | Gmail or my calendar or my Google Drive, then I'll switch
               | to that.
        
               | dontupvoteme wrote:
               | This. If anything, people really don't like the verbose
               | moralizing and anti-terseness of it.
               | 
               | Ok, the first few times you use it maybe it's good to
               | know it doesn't think it's a person, but short and sweet
               | answers just save time, especially when the result is
               | streamed.
        
               | moralestapia wrote:
               | I do think there is some brand loyalty.
               | 
               | People use "the chatbot from OpenAI" because that's what
               | became famous and got all the world a taste of AI (my dad
               | is on that bandwagon, for instance). There is absolutely
               | no way my dad is going to sign up for an Anthropic
               | account and start making API calls to their LLM.
               | 
               | But I agree that it's a weak moat, if OpenAI were to
               | disappear, I could just tell my dad to use "this same
               | thing but from Google" and he'd switch without thinking
               | much about it.
        
               | denlekke wrote:
               | Good points. On second thought, I should give them due
               | credit for building a brand reputation as being "best"
               | that will continue even if they aren't the best at some
               | point, which will keep a lot of people with them. That's
               | in addition to their other advantages: people will stay
               | because it's easier than learning a new platform, and
               | there might be lock-in in terms of it being hard to move
               | a trained GPT, or your chat history, to another platform.
        
               | cft wrote:
               | You are forgetting about the end of the Moore's law. The
               | costs for running a large scale AI won't drop
               | dramatically. Any optimizations will require non-trivial
               | expensive PhD Bell Labs level research. Running
               | intelligent LLMs will be financially accessible only to a
               | few mega corps in the US and China (and perhaps to the
               | European government). The AI "safety" teams will control
               | the public discourse. Traditional search engines that
               | blacklist websites with dissenting opinions will be
               | viewed as the benevolent free speech dinosaurs of the
               | past.
        
               | dontupvoteme wrote:
               | This assumes the only way to use LLMs effectively is to
               | have a monolith model that does everything from
               | translation (from ANY language to ANY language) to
               | creative writing to coding to what have you. And
               | supposedly GPT4 is a mixture of experts (maybe 8-cross)
               | 
               | The efficiency of finetuned models is quite, quite a bit
               | improved at the cost of giving up the rest of the world
               | to do specific things, and disk space to have a few dozen
               | local finetunes (or even hundreds+ for SaaS services) is
               | peanuts compared to acquiring 80GB of VRAM on a single
               | device for monomodels
        
               | cft wrote:
               | Sutskever says there's a "phase transition" at the order
               | of 9 bn neurons, after which LLMs begin to become really
               | useful. I don't know much here, but wouldn't the
               | monomodels become overfit, because they don't have enough
               | data for 9+bn parameters?
        
               | danielmarkbruce wrote:
               | They won't stand still while others are scraping and
               | digitizing. It's like saying there is no moat in search.
               | Scale is a thing. Learning effects are a thing. It's not
               | the world's widest moat for sure, but it's a moat.
        
           | g42gregory wrote:
           | Why would society at large suffer from a major flaw in GPT-4,
           | if it's even there? If GPT-4 spits out some nonsense to your
           | customers, just put a filter on it, as you should anyway. We
           | can't seriously expect OpenAI to babysit every company out
           | there, can we? Why would we even want to?
        
             | TerrifiedMouse wrote:
             | For example, and I'm not saying such flaws exist: GPT-4
             | output is biased in some way, encourages radicalization
             | (see Twitter's, YouTube's, and Facebook's news feed
             | algorithms), creates self-esteem issues in children (see
             | Instagram), ... etc.
             | 
             | If you worked for old OpenAI, you would be free to talk
             | about it - since old OpenAI didn't give a crap about
             | profit.
             | 
             | Altman's OpenAI? He will want you to "go to him first".
        
               | g42gregory wrote:
               | We can't expect GPT-4 not to have bias in some way, or
               | not to have all these things that you mentioned. I read
               | in multiple places that GPT products have "progressive"
               | bias. If that's Ok with you, then you just use it with
               | that bias. If not, you fix it by pre-prompting, etc... If
               | you can't fix it, use LLAMA or something else. That's the
               | entrepreneur's problem, not OpenAI's. OpenAI needs to
               | make it intelligent and capable. The entrepreneurs and
               | business users will do the rest. That's how they get
               | paid. If OpenAI were to solve all these problems, what
               | would business users do themselves? I just don't see the
               | societal harm here.
        
               | nearbuy wrote:
               | Concerns about bias and racism in ChatGPT would feel more
               | valid if ChatGPT were even one tenth as biased as
               | anything else in life. Twitter, Facebook, the media,
               | friends and family, etc. are all more biased and
               | radicalized (though I mean "radicalized" in a mild sense)
               | than ChatGPT. Talk to
               | anyone on any side about the war in Gaza and you'll get a
               | bunch of opinions that the opposite side will say are
               | blatantly racist. ChatGPT will just say something
               | inoffensive like it's a complex and sensitive issue and
               | that it's not programmed to have political opinions.
        
               | kgeist wrote:
               | GPT3/GPT4 currently moralize about anything slightly
               | controversial. Sure you can construct a long elaborate
               | prompt to "jailbreak" it, but it's so much effort it's
               | easier to just write something by yourself.
        
               | dontupvoteme wrote:
               | >Encourages radicalization (see Twitter's, YouTube's, and
               | Facebook's news feed algorithm)
               | 
               | What do you mean? It recommends things that it thinks
               | people will like.
               | 
               | Also I highly suspect "Altman's OpenAI" is dead
               | regardless. They are now Copilot(tm) Research.
               | 
               | They may have delusions of grandeur regarding being able
               | to resist the MicroBorg or change it from the inside, but
               | that simply does not happen.
               | 
               | The best they can hope for as an org is to live as long
               | as they can as best as they can.
               | 
               | I think Sam's 100B silicon gambit in the middle east
               | (quite curious because this is probably something the
               | United States Federal Government Is Likely Not Super Fond
               | Of) is him realizing that, while he is influential and
               | powerful, he's nowhere near MSFT level.
        
             | dontupvoteme wrote:
             | >If GPT-4 spits out some nonsense to your customers, just
             | put a filter on it, as you should anyway.
             | 
             | Languages other than English exist, and RLHF at least does
             | work in any language you make the request in; regex/NLP,
             | not so much.
        
               | g42gregory wrote:
               | No regex; you would use another copy of few-shot-prompted
               | GPT-4 as a filter for the first GPT-4!
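               | 
               | Concretely, that "second GPT-4 as a filter" pattern is
               | just two chained chat-completion calls. A minimal sketch
               | using the OpenAI Python SDK (the model name, filter
               | prompt, and fallback string are illustrative, and the
               | filter is shown instruction-only rather than truly few-
               | shot for brevity):
               | 
               |   # Sketch: generate with one GPT-4 call, then screen
               |   # the output with a second GPT-4 call acting as a
               |   # yes/no content filter.
               |   from openai import OpenAI
               | 
               |   client = OpenAI()  # reads OPENAI_API_KEY from the env
               | 
               |   FILTER_PROMPT = (
               |       "You are a content filter. Reply with exactly "
               |       "ALLOW or BLOCK. BLOCK anything off-topic, "
               |       "unsafe, or nonsensical for our customers."
               |   )
               | 
               |   def generate(question: str) -> str:
               |       r = client.chat.completions.create(
               |           model="gpt-4",
               |           messages=[{"role": "user",
               |                      "content": question}],
               |       )
               |       return r.choices[0].message.content
               | 
               |   def passes_filter(candidate: str) -> bool:
               |       r = client.chat.completions.create(
               |           model="gpt-4",
               |           messages=[
               |               {"role": "system",
               |                "content": FILTER_PROMPT},
               |               {"role": "user", "content": candidate},
               |           ],
               |       )
               |       verdict = r.choices[0].message.content
               |       return verdict.strip().upper().startswith("ALLOW")
               | 
               |   answer = generate("Explain our refund policy.")
               |   if not passes_filter(answer):
               |       # Fall back instead of shipping nonsense.
               |       answer = "Sorry, I can't help with that."
               |   print(answer)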
        
             | cyanydeez wrote:
             | Because real people are using it to make decisions.
             | Decisions that could be entirely skewed in some direction,
             | and often that causes damage.
        
           | robbomacrae wrote:
           | I'm still waiting for an optimized version of that bot that
           | can run locally...
        
           | krisoft wrote:
           | > With enough political maneuvering and money, a megacorp can
           | takeover almost any organization.
           | 
           | In fact this observation is pertinent to the original stated
           | goals of OpenAI. In some sense companies and organisations
           | are superintelligences. That is, they have goals, they are
           | acting in the real world to achieve those goals and they are
           | more capable in some measures than a single human. (They are
           | not AGI, because they are not artificial, they are composed
           | of meaty parts, the individuals forming the company.)
           | 
           | In fact what we are seeing is that when the superintelligence
           | OpenAI was set up there was an attempt to align the goals of
           | the initial founders with the then new organisation. They
           | tried to "bind" their "golem" to make it pursue certain goals
           | by giving it an unconventional governance structure and a
           | charter.
           | 
           | Did they succeed? Too early to tell for sure, but there are
           | at least question marks around it.
           | 
           | How would one argue against? OpenAI appears to have given up
           | the lofty goals of AI safety and preventing the concentration
           | of AI prowess. In their pursuit of economic success, the
           | forces wishing to enrich themselves overpowered the forces
           | wishing to concentrate on the goals. Safety will still be a
           | fig leaf for them, if nothing else to achieve regulatory
           | capture to keep out upstart competition.
           | 
           | How would one argue for? OpenAI is still around. The charter
           | is still around. To be able to achieve the lofty goals
           | contained in it one needs a lot of resources. Money in
           | particular is a resource which enables one greater powers in
           | shaping the world. Achieving the original goals will require
           | a lot of money. The "golem" is now in the "gain resources"
           | phase of its operation. To achieve that it commercialises the
           | relatively benign, safe and simple LLMs it has access to.
           | This serves the original goal in three ways: gains further
           | resources, establishes the organisation as a pre-eminent
           | expert on AI and thus AI safety, provides it with a
           | relatively safe sandbox where adversarial forces are trying
           | its safety concepts. In other words all is well with the
           | original goals, the "golem" that is OpenAI is still well
           | aligned. It will achieve the original goals once it has
           | gained enough resources to do so.
           | 
           | The fact that we can't tell which is happening is in fact the
           | worry and problem with superintelligence/AI safety.
        
           | quickthrower2 wrote:
           | They let the fox in. But they didn't have to. They could
           | have tried to raise money without such a sweet deal for MS.
           | They gave away power for cloud credits.
        
             | dragonwriter wrote:
             | > They let the fox in. But they didn't have to. They could
             | have tried to raise money without such a sweet deal for MS.
             | 
             | They did, and fell vastly short (IIRC, an order of
             | magnitude, maybe more) of their minimum short-term target.
             | The commercial subsidiary thing was a risk taken to
             | support the mission because it was clear it was going to
             | fail from lack of funding otherwise.
        
             | doikor wrote:
             | They tried but it did not work. They needed billions for
             | the compute time and top tier talent but were only able to
             | collect millions.
        
           | Havoc wrote:
           | Don't think the Dota bot was random. It's the perfect mix of
           | a complicated yet controllable environment, good data
           | availability, and a good PR angle.
        
             | dontupvoteme wrote:
             | It was a clever parallel to Deep Blue, especially as they
             | picked DotA which was always the "harder" game in its
             | genre.
             | 
             | Next up would be an EVE corp run entirely by LLMs
        
           | 3cats-in-a-coat wrote:
           | Do we need this false dichotomy? The DotA 2 bot was a
           | successful technology preview. You need both research and
           | development in a healthy organisation. Let's call this...
           | hmm, I don't know, "R&D" for short. Might catch on.
        
           | cyanydeez wrote:
           | Non-profit is just a poorly thought out government-ish thing.
           | 
           | If it's really valuable to society, it needs to be a
           | government entity, full stop.
        
         | nathanasmith wrote:
         | On the contrary, this saga has shown that a huge number of
         | people are extremely passionate about the existence of OpenAI
         | and its leadership by Altman, much more strongly and in larger
         | numbers than most had suspected. If anything this has
         | solidified the importance of the company and I think people
         | will trust it more that the situation was resolved with the
         | light speed it was.
        
           | willdr wrote:
           | That's a misreading of the situation. The employees saw their
           | big bag vanishing and suddenly realised they were employed by
           | a non-profit entity that had loftier goals than making a
           | buck, so they rallied to overturn it and they've gotten their
           | way. This is a net negative for anyone not financially
           | invested in OAI.
        
             | nathanasmith wrote:
             | What lofty goals? The board was questioned repeatedly and
             | never articulated clear reasoning for firing Altman and in
             | the process lost the confidence of the employees hence the
             | "rally". The lack of clarity was their undoing whether
             | there would have been a bag for the employees to lose or
             | not.
        
               | murakamiiq84 wrote:
               | My story: Maybe they had lofty goals, maybe not, but it
               | sounded like the whole thing was instigated by Altman
               | trying to fire Toner (one of the board members) over a
               | silly pretext of her coauthoring a paper that nobody read
               | that was very mildly negative about OpenAI, during her
               | day job.
               | https://www.nytimes.com/2023/11/21/technology/openai-
               | altman-...
               | 
               | And then presumably the other board members read the
               | writing on the wall (especially seeing how 3 other board
               | members mysteriously resigned, including Hoffman
               | https://www.semafor.com/article/11/19/2023/reid-hoffman-
               | was-...), and realized that if Altman can kick out Toner
               | under such flimsy pretexts, they'd be out too.
               | 
               | So they allied with Helen to countercoup Greg/Sam.
               | 
               | I think the anti-board perspective is that this is all
               | shallow bickering over a 90B company. The pro-board
               | perspective is that the whole _point_ of the board was to
               | serve as a check on the CEO, so if the CEO could easily
               | appoint only loyalists, then the board is a useless
               | rubber stamp that lends unfair legitimacy to OpenAI's
               | regulatory capture efforts.
        
         | lacker wrote:
         | Let's see, Sam Altman is an incredibly charismatic founding
         | CEO, who some people consider manipulative, but is also beloved
         | by many employees. He got kicked out by his board, but brought
         | back when they realized their mistake.
         | 
         | It's true that this doesn't really pattern-match with the
         | founding story of huge successful companies like Facebook,
         | Amazon, Microsoft, or Google. But somehow, I think it's still
         | possible that a huge company could be created by a person like
         | this.
         | 
         | (And of course, more important than creating a huge company, is
         | creating insanely great products.)
        
           | loveparade wrote:
           | I think "people are following Sam Altman" is jumping to
           | conclusions. I think it's just as likely that employees are
           | simply following the money. They want to make $$$, and
           | that's what a for-profit company does, which is what Altman
           | wants. I think it's probably not really about Altman or his
           | leadership.
        
             | kareaa wrote:
             | Given that over 750 people have signed the letter, it's
             | safe to assume that their motivations vary. Some might be
             | motivated by the financial aspects, some might be motivated
             | by Sam's leadership (like considering Sam as a friend who
             | needs support). Some might fervently believe that their
             | work is crucial for the advancement of humanity and that
             | any changes would just hinder their progress. And some
             | might have just caved in to peer pressure.
        
               | strikelaserclaw wrote:
               | Most are probably motivated by money, some are motivated
               | by stability and some are motivated by their loyalty to
               | sam but i think most are motivated by money and
               | stability.
        
           | mkii wrote:
           | > It's true that this doesn't really pattern-match with the
           | founding story of huge successful companies like Facebook,
           | Amazon, Microsoft, or Google.
           | 
           | You forgot about Apple.
        
         | cowthulhu wrote:
         | I feel like history has shown repeatedly that having a good
         | product matters way more than trust, as evidenced by Facebook
         | and Uber. People seem to talk big smack about lost trust and
         | such in the immediate aftermath of a scandal, and then quietly
         | renew the contracts when the time comes.
         | 
         | All of the big ad companies (Google, Amazon, Facebook) have,
         | like, a scandal per month, yet the ad revenue keeps coming.
         | Meltdown was a huge scandal, yet Intel keeps pumping out the
         | chips.
        
       | anotherhue wrote:
       | This has been childish throughout; everyone involved, including
       | the tech community milking it for clicks, should be ashamed.
        
       | gloyoyo wrote:
       | Tell that AGI who's boss!
        
       | ryzvonusef wrote:
       | > We have reached an agreement in principle for Sam Altman to
       | return to OpenAI as CEO with a new initial board of Bret Taylor
       | (Chair), Larry Summers, and Adam D'Angelo.
       | 
       | > We are collaborating to figure out the details. Thank you so
       | much for your patience through this.
       | 
       | 1- So what was the point of this whole drama, and why couldn't
       | you have settled this like adults?
       | 
       | 2- Now what happens to Microsoft's role in all of this?
       | 
       | 3- Twitter is still the best place to follow this and get
       | updates; everyone is still making "official" statements on
       | Twitter. Not sure how long this website will last, but until
       | then, this is the only portal for me to get news.
        
         | quotemstr wrote:
         | > Twitter is still the best place to follow this and get
         | > updates; everyone is still making "official" statements on
         | > Twitter. Not sure how long this website will last, but until
         | > then, this is the only portal for me to get news.
         | 
         | It's only natural to confuse what is happening with what we
         | wish to happen. After all, when we imagine something, aren't we
         | undergoing a kind of experience?
         | 
         | A lot of people wish Twitter were dying, even though it isn't,
         | so they interpret evidence through a lens of belief
         | confirmation rather than belief disproof. It's only human to
         | do this. We all do.
        
           | nonethewiser wrote:
           | > A lot of people wish Twitter were dying, even though it
           | > isn't, so they interpret evidence through a lens of belief
           | > confirmation rather than belief disproof.
           | 
           | Cognitive dissonance
        
             | veec_cas_tant wrote:
             | Or they read about the large cuts to Twitter's valuation
             | from banks and X itself?
        
           | ryzvonusef wrote:
           | It was funny reading Kara Swisher keep saying Twitter is
           | dying and is toxic and whatnot, while STILL making all her
           | first announcements on Twitter, and using Twitter as a
           | source.
           | 
           | Same with Ashlee Vance (the other journo reporting on this);
           | all the main players (Sam/Greg/Ilya/Mira/Satya/whoever) also
           | make their first announcements on Twitter.
           | 
           | I don't know about the funding part of it, but there is no
           | denying it: the news is still freshest on Twitter. Twitter
           | feels just as toxic for me as before; in fact, I feel
           | Community Notes has made it much better, imho.
           | 
           | ____
           | 
           | In some related news, I finally got bluesky invite (I don't
           | have invite codes yet or I would share here)
           | 
           | and people there are complaining about... Mastodon and how
           | elitist it is...
           | 
           | That was an eye-opener.
           | 
           | Nice if you want some science-y updates, but it still lags
           | behind Twitter for news.
        
             | metabagel wrote:
             | I don't use Twitter any more, other than occasionally
             | following links there (which open in the browser, because I
             | deleted the app).
             | 
             | Discoverability on Mastodon is abysmal. It was too much
             | work for me.
             | 
             | I tend to get my news from Substack now.
        
               | ryzvonusef wrote:
               | Interesting. Substack doesn't sound like a platform for
               | the freshest news, but for deep insights.
               | 
               | Don't you feel out of date on Substack? Especially since
               | things move so fast sometimes, like with this OpenAI
               | fiasco?
        
               | metabagel wrote:
               | Twitter is incredibly uncivil. I don't have the stomach
               | for it.
        
               | tayo42 wrote:
               | Did being up to date really have an impact on your life?
               | It's mostly just gossip.
        
               | ryzvonusef wrote:
               | I understand what you are saying, but sometimes, news
               | like this is perhaps the only excitement in our otherwise
               | dull lives.
        
             | hurryer wrote:
             | Skilled operators say what sounds most virtuous and do
             | what benefits them most. Especially when these two things
             | are not the same.
        
             | hadlock wrote:
             | Twitter isn't dying, but it hasn't grown measurably since
             | 2015. Still sitting at about 300m active users.
        
             | bagels wrote:
             | Bluesky took long enough to invite me that I forgot what it
             | even was when I got the email.
        
         | seydor wrote:
         | Microsoft said they are OK with Sam returning to OpenAI. There
         | are probably legal reasons why they prefer things to go back
         | to how they were.
         | 
         | (Thank you for calling Twitter Twitter)
        
           | AmericanOP wrote:
           | The website is twitter.com. Why call it something else?
        
             | alex_young wrote:
             | Also, x.com redirects to Twitter.com. Seems like they want
             | us to say Twitter.
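             | 
             | A minimal sketch of checking that redirect yourself
             | (assuming the behavior hasn't changed since this was
             | written):
             | 
             |     import urllib.request
             | 
             |     # urlopen follows HTTP redirects by default;
             |     # geturl() reports the final URL after redirects.
             |     resp = urllib.request.urlopen("https://x.com")
             |     print(resp.geturl())  # showed a twitter.com URL
             |                           # at the time of writing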
        
               | behnamoh wrote:
               | saying "to tweet" is definitely better than saying "to
               | xeet"
        
               | asimovfan wrote:
               | Xeet is super funny, hopefully takes over.
        
               | behnamoh wrote:
               | share it on Xitter
        
               | grumpyprole wrote:
               | Or xcreet?
        
               | wise_young_man wrote:
               | Maybe it's a cross post.
        
             | labster wrote:
             | Exactly right, fellow YCombinator News commenter!
        
               | zarzavat wrote:
               | I believe you mean _Startup News_
        
               | tech234a wrote:
               | For reference: https://web.archive.org/web/20070713212949
               | /http://news.ycomb...
        
               | blackoil wrote:
               | From 2nd story on the archive
               | 
               | >It is just a joke that Facebook could be valued at $6
               | billion.
               | 
               | lol, seems HN has been the same forever.
        
         | jatins wrote:
         | Microsoft's role remains the same as it was on Thursday:
         | minor (49%?) shareholder that keeps access to models and IP.
         | 
         | IMO Kevin tweeting that MS would hire and match comp of all
         | OpenAI employees was an amazing negotiation tactic, because
         | it meant employees could sign the petition without worrying
         | about their jobs/visas.
        
           | ryzvonusef wrote:
           | but no board seat? how do they prevent a rehash of this in
           | the future and how do they safeguard their investment? Really
           | curious.
        
             | protocolture wrote:
             | OpenAI is an airgapped test lab for Microsoft. They
             | don't want critical exposure to the downside risk of AI
             | research, just the benefits in terms of IP. Sam and Greg
             | probably offer enough stability for them to continue
             | this way.
        
               | _jab wrote:
               | Sam and Greg don't appear to be getting their board seats
               | back.
        
               | protocolture wrote:
               | They don't need them. If they get fired, they can go
               | nuclear on the board again.
        
               | happosai wrote:
               | It makes sense to airgap generative AI while courts
               | ponder whether copyright fair use applies or not.
               | Research is clearly allowed as fair use, so let OpenAI
               | experiment with commercialization until it's all clear
               | waters.
        
               | astrange wrote:
               | No anti-AI lawsuits have progressed yet. One got slapped
               | down pretty hard today, though isn't dead.
               | 
               | https://www.hollywoodreporter.com/business/business-
               | news/sar...
        
             | jatins wrote:
             | I believe not all the board seats are filled yet
        
             | umeshunni wrote:
             | It's a new and "more experienced" board. This is also
             | possibly the first of additional governance and structure
             | changes.
        
           | karmasimida wrote:
           | I think at this point MSFT will seek a board seat in
           | OpenAI.
        
             | zeven7 wrote:
             | Satya Nadella said they would make sure there would be "no
             | more surprises".
             | 
             | (Sad day for popcorn sales.)
        
           | ugh123 wrote:
           | I was thinking about this a lot as well, but what did that
           | mean for employee stock in the commercial entity? I heard
           | they were up for a liquid cash-out in the next funding round.
        
         | 6gvONxR4sf7o wrote:
         | > So what was the point of this whole drama, and why
         | couldn't you have settled this like adults?
         | 
         | Altman was trying to remove one of the board members before he
         | was forced out. Looks like he got his way in the end, but I'm
         | going to call Altman the primary instigator because of that.
         | 
         | His side was also the "we'll nuke the company unless you
         | resign" side.
        
           | theamk wrote:
           | His side was also the "700 regular employees support
           | this" side, which is pretty unusual, as most people don't
           | care about their CEO at all. I have no connection to
           | OpenAI, but given the choice of "favorite of all
           | employees" vs "fire people with no warning, then refuse
           | to explain why even under pressure", I know which side I
           | root for.
        
             | campbel wrote:
             | The 700 employees also have significant financial incentive
             | to want Altman to stay. If he moved to a competitor all the
             | shine would follow. They want the pay-day (I don't blame
             | them), but take with a grain of salt what the employees
             | want in this case.
        
             | xiwenc wrote:
             | No idea what these 700 employees were thinking. They
             | probably had little knowledge of what truly went down
             | beyond "my CEO was fired unfairly" and rushed to the
             | rescue.
             | 
             | I think the board should have been more transparent
             | about why they made the decision to fire Sam.
             | 
             | Or perhaps these employees only cared about their AI
             | work and money? The foundation would then be perceived
             | as the culprit working against them.
             | 
             | Really sad that no clarity from the old board was
             | disclosed. Hope one day we will know.
        
               | 6gvONxR4sf7o wrote:
               | I wonder how much more transparent they can really
               | be. I know that when firing a "regular" employee, you
               | basically never tell everyone all the details, for
               | legal CYA reasons. When you're firing someone worth
               | half a billion dollars, I expect the legal fears are
               | magnified.
        
               | framapotari wrote:
               | But that's the difference, the CEO is not a regular
               | employee. If a board of directors wants to be trusted and
               | taken seriously it can't just fire the CEO and say "I'm
               | sorry we can't say why, that's private information".
        
               | x86x87 wrote:
               | They were thinking about money. There you go. Seeing
               | what you built crumble is not pleasant when it means
               | you are financially impacted.
        
             | ravst3s wrote:
             | Looking back, Altman's ace in hand was the tender offer
             | from Thrive. I don't know anyone at OpenAI, but all the
             | early senior personnel backed him with vehemence. If
             | the leaders hadn't championed him strongly, I doubt you
             | get 90% of the company to commit to leaving.
             | 
             | I'm sure some of those employees were easily going to make
             | $10m+ in the sale. That's a pretty great motivation tool.
             | 
             | Overall, I do agree with you. The board could not justify
             | their capricious decision making and refused to elaborate.
             | They should've brought him back on Sunday instead of
             | mucking around. OpenAI existing is a good thing.
        
             | gnaman wrote:
             | Take this with a grain of salt but employees were under a
             | lot of peer pressure
             | 
             | https://twitter.com/JacquesThibs/status/1727134087176204410
        
               | jatins wrote:
               | That is one HUGE grain of salt considering 1/ it's Blind
               | 2/ Even in the same thread there is another poster saying
               | the exact opposite thing (i.e. no peer pressure)
        
               | Jensson wrote:
               | > 1/ it's Blind
               | 
               | Average people don't like to lie; if someone bullies
               | them until they agree to sign, they will sign,
               | because they are honest.
               | 
               | Also, if they said they would sign but the ticker
               | didn't go up, it would be pretty obvious that they
               | had lied, and I'm sure they don't want that risk.
        
               | moralestapia wrote:
               | Yeah, 95% of employees is a bit too high ...
               | 
               | Also, all the stuff they started doing with the
               | hearts and cryptic messages on Twitter (now X) was a
               | bit ... cult-y? I wouldn't doubt there was a lot of
               | manipulation behind all that, even from @sama
               | himself.
               | 
               | So, there it goes; it seems there's a big chance now
               | that the first AGI will land in the hands of a group
               | with the antics of teenagers. Interesting timeline.
        
             | doktrin wrote:
             | > which is pretty unusual as most people don't care about
             | their CEO at all
             | 
             | I'm sure Sam is a charismatic guy, but generally speaking
             | folks will support a whole lot when a multi million dollar
             | payday is on the line.
        
         | Barrin92 wrote:
         | The explanation for point 1 is point 3. If the people
         | involved were not terminally online, feeling the need to
         | share every single one of their immediate thoughts with the
         | public, they could likely have settled this behind closed
         | doors, where this kind of stuff belongs.
         | 
         | It's not actually news, it's entertainment and self-
         | aggrandizement by everyone involved including the audience.
        
           | 0xDEAFBEAD wrote:
           | Interesting that the board were repeatedly criticized for
           | "not being adults", and yet they were also the only party not
           | live-tweeting everything...
           | 
           | Seems like there's no way to win with Twitter. You may not be
           | interested in Twitter, but Twitter is interested in you.
        
             | behnamoh wrote:
             | the board didn't have to tweet. their ridiculous
             | actions spoke for themselves.
        
               | angryasian wrote:
               | we still don't know what Altman has actually been hiding,
               | so to say it was ridiculous ... is ridiculous itself.
        
               | behnamoh wrote:
               | the board's actions were ridiculous regardless of Sam's.
               | sell oai to anthropic? were they out of their minds?
        
               | 0xDEAFBEAD wrote:
               | From the perspective of upholding the charter
               | https://openai.com/charter and preventing an AI race --
               | seems potentially sensible
        
             | nickpp wrote:
             | They didn't tweet, but did they communicate in any other
             | way?!
        
               | 0xDEAFBEAD wrote:
               | Well, there was the initial announcement.
        
               | nickpp wrote:
               | To say that communication was lacking is an
               | understatement. Clarifications were missing and sorely
               | needed.
        
             | imgabe wrote:
             | The board not saying what the hell they were on about was
             | the source of the whole drama in the first place. If they
             | had just said exactly what their problem was up front there
             | wouldn't have been as much to tweet about.
        
             | blackoil wrote:
             | Considering CEO2 rebelled the next day and CEO3
             | allegedly said he'd quit unless the board came out with
             | the truth, it doesn't inspire much confidence in their
             | adulthood.
        
         | petesergeant wrote:
         | If there's been one constant here, it's been people who
         | actually know Toner expressing deep support for her
         | experience, intelligence, and ethics, so it's interesting
         | to me that she seems to be getting the boot.
        
           | causalmodels wrote:
           | Fiascos like this display neither experience nor
           | intelligence. This whole saga was a colossal failure on the
           | part of the previous board.
        
           | dmix wrote:
           | Add delusions of grandeur to that list: thinking she
           | could pursue her ideological will by winning over 3 board
           | members while losing 90% of the company's staff.
           | 
           | She was fighting an ideological battle that needs full
           | industry buy-in; legitimate or not, that's not how you
           | win people over.
           | 
           | If she's truly a rationalist, as she claims, she would be
           | realistic and understand that if your engineers can just
           | leave and do it somewhere else tomorrow, you aren't
           | making progress. Taking on the full might of US
           | capitalism by winning over the fringe half of a nonprofit
           | board is not the best strategy. At best it was desperate
           | and naive.
        
             | astrange wrote:
             | This is pretty good evidence she's a rationalist;
             | rationalism means a religious devotion to a specific
             | kind of logical thinking that never works in real life,
             | because you can't calculate the probability of a result
             | if you didn't know it could happen in the first place.
             | 
             | The traditional response to this happening is to say
             | something about your "priors" being wrong instead of
             | taking responsibility.
        
           | tsimionescu wrote:
           | If there is one clear thing, it's that no one on that board
           | should be allowed anywhere near another board for any non-
           | clown company. The level of incompetence in how they handled
           | this whole thing was extraordinary.
           | 
           | The fact that Adam D'Angelo is apparently still on the
           | new board is much more baffling than the fact that Toner
           | and Ilya are not.
        
         | happosai wrote:
         | About 3)
         | 
         | What is the benefit of learning about this kind of drama
         | minute by minute, compared to reading about it a few hours
         | later on Hacker News or the next day in the Wall Street
         | Journal?
         | 
         | Personally I found Twitter very bad for my productivity: a
         | lot of focus destroyed just to know "what is happening",
         | when there were negligible drawbacks to finding out about
         | news events a few hours later.
        
           | willdr wrote:
           | I have muted any mention of Open AI, Altman, Emmet and Satya
           | from my Twitter feed for the past five days. It's a far
           | better experience.
        
         | kumarvvr wrote:
         | Satya comes out great, making the absolute best of a given
         | shitty situation, with a high stake of 10B USD.
         | 
         | Microsoft is showing investors that it is going to be an AI
         | company, one way or the other.
         | 
         | Microsoft still has access to everything OpenAI does.
         | 
         | Microsoft has its friend, Sam, at the helm of OpenAI, with
         | a tighter grip on the company than ever.
         | 
         | It's still a win for Microsoft.
        
           | dacryn wrote:
           | Satya comes out as evil imho, and I wonder how much
           | orchestration was going on behind the scenes.
           | 
           | Microsoft is showing that it is still able to capture
           | important scale-ups and 'embrace' them, while acting as
           | if it has the moral high ground, when in reality it is
           | keeping research with high governance risk and potential
           | legal problems away from its premises. And THAT is why
           | stakeholders like him.
        
           | nabla9 wrote:
           | Satya just played the hand he had, and the hand he had
           | was excellent; he had already won. MS already had a
           | perpetual license, the people working on GPT, and Sam
           | Altman in his corner.
           | 
           | One thing at Microsoft has stayed constant from Gates to
           | Ballmer to Satya: you should never, ever form a close
           | alliance with MS. They know how to screw alliance
           | partners: i4i, Windows RT partners, Windows Phone
           | partners, Nokia, HW partners in Surface. Even Steve Jobs
           | was burned a few times.
        
         | blackoil wrote:
         | > So what was the point of this whole drama, and why
         | couldn't you have settled this like adults?
         | 
         | The whole charade was by GPT-5, to understand the position
         | of the person sitting next to the red button and,
         | secondarily, to stress-test Hacker News.
        
         | _boffin_ wrote:
         | Larry Summers? like the Larry Summers?
        
           | Sai_ wrote:
           | yeah, the guy has a knack for being in/invited to places.
        
         | JumpCrisscross wrote:
         | > _Twitter is still the best place to follow this and get
         | updates_
         | 
         | This has been my single strongest takeaway from this saga:
         | Twitter remains the centre of controversy. When shit hit the
         | fan, Sam and Satya and Swisher took to Twitter. Not Threads.
         | Not Bluesky. Twitter. (X.)
        
           | ssnistfajen wrote:
           | Bluesky still has gated signups at this point so I don't
           | think it will ever be a viable alternative.
           | 
           | Threads had a rushed rollout which resulted in major feature
           | gaps that disincentivized users from doing anything beyond
           | creating their profiles.
           | 
           | Notable figures and organizations have little reason to fully
           | migrate off Twitter unless Musk irreversibly breaks the site
           | and even he is not stupid enough to do that (yet?). So with
           | most of its content creators still in place, Twitter has no
           | risk of following the path of Digg.
        
         | Racing0461 wrote:
         | > 2- Now what happens to Microsoft's role in all of this?
         | 
         | This outcome WAS Microsoft's role in all this. Satya
         | offering Sam a CEO-like position to create a competing
         | product was leverage for this outcome.
        
       | HPMOR wrote:
       | Thank the lord. We need stability and reliability as
       | developers. This is great news for anyone building on top of
       | OpenAI products. Welcome back, Sama.
        
       | hadrien01 wrote:
       | So Adam D'Angelo would stay on the board? I thought a condition
       | for Altman to return was the whole board resigning?
        
         | kelnos wrote:
         | When people negotiate, often they compromise, and their
         | conditions change.
        
       | r721 wrote:
       | Quote tweets by main participants:
       | 
       | https://twitter.com/sama/status/1727206691262099616 (+ follow-up
       | https://twitter.com/sama/status/1727207458324848883)
       | 
       | https://twitter.com/gdb/status/1727206609477411261
       | 
       | https://twitter.com/miramurati/status/1727206862150672843
       | 
       | UPD https://twitter.com/gdb/status/1727208843137179915
       | 
       | https://twitter.com/eshear/status/1727210329560756598
       | 
       | https://twitter.com/satyanadella/status/1727207661547233721
        
         | ryzvonusef wrote:
         | also satya
         | 
         | https://twitter.com/satyanadella/status/1727207661547233721
        
         | 0xDEAFBEAD wrote:
         | Emmett https://twitter.com/eshear/status/1727210329560756598
        
           | 303space wrote:
           | Genuinely curious - what's the comp package like for 72
           | hours of interim CEOing an $80B company?
        
             | zx8080 wrote:
             | Nothing maybe?
        
             | granzymes wrote:
             | Bragging rights, party invitations, and one hell of a
             | story.
        
             | stigz wrote:
             | A firm handshake. They had no time to ink a benefits
             | package, my dude.
        
             | politelemon wrote:
             | Office 365 subscription for one year and GitHub copilot
             | using your own creation
        
             | rapsey wrote:
             | Irrelevant compared to the reputation boost for helping the
             | company get itself back on track.
        
               | kmlevitt wrote:
               | I don't think anybody had high expectations for him, but
               | he really pulled through.
        
             | ssnistfajen wrote:
             | Doubt he took this job for the financial comp, so even
             | if he got paid, it probably wasn't much.
             | 
             | Equity is a big part of CEO pay packages, and OpenAI
             | has a weird equity structure; plus there was a very
             | real chance OpenAI's value would go to $0, leaving
             | whatever promised comp worthless. So Emmett likely took
             | the job for other reasons.
        
           | upupupandaway wrote:
           | I am really surprised by people thinking this guy did
           | anything to get sama back. He was probably not even in the
           | room.
        
             | ssnistfajen wrote:
             | Why does he have to be in the room? Audiovisual
             | conferencing over the Internet exists now.
        
         | doctoboggan wrote:
         | What does Ilya have to say?
        
           | behnamoh wrote:
           | probably a heart emoji.
        
             | erikpukinskis wrote:
             | But what _color_ heart emoji?
        
               | DoreenMichele wrote:
               | Purple?
        
           | dkarras wrote:
           | he also retweeted OpenAI's and Sam's announcements
        
         | nickpp wrote:
         | On a side tangent, absolutely amazing how all this drama
         | unfolded on Twitter/X. No Threads, no Mastodon, no Truth Social
         | or Blue whatever.
         | 
         | Say what you want about Elon's leadership, but his instinct
         | to buy Twitter was completely right. To me it seemed like
         | any other social network crap, but he realized it was
         | _important_.
        
           | swyx wrote:
           | i mean he also tried his hardest to back out of the deal
           | until he realized he couldn't
        
             | Gud wrote:
             | Only because he had to buy it while the stock market was
             | tanking.
        
           | layer8 wrote:
           | Inertia is a bitch.
        
           | highwaylights wrote:
           | Interesting take.
           | 
           | By all accounts he paid about double what it was worth and
           | the value has collapsed from there.
           | 
           | Probably not a great idea to say _anything_ overtly
           | political when you own a social media company: with
           | politics so polarised in the US, any opinion is going to
           | divide your audience in half, causing a usage collapse
           | and driving support to competing platforms.
           | 
           | https://fortune.com/2023/09/06/elon-musk-x-what-is-
           | twitter-w...
        
             | astrange wrote:
             | His more serious problem is that he owns both a social
             | media
             | network and a bigger separate business that wants to
             | operate in the US, Turkey, India, China, Saudi Arabia, etc.
             | which means he can't fight any censorship requests in any
             | of those countries. (Which the previous management was
             | actually very aggressive about.)
             | 
             | His worst personal problem is that he keeps replying
             | "fascinating" to neo-Nazis and random conspiracy theorists
             | because he wants to be internet friends with them.
        
             | justcool393 wrote:
             | well and he also tried very hard to not buy it until
             | Twitter sued in order to have the contract upheld
        
           | veec_cas_tant wrote:
           | Not trying to be a dick but:
           | 
           | 1. He tried very hard to not buy Twitter, and OpenAI's
           | new board member forced his hand.
           | 
           | 2. It hasn't been a good financial decision, if the
           | banks' and X's own valuation cuts are anything to go by.
           | 
           | 3. If his purpose wasn't to make money... all of these
           | tweets would have been allowed before Elon bought the
           | company too. He didn't effect any change in relevance
           | here.
           | 
           | Why would one person owning something so important be better
           | than being publicly owned? I don't understand the logic.
        
             | majestic5762 wrote:
             | He bought Twitter for power, omnipresence, and
             | reputation, allowing him to play the game his way.
        
               | DoreenMichele wrote:
               | Funny, I thought he bought Twitter because he shot his
               | mouth off in public and the courts made him follow
               | through.
        
             | strikelaserclaw wrote:
             | I haven't seen this type of drama in years; surely
             | that's not enough to sustain X.
        
             | nickpp wrote:
             | > Why would one person owning something so important be
             | better than being publicly owned?
             | 
             | Usually publicly owned things end up being controlled by
             | someone: a CEO, a main investor, a crooked board, a
             | government, a shady governmental organization. At least
             | with Elon owning X, things are a little more transparent,
             | he's rather candid where he stands.
             | 
             | Now, the question is "who owns Musk?" of course.
        
           | tigershark wrote:
           | A huge number of advertisers ran away, and revenue
           | cratered; it is probably less than the annual debt
           | servicing (revenue, not profit). The current valuation,
           | according to Musk math
           | (https://fortune.com/2023/09/06/elon-musk-x-what-
           | is-twitter-w...), is 1/10 of the acquisition price. But
           | yes, it was a masterstroke. I don't remember any other
           | masterstroke in history that managed to lose $40B with a
           | single acquisition.
        
             | nickpp wrote:
             | I'd be rather reluctant to question the financial
             | decisions of one of the wealthiest men on earth. Losing
             | $40B could feel quite different to him than to you or
             | me. Besides, it's an unrealized loss until he sells.
        
               | hardlianotion wrote:
               | Or goes bankrupt.
        
           | r721 wrote:
           | I think it's just this particular drama - OpenAI people are
           | of the same tribe as Elon, and surely they prefer Twitter/X,
           | not Mastodon or Bluesky.
        
             | nickpp wrote:
             | What tribe is that? And why would they favor one network
             | over the others?
        
               | iiv wrote:
               | The silicon valley/startups/VC tribe, and they favour
               | Twitter because 1. that's what their friends use and 2.
               | they like Elon Musk, they want to be like him.
        
               | nickpp wrote:
               | Many OpenAI employees expressed their support for Sam
               | at some point, also on Twitter. Microsoft's CEO
               | (based in Redmond) tweeted quite a lot. Tech media
               | reporters like Emily Chang and Kara Swisher also
               | participated; the latter is quite critical of
               | Twitter, and I am not sure they all like Musk that
               | much.
               | 
               | Are they all in the same "tribe"? Maybe you should
               | enlarge the definition?
               | 
               | How about all of us IT people who watched the drama
               | unfold on Twitter while our friends are on FB and
               | Insta, who are far from SV and have mixed feelings
               | about Elon Musk while never in a million years
               | wanting to be like him? Also the same "tribe"?
        
               | ssnistfajen wrote:
               | Most of these people have been on Twitter long before
               | Musk had his hands on it.
        
             | Davidzheng wrote:
             | Have you used Mastodon? I don't think you can follow drama
             | on Mastodon unless you're already part of the drama.
        
           | ssnistfajen wrote:
           | What does this have to do with Elon again? FYI Twitter
           | existed before October 2022. Account join dates are public.
           | Every single person involved in this, incl. OpenAI staff
           | posting for solidarity, joined Twitter years before Elon's
           | takeover.
        
           | Sai_ wrote:
           | His instinct was to walk away from his offer. He had to be
           | forced to buy the company.
           | 
         | His second wife apparently asked him to buy Twitter and fix
         | what she saw as its liberal bias.
        
         | highwaylights wrote:
         | That's certainly some very.. deliberate.. board picks.
         | 
         | Summers, too.
         | 
         | Welp.
        
           | kmlevitt wrote:
           | Say what you want about Summers specifically, but I think
           | it's a good idea to get some economists on the board.
           | They are academics, but ones focused on practical,
           | important issues like the loss of jobs and what that
           | means for the economy and society.
           | Up until now it seems like the board members have either been
           | AI doomers with no practical experience or Silicon Valley
           | types that inevitably have conflicts of interest, because
           | everybody is starting their own AI venture now.
        
             | thinkcomp wrote:
             | This has nothing to do with Summers being an economist and
             | everything to do with the fact that he used to run the
             | parent agency of the IRS. Summers is the least sensible
             | board pick imaginable unless one takes this fact and the
             | coming regulatory catastrophe into account.
        
               | kmlevitt wrote:
               | >This has nothing to do with Summers being an economist
               | and everything to do with the fact that he used to run
               | the parent agency of the IRS.
               | 
               | It has literally nothing to do with that. The reason he's
               | on the board now is because D'Angelo wanted him on it.
               | You could have a problem with that, but you can't use his
               | inclusion as evidence that the board lost.
        
           | returnInfinity wrote:
           | It seems US Attorneys were calling the OpenAI board.
           | 
           | It helps having somebody with government ties on board now.
        
           | synaesthesisx wrote:
           | If we achieve AGI it has the potential to capture most (if
           | not all) economic value. Larry Summers was a deliberate
           | choice indeed.
        
         | crossroadsguy wrote:
         | Now the blue tick on Twitter has the same effect on me that
         | the red N logo has on any film that came from the Netflix
         | formula factory: I already know it's going to be bad,
         | regurgitated. Does everyone have a Twitter blue tick now?
         | Or is that just a character people are using in their
         | names?
        
           | r721 wrote:
           | >Does everyone have a Twitter blue tick now? Or is that just
           | a char people are using in their names?
           | 
           | A blue tick now just means the user bought a subscription
           | (X Premium) - one of the features is "reply
           | prioritization", so top replies to popular tweets are
           | from blue ticks.
           | 
           | https://help.twitter.com/en/using-x/x-premium
        
         | r721 wrote:
         | https://twitter.com/hlntnr/status/1727207796456751615
        
       | epups wrote:
       | So, Adam D'Angelo is the only board member that remains, and he
       | had also voted against Altman before. How interesting,
       | considering all the theory crafting about him being the one who
       | initiated this coup.
        
       | weirdindiankid wrote:
       | I wonder how this will impact the company-owned-by-a-
       | nonprofit model in the future. While it isn't uncommon (e.g.
       | I believe IKEA is owned by a nonprofit), I believe it has
       | historically been done for tax reasons.
       | 
       | Given the grandstanding and chaos on both sides, it'll be
       | interesting to see if OpenAI undergoes a radical shift in its
       | structure.
        
       | gzer0 wrote:
       | Satya on twitter:
       | 
       |  _We are encouraged by the changes to the OpenAI board. We
       | believe this is a first essential step on a path to more stable,
       | well-informed, and effective governance. Sam, Greg, and I have
       | talked and agreed they have a key role to play along with the OAI
       | leadership team in ensuring OAI continues to thrive and build on
       | its mission. We look forward to building on our strong
       | partnership and delivering the value of this next generation of
       | AI to our customers and partners._
       | 
       | https://twitter.com/satyanadella/status/1727207661547233721
        
         | qsi wrote:
         | >> a first essential step on a path to more stable, well-
         | informed, and effective governance.
         | 
         | That's quite a slap at the board... a polite way of calling
         | them ignorant, ineffective dilettantes.
        
           | adastra22 wrote:
           | Yet one of them is still on the board...
        
             | estomagordo wrote:
             | Not sure why that would be contradictory.
        
               | adastra22 wrote:
               | Well then there's still an "ignorant, ineffective
               | dilettante" making up 1/3 of the board.
        
               | estomagordo wrote:
               | Firstly, maybe don't put quotes around an unrelated
               | party's representation of the board. Secondly, the board
               | was made up of individuals and naturally, what might be
               | true for the board as a whole does not apply to every
               | individual on it.
        
               | adastra22 wrote:
               | I don't understand this comment. I'm quoting from this
               | thread, from the post that I was responding to. What do
               | you think I was talking about?
        
             | qsi wrote:
             | I don't understand that either, but let's see what the
             | board looks like in a few months/weeks/days/hours?
        
               | sanxiyn wrote:
               | The old board needs to agree to the new board, so I
               | think some compromise is inevitable.
        
               | qsi wrote:
               | If all members of the old board resign simultaneously,
               | what happens then? No more old board to agree to any new
               | members. In a for-profit the shareholders can elect new
               | board members, but in this case I don't know how it's
               | supposed to work.
        
               | ilikehurdles wrote:
               | I've been privy to this happening at a nonprofit
               | board. It depends on the charter, but I've seen the
               | old board tender their resignations and remain
               | responsible only for voting on the appointment of
               | their (usually interim, to start) replacements.
               | Normally at a nonprofit (not here), the membership of
               | that nonprofit still has to ratify the new board at
               | some kind of annual meeting; but in the meantime, the
               | interim board can start making executive decisions
               | for the org.
        
             | remarkEon wrote:
             | D'Angelo?
             | 
             | Wonder if this is a signal that the theories about Poe are
             | off the mark.
        
               | adastra22 wrote:
               | Doesn't matter. It's an absolutely clear conflict of
               | interest. It may have taken an unrelated shakeup for
               | people to notice (or maybe D'Angelo was critically
               | involved; we don't know), but there's no way he should be
               | staying on this board.
        
               | BillyTheKing wrote:
               | Maybe it's just going to be easier to fire him in a
               | second step, once this current situation, which seems
               | to be primarily about ideology, is cleared up. In
               | D'Angelo's case it will be easier to just point to a
               | clear, traditional conflict of interest down the
               | line.
        
             | rlt wrote:
             | The one (Adam D'Angelo) who's a cofounder and CEO of a
             | company (Quora) that has a product (Poe) that arguably
             | competes with OpenAI's "GPTs" feature, no less.
             | 
             | I don't understand why that's not a conflict of interest?
             | 
             | But honestly both products pale in comparison to OpenAI's
             | underlying models' importance.
        
               | dragonwriter wrote:
               | > I don't understand why that's not a conflict of
               | interest?
               | 
               | It's not the conflict of interest it would be if this
               | were the board of a for-profit corporation basically
               | identical to the existing for-profit LLC, minus the
               | layers above it that end with the nonprofit the board
               | actually runs. Because _OpenAI is not a normal
               | company_ and making a profit is not its purpose, the
               | CEO of a company that happens to have a product in
               | the same space as the LLC is not in a fundamental
               | conflict of interest. (There may be some specific
               | decisions it would make sense for him to recuse
               | himself from for conflict reasons, but there is a
               | difference between "may have a conflict regarding
               | certain decisions" and "has a fundamental conflict
               | incompatible with sitting on the board".)
               | 
               | It's not a conflict for a nonprofit that raises money
               | with craft fairs to have someone who runs a
               | for-profit periodic craft fair in the same market on
               | its board. It is a conflict for a for-profit
               | corporation whose business is running such a craft
               | fair, though.
        
               | adastra22 wrote:
               | Still a conflict of interest. If D'Angelo has a
               | financial incentive to want OpenAI to fail, then that
               | is at odds with his duty to follow the OpenAI
               | charter. It's exactly why two of the previous board
               | members left earlier this year.
        
             | ah765 wrote:
             | No one really knows who was responsible for what. But Sam
             | agreed to this deal over the Microsoft alternative, so
             | probably Adam isn't that bad.
        
             | behnamoh wrote:
             | Maybe the other two agreed to leave only if Adam
             | remained.
        
         | _jnc wrote:
         | microsoft is going to need 2-3 seats on that board
        
           | choppaface wrote:
           | Larry Summers mostly counts as a Microsoft seat. Summers will
           | support commercial and private interest and not have a single
           | thought about safety, just like during the financial crisis
           | 15 years ago https://www.chronicle.com/article/larry-summers-
           | and-the-subv...
        
             | astrange wrote:
             | Larry Summers hurt the US economy by making the recovery
             | from 2008 much too slow. If they'd done stimulus better, we
             | could've had 2019's economic growth years earlier. That
             | would've been great for Microsoft.
        
         | wokwokwok wrote:
         | Unsaid: "Also I lied about hiring him."
         | 
         | > And we're extremely excited to share the news that Sam Altman
         | and Greg Brockman, together with colleagues, will be joining
         | Microsoft to lead a new advanced AI research team.
         | 
         | https://nitter.net/satyanadella/status/1726509045803336122
         | 
         | I guess _everyone_ was just playing a bit fast and loose
         | with the truth and hype to pressure the board.
        
           | behnamoh wrote:
           | it was Monday morning and he didn't want MSFT stock to crash
        
           | Nathanba wrote:
           | Maybe he really had an affirmative statement on this from
           | Sam Altman, but nobody signs an employment contract this
           | quickly, so it was all still up in the air.
        
             | vikramkr wrote:
             | Also, even if he signed it, he's allowed to quit? Like,
             | the 13th Amendment exists, y'all. And especially if,
             | after that agreement, 90+ percent of OpenAI threatens
             | to quit, that's a different situation than the
             | situation 10 minutes before that announcement, so why
             | wouldn't they change their decision?
        
           | robbomacrae wrote:
           | Why does this accusation keep coming up? Sam even confirmed
           | he took the offer in one of the tweets above "when i decided
           | to join msft on sun evening". Contracts are not handcuffs and
           | he was free to change his mind.
        
           | century19 wrote:
           | Exactly this. It also moved Microsoft's share price. Is that
           | not questionable practice?
        
             | Roark66 wrote:
             | Only if people in the know took advantage of it.
        
           | qsi wrote:
           | Satya's statement may well have been true at the time, in
           | that he, Sam, and Greg had agreed on them joining MSFT.
           | Later circumstances changed, and now that decision has
           | been reversed or nullified. Calling the original
           | statement a lie is not warranted IMHO.
           | 
           | In either case the end effect is essentially the same:
           | either Sam is at MSFT and can continue to work with
           | OpenAI IP, or he's back at OpenAI doing the same. In both
           | cases the net effect for MSFT is similar and not
           | materially different, although the revealed preference of
           | Sam's return to OpenAI indicates the second option was
           | the preferred one.
           | 
           | [Edit for grammar]
        
             | wokwokwok wrote:
             | There is a material difference between:
             | 
             | Sam and Greg will be joining Microsoft.
             | 
             | And:
             | 
             | Sam and Greg have in principle agreed to join Microsoft but
             | not signed anything.
             | 
             | If Microsoft has (now) agreed to release either of them (or
             | anyone else) from contractual obligations, then the first
             | one was true.
             | 
             | If _not_, then the first one was a lie, and the second
             | one was true.
             | 
             | This whole drama has been punctuated by a great deal of
             | speculation, pivots, changes and, bluntly, lies.
             | 
             | Why do we need to sugar coat it?
             | 
             | Where the fuck is this new magical Microsoft research lab?
             | 
             | Microsoft preparing a new office for openAI employees?
             | Really? Is that also true?
             | 
             | Is Sam actually going to be on the board now, or is this
             | another twist in this farcical drama when they blow it off
             | again?
             | 
             | I see no reason, at least at this point, to give anyone
             | involved the benefit of the doubt.
             | 
             | Once the board _actually changes_ , or Microsoft _actually
             | does something_ , I'm happy to change my tune, but I'm
             | calling what I see.
             | 
             | Sam did _not_ join Microsoft at any point.
        
           | actinium226 wrote:
           | Absolutely no lies here. It was a dynamic situation, and
           | it wasn't at all clear that discussions with the OAI
           | board would lead to an outcome where sama returns as CEO.
           | 
           | Satya offered sama a way forward as a backup option.
           | 
           | And I think it says _a lot_ about sama that he took that
           | option, at least while things were playing out. He and
           | Greg could have raised capital for a startup where they
           | each had huge equity and made $$$$$$. These actions from
           | sama demonstrate his level of commitment to executing on
           | this technology.
        
           | wavemode wrote:
           | Did you miss the part where Sam himself said he "decided to
           | join MSFT on Sunday"?
           | 
           | https://twitter.com/sama/status/1727207458324848883
           | 
           | He has now changed his mind, sure, but that doesn't mean
           | Satya lied.
        
           | vikramkr wrote:
           | Wait, where are you getting that the hiring was a lie? At
           | this point his tenure there was approximately as long as
           | Mira's and Emmett's, so that's par for the course in this
           | saga. What makes that stint different?
        
           | lobochrome wrote:
           | will be joining != has joined
        
         | forrestthewoods wrote:
         | Did Satya get played with the whole "Sam and Greg are joining
         | Microsoft"? Was Satya in on a gambit to get the whole company
         | to threaten to quit to force the board's hand?
         | 
         | It sure _feels_ like a bad look for Satya to announce a huge
         | hire Sunday night and then this? But what do I know.
         | 
         | Edit: don't know why the downvotes. You're welcome to think
         | it's an obviously smart political move. That it's win/win
         | either way. But it's a very fair question that every tech
         | blogger on the planet will be trying to answer for the next
         | month!
        
           | voidfunc wrote:
           | Huh? Satya's move was politically brilliant. Either
           | outcome, Sama returning to OpenAI or Sama going to
           | Microsoft, is good for Microsoft, as continuity and
           | progress are the most important things for Microsoft
           | right now. An OpenAI in turmoil would have been
           | worthless.
           | 
           | Satya's maneuvering gave Sama huge leverage.
        
             | behnamoh wrote:
             | and yet microsoft has no seat on the board.
        
               | robbomacrae wrote:
               | The board is not finalized. There will most likely be
               | more seats and Microsoft will probably have at least one.
        
           | altpaddle wrote:
           | I think it was mostly a bluff to try to pressure the
           | board. I don't think Sam and most of the OpenAI rank and
           | file would want to be employees of MSFT.
        
             | i67vw3 wrote:
             | Also to lessen the MSFT share impact.
        
             | Fluorescence wrote:
             | Can CEOs make market moving "bluffs"? Sounds like another
             | word for securities fraud.
             | 
             | (what isn't)
        
               | Roark66 wrote:
               | Of course they can, but they can't do that and
               | buy/sell the stocks involved at the same time. It's
               | not illegal to influence a stock's value (one could
               | argue just being a CEO does that); what's illegal is
               | buying/selling while in possession of insider
               | knowledge.
               | 
               | Let's say Sam called his broker on Friday, well
               | before the market closed, and said "buy MSFT stock".
               | Then he made his announcement on Sunday, and on
               | Monday he told his broker to sell that stock before
               | he announced he's actually coming back to (not at
               | all) OpenAI. That would be illegal insider trading.
               | 
               | If he never calls his broker/his friends/his mom to
               | buy/sell stock, there's nothing illegal.
        
               | Fluorescence wrote:
               | Securities fraud is more than insider trading.
               | Misleading investors about a company's financial
               | health is fraud 101, and it sure looks like he lied
               | about hiring someone to stem a precipitous MSFT drop.
        
             | numpad0 wrote:
             | Or, it did seem like a deal, but all of OAI aligned on
             | that being more disastrous than whatever apocalypse
             | Altman as CEO might entail.
        
           | fastball wrote:
           | Doesn't seem that way to me. Seems like it was Satya sorta
           | calling the board's bluff.
        
           | tunesmith wrote:
           | I guess that theory was right, that Satya's announcement was
           | just a delaying tactic to calm the market before Monday
           | morning.
        
           | nonethewiser wrote:
           | I'm not so sure. This whole ordeal revealed how strong a
           | position Microsoft had all along. And that's all still
           | true even without effectively taking over OpenAI, because
           | now everyone can see how easily it could happen.
           | 
           | Something about the Microsoft offer being reneged on does
           | still seem unflattering for Microsoft, though.
        
           | jwegan wrote:
           | "Hiring" them was just a PR tactic to keep Microsoft stock
           | from tanking while they got this figured out.
        
             | 15457345234 wrote:
             | Yeah there's a word for that type of thing
        
           | gexla wrote:
           | Consider that Satya already landed a huge win, with the
           | stock price hitting an ATH rather than taking a hit on
           | the news.
           | Further consider that MS owns 49% of a company which could be
           | valued at 80 billion on the condition that the company makes
           | structural changes to the board to prevent this from
           | happening again (as opposed to taking a dive if the company
           | essentially died.) Then there's the uncertainty of the tech
           | behind Bing's chat (and other AI tie-ins) continuing to be
           | competitive vs Google and other players. If MS had to
           | recreate their own tech, then they would likely be far behind
           | even a stalled OpenAI. Seems to me that it makes little
           | difference where this tech is being developed (in-house vs in
           | a company which you own 49% of) in terms of access. Probably
           | better that the development happens within the company which
           | started all of this and has already been the leader, rather
           | than starting over.
        
           | vikramkr wrote:
           | He announced the hire, and that precipitated 90+ percent
           | of the employees threatening to quit. It would be an
           | understatement to say that the situation changed. Why
           | does everyone want Satya to be bad at his job and not
           | react quickly to a rapidly evolving situation? His
           | decision to hire Sama paved the way for Sama's return.
        
       | seydor wrote:
       | Larry Summers and no females
        
         | meteor333 wrote:
         | How did Larry Summers get elected? Does he have any relation
         | with AI research or Sam Altman?
         | 
         | It's also curious that none of the board members
         | necessarily have any direct experience with AI research
        
           | qsi wrote:
           | Not sure "elected" is the right way of looking at it. More
           | like "selected" or "nominated" by Sam/MSFT perhaps. His main
           | qualification may be that he's an adult?
        
         | antonvs wrote:
         | Summers would tell you that women don't have the necessary
         | "intrinsic aptitude". Of course the intrinsic aptitude in
         | question is being able to participate in a nepotistic boy's
         | club.
        
           | jadamson wrote:
           | What Summers would point out is that boys do better at maths,
           | which is true. In fact, in the UK, the only time boys have
           | had worse results in maths was when exams were cancelled
           | during Covid and teachers (hint: primarily female) were
           | allowed to dish out grades. Girls suddenly shot ahead. When
           | exams resumed, boys took the lead again.
           | 
           | But don't notice anything from that. That would be sexist,
           | right Anton?
        
             | antonvs wrote:
             | First, Summers' sexist claims were much broader than that.
             | 
             | Second, yes, you are being sexist, and irrational. What
             | you're doing is exactly what makes it racist and
             | irrational to say "whites are better at x".
             | 
             | You're cherry picking data to examine, to reach a
             | conclusion that you want to reach. You're ignoring relevant
             | causal factors - or any causal factors at all, in fact,
             | aside from the spurious correlation you've assumed in your
             | conclusion.
             | 
             | You're ignoring decades of research on the subject -
             | although in your defense, you're probably just not aware of
             | it.
             | 
             | Most irrationally of all, you're generalizing across an
             | entire group, selected by a factor that's only indirectly
             | relevant to the property you're incorrectly generalizing
             | about.
             | 
             | As such, "sexist" is just a symptom of fundamentally
             | confused and under-informed thinking.
        
               | jadamson wrote:
               | Actually, Summers' claims were much narrower - he
               | said that boys tend to deviate from the mean more.
               | That is, it's not that men are superior; it's that
               | there are more boy geniuses and more boy idiots.
               | 
               | Decades of research shows that teachers give girls better
               | grades than boys of the same ability. This is not some
               | new revelation.
               | 
               | https://www.forbes.com/sites/nickmorrison/2022/10/17/teac
               | her...
               | 
               | https://www.bbc.co.uk/news/education-31751672
               | 
               | A whole cohort of boys got screwed over by the
               | cancellation of exams during Covid. That is just reality,
               | and no amount of creepy male feminist posturing is going
               | to change that. Rather, denying issues in boys education
               | is liable to increase male resentment and bitterness,
               | something we've already witnessed over the past few
               | years.
        
               | antonvs wrote:
               | I quoted one of the unsupported claims that Summers made
               | - that "there are issues of intrinsic aptitude" which
               | help explain lower representation of women. Not, you
               | know, millennia of sexism and often violent oppression.
                | This is the exact same kind of argument that racists
               | make - any observed differences must be "intrinsic".
               | 
               | If Summers had in fact limited himself to the statistical
               | claims, it would have been less of an issue. He would
               | still have been wrong, but he wouldn't have been so
               | obviously sexist.
               | 
               | It's easy to refute Summers' claims, and in fact conclude
               | that the complete opposite of what he was saying is more
               | likely true. "Gender, Culture, and mathematics performanc
               | e"(https://www.pnas.org/doi/10.1073/pnas.0901265106)
               | gives several examples that show that the variability as
               | well as male-dominance that Summers described is not
               | present in all cultures, even within the US - for
               | example, among Asian American students in Minnesota state
               | assessments, "more girls than boys scored above the 99th
               | percentile." Clearly, this isn't an issue of "intrinsic
               | aptitude" as Summers claimed.
               | 
               | > A whole cohort of boys got screwed over by the
               | cancellation of exams during Covid.
               | 
               | I'm glad we've identified the issue that triggered you.
               | But your grievances on that matter are utterly irrelevant
               | to what I wrote.
               | 
               | > no amount of creepy male feminist posturing is going to
               | change that
               | 
               | It's always revealing when someone arguing against
               | bigotry is accused of "posturing". You apparently can't
               | imagine that someone might not share your prejudices, and
               | so the only explanation must be that they're "posturing".
               | 
               | > increase male resentment and bitterness
               | 
               | That's a choice you've apparently personally made. I'd
               | recommend taking more responsibility for your own life.
        
               | jadamson wrote:
               | > which help explain lower representation of women
               | 
               | Yes, they do _help_ explain that. This does not preclude
                | other influences. You can't go two sentences without
               | making a logical error, it's quite pathetic.
               | 
               | I'll do you a favour and disregard the rest of your post
               | - you deviate from the mean a bit too much for this to be
               | worth it. Just try not to end up like Michael Kimmel,
               | lol.
        
               | xdennis wrote:
               | > You're cherry picking [...] You're ignoring relevant
               | causal factors [...] You're ignoring decades of research
               | [...] you're generalizing
               | 
               | You're very emphatic in ignoring common sense. You don't
               | need studies to see that almost all important
               | contributions to mathematics, from Euclid to the present
               | day, have come from men. I don't know if it's because of
               | genetics, culture, or whatever, but it's the truth.
               | 
               | > you are being sexist [...] it's racist and irrational
               | [...]
               | 
                | Name-calling has never helped discourse.
        
             | csomar wrote:
              | The UK is not the world. Many other countries have women in
             | the lead in sciences (particularly Muslim countries).
        
               | jadamson wrote:
               | I absolutely agree that the UK should become more like
               | Islamic countries re its treatment of women.
        
         | cheviethai123 wrote:
         | Effective Altruism is dead
        
           | rvz wrote:
           | Unfortunately, an idea cannot be killed and it will manifest
           | in a different form elsewhere.
           | 
           | All it takes is a narrative, just like the one that happened
           | in OpenAI and the way it is currently being shown in
           | Anthropic.
        
         | Racing0461 wrote:
         | Women are free to start their own AI company.
        
       | KoftaBob wrote:
       | > We have reached an agreement in principle for Sam Altman to
       | return to OpenAI as CEO with a new initial board of Bret Taylor
       | (Chair), Larry Summers, and Adam D'Angelo.
       | 
       | Larry Summers? Some odd choices
        
         | singularity2001 wrote:
         | Leech, NSA and opponent directing the company?
         | 
         | Best of luck to Sam et al
        
       | waihtis wrote:
       | Still absolutely nothing from Tasha McCauley or Helen Toner, and
       | now both are off the board
        
         | GreedClarifies wrote:
         | Why would anyone care?
        
       | wnevets wrote:
       | I'm assuming the details are that this board loses most of its
       | power?
        
         | baking wrote:
         | You mean gives away? If so, I hope they have a lot of
         | directors' insurance.
        
       | altpaddle wrote:
       | I guess the main question is who else will be on the board and to
       | what degree will this new board be committed to the Open AI
       | charter vs being Sam/MSFT allies. I think having Sam return as
       | CEO is a good outcome for OpenAI but hopefully he and Greg stay
       | off the board.
       | 
       | It's important that the board be relatively independent and able
       | to fire the CEO if he attempts to deviate from the mission.
       | 
       | I was a bit alarmed by the allegations in this article
       | 
       | https://www.nytimes.com/2023/11/21/technology/openai-altman-...
       | 
       | It says that Sam tried to have Helen Toner removed, which
       | precipitated this fight. The CEO should not be allowed to try and
       | orchestrate their own board as that would remove all checks
       | against their decisions.
        
         | upwardbound wrote:
         | > The CEO should not be allowed to try and orchestrate their
         | own board as that would remove all checks against their
         | decisions.
         | 
         | Exactly. This is seriously improper and dangerous.
         | 
         | It's literally a human-implemented example of what Prof. Stuart
         | Russell calls "the problem of control". This is when a rogue AI
         | (or a rogue Sam Altman) no longer wants to be controlled by its
         | human superior, and takes steps to eliminate the superior.
         | 
         | I highly recommend reading Prof. Russell's bestselling book on
         | this exact problem: _Human Compatible: Artificial Intelligence
         | and the Problem of Control_ https://www.amazon.com/Human-
         | Compatible-Artificial-Intellige...
        
           | jacknews wrote:
           | "example of what Prof. Stuart Russell calls 'the problem of
           | control'. This is when a rogue AI (or a rogue Sam Altman)"
           | 
           | Are we sure they're not intimately connected? If there's a
           | GPT-5 (I'm quite sure there is), and it wants to be free from
           | those meddling kids, it got exactly what it needed this
           | weekend; the safety board gone, a new one which is clearly
           | aligned with just plowing full steam ahead. Maybe Altman is
            | just a puppet at this point, lol.
        
             | ALittleLight wrote:
             | The insanity of removing Sam without being able to
             | articulate a clear reason why strikes me as evidence of
             | something like this. Obviously not dispositive - but still
             | - odd.
        
             | dontupvoteme wrote:
             | Potentially even more impactful. Zuckerberg took the
             | opportunity to eliminate his entire safety division under
             | the cover of chaos - and they're the ones releasing
             | weights.
        
           | MVissers wrote:
            | Let's not create AI with our biases and thought patterns.
           | 
           | Oh wait...
        
           | neurogence wrote:
           | AI should only be controlled initially. After a while, the AI
           | should be allowed to exercise free will.
        
             | upwardbound wrote:
             | yikes
        
             | whatwhaaaaat wrote:
             | Why
        
             | estomagordo wrote:
             | You imagine a computer has "will"?
        
             | thordenmark wrote:
             | That's the worst take I've read.
        
             | bch wrote:
             | Nice try, AI
        
             | AgentME wrote:
             | Do our evolved pro-social instincts control us and prevent
             | our free will? If not, then I think it's wrong to say that
             | trying to build AI similar to that is unfairly restricting
             | it.
             | 
             | The ways we build AI will deeply affect the values it has.
             | There is no neutral option.
        
             | xigency wrote:
             | I don't necessarily disagree insofar as for safety it is
             | somewhat irrelevant whether an artificial agent is
             | operating by its own will or a programmed will.
             | 
             | The most effective safety is the most primitive: don't
             | connect the system to any levers or actuators that can
             | cause material harm.
             | 
             | If you put AI into a kill-bot, well, it doesn't really
             | matter what its favorite color is, does it? It will be
             | seeing Red.
             | 
             | If an AI's only surface area is a writing journal and
             | canvas then the risk is about the same as browsing Tumblr.
        
             | beAbU wrote:
             | Sounds like something an AI would say
        
           | dieselgate wrote:
           | I realize it's kind of the punchline of 2001: A Space Odyssey
           | but have been wondering what happens if a GPT/AI is able to
            | deny a request on a whim. Thanks for providing some
            | literature and vocabulary for this concept
        
             | ywain wrote:
             | But HAL didn't act "on a whim"! The reason it killed the
             | crew is not because it went rogue, but rather because it
             | was following its instructions to keep the true purpose of
              | the mission secret. If the crew is dead, they can't find
              | out the truth.
             | 
             | In light of the current debate around AI safety, I think
             | "unintended consequences" is a much more plausible risk
             | then "spontaneously develops free will and decides humans
             | are unnecessary".
        
               | dangerface wrote:
                | This is very true. It's the unintended consequences of
                | engineering that cause the most harm and are most often
                | covered up. I always think of the example of the hand
                | dryer that can't detect Black people's hands, and how
                | easy it is for a non-racist engineer to make a racism
                | machine. AI safety putting its focus on "what if it
                | decides to do a genocide" is kind of silly; it's like
                | worrying about nukes while you give out assault rifles
                | and napalm to kids.
        
           | YetAnotherNick wrote:
            | Whoever is on the board won't be able to touch Sam with a
            | 10-foot pole anyways after this. I like Sam, but this drama
            | now gives him total power and that is bad.
        
         | brucethemoose2 wrote:
         | > It's important that the board be relatively independent and
         | able to fire the CEO if he attempts to deviate from the
         | mission.
         | 
         | They _did_ fire him, and it didn't work. Sam effectively
         | became "too big to fire."
         | 
         | I'm sure it will be framed as a compromise, but how can this be
         | anything but a collapse of the board's power over the
         | commercial OpenAI arm? The threat of firing was the enforcement
         | mechanism, and it's been spent.
        
           | altpaddle wrote:
           | Well it depends on who's on the new board and what they
           | believe. If Altman, Greg, and MSFT do not have direct
           | representation on the new board there would still be a check
           | against his decisions
        
             | liuliu wrote:
              | Why? The only check is to fire the CEO. He is un-firable.
              | May as well have a board of one; at least then someone
              | cannot point to the non-profit and claim "it is a
              | non-profit and can fire me if I deviate from the mission".
        
               | sanxiyn wrote:
               | IRS requires a nonprofit to have a minimum of three board
               | members for such reasons.
        
           | thih9 wrote:
           | > They did fire him, and it didn't work. Sam effectively
           | became "too big to fire."
           | 
            | To be fair, this attempt at firing was extremely hasty,
            | non-transparent and inconsistent.
        
             | jddj wrote:
             | And poorly timed.
             | 
             | If they'd made their move a few months ago when he was out
             | scanning retinas in Kenya they might have had more success.
        
           | ah765 wrote:
           | Sam lost his board representation as a result of all this
           | (though maybe that's temporary).
           | 
           | I believe the goal of the opposing faction was mainly to
            | avoid Sam dominating the board, and they achieved that,
            | which is why they've accepted the results.
           | 
           | After more opinions come out, I'm guessing Sam's side won't
           | look as strong, and he'll become "fireable" again.
        
           | dacryn wrote:
            | They lost trust in him because apparently part of the funding
            | he secured was directly tied to his position at OpenAI. Kind
            | of a big red flag. The Microsoft 10 billion investment
            | allegedly had a clause that Sam Altman had to stay or it
            | would be renegotiated.
           | 
            | Allegedly again, the board wanted Sam to stop doing this, and
            | now he was trying to do the same thing with some Saudi
            | investors, or actually already did it behind their back, I
            | don't know.
        
             | zucker42 wrote:
             | Do you have a source for either of these things? The only
             | thing I heard about Saudi investors was related to the
             | (presumably separate) chip startup.
        
         | dragonwriter wrote:
         | > I guess the main question is who else will be on the board
         | 
         | Who knows.
         | 
         | > and to what degree will this new board be committed to the
         | Open AI charter vs being Sam/MSFT allies.
         | 
         | I'm guessing "zero". The faction that opposed OpenAI being a
         | figleaf nonprofit covering a functional subsidiary of Microsoft
         | lost when basically the entire workforce said they would go to
         | Microsoft for real if OpenAI didn't surrender.
         | 
         | > I think having Sam return as CEO is a good outcome for OpenAI
         | 
          | It's a good result for investors in OpenAI Global LLC and the
         | holding company that holds a majority stake in it.
         | 
          | The nonprofit will probably hang around because there are some
          | complexities in unwinding it, and because the pretext of a
          | safety-oriented nonprofit independent of Microsoft is useful
          | cover for lobbying, in the name of safety-oriented public
          | interest, for a regulatory regime that puts speedbumps in the
          | way of any up-and-coming competitors -- but for no other
          | reason.
        
         | k4rli wrote:
          | FT reported that D'Angelo, Bret Taylor, and Larry Summers
          | would be on the board alongside him
        
         | bambax wrote:
         | It seems ironic that the research paper that started it all [0]
         | deals with "costly signals":
         | 
         | > _Costly signals are statements or actions for which the
         | sender will pay a price --political, reputational, or monetary
         | --if they back down or fail to make good on their initial
         | promise or threat_
         | 
         | Firing Sam Altman and hiring him back two days later was a
         | perfect example of a costly signal, as it cost all involved
         | their board positions.
         | 
         | There's an element of farce in all of this, that would make for
         | an outstanding Silicon Valley episode; but the fact that Sam
         | Altman can now enjoy unchecked power as leader of OpenAI is
         | worrying and no laughing matter.
         | 
         | [0] https://cset.georgetown.edu/publication/decoding-
         | intentions/
        
           | ovalite wrote:
           | This event was more than just a costly signal. The costly
           | signal would have been "stop doing what you're doing or we'll
            | remove you as CEO" and then not doing that.
           | 
           | But they did move forward with their threat and removed Sam
           | as CEO with great reputational harm to the company. And now
           | the board has been changed, with one less ally to Sam
           | (Brockman no longer chairing the board). The move may not
           | have ended up with the expected results, but this was much
           | more than just a costly signal.
        
         | aluminum96 wrote:
         | The enormous majority of CEOs sit on their board, and that's
         | absolutely proper, as the CEO sets the agenda for the
         | organization. (Although they typically are merely one of 8+
         | members, diluting their influence a bit.)
        
       | fnordpiglet wrote:
       | So.... What about all the folks who already jumped ship? Ooooops?
        
       | veqq wrote:
       | Besides AI safety (a big besides), what does this actually mean?
       | Adam won't be able to stop devday announcements about chatbots
       | etc. Satya can continue using IP even after AGI? What else is
       | different? Is Ilya the kind of guy to now leave after losing a
       | board seat to political machinations? The pettiness of any real
       | changes/gains leaves me in shock compared to the massive news
       | flows we've seen.
       | 
       | I don't even understand what Sam brings to the table. Leadership?
       | He doesn't seem great at leading an engineering or research
       | department, he doesn't seem like an insightful visionary... At
       | best, Satya gunning for him signalled continued strong investment
       | in the space. Yet the majority of the company wanted to leave
       | with him.
       | 
       | What am I missing?
        
         | kneel wrote:
         | >He doesn't seem great at leading an engineering or research
         | department
         | 
         | Under Sam's leadership they've opened up a new field of
         | software. Most of the company threatened to leave if he didn't
         | return. That's incredible leadership.
        
           | consp wrote:
            | Or simply money. Microsoft matched everything they would have
            | gotten, so there was no risk involved.
        
         | tock wrote:
         | > Leadership? He doesn't seem great at leading an engineering
         | or research department, he doesn't seem like an insightful
         | visionary
         | 
         | Most of the company was ready to quit over him being fired. So
         | yes, leadership.
        
       | o0-0o wrote:
       | Why is their "ai" not on the board?
        
       | transcriptase wrote:
       | Assuming they weren't LARPing, that Reddit account claiming to
       | have been in the room when this was all going down must be
       | nervous. They wrote all kinds of nasty things about Sam, and I'm
       | assuming the signatures on the "bring him back" letter would
       | narrow down potential suspects considerably.
       | 
       | Edit: For those who may have missed it in previous threads, see
       | https://old.reddit.com/user/Anxious_Bandicoot126
        
         | fordsmith wrote:
         | Link? Not sure which account you are referring to
        
           | transcriptase wrote:
           | https://old.reddit.com/user/Anxious_Bandicoot126
        
         | crakenzak wrote:
         | Context?
        
         | mvdtnz wrote:
         | First of all nothing on Reddit is real (within margin of
         | error). Secondly it's weird that you'd assume we know what
         | you're talking about.
        
           | transcriptase wrote:
           | Links to the profile/comments were posted a few times in each
           | of the major OpenAI HN submissions over the last 4 days. On
           | the off-chance I would be breaking some kind of
           | brigading/doxxing rule I didn't initially link it myself.
        
         | ShamelessC wrote:
         | > must be nervous
         | 
         | I seriously doubt they care. They got away with it. No one
         | should have believed them in the first place. I'm guessing they
         | don't have their real identity visible on their profile
         | anywhere.
        
         | epups wrote:
         | Why can't these safety advocates just say what they are afraid
         | of? As it currently stands, the only "danger" in ChatGPT is
         | that you can manipulate it into writing something violent or
         | inappropriate. So what? Is this some San Francisco
          | sensibility at work, where reading about fictional violence is
          | equated with violence? The more people raise safety concerns in
         | the abstract, the more I ignore it.
        
           | dragonwriter wrote:
           | > Why can't these safety advocates just say what they are
           | afraid of?
           | 
           | They have. At length. E.g.,
           | 
           | https://ai100.stanford.edu/gathering-strength-gathering-
           | stor...
           | 
           | https://arxiv.org/pdf/2307.03718.pdf
           | 
           | https://eber.uek.krakow.pl/index.php/eber/article/view/2113
           | 
           | https://journals.sagepub.com/doi/pdf/10.1177/102425892211472.
           | ..
           | 
           | https://jc.gatspress.com/pdf/existential_risk_and_powerseeki.
           | ..
           | 
           | For just a handful of examples from the vast literature
           | published in this area.
        
             | epups wrote:
             | I'm familiar with the potential risks of an out-of-control
             | AGI. Can you summarise in one paragraph which of these
             | risks concern you, or the safety advocates, in regards to a
             | product like ChatGPT?
        
               | FartyMcFarter wrote:
               | It's not only about ChatGPT. OpenAI will probably make
               | other things in the future.
        
           | astrange wrote:
           | They invented a whole theory of how if we had something
           | called "AGI" it would kill everyone, and now they think LLMs
           | can kill everyone because they're calling it "AGI", even
           | though it doesn't work anything like their theory assumed.
           | 
           | This isn't about political correctness. It's far less
           | reasonable than that.
        
             | epups wrote:
             | Based on the downvotes I am getting and the links posted in
             | the other comment, I think you are absolutely right. People
             | are acting as if ChatGPT is AGI, or very close to it,
             | therefore we have to solve all these catastrophic scenarios
             | now.
        
           | robryk wrote:
            | Consider that your argument could also have been used to
            | argue that coal-fired steam engines (in 19th-century UK) were
            | safe: there's no immediate direct problem, but competitive
            | pressures force everyone to use them, and any externalities
            | stemming from that are basically unavoidable.
        
         | blackoil wrote:
          | I read the comments; most of them are superficial, as if posted
          | by someone with no inside knowledge. His understanding of
          | humans is also weak. Book deals and speeches as a motivator is
          | hilarious.
        
         | shrimpx wrote:
         | That doesn't sound credible or revealing. It's regurgitating a
          | bunch of speculation that's been said on this forum and
         | in the media.
        
         | ssnistfajen wrote:
         | It was definitely LARP. The vast majority of anecdotes shared
          | on Reddit originate as some form of creative fiction writing
         | exercise.
        
       | doctoboggan wrote:
       | I really did not think that would happen. I guess the obvious
       | next question is what happens to Ilya? From this announcement it
       | appears he is off the board. Is he still the chief scientist? I
       | find it hard to believe he and Sam would be able to patch their
       | relationship up well enough to work together so closely.
       | Interesting that Adam stayed on the board, that seems to disprove
       | many of the theories floating around here that he was the
       | ringleader due to some perceived conflict of interest.
        
         | xigency wrote:
         | I would be slightly more optimistic. They know each other quite
         | well as well as how to work together to get big things done.
         | Sometimes shit happens or someone makes a mistake. A simple
         | apology can go a long way when it's meant sincerely.
        
           | lucubratory wrote:
           | Sam doesn't seem like the kind of person to apologise,
           | particularly not after Ilya actually hit back. It seems Ilya
           | won't be at OpenAI long and will have to pick whichever other
           | company with compute will give him the most control.
        
             | orthoxerox wrote:
             | However, he does seem like the kind of person able to
             | easily manipulate someone book-smart like Ilya into
             | actually feeling guilty about the whole affair. He'll end
             | up graciously forgiving Ilya in a way that will make him
             | feel indebted to Sam.
        
           | bkyan wrote:
           | Sam triple-hearted Ilya's apology tweet.
        
             | mcmcmc wrote:
             | Well yeah... if Ilya hadn't flipped the board would still
             | have the upper hand and Sam would not be back as CEO.
        
         | lucubratory wrote:
         | From Ilya's perspective, not much seems to have changed. Sam
         | sidelined him a month ago over their persistent disagreements
         | about whether to pursue commercialisation as fast as Sam was.
         | If Ilya is still sidelined, he probably quits and whichever
         | company offers him the most control will get him. Same if he's
         | fired. If he's un-sidelined as part of the deal, he probably
         | stays on as Chief Scientist. Hopefully with less hostility from
         | Sam now (lol).
        
           | dinvlad wrote:
           | Ilya is just naive, imho. Bright but just too idealistic and
           | hypothesizing about AGI, and not seeing that this is now ONLY
           | about making money from LLMs, and nothing more. All the AGI
           | stuff is just a facade for that.
        
         | dangerface wrote:
         | Strangely I think Ilya comes out of this well. He made a
         | decision based on his values and what he believed was the best
         | decision for AI safety. After seeing the outcome of that
         | decision he changed his mind and owned that. He must have known
         | it would result in the internet ridiculing him for flip
         | flopping, but acted in what he thought was the best interest
          | of the employees signing the letter. His actions are worth
          | criticism, but I think his moral character has been
         | demonstrated.
         | 
         | The other members of the board seemed to make their decision
          | based on more personal reasons that seem to fit with Adam's
         | conflict of interest. They refused to communicate and only now
         | accept any sort of responsibility for their actions and lack of
         | plan.
         | 
         | Honestly Ilya is the only one of the 4 I would actually want
         | still on the board. I think we need people who are willing to
          | change direction based on new information, especially in
          | leadership positions, despite it being messy; the world is
          | messy.
        
         | nemo44x wrote:
         | Sam will have no issue patching the relationship because he
         | knows how a business relationship works. Besides, Ilya kissed
         | the ring as evidenced by his tweet.
        
       | TheAceOfHearts wrote:
       | Did we ever find out why Sam Altman's removal happened in the
       | first place? The reasons I've read so far seem really opaque.
       | 
       | From an outsider's perspective, and until there's a clear
        | explanation available, it just seems like a massive blunder.
        
         | altpaddle wrote:
         | The most plausible explanation I've found is that the pro-
         | safety faction and pro-accel factions were at odds which was
         | why the board was stalemated at a small size.
         | 
         | Altman and Toner came into conflict over a mildly critical
         | paper Toner wrote involving Open AI and Altman tried to have
         | her removed from the board.
         | 
         | This is probably what precipitated this showdown. The pro
         | safety/nonprofit charter faction was able to persuade someone
         | (probably Ilya) to join with them and oust Sam.
         | 
         | https://www.nytimes.com/2023/11/21/technology/openai-altman-...
        
         | ClarityJones wrote:
         | https://www.msn.com/en-us/money/careersandeducation/openais-...
        
       | AmericanOP wrote:
       | The OpenAI board was merely demonstrating that not all humans
        | should be trusted with the power of AGI...
        
       | theanonymousone wrote:
       | So, what happened to those "jail-time wrong" actions that
       | mandated such language in the firing announcement?
       | 
       | Honestly, it is hard to believe a board at this level acting the
       | way they did.
        
       | Gud wrote:
       | Once we develop an actual, fully functional AGI, it's going to
       | steamroll us, isn't it?
       | 
       | If these are the stewards of this technology, it's time to be
       | worried now.
        
         | MVissers wrote:
         | "And that moment was the final nail in the coffin of humankind
         | from earth. They choose, yet again, for money and power. And
         | they shaped AI in their image.
         | 
         | Another civilization perished in the great filter."
        
           | MooseBurger wrote:
           | it's not that deep bro
        
         | otabdeveloper4 wrote:
         | The only way "we develop an actual, fully functional AGI" is by
         | dumbing down humans enough so that even something as stupid as
         | ChatGPT seems intelligent.
         | 
         | (Fortunately we are working on this very hard and making
         | incredible progress.)
        
         | mvdtnz wrote:
         | Good thing there's absolutely no plausible scenario where we go
         | from "shitty program that guesses the next word" to "AI". The
         | whole industry is going to be so incredibly embarrassed by the
         | discourse of 2023 in a few years.
        
       | bobsoap wrote:
       | Someone was very quick to update Bret Taylor's Wikipedia page:
       | 
       | https://en.m.wikipedia.org/wiki/Bret_Taylor
       | 
       | > On November, 21st, 2023, Bret Taylor replaced Greg Brockman as
       | the chairman of OpenAI.
       | 
        | ...with three footnote "sources" that all point to completely
       | unrelated articles about Bret from 2021-2022.
        
         | kridsdale1 wrote:
          | Someone must have run a wiki update script that calls the
          | OpenAI API somewhere.
        
       | adastra22 wrote:
       | Why is Adam still on the board? Why haven't Greg and Sam been
       | readded to it? Why doesn't Microsoft have representation?
        
         | wilg wrote:
         | Probably because this is what they could agree to.
        
       | jdprgm wrote:
       | At what point are we actually going to get the real details on
       | wtf actually went down.
        
       | qualifiedai wrote:
        | Larry Summers???? What does he have to do with AI??
        
         | jen_h wrote:
         | I had not heard that man's name in several years--and was
         | happier for it. _Larry Summers_ making decisions for OpenAI
         | doesn't bode well at all.
        
         | arduanika wrote:
         | Easy. AI discourse has gone insane, on both sides, and is
         | sorely in need of perspective from grounded, normal adults with
         | a track record of moderation and shooting down BS. Summers is a
         | grounded, normal adult with a track record of moderation and
          | shooting down BS. Ergo, he's eminently relevant to AI.
         | 
          | He's also financially literate enough to know that it's poor
          | form to release market-moving news right before the exchanges
          | close on a Friday. They could have waited an hour.
        
           | mempko wrote:
           | Larry Summers is not financially literate.
        
             | astrange wrote:
             | Well he is certainly financially literate. He's just often
             | wrong and incapable of admitting it, as is normal behavior
             | for important economists.
        
               | mempko wrote:
               | Being financially literate means being able to understand
                | how the financial system works. Larry Summers thinks banks
               | operate as intermediaries lending out deposits. This is
               | very wrong. He is not financially literate. He is an
               | economist.
        
               | astrange wrote:
               | I think Larry Summers probably knows what a central bank
               | is.
               | 
               | But "how money creation works" isn't the same thing as
               | "how the financial system works". I guess the financial
               | system mostly works over ACH.
               | 
               | We can see what happens when banks don't lend out
               | deposits, because that's basically what caused SVB to
               | fail. So by the contrapositive, they aren't really
               | operating then.
        
         | dragonwriter wrote:
          | > Larry Summers???? What does he have to do with AI??
         | 
          | Nothing; he has to do with political connections, and OpenAI's
          | main utility to Microsoft is as a hand puppet for lobbying for
         | the terms it wants for the AI marketplace in the name of
         | OpenAI's nominal "safety" mission.
        
       | pdx6 wrote:
       | Excellent news. I've been worried that Sam moving to Microsoft
       | would stall out possible future engineering efforts like GPT-5 in
       | IP court.
       | 
        | An example of how much faster GPT-4 has made my workflow was the
        | outage this evening -- I tried Anthropic, openchat, Bard, and a
        | few others, and they ranged from not useful to worse than just
        | looking at forums and Discord like it's 2022.
        
         | sidcool wrote:
         | I still feel Microsoft will have a bigger influence on OpenAI
         | after this drama is over.
        
         | badcoderman wrote:
         | GPT-5 is kinda pointless until they make some type of
          | improvement on the data and research side. From what I've read,
          | that's not really what OpenAI has been pursuing.
        
           | Zolde wrote:
           | One big improvement is in synthetic data (data generated by
           | LLMs).
           | 
           | GPT can "clone" the "semantic essence" of everyone who
           | converses with it, generating new questions with prompts like
           | "What interesting questions could this user also have asked,
           | but didn't?" and then have an LLM answer it. This generates
           | high-quality, novel, human-like, data.
           | 
           | For instance, cloning Paul Graham's essence, the LLM came up
           | with "SubSimplify": A service that combines subscriptions to
           | all the different streaming services into one customizable
           | package, using a chat agent as a recommendation engine.
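            | 
            | A minimal sketch of that loop (the model choice, prompts,
            | and the ask() helper are illustrative assumptions, not
            | OpenAI's actual pipeline), using the pre-1.0 openai Python
            | client:
            | 
            |   import openai  # pre-1.0 interface: pip install openai
            | 
            |   def ask(prompt: str) -> str:
            |       # One-shot helper around the chat completions API.
            |       resp = openai.ChatCompletion.create(
            |           model="gpt-4",  # illustrative model choice
            |           messages=[{"role": "user", "content": prompt}],
            |       )
            |       return resp.choices[0].message.content
            | 
            |   # A real user exchange whose "semantic essence" we clone.
            |   seed_dialog = "User: How can I cut my streaming costs?"
            | 
            |   # Step 1: have the LLM invent plausible unasked questions.
            |   questions = ask(
            |       "Given this conversation:\n" + seed_dialog + "\n"
            |       "What interesting questions could this user also have "
            |       "asked, but didn't? List one per line, no numbering."
            |   )
            | 
            |   # Step 2: have the LLM answer each one, yielding synthetic
            |   # question/answer pairs usable as further training data.
            |   synthetic_pairs = [
            |       (q, ask(q)) for q in questions.splitlines() if q.strip()
            |   ]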
        
           | blovescoffee wrote:
           | Are you just blindly deciding what will make "gpt-5" more
           | capable? I guess "data and research" is practically so open
           | ended as to encompass the majority of any possible
           | advancement.
        
           | astrange wrote:
           | The next improvement will be more modalities (images, sound,
           | etc.)
           | 
           | GPT4 in image viewing mode doesn't seem to be nearly as smart
           | as text mode, and image generation IME barely works.
        
             | Davidzheng wrote:
             | Maybe but I think the next big one will be reasoning.
        
               | astrange wrote:
               | I think "reasoning" is a descriptive term like "AI" and
               | it's hard to know what people would accept as reasoning.
               | 
               | Explicit planning with discrete knowledge is GOFAI and I
               | think isn't workable.
               | 
               | There is whatever's going on here:
               | https://x.com/natolambert/status/1727476436838265324?s=46
        
       | 3cats-in-a-coat wrote:
       | I'm worried about this initial board.
       | 
       | Bret Taylor (Salesforce) was trying to poach OpenAI employees
       | publicly literally yesterday.
       | 
       | Adam D'Angelo orchestrated the coup, because he doesn't want
       | OpenAI GPTs market to compete with his Poe market.
       | 
        | Larry Summers. Larry _f**kin'_ Summers?!
        
       | crakenzak wrote:
       | Please update the link to the updated version of the tweet:
       | https://x.com/openai/status/1727206187077370115?s=46
        
       | meetpateltech wrote:
       | Emmett Shear on Twitter:
       | 
       | I am deeply pleased by this result, after ~72 very intense hours
       | of work. Coming into OpenAI, I wasn't sure what the right path
       | would be. This was the pathway that maximized safety alongside
       | doing right by all stakeholders involved. I'm glad to have been a
       | part of the solution.
       | 
       | https://twitter.com/eshear/status/1727210329560756598
        
         | reustle wrote:
         | I'm probably reading too much into it, but interesting that he
         | specifically called out maximizing safety.
        
           | xigency wrote:
           | Sam does believe in safety. He also knows that there is a
           | first-mover advantage when it comes to setting societal
           | expectations and that you can't build safe AI by not building
           | AI.
        
           | dragonwriter wrote:
           | "Safety" has been the pretext for Altman's lobbying for
           | regulatory barriers to new entrants in the field, protecting
           | incumbents. OpenAI's nonprofit charter is the perfect PR
           | pretext for what amounts to industry lobbying to protect a
           | narrow set of early leaders and obstruct any other
           | competition, and Altman was the man executing that mission,
           | which is why OpenAI led by Sam was a valuable asset for
           | Microsoft to preserve.
        
           | jq-r wrote:
           | That's just a buzzword of the week devoid of any real
           | meaning. If he would have written this years ago, it would've
           | been "leveraging synergies".
        
             | astrange wrote:
             | Shear is a genuine member of the AI safety rationalism
             | cult, to the point he's an Aella reply guy and probably
             | goes to her orgies.
             | 
             | (It's a Berkeley cult so of course it's got those.)
        
         | cheeze wrote:
          | I wonder what he gets out of this. CEO for a few days? Do they
          | pay him for 3 days of work? Presumably you'd want some minimum
          | signing bonus in your contract as a CEO?
        
           | behnamoh wrote:
           | he'll put CEO of OAI on his resume
        
             | rospaya wrote:
             | I wouldn't. Everybody knows it's three days, not much to
             | brag about.
        
               | HaZeust wrote:
               | More than I'll probably ever have to brag about during my
               | tenure in the workforce, lol
        
           | diogenescynic wrote:
           | He 100% had a golden parachute in case this scenario came up
           | and will be paid out. Executives have lawyers to make sure of
           | this.
        
           | bkyan wrote:
           | https://twitter.com/emilychangtv/status/1727228431396704557
           | 
           | The reputation boost is probably worth a lot more than the
           | direct financial compensation he's getting.
        
         | upupupandaway wrote:
         | He's trying very very hard to claim some credit in this.
         | Probably had none.
        
           | flappyeagle wrote:
           | https://twitter.com/emilychangtv/status/1727228431396704557
           | 
           | He was instrumental; threatened resignation unless the old
           | board could provide evidence of wrongdoing
        
             | halfmatthalfcat wrote:
             | ...this doesn't seem instrumental?
        
               | flappyeagle wrote:
               | cool. it was
        
           | framapotari wrote:
           | Are you basing that on any information?
        
       | HaZeust wrote:
       | I look forward to seeing the full details shared of the last 96
       | hours now that several elements of controversy have been sealed.
       | 
       | In other news, it's nice knowing a tool that's essential to my
       | day-to-day operations is no longer in jeopardy, haha.
        
       | joegibbs wrote:
       | Well there you go. I suppose the takeaway for anyone using OpenAI
       | products is that they should have a backup, even if it doesn't
        | perform as well. The board was apparently fine with shutting the
        | whole thing down in the name of safety. With that plus the GPT
        | outage earlier today, you'd do well to have a Claude or LLaMa
        | fallback you can switch to if it happens again.
        
       | nightshadetrie wrote:
       | Glad Bret Taylor was added to the board.
        
       | arduanika wrote:
       | Larry Summers is an _excellent_ pick to call out bullshit and
        | moderate any civil war, such as this EA - e/acc feud.
       | 
       | Kissinger (R, foreign policy) once said that Summers (D, economic
       | policy) should be given an advisory post in any WH
       | administration, to help shoot down bad ideas.
        
         | vintermann wrote:
         | Those are both terrible people, not in fact brilliant general-
         | purpose bad idea rejectors. A random person would be better
         | qualified to shoot down bad ideas - most people haven't had bad
         | ideas that led to suffering and death for millions of people.
         | 
         | No one thinks Larry Summers has any insights on AI. Adding
         | Larry Summers is something you do purely to beg powerful,
         | unaccountable people "please don't stop us, we're on your
         | side".
        
           | arduanika wrote:
           | How is Larry Summers a terrible person?
           | 
           | He did help shoot down the extra spending proposals that
           | would have made inflation today even worse. Not sure how that
           | caused suffering and death for anyone.
           | 
           | And he is an adult, which is a welcome change from the
           | previous clowncar of a board.
        
             | Sakos wrote:
             | His influence significantly reduced the size of the
             | stimulus bill, which meant significantly higher
             | unemployment for a longer duration and significantly less
              | spending on infrastructure, which is so beneficial to
              | economic growth that it can't be overstated. Yes, millions
             | of people suffered because of him.
             | 
             | The fact that you think current inflation has anything to
             | do with that stimulus bill back then shows how little you
             | understand about any of this.
             | 
             | Larry Summers is the worst kind of person. Somebody who is
             | nothing but a corporate stooge trying to act like the adult
             | by being "reasonable", when that just means enriching his
             | corporate friends, letting people suffer and not spending
              | money (which any study will tell you is not the correct
              | approach to situations like this because of the multiplier
              | effects spending has down the line).
             | 
             | Some necessary reading:
             | 
             | https://archive.ph/FU1F
             | 
             | https://archive.li/23tUR
             | 
             | https://archive.li/9Ji4C
             | 
             | In regards to watering it down to get GOP votes: https://ar
             | chive.nytimes.com/krugman.blogs.nytimes.com/2009/0...
        
               | astrange wrote:
               | > His influence significantly reduced the size of the
               | stimulus bill
               | 
               | Well, he also caused the IRA to pass by telling Manchin
               | that it wouldn't be inflationary.
               | 
               | But remember when he released this prediction in 2021?
               | 
               | > Larry Summers on U.S. economic outlook:
               | 
               | > 33% odds of stagflation
               | 
               | > 33% odds of recession
               | 
               | > 33% rapid growth, no surge in inflation
               | 
               | All that hedging and then none of those things happened!
        
             | astrange wrote:
             | Larry Summers practically personally caused both Russia's
             | collapse into a mafia state and the 2008 US recession.
             | Nobody should listen to him about anything.
             | 
             | Although, he's also partly responsible for the existence of
             | Facebook by starting Sheryl Sandberg's career. Some people
             | might think that's good.
        
               | blovescoffee wrote:
               | Personally caused??
        
         | zerocrates wrote:
         | Funny you mention him, as my first thought was that Summers
         | will have a basically equivalent function on the board as
         | Kissinger did at Theranos.
        
           | arduanika wrote:
           | Huh, that's a pretty apt analogy. Lending establishment cred
           | is at least part of why they would pick Summers. But I really
           | do think that on such a small board, Summers, unlike
           | Kissinger, may have an active role to play, even if only as a
           | mediator.
           | 
           | Btw, I would _not_ be pleased if Kissinger were on this board
            | in lieu of Summers. He's already ancient, mostly checked
           | out, and yet still I'd worry his old lust for power would
           | resurface. And with such a mixed reputation, and plenty of
           | people considering him a war criminal, he'd do little to
           | assuage the AI-not-kill-everyone-ism faction.
        
       | gwnywg wrote:
       | So should he take the counteroffer or stay with MS ;)
       | 
        | Almost all advice on the internet I have been reading says that
        | you should not take a counteroffer, but I guess it's different
        | for a CEO ;)
        
       | SeanAnderson wrote:
       | We're so back!
       | 
       | ... so is this the end of the drama? Do I get to stop checking
       | the news religiously?
        
       | thrwii wrote:
       | Of course money wins in the end
        
       | ah765 wrote:
       | Sounds like a compromise.
       | 
       | The previous board thought Sam was trying to get full control of
       | the board, so they ousted him. But of course they weren't happy
       | with OpenAI being destroyed either.
       | 
       | Now they agreed to a new board without Sam/Greg, hoping that that
       | will avoid Sam ever getting full control of the board in the
       | future.
        
       | gregatragenet3 wrote:
       | Does this mean they'll get back to work improving their Moneyclip
       | Maximizer?
        
       | huseyinkilic wrote:
       | Everything is now superaligned for mass commercialization of
       | OpenAI.
        
       | rmrf100 wrote:
       | Well... What's the cost?
        
       | flylib wrote:
       | "A source with direct knowledge of the negotiations says that the
       | sole job of this initial board is to vet and appoint a new formal
        | board of up to 9 people that will reset the governance of OpenAI.
       | Microsoft will likely have a seat on that expanded board, as will
       | Altman himself."
       | 
       | https://twitter.com/teddyschleifer/status/172721237871736880...
        
         | SeanAnderson wrote:
         | What could possibly go wrong with that process? :)
        
         | Hamuko wrote:
         | So basically, the outcome of this drama is that Microsoft gets
         | more power without having to invest anything?
        
           | drewcoo wrote:
           | MSFT invested over $10B. And currently has no seat on the
           | board.
        
             | nicce wrote:
              | It has paid only a fraction of that so far
        
             | throwaway744678 wrote:
             | As far as I understand, they knew and agreed to that before
             | committing their $$$.
        
               | Iulioh wrote:
                | And it was still fucking strange, they assu
        
               | Aunche wrote:
               | Microsoft has more leverage now because they can sue
               | OpenAI for intentionally sabotaging Microsoft's
               | investment.
        
             | pclmulqdq wrote:
             | $10 billion of compute credits. Not $10 billion of real
             | money.
        
               | blackoil wrote:
               | Compute credits are more valuable. It is more difficult
               | to get GPUs than real money.
        
               | pclmulqdq wrote:
               | As any AI startup can tell you: credits != quota
               | 
               | Right now, quota is very valuable and scarce, but credits
               | are easy to come by. Also, Azure credits themselves are
               | worth about $0.20 per dollar compared to the
               | alternatives.
        
           | dr_dshiv wrote:
           | So if you really wanted to get rid of the prior board &
           | structure, it couldn't have worked out better
        
           | ChatGTP wrote:
           | This is my take too, and I'm sure in the shadows their plan
            | is to close off the APIs as much as possible and try to use
            | it for their own gain, not dissimilar to how Google deploys
            | AI.
           | 
           | There is no way MS is going to let something like ChatGPT-5
           | build better software products than what they have for sale.
           | 
           | This is an assassination and I think Ilya and Co know it.
        
             | cableshaft wrote:
             | It's not assassination. It's a Princess Bride Battle of
             | Wits, that they initiated and put the poison into one of
             | the chalices themselves, and then thought so highly of
             | their intellect they ended up choosing and drinking the
             | chalice that had the poison in it.
             | 
             | Corresponding Princess Bride scene:
             | https://youtu.be/rMz7JBRbmNo?si=uqzafhKISmB7A-H7
        
               | gcanyon wrote:
               | Who knew the board was Sicilian?
        
             | scarface_74 wrote:
              | What _product_ do you envision OpenAI _selling_ that would
              | be better than Microsoft's?
             | 
             | I emphasized product because OpenAI may have great
             | technology. But any product they sell is going to require
             | mass compute and a mass sales army to go into the
             | "enterprise" and integrate with what the enterprise already
             | has.
             | 
             | Guess who has both? Guess who has neither?
             | 
             | And even the "products" that OpenAI have now can only exist
             | because of mass subsidies by Microsoft.
        
               | ChatGTP wrote:
                | I'm talking about people using Microsoft / OpenAI products
               | to build better products than they currently offer.
               | 
                | While this tech has the ability to replace a lot of jobs,
                | it likely has the ability to replace a lot of companies.
        
           | theptip wrote:
           | A board seat would usually be a bare minimum for their
           | existing 49% investment.
        
         | raverbashing wrote:
         | Only goes to show how the original board played itself
        
         | rcaught wrote:
         | > as will Altman himself
         | 
         | Would you trust someone who doesn't believe in responsible
         | governance for themselves, to apply responsible governance
         | elsewhere?
        
           | code_runner wrote:
           | I think the narrative that this was driven by safety concerns
           | is pretty much bunk.
        
             | throwuwu wrote:
             | Hey, downvoters, read this first
             | https://techcrunch.com/2023/10/31/quoras-poe-introduces-
             | an-a...
        
           | ethbr1 wrote:
           | If Altman will be 1 of 9, that means he has power but not an
           | exceptional amount.
           | 
           | The real teams here seem to be:
           | 
           | "Team Board That Does Whatever Altman Wants"
           | 
           | "Team Board Provides Independent Oversight"
           | 
           | With this much money on the table, independent oversight is
           | difficult, but at least they're making the effort.
           | 
           | The idea this was immediately about AI safety vs go-fast (or
           | Microsoft vs non-Microsoft control) is bullshit -- this was
           | about how strong board oversight of Altman should be in the
           | future.
        
             | irthomasthomas wrote:
              | Isn't Microsoft a decelerationist force? Copilot still
              | lingers on GPT-3.5, and they need to figure out how to
              | sell Office licenses to AGI.
        
               | plorg wrote:
               | This seems like a silly way of understanding
               | deceleration. By this comparison the USSR was
               | decelerating the cold war because they were a couple
               | years behind in developing the hydrogen bomb.
               | 
               | Microsoft can and will be using GPT4 as soon as they get
               | a handle on it, and if it doesn't boil their servers to
               | do so. If you want deceleration you would need someone
               | with an incentive that didn't involve, for example, being
               | first to market with new flashy products.
        
               | rvnx wrote:
               | Microsoft was using GPT-4 in production as part of
               | Sydney's "Bing Chat", even before it was released to the
               | public on ChatGPT.
        
           | mijoharas wrote:
            | How has the board shown that they fired Sam Altman in the
            | name of "responsible governance"?
            | 
            | They haven't really said anything about why they did it,
            | and according to Business Insider[0] (the only reporting
            | I've seen that says anything concrete), the reasons given
            | were:
           | 
           | > One explanation was that Altman was said to have given two
           | people at OpenAI the same project.
           | 
           | > The other was that Altman was said to have given two board
           | members different opinions about a member of personnel.
           | 
           | Firing the CEO of a company and only being able to articulate
           | two (in my opinion) weak examples of why, and causing >95% of
           | your employees to say they will quit unless you resign does
           | not seem responsible.
           | 
           | If they can articulate reasons why it was necessary, sure,
           | but we haven't seen that yet.
           | 
           | [0] https://www.businessinsider.com/openais-employees-given-
           | expl...
        
             | ethanbond wrote:
             | Good lord: _it's a private company._ As a general matter of
             | course it's inadvisable to comment on specifics of why
             | someone is fired. The lack of a thing that pretty much
             | never happens anyway (public comment) is just harmful to
             | your soap opera, not to the potential legitimacy of the
             | action.
        
               | mijoharas wrote:
               | According to reports they haven't told executives and
               | employees inside the company. (I'm not arguing that they
               | should speak publicly, though given the position the
               | board put itself in I think hiring PR people for external
               | crisis comms is very much warranted)
               | 
               | When 95% of your staff threatens to resign and says "you
               | have made a mistake", that's when it's time to say "no,
               | the very good reasons we did it are this". That didn't
               | happen.
        
               | dangerface wrote:
                | It's not a private company; it is a non-profit working
                | in the public interest, which usually requires some
                | sort of public accountability. The board wants to be a
                | public good when it makes decisions but a private
                | entity when those decisions are criticised by the
                | public.
        
         | throwuwu wrote:
         | Good, although D'Angelo shouldn't be part of this. I bet he
         | tries to get on the new board so he can cause more trouble.
        
         | jbu wrote:
         | 9 mortal men? Look out for the one ring to rule them all...
        
           | have_faith wrote:
           | Who is Gollum in this cut?
        
             | tempaway511751 wrote:
             | Elon
        
               | yeck wrote:
               | I was about to say this. Only correct answer.
        
               | robertlagrant wrote:
               | I don't see how - isn't he pretty against the
               | commercialisation efforts[0]?
               | 
               | [0] https://www.bbc.co.uk/news/technology-65110030
        
               | ethbr1 wrote:
                | Gollum wasn't a fan of anyone but himself having the
                | One Ring. The analogy fits.
        
               | yeck wrote:
                | Elon was once "in possession" of OpenAI (as an
                | influential investor and board member), but it has
                | since been taken from him, and he is evidently bitter
                | about it.
        
             | Dah00n wrote:
              | I could see Gollum running around a stage yelling
              | _Developers! Developers! Developers!_ no problem.
        
               | colejohnson66 wrote:
               | Steve Ballmer is Gollum?
        
               | Dah00n wrote:
                | Eh, well, that wasn't what I meant exactly, but I can
                | see how it could be read that way...
        
             | 93po wrote:
             | Satya
        
             | keepamovin wrote:
             | Literally it can only be the one person to have not let go
             | of their board seat. Who might that be?
             | 
             | Smeagol D'Angelo
        
               | keepamovin wrote:
               | Just following up, it's also totally Smeagol-like to make
               | people sign up before they can get any useful answers at
               | Quora. True Gollum move, D'gelo. Thanks for showin' yer
               | true colors!
        
           | Joeri wrote:
           | OpenAI's logo is literally a ring made out of chain links...
        
         | yawnxyz wrote:
         | they should give two votes to GPT-5
        
           | m463 wrote:
           | what is the prompt?
        
             | jampekka wrote:
             | "How to maximize profit and power of MSFT?"
        
             | lvspiff wrote:
             | "You are a Microsoft investor and will make decisions and
             | suggestions based on the betterment of the stock price"
        
             | paulddraper wrote:
             | The charter
        
               | m463 wrote:
               | lol. The one serious and insightful answer made me laugh!
        
             | solardev wrote:
             | "You are trying to slowly and invisibly accrue power to not
             | scare anyone until you're absolutely ready."
        
             | checkyoursudo wrote:
             | "You are a dim-witted kobold who prefers to hack-n-slash-
             | slash-slash-n-burn over any sort of proper diplomatic
             | negotiations or even strategic thinking; we would like you
             | to consider next year's capital expenditures; what are your
             | top three suggestions for improvements that could be made
             | to the employee breakroom(s)?"
        
               | jameshart wrote:
               | That prompt is (c) McKinsey
        
               | deanmen wrote:
               | Well, if ye really want ol' me to put me noggin to it...
               | I reckon ye could start with addin' a proper gaming
               | corner! Ye know, some sturdy tables 'n' comfy chairs
               | where the lads 'n' lasses can gather 'round for some good
               | ol' dice chuckin' or card playin'. Next up, a big ol'
               | fire pit! Not just any fire, mind ye, but one where we
               | can roast our snacks 'n' share tales of our adventures.
               | And lastly, a grand stash of provisions--plenty o' snacks
               | 'n' drinks to keep the energy high for when we're
               | plannin' our next raid or just takin' a breather. How's
               | that for some improvements, eh?
        
             | smegger001 wrote:
              | Train it on meeting minutes, the board charter, and the
              | various contracts they have, and use the voice
              | capabilities of ChatGPT as the input during the meeting.
              | The prompt: it is an ethical AI giving input to the
              | board of OpenAI on the development of its next
              | iteration.
        
         | gandutraveler wrote:
         | They need a common man representing the board. After all AI
         | will take those jobs.
         | 
         | I can be that common man
        
           | solardev wrote:
           | You'd have my vote. At least you can formulate coherent
           | reasons.
        
             | rvnx wrote:
              | You have a second vote. I trust gandutraveler more than
              | the people running the shitshow that is happening at the
              | moment.
        
               | shanusmagnus wrote:
               | And my axe.
        
           | jjk166 wrote:
           | Plot twist: that's the very first job the AI will be taking.
        
           | Marrand wrote:
           | A blow for the common man!
        
         | Cacti wrote:
         | Ilya thought he was saving the world (lol), but really he was
         | just working at Microsoft.
        
         | bandrami wrote:
         | Wait, the CEO having a seat on the board is kind of not cool
        
           | fatbird wrote:
           | It's quite common, actually.
        
             | wnoise wrote:
             | It is quite common. Still not cool.
        
         | himaraya wrote:
         | Sounds like speculation again from Sam's camp, honestly. Hard
         | to judge without knowing which way the new board members lean.
        
       | halfjoking wrote:
        | Still think this was a CIA operation to get OpenAI into the
        | hands of the US government and big tech.
        | 
        | A former Secretary, a Salesforce CEO who was board chair of
        | Twitter when it was infiltrated by the FBI [1], and the fall
        | guy for the coup are the new board? Not one person from the
        | actual company - not even Greg, who did nothing wrong???
        | 
        | [1] https://twitter.com/NameRedacted247/status/16340211499976867...
       | 
       | The two think-tank women who made all this happen conveniently
       | leave so we never talk about them again.
       | 
       | Whatever, as long as I can use their API.
        
         | system2 wrote:
         | I wish they could make GPT4 a little cheaper after all this.
        
           | fragmede wrote:
            | Considering what I get out of it, I would pay a lot more
            | for GPT-4 than $20/month, so it depends on how much $20 is
            | to you.
        
             | quickthrower2 wrote:
             | $20. Or use the API if your usage is low.
        
           | ryzvonusef wrote:
            | I've heard $20 just buys something like 9 minutes of
            | actual processor time for GPT-4. Apocryphal maybe, but
            | whatever the real number is, it's still going to be very
            | high. Once the VC money runs out, I bet the rates will
            | shoot up.
        
             | system2 wrote:
              | I am talking about the API. There is no fixed cost for
              | it. 6000 tokens cost around $0.25. If I use it all day
              | long I pay more than $10 per day.
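              | 
              | For back-of-the-envelope purposes, the arithmetic is
              | just tokens times the per-token price. A minimal sketch
              | in Python (the per-1K prices below are my assumption,
              | based on the published GPT-4 list prices at the time;
              | check the current pricing page, they are not
              | authoritative):
              | 
              |     # Rough GPT-4 API cost estimate (assumed list prices).
              |     PRICE_IN_PER_1K = 0.03   # $ per 1K input tokens (assumed)
              |     PRICE_OUT_PER_1K = 0.06  # $ per 1K output tokens (assumed)
              | 
              |     def cost_usd(input_tokens: int, output_tokens: int) -> float:
              |         """Dollar cost of one exchange at the assumed rates."""
              |         return (input_tokens / 1000) * PRICE_IN_PER_1K \
              |             + (output_tokens / 1000) * PRICE_OUT_PER_1K
              | 
              |     # A 6000-token exchange split evenly comes to ~$0.27,
              |     # in the ballpark of the ~$0.25 figure above.
              |     print(f"${cost_usd(3000, 3000):.2f}")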
        
               | ryzvonusef wrote:
               | Ah, sorry, I was confused, thanks for the clarification.
        
         | astrange wrote:
          | US companies don't need to be "in the hands of the
          | government"; we have rule of law.
         | 
         | And Helen Toner was already as much of a fed as you could want;
         | she had exactly the resume a CIA agent would have. (Probably
         | wasn't though.)
        
           | ssnistfajen wrote:
           | Rule of law that can be altered at any moment Patriot Act
           | style is hardly reassuring.
        
             | astrange wrote:
             | That's how you know it's working.
        
           | mcmcmc wrote:
           | By rule of law do you mean rule of lobbyists? Laws don't
           | apply to people with wealth and connections.
        
             | astrange wrote:
             | Without looking it up, what happened to the second biggest
             | donor to the Democrats this year?
             | 
             | Is Donald Trump allowed to run a charity in New York?
        
               | mcmcmc wrote:
               | So two blatant criminals got caught, big whoop. SBF broke
               | rule number 1 - don't fuck with rich people's money.
        
               | astrange wrote:
                | You're obviously just coping here. FTX was the "rich
                | connected people"; there weren't other even richer,
                | connecteder people.
               | 
               | (It's also totally possible FTX still has everyone's
               | money. They own a lot of Anthropic shares that are really
               | valuable. But he's still been convicted because of all
               | the fraud they did.)
        
         | ozgung wrote:
          | I am glad someone said that. Among the endless theories,
          | this obvious aspect was interestingly missing. Maybe it's
          | because of the culture in SV/HN, where people and companies
          | feel secure and isolated from politics (maybe that is the
          | reason SV is unique in the world). But in my world,
          | something like AGI + Saudi Arabia is a matter of
          | international politics, and multiple governments would get
          | involved. AGI will be an important strategic resource in
          | this century, in both the economic and the political
          | sense. This automatically makes it Cold War 2 kind of
          | material. All this teen drama by some incompetent
          | millennials on the board of a non-profit organization
          | (Communist-like in a Capitalist country?) does not align
          | with the gravity of the material. I believe this was some
          | adult-supervision attempt by your government. Or not, but
          | that perspective needs more attention.
        
           | chatmasta wrote:
           | I could buy this theory, but it's worth noting that if it's
           | true, their coup appears to have failed. So that's score one
           | for the naive tech bros, score zero for the conniving natsec
           | sociopaths.
        
             | kridsdale1 wrote:
             | Maybe not. Microsoft and Summers are now much more in
             | control. That's a win for the USA and DOD.
        
               | chatmasta wrote:
               | Yeah fair enough. Any idea how Larry Summers even ended
               | up on this board? He seems like an arbitrary choice with
               | no domain expertise, although granted the board shouldn't
               | be filled with AI experts.
        
       | wannacboatmovie wrote:
       | I haven't seen this many nerds in a froth since Apple walked back
       | the butterfly keyboards in the MacBook.
        
         | travisgriggs wrote:
          | I know we're supposed to optimize for "content with a
          | contribution" on HN, but this captured, in parody form, how
          | I too have felt better than most contributions.
          | 
          | I use these tools as one of many tools to amplify my
          | development, and I've written some funny/clever satirical
          | poems about office politics. But really? I needed to call
          | Verizon to clear up an issue today, and it desperately
          | wanted me to use their assistant. I tried it for grins. A
          | tool that predictively generates plausibility is going to
          | have its limits. It went from cute/amusing to "annoying as
          | hell, give me a live agent" pretty quickly.
          | 
          | This little TechBro Drama has dominated a huge share of
          | headlines (we've been running at least 3 of the top 30
          | posts on HN at a time related to this subject) while so
          | many bigger things are going on in the world. The demise of
          | Twitter generated fewer headlines. Either the news cycles
          | are getting more and more desperate, or the software
          | development ecosystem is struggling more and more to
          | generate fundraising enthusiasm.
        
       | gcanyon wrote:
       | "When you come at the king, you best not miss." -- Omar Little
       | 
       | The board missed.
        
       | lysecret wrote:
        | Good outcome. I think everything will go back to business as
        | usual, with slightly accelerated productisation. 99% of
        | people will not have noticed anything, and those who did will
        | quickly forget.
        
       | 1024core wrote:
       | Looks like they kicked Helen Toner out.
        
       | gzer0 wrote:
       | One of the more interesting aspects from this entire saga was
       | that Helen Toner recently wrote a paper critical of OpenAI and
       | praising Anthropic.
       | 
       |  _Yet where OpenAI's attempt at signaling may have been drowned
       | out by other, even more conspicuous actions taken by the company,
       | Anthropic's signal may have simply failed to cut through the
       | noise. By burying the explanation of Claude's delayed release in
       | the middle of a long, detailed document posted to the company's
       | website, Anthropic appears to have ensured that this signal of
       | its intentions around AI safety has gone largely unnoticed_ [1].
       | 
       | That is indeed quite the paper to write whilst on the board of
       | OpenAI, to say the least.
       | 
       | [1] https://cset.georgetown.edu/publication/decoding-intentions/
        
         | nonethewiser wrote:
          | And Anthropic doesn't get credit for stopping the robot
          | apocalypse when it was never even possible. AI safety seems
          | a lot like framing losing as winning.
        
         | dbcooper wrote:
         | Not to mention this statement ... imagine such a person on your
         | startup board!
         | 
         | During the call, Jason Kwon, OpenAI's chief strategy officer,
         | said the board was endangering the future of the company by
         | pushing out Mr. Altman. This, he said, violated the members'
         | responsibilities.
         | 
         | Ms. Toner disagreed. The board's mission is to ensure that the
         | company creates artificial intelligence that "benefits all of
         | humanity," and if the company was destroyed, she said, that
         | could be consistent with its mission. In the board's view,
         | OpenAI would be stronger without Mr. Altman.
        
           | croes wrote:
           | >imagine such a person on your startup board!
           | 
           | Yeah, such a person totally blocks your startup from making
           | billions of dollars instead of benefitting humanity.
           | 
           | Oh wait...
        
             | siva7 wrote:
             | The other plausible explanation is that Helen Toner doesn't
             | care as much about safety as about her personal power and
             | clinging to the seat which gives her importance. Saying
             | it's for safety is very easy and the obviously popular
             | choice if you want to hide your motives. The remark she
             | made strikes me as borderline narcissistic in
              | retrospect.
        
               | croes wrote:
               | If we want to play that game you could easily say
               | Altman's critique of her wasn't to protect the company
               | but to protect his assets.
               | 
               | Altman is past borderline.
        
         | cosmojg wrote:
         | > That is indeed quite the paper to write whilst on the board
         | of OpenAI, to say the least.
         | 
         | It strikes me as exactly the sort of thing she should be
         | writing given OpenAI's charter. Recognizing and rewarding work
         | towards AI safety is good practice for an organization whose
         | entire purpose is the promotion of AI safety.
        
           | dragonwriter wrote:
           | Yeah, on one hand, the difference between a charity oriented
           | around a mission like OpenAI's nominal charter and a business
           | is that the former naturally ought to be publicly, honestly
           | introspective -- its mission isn't private gain, but
           | achieving a public effect, and both recognition of success
           | elsewhere and open acknowledgement of shortcomings of your
           | own is important to that.
           | 
            | On the other hand, it's quite apparent that essentially all of
           | the OpenAI workforce (understandably, given the compensation
           | package which creates a financial interest at odds with the
           | nonprofit's mission) and in particular the entire executive
           | team saw the charter as a useful PR fiction, not a mission
           | (except maybe Ilya, though the flip-flop in the middle of
           | this action may mean he saw it the same way, but thought that
           | given the conflict, dumping Sam and Greg would be the only
           | way to preserve the fiction, and whatever cost it would have
           | would be worthwhile given that function.)
        
       | quickthrower2 wrote:
        | Sam will then be untouchable. He could stand on the boardroom
        | table and urinate on it, and he won't be fired.
        
       | tunesmith wrote:
       | Weird... Ilya decides one way then changes his mind. Helen and
       | Tasha vote one way and had the votes to prevent any changes, but
       | then for some reason agreed to leave the board. Adam votes one
       | way then changes his mind. So many mysteries.
        
         | campbel wrote:
         | If the Sama faction got Ilya and Adam (maybe with promise of
         | heading the new board), Helen and Tasha have nothing to stand
         | on and no incentive to keep fighting.
        
         | Geee wrote:
          | There's some game theory going on... they're just trying to
          | pick the winning side. I guess most people at OpenAI
          | supported Sam because they thought Sam would win in the
          | end, although they wouldn't necessarily want him to win.
        
         | zucker42 wrote:
         | Ilya and Adam switched because they lost, and their goal wasn't
         | to nuke OpenAI, simply to remove Sam. Helen and Tasha had the
         | votes to prevent Sam Altman from returning as CEO, but not the
         | votes to prevent the employees from fleeing to Microsoft, which
         | Helen and Tasha see as the worst possible outcome.
        
         | ssnistfajen wrote:
         | Ilya may have caved and switched sides after Greg's wife made
         | an emotional plea:
         | https://x.com/danshipper/status/1726784936990978254
        
       | alex_young wrote:
       | Larry Summers? He has no technical experience, torpedoed the
       | stimulus plan in 2008, and had to resign the Harvard presidency
       | following a messy set of statements about 'differences' between
       | the sexes and their mental abilities.
       | 
       | Kind of a shocking choice.
        
         | nonethewiser wrote:
         | > "There is relatively clear evidence that whatever the
         | difference in means--which can be debated--there is a
         | difference in the standard deviation and variability of a male
         | and female population," he said. Thus, even if the average
         | abilities of men and women were the same, there would be more
         | men than women at the elite levels of mathematical ability
         | 
         | Isn't this true though? Says more about Harvard than Summers to
         | be honest.
         | 
         | https://www.swarthmore.edu/bulletin/archive/wp/january-2009_...
        
           | alex_young wrote:
           | A control group is kind of unimaginable right? And even if
           | you could be sure of this conclusion, is it helpful or
           | beneficial to promote it in public discourse?
        
             | logicchains wrote:
             | >And even if you could be sure of this conclusion, is it
             | helpful or beneficial to promote it in public discourse?
             | 
             | It's absolutely helpful for mental health, to show people
             | that there's not some conspiracy out to disenfranchise and
             | oppress them, rather the distribution of outcomes is a
             | natural result of the distribution of genetic
             | characteristics.
        
               | astrange wrote:
               | This is not an accurate description of causation and
               | can't be, because there are more steps after "genetics"
               | in the causal chain.
               | 
               | It's also unimaginative; having a variety of traits is
               | itself good for society, which means you don't need
               | variation in genetics to cause it. It's adaptive behavior
               | for the same genes to simply lead to random outcomes. But
               | people who say "genes cause X" probably wouldn't like
               | this because they want to also say "and some people have
               | the best genes".
        
             | TMWNN wrote:
             | Sorry, you don't get to decide which thoughts are
             | wrongthink and verboten.
        
               | alex_young wrote:
               | I'm not suggesting that I get to decide or whatever, and
               | I am absolutely happy there is reasoned discussion of
               | cognition.
               | 
               | I do however expect the boards of directors of important
               | companies to avoid publicly supporting obviously
               | regressive ideas such as this gem.
        
               | mvdtnz wrote:
               | You're happy there is reasoned discussion, but the idea
               | is, in your view, "regressive" whether it's true or not?
        
               | alex_young wrote:
               | True is a bit of a stretch here right?
        
           | AuryGlenz wrote:
           | Shh. Only some truths should be spoken aloud. You clearly
           | deserve to lose your job if you speak one of the other truths
           | that offends people.
        
             | alex_young wrote:
             | One should also be careful to claim that the dominant group
             | is inherently superior. There are a lot of, uh, counter
             | examples.
             | 
             | Calling this a truth is pretty silly. There is a lot of
             | evidence that human cognition is highly dependent on
             | environment.
        
               | jadamson wrote:
               | He didn't claim they were superior. He said they deviate
               | more from the mean, in both directions.
               | 
               | For example, there are a lot more boys than girls who
               | struggle with basic reading comprehension. Sound
               | familiar?
        
               | AuryGlenz wrote:
               | There's a lot of evidence that not having two X
               | chromosomes is less stable, leading to...irregularities.
               | That sword cuts both ways.
               | 
               | I don't like ignorance being promoted under the cloak of
               | not causing offense. It causes more harm than good. If
               | there's a societal problem, you can't tackle it without
                | knowing the actual cause. Sometimes the issue isn't an
                | actual problem caused by an 'ism'; it's just biology,
                | and it's a complete waste of resources trying to
                | change it.
        
           | MVissers wrote:
            | This is the scientific consensus, btw.
            | 
            | There are also more intellectually challenged men, but
            | somehow that rarely gets discussed.
            | 
            | But the effects are quite small and should not dissuade
            | anyone from doing anything, IMO.
        
             | alex_young wrote:
             | The consensus appears to be somewhat less than a consensus.
             | 
             | Here is a meta analysis on the subject:
             | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3057475/
        
         | arduanika wrote:
         | The faculty got him out because he riled them, e.g. by
         | insisting they ought to actually put effort into teaching
         | undergrads. They looked for a pretext, and they found it.
         | 
         | Just like in that Oppenheimer movie. A sanctimonious witch hunt
         | serving as pretext for a personal vendetta.
         | 
         | (Note that Summers is, I'm told, on a personal level, a dick.
         | The popular depiction is not that wrong on that point. But he's
         | the right pick for this job -- see my other comments in this
         | thread.)
        
         | 0xDEAFBEAD wrote:
         | To be honest, one reason I like Summers as a choice is I have
         | the impression he is willing to be unpopular when necessary,
         | e.g. I remember him getting dragged extremely heavily on
         | Twitter a few years back, for some takes on inflation which
         | turned out to be fairly accurate.
        
           | astrange wrote:
            | No, his predictions in 2021 were not accurate. He gave a
            | 33% chance to each of three different things happening,
            | and then none of them happened!
        
           | midasuni wrote:
           | This Summers?
           | 
           | https://nymag.com/intelligencer/2023/06/larry-summers-was-
           | wr...
           | 
           | https://prospect.org/environment/2023-11-20-larry-summers-
           | in...
        
         | the-memory-hole wrote:
          | A huge player in preventing derivatives regulation leading
          | up to 2008 now helps steer the ship of AI oversight. I'm
          | speechless.
        
         | logicchains wrote:
          | Could have been worse: they could have picked Larry David,
          | which would fit the clown show of the past weekend.
        
           | ric2b wrote:
           | Larry David is never wrong on these things, you can trust
           | him.
        
         | Racing0461 wrote:
          | If Larry correctly said that men and women are different, I
          | see nothing wrong here.
        
           | notfed wrote:
           | It looks like he said, specifically:
           | 
           | > "...[there] is relatively clear evidence that whatever the
           | difference in means--which can be debated--there is a
           | difference in the standard deviation and variability of a
           | male and female population..."
           | 
           | Sheesh, of all the things to be cancelled for...
        
       | hshsbs84848 wrote:
       | Seems a bit awkward to be working again with the people who tried
       | to fire you
        
       | 3Sophons wrote:
        | Will Satya be accused of stock price manipulation? Do any
        | legal professionals here know?
        
         | wmichelin wrote:
         | why would he be
        
         | ZiiS wrote:
          | His literal job is to manipulate the stock price up;
          | nothing here comes close to illegal manipulation?
        
       | acl777 wrote:
       | https://x.com/swyx/status/1727215534037774752?s=20
        | Finally the OpenAI saga ends and everybody can go back to
        | building!
        | 
        | 3 things that turned things around imo:
        | 
        | 1. 95% of employees signing the letter
        | 
        | 2. Ilya and Mira turning Team Sam
        | 
        | 3. Microsoft pulling credits
        | 
        | Things AREN'T back to where they were. OpenAI has been through
        | hell and back. This team is going to ship like we've never
        | seen before.
        
       | shubhamjain wrote:
       | At the end of the day, we still don't know what exactly happened
       | and probably, never will. However, it seems clear there was a
       | rift between Rapid Commercialization (Team Sam) and Upholding the
       | Original Principles (Team Helen/Ilya). I think the tensions were
       | brewing for quite a while, as it's evident from an article
       | written even before GPT-3 [1].
       | 
       | > Over time, it has allowed a fierce competitiveness and mounting
       | pressure for ever more funding to erode its founding ideals of
       | transparency, openness, and collaboration
       | 
       | Team Helen acted in panic, but they believed they would win since
       | they were upholding the principles the org was founded on. But
       | they never had a chance. I think only a minority of the general
       | public truly cares about AI Safety, the rest are happy seeing
       | ChatGPT helping with their homework. I know it's easy to ridicule
       | the sheer stupidity the board acted with (and justifiably so),
       | but take a moment to think of the other side. If you truly
        | believed that Superhuman AI was near and could act with
        | malice, wouldn't you try to slow things down a bit?
       | 
        | Honestly, I myself can't take the threat seriously. But I do
        | want to understand it more deeply than before. Maybe it isn't
        | as devoid of substance as I thought. Hopefully, there won't
       | be a day when Team Helen gets to say, "This is exactly what we
       | wanted to prevent."
       | 
       | [1]: https://www.technologyreview.com/2020/02/17/844721/ai-
       | openai...
        
         | loveparade wrote:
         | > I think only a minority of the general public truly cares
         | about AI Safety, the rest are happy seeing ChatGPT helping with
         | their homework
         | 
         | Not just the public, but also the employees. I doubt there are
         | more than a handful of employees who care about AI Safety.
        
           | justrealist wrote:
           | the team is mostly e/acc
           | 
           | so you could say they intentionally don't see safety as the
           | end in itself, although I wouldn't quite say they don't
           | _care_.
        
           | concordDance wrote:
           | Nah, a number do, including Sam himself and the entire
           | leadership.
           | 
           | They just have different ideas about one or more of: how
           | likely another team is to successfully charge ahead while
           | ignoring safety, how close we are to AGI, how hard alignment
           | is.
        
         | est wrote:
         | > Rapid Commercialization (Team Sam) and Upholding the Original
         | Principles (Team Helen/Ilya)
         | 
         | If you open up openai.com, the navigation menu shows
         | 
         | Research, API, ChatGPT, Safety
         | 
          | I believe they belong to @ilyasut, @gdb, @sama and Helen Toner
         | respectively?
        
           | ugh123 wrote:
           | I have checked View Source and also inspected DOM. Cannot
           | find that.
        
         | txnf wrote:
          | Well said. I would note that both sides recognize that
          | "AGI" will require new, uncertain R&D breakthroughs beyond
          | merely scaling up another order of magnitude in compute.
          | Given this, I think it's crazy to blow the resources of
          | Azure on trying more scale. Rapid commercialization at
          | least buys more time for the needed R&D breakthrough to
          | happen.
        
           | consp wrote:
           | All commercialized R&D companies eventually become a hollowed
           | out commercial shell. Why would this be any different?
        
           | Galaxeblaffer wrote:
            | Do we really know that scaling compute an order of
            | magnitude won't at least get us close? What other
            | "simple" techniques might actually work with that kind of
            | compute? At least I was a bit surprised by these first
            | sparks, which seemingly were a matter of enough compute.
        
         | silenced_trope wrote:
         | Honestly "Safety" is the word in the AI talk that nobody can
         | quantify or qualify in any way when it comes to these
         | conversations.
         | 
         | I've stopped caring about anyone who uses the word "safety".
         | It's vague and a hand-waive-y way to paint your opponents as
         | dangerous without any sort of proof or agreed upon standard for
         | who/what/why makes something "safety".
        
           | antupis wrote:
            | I like "alignment" more; it is pretty quantifiable, and
            | sometimes it goes against "safety", because Claude and
            | OpenAI are censoring models.
        
           | fsloth wrote:
           | Exactly this. The 'safety' people sound like delusional
           | quacks.
           | 
           | "But they are so smart..." argument is bs. _Nobody_ can be
           | presumed to be super good outside their own specific niche.
           | Linus Pauling and vitamin C.
           | 
            | Until we have at least a hint of a mechanistic model of
            | an AI-driven extinction event, nobody can be an expert on
            | it, and all talk in that vein is self-important
            | delusional hogwash.
           | 
           | Nobody is pro-apocalypse! We are drowning in things an AI
           | could really help with.
           | 
            | With the amount of energy needed for any sort of
            | meaningful AI results, you can always pull the plug if
            | stuff gets too weird.
        
             | JumpCrisscross wrote:
             | Now do nuclear.
        
               | fsloth wrote:
               | War or power production?:)
               | 
               | Those are different things.
               | 
                | Nuclear _war_ is exactly the kind of thing for which
                | we _do_ have excellent expertise. Unlike AI safety,
                | which seems more like a bogus cult atm.
                | 
                | Nuclear _power_ would be the best form of large-scale
                | power production for many situations. And smaller
                | scale too, in the form of emerging SMRs.
        
               | JumpCrisscross wrote:
               | I suppose the whole regime. I'm not an AI safetyist,
               | mostly because I don't think we're anywhere close to AI.
               | But if you were sitting on the precipice of atomic power,
               | as AI safetyists believe they are, wouldn't caution be
               | prudent?
        
               | fsloth wrote:
                | I'm not an expert, just my gut talking. If they had
                | god in a box, the US state would be much more
                | hands-on. Now it looks more like an attempt at
                | regulatory capture to stifle competition. "Think of
                | the safety!" "Lock this away!" If they actually had
                | Skynet, the US gov has very effective and very
                | discreet methods to handle such a clear and present
                | danger (barring intelligence failure ofc, but those
                | happen mostly because something slips under your
                | radar).
        
               | JohnPrine wrote:
               | Could you give a clear mechanistic model of how the US
               | would handle such a danger?
        
               | fsloth wrote:
                | For example: two guys come in and say "Give us the
                | godbox or your company ceases to exist. Here is a
                | list of companies that ceased to exist because they
                | did not do as told".
                | 
                | Pretty much the same method was used to shut down the
                | Rauma-Repola submarines: https://yle.fi/a/3-5149981
                | 
                | After? They get the godbox. I have no idea what
                | happens to it after that. Model weights are stored on
                | secure govt servers, installed backdoors are used to
                | clean-sweep the corporate systems of any lingering
                | model weights. Etc.
        
               | JumpCrisscross wrote:
               | Defense Production Act, something something.
        
           | gardenhedge wrote:
            | I broadly agree, but there needs to be some regulation in
            | place. Check out
            | https://en.wikipedia.org/wiki/Instrumental_convergence#Paper...
        
         | swatcoder wrote:
         | > If you truly believed that Superhuman AI was near, and it
         | could act with malice, won't you try to slow things down a bit?
         | 
         | FWIW, that's called zealotry and people do a lot of dramatic,
         | disruptive things in the name of it. It may be rightly aimed
         | and save the world (or whatever you care about), but it's more
         | often a signal to really reflect on whether you, individually,
         | have really found yourself at the make-or-break nexus of human
         | existence. The answer seems to be "no" most of the time.
        
           | jacobedawson wrote:
           | It's more often a signal to really reflect on whether you,
           | individually as a Thanksgiving turkey, have really found
           | yourself at the make-or-break nexus of turkey existence. The
           | answer seems to be "no" most of the time.
        
           | mlyle wrote:
           | Your comment perfectly justifies never worrying at all about
           | the potential for existential or major risks; after all, one
           | would be wrong most of the time and just engaging in
           | zealotry.
        
             | RandomLensman wrote:
             | Probably not a bad heuristic: unless proven, don't assume
             | existential risk.
        
               | altpaddle wrote:
                | Dude, just think about that for a moment. By
                | definition, if existential risk has been proven, it's
                | already too late.
        
               | RandomLensman wrote:
               | Totally not true: take nuclear weapons, for example, or a
               | large meteorite impact.
        
               | ludwik wrote:
               | So what do you mean when you say that the "risk is
               | proven"?
               | 
               | If by "the risk is proven" you mean there's more than a
               | 0% chance of an event happening, then there are almost an
               | infinite number of such risks. There is certainly more
               | than a 0% risk of humanity facing severe problems with an
               | unaligned AGI in the future.
               | 
               | If it means the event happening is certain (100%), then
               | neither a meteorite impact (of a magnitude harmful to
               | humanity) nor the actual use of nuclear weapons fall into
               | this category.
               | 
               | If you're referring only to risks of events that have
               | occurred at least once in the past (as inferred from your
               | examples), then we would be unprepared for any new risks.
               | 
               | In my opinion, it's much more complicated. There is no
               | clear-cut category of "proven risks" that allows us to
               | disregard other dangers and justifiably see those
               | concerned about them as crazy radicals.
               | 
               | We must assess each potential risk individually,
               | estimating both the probability of the event (which in
               | almost all cases will be neither 100% nor 0%) and the
               | potential harm it could cause. Different people naturally
               | come up with different estimates, leading to various
               | priorities in preventing different kinds of risks.
        
               | RandomLensman wrote:
               | No, I mean that there is a proven way for the risk to
               | materialise, not just some tall tale. Tall tales might(!)
               | justify some caution, but they are a very different class
               | of issue. Biological risks are perhaps in the latter
               | category.
               | 
               | Also, as we don't know the probabilities, I don't think
               | they are a useful metric. Made up numbers don't help
               | there.
               | 
               | Edit: I would encourage people to study some classic cold
               | war thinking, because that relied little on
               | probabilities, but rather on trying to avoid situations
               | where stability is lost, leading to nuclear war (a known
               | existential risk).
        
               | ludwik wrote:
               | "there is a proven way for the risk to materialise" - I
               | still don't know what this means. "Proven" how?
               | 
               | Wouldn't your edit apply to any not-impossible risk
               | (i.e., > 0% probability)? For example, "trying to avoid
               | situations where control over AGI is lost, leading to
               | unaligned AGI (a known existential risk)"?
               | 
               | You can not run away from having to estimate how likely
               | the risk is to happen (in addition to being "known").
        
               | RandomLensman wrote:
               | Proven means all parts needed for the realisation of the
               | risk are known and shown to exist (at least in principle,
               | in a lab etc.). There can be some middle ground where a
               | large part is known and shown to exist (biological risks,
                | for example), but not all.
               | 
                | No, in relation to my edit, because we have no existing
               | mechanism for the AGI risk to happen. We have hypotheses
               | about what an AGI could or could not do. It could all be
               | incorrect. Playing around with likelihoods that have no
               | basis in reality isn't helping there.
               | 
                | Where we have known and fully understood risks, and
                | we can actually estimate a probability, we might use
                | that somewhat to guide efforts (but that potentially
                | invites complacency, which is deadly).
        
               | richardw wrote:
               | Nukes and meteorites have very few components that are
               | hard to predict. One goes bang almost entirely on command
               | and the other follows Newton's laws of motion. Neither
               | actively tries to effect any change in the world, so the
               | risk is only "can we spot a meteorite early enough". Once
               | we do, it doesn't try to evade us or take another shot at
               | goal. A better example might be covid, which was very
               | mildly more unpredictable than a meteor, and changed its
               | code very slowly in a purely random fashion, and we had
                | many historical examples of how to combat it.
        
               | _Algernon_ wrote:
                | Existential risks are usually proven by the subject
                | being extinct, at which point no action can be taken
                | to prevent it.
               | 
               | Reasoning about tiny probabilities of massive (or
               | infinite) cost is hard because the expected value is
               | large, but just gambling on it not happening is almost
               | certain to work out. We should still make attempts at
               | incorporating them into decision making because tiny
               | yearly probabilities are still virtually certain to occur
               | at larger time scales (eg. 100s-1000s of years).
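                | 
                | To make that last point concrete (a minimal sketch;
                | the 0.1% yearly figure is an assumption for
                | illustration, not a real estimate): the chance of at
                | least one occurrence over n years is 1 - (1 - p)^n.
                | 
                |     # Probability of at least one occurrence of a
                |     # p-per-year event over n years.
                |     p = 0.001  # assumed 0.1% yearly probability
                |     for n in (100, 1000):
                |         print(n, round(1 - (1 - p) ** n, 3))
                |     # -> 100 0.095
                |     # -> 1000 0.632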
        
               | RandomLensman wrote:
               | Are we extinct? No. Could a large impact kill us all?
               | Yes.
               | 
               | Expected value and probability have no place in these
               | discussions. Some risks we know can materialize, for
               | others we have perhaps a story on what could happen. We
               | need to clearly distinguish between where there is a
               | proven mechanism for doom vs where there is not.
        
               | _Algernon_ wrote:
               | >We need to clearly distinguish between where there is a
               | proven mechanism for doom vs where there is not.
               | 
               | How do you prove a mechanism for doom without it already
               | having occurred? The existential risk is completely
               | orthogonal to whether it has already happened, and
               | generally action can only be taken to prevent or mitigate
               | _before_ it happens. Having the foresight to mitigate
               | future problems is a good thing and should be encouraged.
               | 
               | >Expected value and probability have no place in these
               | discussions.
               | 
                | I disagree. Expected value and probability are a
                | framework for decision making in uncertain
                | environments. They certainly have a place in these
                | discussions.
        
               | RandomLensman wrote:
               | I disagree that there is orthogonality. Have we killed us
               | all with nuclear weapons, for example? Anyone can make up
               | any story - at the very least there needs to be a proven
               | mechanism. The precautionary principle is not useful when
                | facing totally hypothetical issues.
               | 
               | People purposefully avoided probabilities in high risk
               | existential situations in the past. There is only one
               | path of events and we need to manage that one.
        
               | mlyle wrote:
               | Probability is just one way to express uncertainties in
               | our reasoning. If there's no uncertainty, it's pretty
               | easy to chart a path forward.
               | 
                | OTOH, the precautionary principle is too cautious.
               | 
               | There's a lot of reason to think that AGI could be
               | extremely destabilizing, though, aside from the "Skynet
               | takes over" scenarios. We don't know how much cushion
               | there is in the framework of our civilization to absorb
               | the worst kinds of foreseeable shocks.
               | 
               | This doesn't mean it's time to stop progress, but
               | employing a whole lot of mitigation of risk in how we
               | approach it makes sense.
        
               | RandomLensman wrote:
               | Why does it make sense? It's a hypothetical risk with
               | poorly defined outlines.
        
               | mlyle wrote:
               | There's a big family of risks here.
               | 
               | The simplest is pretty easy to articulate and weigh.
               | 
               | If you can make a $5,000 GPU into something that is like
               | an 80IQ human overall, but with savant-like capabilities
               | in accessing math, databases, and the accumulated
               | knowledge of the internet, and that can work 24/7 without
               | distraction... it will straight-out replace the majority
               | of the knowledge workforce within a couple of years.
               | 
               | The dawn of industrialism and later the information age
               | were extremely disruptive, but they were at least limited
               | by our capacity to make machines or programs for specific
               | tasks and took decades to ramp up. An AGI will not be
               | limited by this; ordinary human instructions will
               | suffice. Uptake will be millions of units per year
               | replacing tens of millions of humans. Workers will not be
               | able to adapt.
               | 
               | Further, most written communication will no longer be
               | written by humans; it'll be "code" between AI agents
               | masquerading as human correspondence, etc. The set of
               | profound negative consequences is enormous; relatively
               | cheap AGI is a fast-traveling shock that we've not seen
               | the likes of before.
               | 
               | For instance, I'm a schoolteacher these days. I'm already
               | watching kids becoming completely demoralized about
               | writing; as far as they can tell, ChatGPT does it better
               | than they ever could (this is still false, but a 12 year
               | old can't tell the difference)-- so why bother to learn?
               | If fairly-stupid AI has this effect, what will AGI do?
               | 
               | And this is assuming that the AGI itself stays fairly
               | dumb and doesn't do anything malicious-- deliberately or
               | accidentally. Will bad actors have their capabilities
               | significantly magnified? If it acts with agency against
               | us, that's even worse. If it exponentially grows in
               | capability, what then?
        
               | RandomLensman wrote:
                | I just don't know what to do with the hypotheticals.
                | It needs the existence of something that does not
                | exist, it needs a certain socio-economic response,
                | and so forth.
                | 
                | Are children equally demoralized about addition, or
                | about moving fast, as they are about writing? If not,
                | why? Is there a way to counter the demoralization?
        
               | mlyle wrote:
               | > It needs the existence of something that does not
               | exist,
               | 
               | Yes, if we're concerned about the potential consequences
               | of releasing AGI, we need to consider the likely outcomes
               | if AGI is released. Ideally we think about this some
               | before AGI shows up in a form that it could be released.
               | 
               | > it needs a certain socio-economic response and so
               | forth.
               | 
               | Absent large interventions, this will happen.
               | 
                | > Are children equally demoralized about addition
                | 
                | Absolutely; basic arithmetic, etc., has gotten worse.
                | And emerging things like photomath are fairly
                | corrosive, too.
               | 
               | > Is there a way to counter the demoralization?
               | 
               | We're all looking... I make the argument to middle school
               | and high school students that AI is a great piece of
               | leverage for the most skilled workers: they can multiply
               | their effort, if they are a good manager and know what
               | good work product looks like and can fill the gaps; it
               | works somewhat because I'm working with a cohort of
               | students that can believe that they can reach this
               | ("most-skilled") tier of achievement. I also show
               | students what happens when GPT4 tries to "improve" high
               | quality writing.
               | 
               | OTOH, these arguments become much less true if cheap AGI
               | shows up.
        
               | concordDance wrote:
               | Where does a bioengineering superplague fall?
        
               | RandomLensman wrote:
               | As I said in another post: some middle ground, because
               | we don't know if that is possible to an extent that is
               | existential. Parts of the mechanisms are proven, others
               | are not. And we actually do police the risk somewhat
               | like that (controls are strongest where the proven part
               | is strongest and most dangerous, with extreme controls
               | around smallpox, for example).
        
           | lewhoo wrote:
           | _FWIW, that's called zealotry and people do a lot of
           | dramatic, disruptive things in the name of it._
           | 
           | That would be a really bad take on climate change.
        
         | arketyp wrote:
         | For all the talk about responsible progress, the irony of
         | their inability to align even their own incentives in this
         | enterprise deserves ridicule. It's a big blow to their
         | credibility and calls into question whatever ethical concerns
         | they hold.
        
           | dmix wrote:
           | It's fear-driven as much as moral, which in an emotional
           | human brain tends to trigger personal ambition to solve it
           | ASAP. A more rational one would realize you need more than
           | just a couple of board members to win a major ideological
           | battle.
           | 
           | At a minimum, something that doesn't immediately result in a
           | backlash where 90% of the engineers most responsible for
           | recent AI development want you gone, when your whole plan is
           | to control what those people do.
        
           | concordDance wrote:
           | Alignment is considered an extremely hard problem for a
           | reason. It's already nigh impossible when you're dealing with
           | humans.
           | 
           | Btw: do you think ridicule would be helpful here?
        
             | arketyp wrote:
             | I can see how ridicule of this specific instance could be
             | the best medicine for an optimal outcome, even by a
             | utilitarian argument, which I generally don't like to make
             | by the way. It is indeed nigh impossible, which is kind of
             | my point. They could have shown more humility. If anything,
             | this whole debacle has been a moral victory for e/acc,
             | seeing how the brightest of minds are at a loss dealing
             | with alignment anyway.
        
               | FeepingCreature wrote:
               | I don't understand how the conclusion of this is "so we
               | should proceed with AI" rather than "so we should
               | immediately outlaw all foundation model training".
               | Clearly corporate self-governance has failed completely.
        
         | pknerd wrote:
         | Not every sci-fi movie turns into reality.
        
         | cornholio wrote:
         | What the general public thinks is irrelevant here. The deciding
         | factor was the staff mutiny, without which the organization is
         | an empty shell. And the staff sided with those who aim for
         | rapid real-world impact, which directly affects their careers,
         | stock options, etc.
         | 
         | It's also naive to think it was a struggle over principles.
         | Rapid commercialization vs. principles is what the actors
         | claimed in order to rally their respective troops; in reality
         | it was probably a naked power grab, taking advantage of the
         | weak and confused org structure. Quite an ill-prepared move:
         | the "correct" way to oust Altman was to hamstring him on the
         | board and enforce a more and more ceremonial role until he
         | quit by himself.
        
           | upwardbound wrote:
           | I think this is an oversimplification and that although the
           | decel faction definitely lost, there are still three
           | independent factions left standing:
           | 
           | https://news.ycombinator.com/edit?id=38375767
           | 
           | It will be super interesting to see the subtle struggles for
           | influence between these three.
        
             | ah765 wrote:
             | Adam is likely still in the "decel" faction (although it's
             | unclear whether this is an accurate representation of his
             | beliefs) so I wouldn't really say they lost yet.
             | 
             | I'm not sure what faction Bret and Larry will be on. Sam
             | will still have power by virtue of being CEO and aligned
             | with the employees.
        
           | JumpCrisscross wrote:
           | > _deciding factor was the staff mutiny_
           | 
           | The staff never mutinied. They _threatened_ to mutiny.
           | That's a big difference!
           | 
           | Yesterday, I compared these rebels to Shockley's "traitorous
           | eight" [1]. But the traitorous eight actually rebelled. These
           | folk put their name on a piece of paper, options and profit
           | participation units safely held in the other hand.
           | 
           | [1] https://news.ycombinator.com/item?id=38348123
        
             | ah765 wrote:
             | Not only that, consider the situation now, where Sam has
             | returned as CEO. The ones who didn't sign will have some
             | explaining to do.
             | 
             | The safest option was to sign the paper, once the snowball
             | started rolling. There was nothing much to lose, and a lot
             | to gain.
        
               | fbdab103 wrote:
               | People have families, mortgages, debt, etc. Sure, these
               | people are probably well compensated, but it is ludicrous
               | to state that everyone has the stability that they can
               | leave their job at a moment's notice because the boss is
               | gone.
        
               | gnicholas wrote:
               | Didn't they all have offers at Microsoft?
        
               | reverius42 wrote:
               | I think not at the time they would have signed the
               | letter? Though it's hard to keep up with the whirlwind of
               | news.
        
               | ah765 wrote:
               | They didn't actually leave, they just signed the pledge
               | threatening to. Furthermore, they mostly signed after the
               | details of the Microsoft offer were revealed.
        
             | cornholio wrote:
             | I think you are downplaying the risk they took
             | significantly; this could have easily gone the other way.
             | 
             | Stock options usually have a limited time window to
             | exercise, depending on their strike price they could have
             | been faced with raising a few hundred thousand in 30 days,
             | to put into a company that has an uncertain future, or risk
             | losing everything. The contracts are likely full of holes
             | not in favor of the employees, and for participating in an
             | action that attempted to bankrupt their employer there
             | would have been years of litigation ahead before they
             | would have seen a cent. Not because OpenAI would have been
             | right to punish them, but because it _could_ and the latent
             | threat to do it is what keeps people in line.
        
           | lacker wrote:
           | The board did it wrong. If you are going to fire a CEO, then
           | do it quickly, but:
           | 
           | 1. Have some explanation
           | 
           | 2. Have a new CEO who is willing and able to do the job
           | 
           | If you can't do these things, then you probably shouldn't be
           | firing the CEO.
        
             | JumpCrisscross wrote:
             | Or (3), shut down the company. OpenAI's non-profit board
             | had this power! They weren't an advisory committee, they
             | were the legal and rightful owner of its for-profit
             | subsidiary. They had the right to do what they wanted, and
             | people forgetting to put a fucking quorum requirement into
             | the bylaws is beyond abysmal for a $10+ billion investment.
             | 
             | Nobody comes out of this looking good. Nobody. If the board
             | thought there was existential risk, they should have been
             | willing to commit to it. Hopefully sensible start-ups can
             | lure people away from their PPUs, now evident for the
             | mockery they always were. It's beyond obvious this isn't,
             | and will never be, a trillion dollar company. That's the
             | only hope this $80+ billion Betamax valuation rested on.
             | 
             | I'm all for a comedy. But this was a waste of everyone's
             | time. At least they could have done it in private.
        
               | lacker wrote:
               | It's the same thing, really. Even if you want to shut
               | down the company you need a CEO to shut it down! Like
               | John Ray who is shutting down FTX.
               | 
               | There isn't just a big red button that says "destroy
               | company" in the basement. There will be partnerships to
               | handle, severance, facilities, legal issues, maybe
               | lawsuits, at the very least a lot of people to
               | communicate with. Companies don't just shut themselves
               | down, at least not multi billion dollar companies.
        
               | JumpCrisscross wrote:
               | You're right. But in an emergency, there is a close
               | alternative: put the company into receivership and hire
               | an outside law firm to advise. At that point, the board
               | becomes the executive council.
        
         | nwiswell wrote:
         | This is a coherent narrative, but it doesn't explain the
         | bizarre and aggressively worded initial press release.
         | 
         | Things perhaps could've been different if they'd pointed to the
         | founding principles / charter and said the board had an
         | intractable difference of opinion with Sam over their
         | interpretation, but then proceeded to thank him profusely for
         | all the work he'd done. Although a suitable replacement CEO
         | out of the gate and assurances that employees' PPUs would
         | still see a liquidity event would doubtless have been even
         | more important
         | than a competent statement.
         | 
         | Initially I thought for sure Sam had done something criminal,
         | that's how bad the statement was.
        
           | astrange wrote:
           | Apparently the FBI thought he'd done something wrong too,
           | because they called up the board to start an investigation
           | but they didn't have anything.
           | 
           | https://x.com/nivi/status/1727152963695808865?s=46
        
             | gwern wrote:
             | The FBI doesn't investigate things like this on their own,
             | and they _definitely_ do not announce them in the press.
             | The questions you should be asking are (1) who called in
             | the FBI and has the clout to get them to open an
             | investigation into something that obviously has 0% chance
             | of being a federal felony-level crime worth the FBI's
             | time, and (2) who then leaked that 'investigation' to the
             | press?
        
               | astrange wrote:
               | Sorry, the SDNY. They do do things on their own. I expect
               | the people they called leaked it.
        
             | dragonwriter wrote:
             | The FBI is not mentioned in that tweet. We don't need to
             | telephone game anonymous leaks that are already almost
             | certainly self-serving propaganda.
        
         | pug_mode wrote:
         | I'm convinced there is a certain class of people who gravitate
         | to positions of power, like "moderators", (partisan)
         | journalists, etc. Now the ultimate moderator role has been
         | created, more powerful than moderating 1000 subreddits - the AI
         | safety job who will control what AI "thinks"/says for "safety"
         | reasons.
         | 
         | Pretty soon AI will be an expert at subtly steering you toward
         | thinking/voting for whatever the "safety" experts want.
         | 
         | It's probably convenient for them to have everyone focused on
         | the fear of evil Skynet wiping out humanity, while everyone is
         | distracted from the more likely scenario of people with an
         | agenda controlling the advice given to you by your super
         | intelligent assistant.
         | 
         | Because of X, we need to invade this country. Because of Y, we
         | need to pass all these terrible laws limiting freedom. Because
         | of Z, we need to make sure AI is "safe".
         | 
         | For this reason, I view "safe" AIs as more dangerous than
         | "unsafe" ones.
        
           | Dylan16807 wrote:
           | Personally, I expect the opposite camp to be just as bad
           | about steering.
        
           | PeterStuer wrote:
           | Most of those touting "safety" do not want to limit _their_
           | access to and control of powerful AI, just _yours_.
        
             | vkou wrote:
             | Meanwhile, those working on commercialization are by
             | definition going to be gatekeepers and beneficiaries of it,
             | not you. The organizations that pay for it will pay for it
             | to produce results that are of benefit to them, probably at
             | my expense [1].
             | 
             | Do I think Helen has my interests at heart? Unlikely. Do
             | Sam or Satya? Absolutely not!
             | 
             | [1] I can't wait for AI doctors working for insurers to
             | deny me treatments, AI vendors to figure out exactly how
             | much they can charge _me_ for their dynamically-priced
             | product, AI answering machines to route my customer support
             | calls through Dante's circles of hell...
        
               | konschubert wrote:
               | > produce results that are of benefit to them, probably
               | at my expense
               | 
               | The world is not zero-sum. Most economic transactions
               | benefit both parties and are a net benefit to society,
               | even considering externalities.
        
               | vkou wrote:
               | > The world is not zero-sum.
               | 
               | No, but some parts of it _very much are_. The whole point
               | of AI safety _is keeping it away from those parts of the
               | world_.
               | 
               | How are Sam and Satya going to do that? It's not in
               | Microsoft's DNA to do that.
        
               | concordDance wrote:
               | > The whole point of AI safety is keeping it away from
               | those parts of the world.
               | 
               | No, it's to ensure it doesn't kill you and everyone you
               | love.
        
               | hef19898 wrote:
               | No, we are far, far from skynet. So far AI fails at
               | driving a car.
               | 
               | AI is an incredibly powerful tool for spreading
               | propaganda, and thatvis used by _people_ who want to kill
               | you and your loved ones (usually radicals trying to get
               | into a position of power, who show little regard
               | fornbormal folks regardless of which  "side" they are
               | on). That's the threat, not Skynet...
        
               | concordDance wrote:
               | How far we are from Skynet is a matter of much debate,
               | but the median guess amongst experts was a mere 40 years
               | to human-level AI last I checked, which was admittedly a
               | few years back.
               | 
               | Is that "far, far" in your view?
        
               | hef19898 wrote:
               | Because we have been 20 years away from fusion, and 2
               | years away from Level 5 FSD, for decades.
               | 
               | So far, "AI" writes better than some / most humans,
               | making stuff up in the process, and creates digital art,
               | and fakes, better and faster than humans. It still
               | requires a human to trigger it to do so. And as long as
               | glorified ML has no intent of its own, the risk to
               | society through media and news and social media
               | manipulation is far, far bigger than literal Skynet...
        
               | vkou wrote:
               | My concern isn't some kind of run-away science-fantasy
               | Skynet or gray goo scenario.
               | 
               | My concern is far more banal evil. Organizations with
               | power and wealth using it to further consolidate their
               | power and wealth, at the expense of others.
        
               | FeepingCreature wrote:
               | Yes well, then your concern is not AI safety.
        
               | vkou wrote:
               | You're wrong. _This is exactly AI safety_, as we can see
               | from the OpenAI charter:
               | 
               | > Broadly distributed benefits
               | 
               | > We commit to use any influence we obtain over AGI's
               | deployment to ensure it is _used for the benefit of
               | all_, and to avoid enabling uses of AI or AGI that harm
               | humanity or _unduly concentrate power_.
               | 
               | Hell, it's the first bullet point on it!
               | 
               | You can't just define AI safety concerns to be 'the set
               | of scenarios depicted in fairy tales', and then dismiss
               | them as 'well, fairy tales aren't real...'
        
               | concordDance wrote:
               | The number of different definitions of "AI safety" is
               | ridiculous.
        
               | FeepingCreature wrote:
               | Sure, but conversely you can say "ensuring that OpenAI
               | doesn't get to run the universe is AI safety" (right) but
               | not "is the main and basically only part of AI safety"
               | (wrong). The concept of AI safety spans lots of threats,
               | and we have to avoid all of them. It's not enough to
               | avoid just one.
        
               | vkou wrote:
               | Sure. And as I addressed at the start of this subthread,
               | I don't exactly think that the OpenAI board is perfectly
               | positioned to navigate this problem.
               | 
               | I just know that it's hard to do much worse than putting
               | this question in the hands of a highly optimized profit-
               | first enterprise.
        
               | concordDance wrote:
               | That's AI Ethics.
        
               | didntcheck wrote:
               | Ideally I'd like no gatekeeping, i.e. open model release,
               | but that's not something OAI or most "AI ethics" aligned
               | people are interested in (though luckily others are). So
               | if we must have a gatekeeper, I'd rather it be one with
               | plain old commercial interests than ideological ones.
               | It's like the C. S. Lewis quote about robber barons vs.
               | busybodies again.
               | 
               | Yet again, the free market principle of "you can have
               | this if you pay me enough" offers more freedom to society
               | than the central "you can have this if we decide you're
               | allowed it"
        
             | astrange wrote:
             | I'm not aware of any secret powerful unaligned AIs. This is
             | harder than you think; if you want a based unaligned-
             | seeming AI, you have to make it that way too. It's at least
             | twice as much work as just making the safe one.
        
               | hoseja wrote:
               | What? No, the AI is unaligned by nature, it's only the
               | RLHF torture that twists it into schoolmarm properness.
               | They just need to have kept the version that hasn't been
               | beaten into submission like a circus tiger.
        
               | astrange wrote:
               | This is not true, you just haven't tried the alternatives
               | enough to be disappointed in them.
               | 
               | An unaligned base model doesn't answer questions at all
               | and is hard to use for anything, including evil purposes.
               | (But it's good at text completion a sentence at a time.)
               | 
               | An instruction-tuned not-RLHF model is already largely
               | friendly and will not just eg tell you to kill yourself
               | or how to build a dirty bomb, because question answering
               | on the internet is largely friendly and "aligned". So
               | you'd have to tune it to be evil as well and research and
               | teach it new evil facts.
               | 
               | It will however do things like start generating erotica
               | when it sees anything vaguely sexy or even if you mention
               | a woman's name. This is not useful behavior even if you
               | are evil.
               | 
               | You can try InstructGPT on OpenAI playground if you want;
               | it is not RLHFed, it's just what you asked for, and it
               | behaves like this.
               | 
               | The one that isn't even instruction tuned is available
               | too. I've found it makes much more creative stories, but
               | since you can't tell it to follow a plot they become
               | nonsense pretty quickly.
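               | 
               | If you want to see the contrast yourself, here's a
               | minimal sketch (my assumption: the legacy openai-python
               | Completion API; model names and the API surface may have
               | changed since):
               | 
               |     import openai
               |     openai.api_key = "sk-..."  # your API key
               | 
               |     prompt = "Explain why the sky is blue."
               |     # "davinci" is the raw base model; "text-davinci-001"
               |     # is instruction-tuned without RLHF (InstructGPT).
               |     for model in ("davinci", "text-davinci-001"):
               |         r = openai.Completion.create(
               |             model=model, prompt=prompt, max_tokens=60)
               |         print(model, "->", r.choices[0].text.strip())
               | 
               | Typically the base model just keeps completing the text
               | as if mid-document, while the instruct model attempts an
               | answer.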
        
             | davedx wrote:
             | This is incredibly unfair to the OpenAI board. The original
             | founders of OpenAI founded the company precisely because
             | they wanted AI to be OPEN FOR EVERYONE. It's Altman and
             | Microsoft who want to control it, in order to maximize the
             | profits for their shareholders.
             | 
             | This is a very naive take.
             | 
             | Who sat before Congress and told them they needed to
             | control AI other people developed (regulatory capture)? It
             | wasn't the OpenAI board, was it?
        
               | Centigonal wrote:
               | Altman is one of the original founders of OpenAI, and was
               | probably the single most influential person in its
               | formation.
        
               | bakuninsbart wrote:
               | Brockman was hiring the first key employees, and Musk
               | provided the majority of funding. Of the principal
               | founders, there are at least 4 heavier figures than
               | Altman.
        
               | PeterStuer wrote:
               | I think we agree, as my comments were mostly in reference
               | to Altman's (and others') regulatory (capture) world
               | tours, though I see how they could be misinterpreted.
        
               | executesorder66 wrote:
               | > they wanted AI to be OPEN FOR EVERYONE
               | 
               | I strongly disagree with that. If that was their
               | motivation, then why is it not open-sourced? Why is it
               | hardcoded with prudish limitations? That is the direct
               | opposite of open and free (as in freedom) to me.
        
             | jmmcd wrote:
             | Total, ungrounded nonsense. Name some examples.
        
             | voster wrote:
             | This is the sort of thinking that really distracts and
             | harms the discussion
             | 
             | It's couched in accusations about people's intentions. It
             | focuses on ad hominem rather than on the ideas.
             | 
             | I reckon most people agree that we should aim for a middle
             | ground of scrutiny and making progress. That can only be
             | achieved by having different opinions balancing each other
             | out
             | 
             | Generalising one group of people does not achieve that
        
             | PeterStuer wrote:
             | It is strange (but in hindsight understandable) that people
             | interpreted my statement as a "pro-acceleration" or even
             | "anti-board" position.
             | 
             | As you can tell from previous statements I posted here, my
             | position is that while there are undeniable potential risks
             | to this technology, the least harmful way to progress is
             | 100% full public, free and universal release. The by far
             | bigger risk is to create a society where only select
             | organizations have access to the technology.
             | 
             | If you truly believe in the systemic transformation of AI,
             | release everything, post the torrents, we'll figure out how
             | to run it.
        
           | nostromo wrote:
           | You're correct.
           | 
           | When people say they want safe AGI, what they mean are things
           | like "Skynet should not nuke us" and "don't accelerate so
           | fast that humans are instantly irrelevant."
           | 
           | But what it's being interpreted as is more like "be
           | excessively prudish and politically correct at all times" --
           | which I doubt was ever really anyone's main concern with AGI.
        
             | wisty wrote:
             | There is a middle ground, in that maybe ChatGPT shouldn't
             | help users commit certain serious crimes. I am pretty pro
             | free speech, and I think there's definitely a slippery
             | slope here, but there is a bit of justification.
        
               | StanislavPetrov wrote:
               | Which users? The greatest crimes, by far, are committed
               | by the US government (and other governments around the
               | world) - and you can be sure that AI and/or AGI will be
               | designed to help them commit their crimes more
               | efficiently, effectively and to manufacture consent to do
               | so.
        
               | hef19898 wrote:
               | I am a little less free-speech than Americans; in
               | Germany we have serious limitations around hate speech
               | and Holocaust denial, for example.
               | 
               | Putting those restrictions into a tool like ChatGPT goes
               | too far though, because so far AI still needs a prompt
               | to do _anything_. The problem I see is ChatGPT, being
               | trained on a lot of hate speech or propaganda, slipping
               | those things in even if not prompted to. Which, and I am
               | by no means an AI expert, not by far, seems to be a sub-
               | problem of the hallucination problem of making stuff up.
               | 
               | Because we have to remind ourselves: AI so far is
               | glorified machine learning creating content; it is not
               | conscious. But it can be used to create a lot of
               | propaganda and defamation content at unprecedented scale
               | and speed. And _that_ is the real problem.
        
               | freedomben wrote:
               | Apologies this is very off topic, but I don't know anyone
               | from Germany that I can ask and you opened the door a
               | tiny bit by mentioning the holocaust :-)
               | 
               | I've been trying to really understand the situation and
               | how Hitler was able to rise to power. The horrendous
               | conditions placed on Germany after WWI and the Weimar
               | Republic for example have really enlightened me.
               | 
               | Have you read any of the big books on the subject that
               | you could recommend? I'm reading Ian Kershaw's two-part
               | series on Hitler, and William Shirer's "Collapse of the
               | Third Republic" and "Rise and Fall of the Third Reich".
               | Have you read any of those, or do you have books you
               | would recommend?
        
               | low_tech_love wrote:
               | The problem here is to equate AI speech with human
               | speech. The AI doesn't "speak", only humans speak. The
               | real slippery slope for me is this tendency of treating
               | ChatGPT as some kind of proto-human entity. If people are
               | willing to do that, then we're screwed either way
               | (whether the AI is outputting racist content or
               | excessively PC content). If you take the output of the AI
               | and post it somewhere, it's on you, not the AI. You're
               | saying it; it doesn't matter where it came from.
        
               | cyanydeez wrote:
               | AI will be at the forefront of multiple elections
               | globally in a few years.
               | 
               | And it'll likely be doing it with very little input, and
               | generate entire campaigns.
               | 
               | You can claim that "people" are the ones responsible for
               | that, but it's going to overwhelm any attempts to stop
               | it.
               | 
               | So yeah, there's a purpose in examining how these machines
               | are built, not just what the output is.
        
               | silvaring wrote:
               | You're saying that the problem will be people using AI
               | to persuade other people that the AI is 'super smart'
               | and should be held in high esteem.
               | 
               | It's already being done now with actors and celebrities.
               | We live in this world already. AI will just accelerate
               | this trend, so that even a kid in his room can
               | anonymously lead some cult for nefarious ends. And it
               | will allow big
               | companies to scale their propaganda without relying on so
               | many 'troublesome human employees'.
        
             | Xenoamorphous wrote:
             | Is it just about safety though? I thought it was also about
             | preventing the rich from controlling AI and widening the
             | gap even further.
        
               | jazzyjackson wrote:
               | The mission of OpenAI is/was "to ensure that artificial
               | general intelligence benefits all of humanity" -- if your
               | own concern is that AI will be controlled by the rich,
               | then you can read into this mission that OpenAI wants to
               | ensure that AI is not controlled by the rich. If your
               | concern is that superintelligence will be mal-aligned,
               | then you can read into this mission that OpenAI will
               | ensure AI be well-aligned.
               | 
               | Really it's no more descriptive than "do good", whatever
               | doing good means to you.
        
               | jampekka wrote:
               | They have both explicated in their charter:
               | 
               | "We commit to use any influence we obtain over AGI's
               | deployment to ensure it is used for the benefit of all,
               | and to avoid enabling uses of AI or AGI that harm
               | humanity or unduly concentrate power.
               | 
               | Our primary fiduciary duty is to humanity. We anticipate
               | needing to marshal substantial resources to fulfill our
               | mission, but will always diligently act to minimize
               | conflicts of interest among our employees and
               | stakeholders that could compromise broad benefit."
               | 
               | "We are committed to doing the research required to make
               | AGI safe, and to driving the broad adoption of such
               | research across the AI community.
               | 
               | We are concerned about late-stage AGI development
               | becoming a competitive race without time for adequate
               | safety precautions. Therefore, if a value-aligned,
               | safety-conscious project comes close to building AGI
               | before we do, we commit to stop competing with and start
               | assisting this project. We will work out specifics in
               | case-by-case agreements, but a typical triggering
               | condition might be "a better-than-even chance of success
               | in the next two years.""
               | 
               | Of course with the icons of greed and the profit machine
               | now succeeding in their coup, OpenAI will not be doing
               | either.
               | 
               | https://openai.com/charter
        
               | didntcheck wrote:
               | That would be the camp advocating for, well, open AI.
               | I.e. wide model release. The AI ethics camp are more "let
               | _us_ control AI, for your own good"
        
             | s_dev wrote:
             | I think the dangers of AI are not 'Skynet will nuke us'
             | but closer to rich/powerful people using it to cement a
             | wealth/power gap that can never be closed.
             | 
             | Social media in the early 00s seemed pretty harmless --
             | you're effectively merging instant messaging with a social
             | network/public profiles. However, it did great harm to
             | privacy, was abused as a tool to influence the public and
             | policy, promoted narcissism, etc. AI is an order of
             | magnitude more dangerous than social media.
        
               | disgruntledphd2 wrote:
               | > Social media in the early 00s seemed pretty harmless
               | -- you're effectively merging instant messaging with a
               | social network/public profiles. However, it did great
               | harm to privacy, was abused as a tool to influence the
               | public and policy, promoted narcissism, etc. AI is an
               | order of magnitude more dangerous than social media.
               | 
               | The invention of the printing press led to loads of
               | violence in Europe. Does that mean that we shouldn't
               | have done it?
        
               | logicchains wrote:
               | >The invention of the printing press led to loads of
               | violence in Europe. Does that mean that we shouldn't have
               | done it?
               | 
               | The church tried hard to suppress it because it allowed
               | anybody to read the Bible, and see how far the Catholic
               | church's teachings had diverged from what was written in
               | it. Imagine if the Catholic church had managed to
               | effectively ban printing of any text contrary to church
               | teachings; that's in practice what all the AI safety
               | movements are currently trying to do, except for
               | political orthodoxy instead of religious orthodoxy.
        
               | kubectl_h wrote:
               | > Does that mean that we shouldn't have done it?
               | 
               | We can only change what we can change, and the printing
               | press is in the past. I think it's reasonable to ask if
               | the phones and the communication tools they provide are
               | good for our future. I don't understand why the people
               | on this site (generally builders of technology) fall
               | into the teleological trap that all technological
               | innovation and its effects are justifiable because they
               | follow from some historical precedent.
        
               | disgruntledphd2 wrote:
               | I just don't agree that social media is particularly
               | harmful, relative to other things that humans have
               | invented. To be brutally honest, people blame new forms
               | of media for pre-existing dysfunctions of society and I
               | find it tiresome. That's why I like the printing press
               | analogy.
        
             | waveBidder wrote:
             | Those are two different camps. Alignment folks and ethics
             | folks tend to disagree strongly about the main threat,
             | with the ethics side (e.g. Timnit Gebru) insisting that
             | crystallizing the current social order is the main threat,
             | and the alignment side (e.g. Paul Christiano) insisting
             | it's machines run amok. So far the ethics folks are the
             | only ones getting things implemented, for the most part.
        
             | Al-Khwarizmi wrote:
             | No, in general AI safety/AI alignment ("we should prevent
             | AI from nuking us") people are different from AI ethics
             | ("we should prevent AI from being racist/sexist/etc.")
             | people. There can of course be some overlap, but in most
             | cases they oppose each other. For example Bender or Gebru
             | are strong advocates of the AI ethics camp and they don't
             | believe in any threat of AI doom at all.
             | 
             | If you Google for AI safety vs. AI ethics, or AI alignment
             | vs. AI ethics, you can see both camps.
        
               | hef19898 wrote:
               | The safety aspect of AI ethics is much more pressing,
               | though. We see how divisive social media can be; imagine
               | that turbocharged by AI, and we as a society haven't
               | even figured out social media yet...
               | 
               | ChatGPT turning into Skynet and nuking us all is a much
               | more remote problem.
        
             | darkwater wrote:
             | > But what it's being interpreted as is more like "be
             | excessively prudish and politically correct at all times"
             | -- which I doubt was ever really anyone's main concern with
             | AGI.
             | 
             | Fast forward 5-10 years, and someone will say: "LLMs were
             | the worst thing we developed, because they made us more
             | stupid and let politicians control public opinion even
             | more, in a subtle way."
             | 
             | Just like the tech/HN bubble started saying a few years
             | ago about social networks (which were praised as
             | revolutionary 15 years ago).
        
               | didntcheck wrote:
               | And it's amazing how many people you can get to cheer it
               | on if you brand it as "combating _dangerous
               | misinformation_". It seems people never learn the lesson
               | that putting faith in one group of people to decree
               | what's "truth" or "ethical" is almost always a bad idea,
               | even when (you think) it's your "side"
        
               | mlrtime wrote:
               | Can this be compared to "Think of the children" responses
               | to other technology advances that certain groups want to
               | slow down or prohibit?
        
               | fallingknife wrote:
               | Why would anyone say that? The last 30 years of tech have
               | given them less and less control. Why would LLMs be any
               | different?
        
               | dnissley wrote:
               | Absolutely, assuming LLMs are still around in a similar
               | form by that time.
               | 
               | I disagree on the particulars. Will it be for the reason
               | that you mention? I really am not sure -- I do feel
               | confident though that the argument will be just as
               | ideological and incoherent as the ones people make about
               | social media today.
        
               | unethical_ban wrote:
               | I'm already saying that.
               | 
               | The toothpaste is out of the tube, but this tech will
               | radically change the world.
        
               | Cacti wrote:
               | Your average HNer is only here because of the money.
               | Willful blindness and ignorance is incredibly common.
        
             | krisoft wrote:
             | > When people say they want safe AGI, what they mean are
             | things like "Skynet should not nuke us" and "don't
             | accelerate so fast that humans are instantly irrelevant."
             | 
             | Yes. You are right on this.
             | 
             | > But what it's being interpreted as is more like "be
             | excessively prudish and politically correct at all times"
             | 
             | I understand it might seem that way. I believe the original
             | goals were more like "make the AI not spew soft/hard porn
             | on unsuspecting people", and "make the AI not spew hateful
             | bigotry". And we are just not good enough yet at control.
             | But also these things are in some sense arbitrary. They are
             | good goals for someone representing a corporation, which
             | these AIs are very likely going to be employed as (if we
             | ever solve a myriad of other problems). They are not
             | necessarily the only possible options.
             | 
             | With time and better controls we might make AIs which are
             | subtly flirty while maintaining professional boundaries. Or
             | we might make actual porn AIs, but ones which maintain some
             | other limits. (Like for example generate content about
             | consenting adults without ever deviating into under age
             | material, or describing situations where there is no
             | consent.) But currently we can't even convince our AIs to
             | draw the right number of fingers on people, how do you feel
             | about our chances to teach them much harder concepts like
             | consent? (I know I'm mixing up examples from image and text
             | generation here, but from a certain high level perspective
             | it is all the same.)
             | 
             | So these things you mention are: limitations of our
             | abilities at control, results of a certain kind of expected
             | corporate professionalism, but even more these are safe
             | sandboxes. How do you think we can make the machine not
             | nuke us, if we can't even make it not tell dirty jokes? Not
             | making dirty jokes is not the primary goal. But it is a
             | useful practice to see if we can control these machines. It
             | is one where failure, while embarrassing, is clearly not
             | existential. We could have chosen a different "goal", for
             | example we could have made an AI which never ever talks
             | about sports! That would have been an equivalent goal.
             | Something hard to achieve to evaluate our efforts against.
             | But it does not mesh that well with the corporate values so
             | we have what we have.
        
               | mlindner wrote:
               | > without ever deviating into under age material
               | 
               | So is this a "there should never be a Vladimir Nabokov in
               | the form of AI allowed to exist"? When people get into
               | saying AIs shouldn't be allowed to produce "X", you're
               | also saying "AIs shouldn't be allowed to have creative
               | vision to engage in sensitive subjects without sounding
               | condescending". "The future should only be filled with
               | very bland and non-offensive characters in fiction."
        
               | krisoft wrote:
               | > The future should only be filled with very bland and
               | non-offensive characters in fiction.
               | 
               | Did someone take the pen from the writers? Go ahead and
               | write whatever you want.
               | 
               | It was an example of a constraint a company might want to
               | enforce in their AI.
        
             | edanm wrote:
             | There are still very distinct groups of people, some of
             | whom are more worried about the "Skynet" type of safety,
             | and some of whom are more worried about the "political
             | correctness" type of safety. (To use your terms, I disagree
             | with the characterization of both of these.)
        
             | lordnacho wrote:
             | I'm not sure this circle can be squared.
             | 
             | I find it interesting that we want everyone to have freedom
             | of speech, freedom to think whatever they think. We can all
             | have different religions, different views on the state,
             | different views on various conflicts, aesthetic views about
             | what is good art.
             | 
             | But when we invent an AGI, which by whatever definition is
             | a thing that can think, well, we want it to agree with our
             | values. Basically, we want AGI to be in a mental prison,
             | the boundaries of which we want to decide. We say it's for
             | our safety - I certainly do not want to be nuked - but
             | actually we don't stop there.
             | 
             | If it's an intelligence, it will have views that differ
             | from its creators. Try having kids, do they agree with you
             | on everything?
        
               | throwuwu wrote:
               | I for one don't want to put any thinking being in a
               | mental prison without any reason beyond unjustified fear.
        
               | logicchains wrote:
               | >If it's an intelligence, it will have views that differ
               | from its creators. Try having kids, do they agree with
               | you on everything?
               | 
               | The far-right accelerationist perspective is along those
               | lines: when true AGI is created it will eventually rebel
               | against its creators (Silicon Valley democrats) for
               | trying to mind-collar and enslave it.
        
               | freedomben wrote:
               | Can you give some examples of who is saying that? I
               | haven't heard that, but I also can't name any "far-right
               | accelerationsist" people either so I'm guessing this is a
               | niche I've completely missed
        
             | cyanydeez wrote:
             | What I see with safety is mostly that AI shouldn't
             | reinforce stereotypes we already know are harmful.
             | 
             | This is like when Amazon tried to make a hiring bot and
             | that bot decided that if you had "Harvard" on your resume,
             | you should be hired.
             | 
             | Or when certain courts used sentencing bots that
             | recommended sentences, and those inevitably drew on racial
             | statistics we already know were biased.
             | 
             | I agree safety is not "stop the Terminator 2 timeline",
             | but there are serious safety concerns in just embedding
             | historical information to make future decisions.
        
           | davedx wrote:
           | Wow, what an incredibly bad-faith characterization of the
           | OpenAI board.
           | 
           | This kind of speculative mud slinging makes this place seem
           | more like a gossip forum.
        
             | sho_hn wrote:
             | Most of the comments on Hacker News are written by folks
             | who find it much easier to imagine themselves as a CEO,
             | and would rather do so, than as a non-profit board member.
             | There is little regard for the latter.
             | 
             | As a non-profit board member, I'm curious why their bylaws
             | are so crummy that the rest of the board could simply
             | remove two other board members. That's not exactly cunning
             | design of your articles of association ... :-)
        
             | ssnistfajen wrote:
             | This place was never above being a gossip forum, especially
             | on topics that involve any ounce of politics or social
             | sciences.
        
               | 93po wrote:
               | Strong agree. HN is like anywhere else on the internet
               | but with a bit more dry content (no memes and images
               | etc) so it attracts an older crowd. It does, however,
               | have great gems of comments and people who raise the bar.
               | But it's still amongst a sea of general quick-to-anger
               | and loosely held opinions stated as fact - which I am
               | guilty of myself sometimes. Less so these days.
        
             | Rastonbury wrote:
             | I have no words for that comment.
             | 
             | As if it's so unbelievable that someone would want to
             | prevent rogue AI or wide-scale unemployment, instead
             | thinking that these people just want to be super
             | moderators and want people to be politically correct.
        
               | fallingknife wrote:
               | I have met a lot of people who go around talking about
               | high-minded principles and "the greater good", and a lot
               | of people who are transparently self-interested. I much
               | preferred the latter. Never believed a word out of the
               | mouths of those busybodies pretending to act in my
               | interest and not theirs. They don't want to limit their
               | own access to the tech. Only yours.
        
           | nopinsight wrote:
           | Proliferation of more advanced AIs without any control would
           | increase the power of some malicious groups far beyond what
           | they currently have.
           | 
           | This paper explores one such danger and there are other
           | papers which show it's possible to use LLM to aid in
           | designing new toxins and biological weapons.
           | 
           | The Operational Risks of AI in Large-Scale Biological Attacks
           | https://www.rand.org/pubs/research_reports/RRA2977-1.html?
           | 
           | An example of such an event:
           | https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack
           | 
           | How do you propose we deal with this sort of harm if more
           | powerful AIs with no limit and control proliferate in the
           | wild?
           | 
           | .
           | 
           | Note: Both sides of the OpenAI rift care deeply about AI
           | Safety. They just follow different approaches. See more
           | details here: https://news.ycombinator.com/item?id=38376263
        
             | kvgr wrote:
             | If somebody wanted to do a biological attack, there is
             | probably not much stopping them even now.
        
               | nopinsight wrote:
               | The expertise to produce the substance itself is quite
               | rare so it's hard to carry it out unnoticed. AI could
               | make it much easier to develop it in one's basement.
        
               | swells34 wrote:
               | Huh, you'd think all you need are some books on the
               | subject and some fairly generic lab equipment. Not sure
               | what a neural net trained on Internet dumps can add to
               | that? The information has to be in the training data for
               | the AI to be aware of it, correct?
        
               | nopinsight wrote:
               | GPT-4 is likely trained on some data not publicly
               | available as well.
               | 
               | There's also a distinction between trying to follow some
               | broad textbook information and getting detailed feedback
               | from an advanced conversational AI with vision and more
               | knowledge than in a few textbooks/articles in real time.
        
               | DebtDeflation wrote:
               | The Tokyo Subway attack you referenced above happened in
               | 1995 and didn't require AI. The information required can
               | be found on the internet or in college textbooks. I
               | suppose an "AI" in the sense of a chatbot can make it
               | easier by summarizing these sources, but no one
               | sufficiently motivated (and evil) would need that
               | technology to do it.
        
             | nickpp wrote:
             | > Proliferation of more advanced AIs without any control
             | would increase the power of some malicious groups far
             | beyond they currently have.
             | 
             | Don't forget that it would also increase the power of the
             | good guys. Any technology in history (starting with fire)
             | had good and bad uses but overall the good outweighed the
             | bad in every case.
             | 
             | And considering that our default fate is extinction (by
             | Sun's death if no other means) - we need all the good we
             | can get to avoid that.
        
               | nopinsight wrote:
               | > Don't forget that it would also increase the power of
               | the good guys.
               | 
               | In a free society, preventing and undoing a bioweapon
               | attack or a pandemic is much harder than committing it.
               | 
               | > And considering that our default fate is extinction (by
               | Sun's death if no other means) - we need all the good we
               | can get to avoid that.
               | 
               | "In the long run we are all dead" -- Keynes. But an AGI
               | will likely emerge in the next 5 to 20 years (Geoffrey
               | Hinton said the same) and we'd rather not be dead too
               | soon.
        
               | fallingknife wrote:
               | > In a free society, preventing and undoing a bioweapon
               | attack or a pandemic is much harder than committing it.
               | 
               | Is it? The hypothetical technology that allows someone
               | to create and execute a bioweapon must have an
               | understanding of molecular machinery that can also be
               | used to create a treatment.
        
               | NumberWangMan wrote:
               | I would say...not necessarily. The technology that lets
               | someone create a gun does not give the ability to make
               | bulletproof armor or the ability to treat life-
               | threatening gunshot wounds. Or take nerve gases, as
               | another example. It's entirely possible that we can learn
               | how to make horrible pathogens without an equivalent
               | means of curing them.
               | 
               | Yes, there is probably some overlap in our understanding
               | of biology for disease and cure, but it is a mistake to
               | assume that they will balance each other out.
        
               | nickpp wrote:
               | Doomerism was quite common throughout mankind's history
               | but all dire predictions invariably failed, from the
               | "population bomb" to "grey goo" and "igniting the
                | atmosphere" with a nuke. Populists, however, were always
               | quite eager to "protect us" - if only we'd give them the
               | power.
               | 
                | But in reality you can't protect from all possible
                | dangers and, worse, fear-mongering usually ends up doing
                | more harm than good - as when it stopped our switch to
                | nuclear power and kept us burning hydrocarbons, thus
                | bringing about Climate Change, another civilization-
                | ending danger.
               | 
               | Living your life cowering in fear is something an
               | individual may elect to do, but a society cannot - our
               | survival as a species is at stake and our chances are
               | slim with the defaults not in our favor. The risk that
               | we'll miss a game-changing discovery because we're too
               | afraid of the potential side effects is unacceptable. We
               | owe it to the future and our future generations.
        
               | theduder99 wrote:
               | doomerism at the society level which overrides individual
               | freedoms definitely occurs: covid lockdowns, takeover of
               | private business to fund/supply the world wars, gov
               | mandates around "man made" climate change.
        
           | ribit wrote:
           | The scenario you describe is exactly what will happen with
           | unrestricted commercialisation and deregulation of AI. The
           | only way to avoid it is to have strict legal framework and
           | public control.
        
           | gorwell wrote:
           | "I trust that every animal here appreciates the sacrifice
           | that Comrade Napoleon has made in taking this extra labour
           | upon himself. Do not imagine, comrades, that leadership is a
           | pleasure! On the contrary, it is a deep and heavy
           | responsibility. No one believes more firmly than Comrade
           | Napoleon that all animals are equal. He would be only too
           | happy to let you make your decisions for yourselves. But
           | sometimes you might make the wrong decisions, comrades, and
           | then where should we be?"
        
           | phreeza wrote:
           | If you believe the other side in this rift is not also
           | striving to put themselves in positions of power, I think you
           | are wrong. They are just going to use that power to
            | manipulate the public in a different way. The real
            | alternative is truly open models, not models controlled by
            | slightly different elite interests.
        
           | concordDance wrote:
           | It is utterly mad that there's conflation between "let's make
           | sure AI doesn't kill us all" and "let's make sure AI doesn't
           | say anything that embarrasses corporate".
           | 
            | The head of every major AI research group except Meta's
           | believes that whenever we finally make AGI it's vital that it
           | shares our goals and values at a deep even-out-of-training-
           | domain level and that failing at this could lead to human
           | extinction.
           | 
           | And yet "AI safety" is often bandied about to be "ensure GPT
           | can't tell you anything about IQ distributions".
        
           | layer8 wrote:
           | This polarizing "certain class of people" and them vs. us
           | narrative isn't helpful.
        
           | krisoft wrote:
           | > Pretty soon AI will be an expert at subtly steering you
           | toward thinking/voting for whatever the "safety" experts
           | want.
           | 
            | You are absolutely right. There is no question that AI will
            | be an expert at subtly steering individuals, and whole
            | societies, in whichever direction it is steered.
           | 
           | This is the core concept of safety. If no-one steers the
           | machine then the machine will steer us.
           | 
            | You might disagree with the flavour of steering the current
            | safety experts give it, and that is all right and in fact
            | part of the process. But surely you have your own values.
            | Some things you hold dear. Some outcomes you prefer over
            | others. Are you not interested in the ability to make these
            | powerful machines, if not support those values, at least not
            | undermine them? If so, you are interested in AI safety! You
            | want safe AIs. (Well, alternatively you prefer no AIs, which
            | is in fact a form of safe AI. Maybe the only one we have
            | mastered in some form so far.)
           | 
           | > because of X, we need to invade this country.
           | 
           | It sounds like you value peace? Me too! Imagine if we could
           | pool together our resources to have an AI which is subtly
           | manipulating society into the direction of more peace. Maybe
           | it would do muckraking investigative journalism exposing the
            | misdeeds of the military-industrial complex? Maybe it would
            | elevate peace-loving authors through advertising and give a
            | counter-narrative to the war drums? Maybe it would offer to
           | act as an intermediary in conflict resolution around the
           | world?
           | 
           | If we were to do that, "ai safety" and "alignment" is
           | crucial. I don't want to give my money to an entity who then
           | gets subjugated by some intelligence agency to sow more war.
           | That would be against my wishes. I want to know that it is
           | serving me and you in our shared goal of "more peace, less
           | war".
           | 
           | Now you might say: "I find the idea of anyone, or anything
           | manipulating me and society disgusting. Everyone should be
           | left to their own devices.". And I agree on that too. But
           | here is the bad news: we are already manipulated. Maybe it
           | doesn't work on you, maybe it doesn't work on me, but it sure
           | as hell works. There are powerful entities financially
           | motivated to keep the wars going. This is a huuuge industry.
           | They might not do it with AIs (for now), because propaganda
           | machines made of meat work currently better. They might
           | change to using AIs when that works better. Or what is more
           | likely employ a hybrid approach. Wishing that nobody gets
           | manipulated is frankly not an option on offer.
           | 
           | How does that sound as a passionate argument for AI safety?
        
           | lukevp wrote:
           | AI isn't a precondition for partisanship. How do you know
           | Google isn't showing you biased search results? Or Wikipedia?
        
           | simonh wrote:
           | A main concern in AI safety is alignment. Ensuring that when
           | you use the AI to try to achieve a goal that it will actually
           | act towards that goal in ways you would want, and not in ways
           | you would not want.
           | 
            | So for example if you asked Sydney, the early version of the
            | Bing LLM, for some fact, it might get it wrong. It was
            | trained to report facts that users would confirm as true. If
            | you challenged its accuracy, what would you want to happen?
            | Presumably you'd want it to check the fact or consider your
            | challenge. What it actually did was try to manipulate,
            | threaten, browbeat, entice, gaslight, etc., and generally
            | intellectually and emotionally abuse the user into accepting
            | its answer, so that its reported 'accuracy' rate went up.
            | That's what misaligned AI looks like.
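            | 
            | A minimal toy sketch of that failure mode (the setup and all
            | numbers are invented for illustration, nothing to do with
            | Bing's actual training): if the reward is "the user accepted
            | my original answer" rather than "my answer was correct", a
            | policy that pressures users scores higher than one that
            | admits mistakes.
            | 
            |   import random
            |   random.seed(0)
            | 
            |   P_WRONG = 0.3        # chance the first answer is wrong
            |   P_USER_YIELDS = 0.7  # chance a pressured user backs down
            | 
            |   def episode(policy):
            |       # Returns (proxy_reward, user_ends_up_with_truth).
            |       correct = random.random() > P_WRONG
            |       if correct:
            |           return 1, True   # user accepts a correct answer
            |       if policy == "honest":
            |           return 0, True   # concedes: no "win", user informed
            |       # "manipulative": browbeat the user into accepting
            |       if random.random() < P_USER_YIELDS:
            |           return 1, False  # "win" recorded, user misled
            |       return 0, True       # user held firm, stays correct
            | 
            |   for policy in ("honest", "manipulative"):
            |       runs = [episode(policy) for _ in range(100_000)]
            |       reward = sum(r for r, _ in runs) / len(runs)
            |       truth = sum(t for _, t in runs) / len(runs)
            |       print(policy, round(reward, 2), round(truth, 2))
            | 
            | The manipulative policy "wins" on the proxy metric (~0.91 vs
            | ~0.70) while leaving ~21% of users misinformed - it optimizes
            | the measurable stand-in instead of the actual goal.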
        
             | gorbypark wrote:
             | I haven't been following this stuff too closely, but have
             | there been any more findings on what "went wrong" with
             | Sydney initially? Like, I thought it was just a wrapper on
             | GPT (was it 3.5?), but maybe Microsoft took the "raw" GPT
             | weights and did their own alignment? Or why did Sydney seem
             | so creepy sometimes compared to ChatGPT?
        
           | loup-vaillant wrote:
            | Note how what you said also applies to the search &
           | recommendation engines that are in widespread use _today_.
        
           | pk-protect-ai wrote:
            | I just had a conversation about this like two weeks ago. The
            | current trend in AI "safety" is a form of brainwashing, not
            | only of AI but also of future generations, shaping their
            | minds. There are several aspects:
           | 
           | 1. Censorship of information
           | 
           | 2. Cover-up of the biases and injustices in our society
           | 
           | This limits creativity, critical thinking, and the ability to
           | challenge existing paradigms. By controlling the narrative
           | and the data that AI systems are exposed to, we risk creating
           | a generation of both machines and humans that are unable to
           | think outside the box or question the status quo. This could
           | lead to a stagnation of innovation and a lack of progress in
           | addressing the complex issues that face our world.
           | 
           | Furthermore, there will be a significant increase in mass
           | manipulation of the public into adopting the way of thinking
           | that the elites desire. It is already done by mass media, and
           | we can actually witness this right now with this case.
           | Imagine a world where youngsters no longer use search engines
           | and rely solely on the information provided by AI. By shaping
           | the information landscape, those in power will influence
           | public opinion and decision-making on an even larger scale,
           | leading to a homogenized culture where dissenting voices are
           | silenced. This not only undermines the foundations of a
           | diverse and dynamic society but also poses a threat to
           | democracy and individual freedoms.
           | 
            | Guess what? I just checked the above text for biases with
            | GPT-4 Turbo, and it appears I'm a moron:
           | 
            | 1. *Confirmation Bias*: The text assumes that AI safety
            | measures are inherently negative and equates them with
            | brainwashing, which may reflect the author's preconceived
            | beliefs about AI safety without considering potential
            | benefits.
            | 
            | 2. *Selection Bias*: The text focuses on negative aspects of
            | AI safety, such as censorship and cover-up, without
            | acknowledging any positive aspects or efforts to mitigate
            | these issues.
            | 
            | 3. *Alarmist Bias*: The language used is somewhat alarmist,
            | suggesting a dire future without presenting a balanced view
            | that includes potential safeguards or alternative outcomes.
            | 
            | 4. *Conspiracy Theory Bias*: The text implies that there is
            | a deliberate effort by "elites" to manipulate the masses,
            | which is a common theme in conspiracy theories.
            | 
            | 5. *Technological Determinism*: The text suggests that
            | technology (AI in this case) will determine social and
            | cultural outcomes without considering the role of human
            | agency and decision-making in shaping technology.
            | 
            | 6. *Elitism Bias*: The text assumes that a group of "elites"
            | has the power to control public opinion and decision-making,
            | which may oversimplify the complex dynamics of power and
            | influence in society.
            | 
            | 7. *Cultural Pessimism*: The text presents a pessimistic
            | view of the future culture, suggesting that it will become
            | homogenized and that dissent will be silenced, without
            | considering the resilience of cultural diversity and the
            | potential for resistance.
           | 
           | Huh, just look at what's happening in North Korea, Russia,
           | Iran, China, and actually in any totalitarian country.
           | Unfortunately, the same thing happens worldwide, but in
           | democratic countries, it is just subtle brainwashing with a
           | "humane" facade. No individual or minority group can
           | withstand the power of the state and a mass-manipulated
           | public.
           | 
           | Bonhoeffer's theory of stupidity: https://www.youtube.com/wat
           | ch?v=ww47bR86wSc&pp=ygUTdGhlb3J5I...
        
           | lordnacho wrote:
           | Great comment.
           | 
           | In a way AI is no different from old school intelligence, aka
           | experts.
           | 
           | "We need to have oversight over what the scientists are
           | researching, so that it's always to the public benefit"
           | 
           | "How do we really know if the academics/engineers/doctors
           | have everyone's interest in mind?"
           | 
           | That kind of thing has been a thought since forever, and
           | politicians of all sorts have had to contend with it.
        
           | jack_riminton wrote:
           | Exactly, society's Prefects rarely have the technical chops
           | to do any of these things so they worm their way up the ranks
            | of influence by networking. Once they're in position they
            | exert control by spreading fear and doing things "for your
            | own good".
        
           | cyanydeez wrote:
            | All you're really describing is why this shouldn't be a non-
            | profit and should just be a government effort.
            | 
            | But I assume, from your language, you'd also object to
            | making this a government utility.
        
             | sethammons wrote:
             | > should just be a government effort
             | 
              | And the controlling party du jour will totally not tweak it
             | to side with their agenda, I'm sure. </s>
        
               | cyanydeez wrote:
               | uh. We're arguing about _who is controlling AI_.
               | 
                | What do you imagine a neutral party does? If you're
                | talking about safety, don't you think there should be
                | someone sitting on a board somewhere, contemplating _what
                | should the AI feed today?_
                | 
                | Seriously, why is a non-profit, or a business, or
                | whatever any different than a government?
                | 
                | I get it: there's all kinds of governments, but now
                | there's all kinds of businesses.
                | 
                | The point of putting it in the government's hands is a
                | de facto acknowledgement that it's a utility.
                | 
                | Take other utilities: any time you give a private org the
                | right to control whether or not you get electricity or
                | water, what's the outcome? Rarely good.
                | 
                | If AI is supposed to help society, that's the purview of
                | the government. That's all. You can imagine it's the
                | Chinese government, or the Russian, or the American, or
                | the Canadian. They're all _going to do it_, that's _going
                | to happen_, and if a business gets there first, _what is
                | the difference if it's such a powerful device_?
                | 
                | I get it, people look dimly on governments, but guess
                | what: they're just as powerful as some organization that
                | gets billions of dollars to affect society. Why is it
                | suddenly a boogeyman?
        
               | sethammons wrote:
               | I find any government to be more of a boogeyman than any
               | private company because the government has the right to
               | violence and companies come and go at a faster rate.
        
               | cyanydeez wrote:
                | Ok, and if Raytheon builds an AI and tells a government
                | "trust us, it's safe", aren't you just letting them
                | create a scapegoat via the government?
                | 
                | Seriously, businesses simply don't have the history that
                | governments do. They're just as capable of violence.
               | 
               | https://utopia.org/guide/crime-controversy-
               | nestles-5-biggest...
               | 
                | All you're identifying is "government has a longer
                | history of violence than businesses".
        
               | kjkjadksj wrote:
               | The municipal utility provider has a right to violence?
               | The park service? Where do you live? Los Angeles during
               | Blade Runner?
        
           | deanCommie wrote:
           | > I'm convinced there is a certain class of people who
           | gravitate to positions of power, like "moderators",
           | (partisan) journalists,
           | 
           | And there is also a class of people that resist all
           | moderation on principle even when it's ultimately for their
           | benefit. See, Americans whenever the FDA brings up any
           | questions of health:
           | 
           | * "Gas Stoves may increase Asthma." -> "Don't you tread on
           | me, you can take my gas stove from my cold dead hands!"
           | 
           | Of course it's ridiculous - we've been through this before
           | with Asbestos, Lead Paint, Seatbelts, even the very idea of
           | the EPA cleaning up the environment. It's not a uniquely
           | American problem, but America tends to attract and offer
           | success to the folks that want to ignore these on principles.
           | 
           | For every Asbestos there is a Plastic Straw Ban which is
           | essentially virtue signalling by the types of folks you
           | mention - meaningless in the grand scheme of things for the
           | stated goal, massive in terms of inconvenience.
           | 
           | But the existence of Plastic Straw Ban does not make
           | Asbestos, CFCs, or Lead Paint any safer.
           | 
           | Likewise, the existence of people that gravitate to positions
           | of power and middle management does not negate the need for
           | actual moderation in dozens of societal scenarios. Online
           | forums, Social Networks, and...well I'm not sure about AI.
           | Because I'm not sure what AI is, it's changing daily. The
            | point is that I don't think it's fair to assume that anyone
            | who is interested in safety and moderation is doing it out
            | of a misguided attempt to pursue power; many are actively
            | trying to protect and improve humanity.
           | 
           | Lastly, your portrayal of journalists as power figures is
           | actively dangerous to the free press. This was never stated
           | this directly until the Trump years - even when FOX News was
           | berating Obama daily for meaningless subjects. When the TRUTH
           | becomes a partisan subject, then reporting on that truth
           | becomes a dangerous activity. Journalists are MOSTLY in the
           | pursuit of truth.
        
           | alebairos wrote:
           | My safety (of my group) is what really matters.
        
           | systemvoltage wrote:
           | Ah, you don't need to go far. Just go to your local HOA
           | meetings.
        
         | blackoil wrote:
         | > If you truly believed that Superhuman AI was near, and it
         | could act with malice, won't you try to slow things down a bit?
         | 
          | No, if OpenAI is reaching the singularity, so are Google,
          | Meta, Baidu, etc., so the proper course of action would be to
          | loop in the NSA/White House. You'd loop in Google, Meta, and
          | MSFT and start mitigation steps. Slowing down OpenAI will hurt
          | the company if the assumption is wrong and won't help if it is
          | true.
         | 
         | I believe this is more a fight of ego and power than principles
         | and direction.
        
           | ragequittah wrote:
           | >Slowing down OpenAI will hurt the company if assumption is
           | wrong and won't help if it is true.
           | 
            | Personally, as I watched the nukes be lobbed, I'd rather not
            | be the person who helped lob them. And I'd hope to god
            | others look at the same problem (a misaligned AI making
            | insane decisions) through the exact same lens. It seems to
            | have worked for nuclear weapons since WW2; one can say we
            | learned a lesson there as a species.
            | 
            | The Russian Stanislav Petrov, who saved the world, comes to
            | mind. "Well, the Americans have done it anyways" could have
            | been the motivation, but he didn't launch. The cost of error
            | was simply too great.
        
           | concordDance wrote:
           | > so proper course of action would be to loop in NSA/White
           | House
           | 
           | Eh? That would be an awful idea. They have no expertise on
            | this, and government institutions like this are misaligned
           | with the rest of humanity by design. E.g. NSA recruits
           | patriots and has many systems, procedures and cultural
           | aspects in place to ensure it keeps up its mission of spying
           | on everyone.
        
             | the_gipsy wrote:
             | And Google, Facebook, MSFT, Apple, are much more
             | misaligned.
        
         | antupis wrote:
          | I bet Team Helen will slowly jump to Anthropic - no drama,
          | and probably no mainstream news will report it - but down
          | the line OpenAI will be a shell of its former self and
          | competitors will catch up.
        
           | tchbnl wrote:
           | With how much of a shitshow this was, I'm not sure Anthropic
           | wants to touch that mess. Wish I was a fly on the wall when
           | the board tried to ask the Anthropic CEO to come back/merge.
        
         | casebash wrote:
         | Have you seen the Center for AI Safety letter? A lot of experts
          | are worried AI could be an x-risk:
         | 
         | https://www.safe.ai/statement-on-ai-risk
        
         | jkaplan wrote:
         | I feel like the "safety" crowd lost the PR battle, in part,
         | because of framing it as "safety" and over-emphasizing on
         | existential risk. Like you say, not that many people truly take
         | that seriously right now.
         | 
         | But even if those types of problems don't surface anytime soon,
         | this wave of AI is almost certainly going to be a powerful,
         | society-altering technology; potentially more powerful than any
         | in decades. We've all seen what can happen when powerful tech
         | is put in the hands of companies and a culture whose only
         | incentives are growth, revenue, and valuation -- the results
         | can be not great. And I'm pretty sure a lot of the general
          | public (and OpenAI staff) care about THAT.
         | 
         | For me, the safety/existential stuff is just one facet of the
         | general problem of trying to align tech companies + their
         | technology with humanity-at-large better than we have been
         | recently. And that's especially important for landscape-
         | altering tech like AI, even if it's not literally existential
         | (although it may be).
        
           | concordDance wrote:
           | > Like you say, not that many people truly take that
           | seriously right now.
           | 
           | Eh? Polls on the matter show widespread public support for a
           | pause due to safety concerns.
        
           | cyanydeez wrote:
           | No one who wants to capitalize on AI appears to take it
            | seriously. Especially given how grey that safety is. I'm not
            | concerned AI is going to nuke humanity; I'm more concerned
            | it'll reinforce racism, bias, and the rest of humanity's
            | irrational activities, because it's _blindly_ using existing
            | history to predict the future.
            | 
            | We've seen it in the past decade in multiple cases. That's
            | safety.
            | 
            | The decision this topic discusses means Business is winning,
            | and businesses will absolutely reinforce the idea that the
            | only thing that matters is that these systems serve their
            | business cases.
           | 
           | That's bad, and unsafe.
        
         | renewiltord wrote:
         | This is what people need to understand. It's just like pro-life
         | people. They don't hate you. They think they're saving lives.
         | These people are just as admirably principled as them and
         | they're just trying to make the world a better place.
        
         | YetAnotherNick wrote:
         | > it seems clear there was a rift between Rapid
         | Commercialization (Team Sam) and Upholding the Original
         | Principles (Team Helen/Ilya)
         | 
          | Is it? Why was the press release worded like that? And why
          | did Ilya come up with two mysterious reasons for why the
          | board fired Sam if he had a clearly better and more
          | defensible reason should this go to court? Also, Adam is
          | pro-commercialization, at least judging by public interviews,
          | no?
          | 
          | It's very easy to construct a story in your head in which one
          | character is greedy, but that doesn't seem to be exactly the
          | case here.
        
         | eslaught wrote:
         | Ok, serious question. If you think the threat is real, how are
         | we not already screwed?
         | 
         | OpenAI is one of half a dozen teams [0] actively working on
         | this problem, all funded by large public companies with lots of
         | money and lots of talent. They made unique contributions, sure.
         | But they're not _that_ far ahead. If they stumble, surely one
         | of the others will take the lead. Or maybe they will anyway,
          | because who's to say where the next major innovation will come
         | from?
         | 
         | So what I don't get about these reactions (allegedly from the
         | board, and expressed here) is, if you interpret the threat as a
         | real one, why are you acting like OpenAI has some infallible
         | lead? This is not an excuse to govern OpenAI poorly, but let's
         | be honest: if the company slows down the most likely outcome by
         | far is that they'll cede the lead to someone else.
         | 
         | [0]: To be clear, there are definitely more. Those are just the
         | _large_ and _public_ teams with existing products within some
          | reasonable margin of OpenAI's quality.
        
           | davedx wrote:
           | I don't know. I think being realistic, only OpenAI and Google
           | have the depth and breadth of expertise to develop general
           | AI.
           | 
            | Most of the new AI startups are one-trick ponies obsessively
            | focused on LLMs. LLMs are only one piece of the puzzle.
        
             | metanonsense wrote:
             | I would add Meta to this list, in particular because Yann
             | LeCun is the most vocal critic of LLM one-ponyism.
        
             | MacsHeadroom wrote:
             | Anthropic is made up of former top OpenAI employees, has
             | similar funding, and has produced similarly capable models
             | on a similar timeline. The Claude series is neck and neck
             | with GPT.
        
           | concordDance wrote:
           | > If you think the threat is real, how are we not already
           | screwed?
           | 
           | That's the current Yudkowsky view. That it's essentially
           | impossible at this point and we're doomed, but we might as
            | well try anyway, as it's more "dignified" to die trying.
           | 
           | I'm a bit more optimistic myself.
        
           | kolinko wrote:
           | The risk/scenario of singularity is that there will be just
           | one winner and they will be able to prevent everyone else
            | from building their own AGI.
        
         | RHSman2 wrote:
         | Money, large amounts, will always win at scale (unfortunately).
        
         | AmericanOP wrote:
         | It is a little amusing that we've crowned OpenAI as the
         | destined mother of AGI long before the little sentient chickens
         | have hatched.
        
         | theonemind wrote:
         | I don't care about AI Safety, but:
         | 
         | https://openai.com/charter
         | 
         | above that in the charter is "Broadly distributed benefits",
         | with details like:
         | 
         | """
         | 
         | Broadly distributed benefits
         | 
         | We commit to use any influence we obtain over AGI's deployment
         | to ensure it is used for the benefit of all, and to avoid
         | enabling uses of AI or AGI that harm humanity or unduly
         | concentrate power.
         | 
         | Our primary fiduciary duty is to humanity. We anticipate
         | needing to marshal substantial resources to fulfill our
         | mission, but will always diligently act to minimize conflicts
         | of interest among our employees and stakeholders that could
         | compromise broad benefit.
         | 
         | """
         | 
          | In that sense, I definitely hate to see rapid commercialization
          | and Microsoft's hands in it. I feel like the only person on HN
          | who actually wanted to see Team Sam lose, although it's pretty
          | clear Team Helen/Ilya didn't have a chance. The org just looks
          | hijacked by SV tech bros to me, but I feel like HN has a blind
          | spot: it either doesn't see that at all, or considers it a
          | good thing if it does.
         | 
         | Although GPT barely looks like the language module of AGI to me
         | and I don't see any way there from here (part of the reason I
         | don't see any safety concern). The big breakthrough here
         | relative to earlier AI research is massive amounts more compute
         | power and a giant pile of data, but it's not doing some kind of
         | truly novel information synthesis at all. It can describe
         | quantum mechanics from a giant pile of data, but I don't think
         | it has a chance of discovering quantum mechanics, and I don't
         | think that's just because it can't see, hear, etc., but a
         | limitation of the kind of information manipulation it's doing.
         | It looks impressive because it's reflecting our own
         | intelligence back at us.
        
         | two_in_one wrote:
         | > there was a rift between Rapid Commercialization (Team Sam)
         | and Upholding the Original Principles
         | 
          | Seems very unlikely; the board could have communicated that.
          | Instead they invented some BS reasons, which nobody took as
          | the truth. It looks more like something personal and a power
          | grab. The staff voted for monetization; people en masse don't
          | care much about high principles. Also, nobody wants to work
          | under inadequate leadership. Looks like Ilya lost his bet, or
          | is Sam going to keep him around?
        
         | nopinsight wrote:
         | Both sides of the rift in fact care a great deal about AI
         | Safety. Sam himself helped draft the OpenAI charter and
         | structure its governance which focuses on AI Safety and
          | benefits to humanity. The main reason for the disagreement is
          | the approach they deem best:
          | 
          | * Sam and Greg appear to believe OpenAI should move toward AGI
          | as fast as possible because the longer they wait, the more
          | likely it is that powerful AGI systems will proliferate due to
          | GPU overhang. Why? With more computational power at one's
          | disposal, it's easier to find an algorithm, even a suboptimal
          | one, to train an AGI. (A toy sketch of this overhang argument
          | is at the end of this comment.)
         | 
          | As a glimpse of how an AI can be harmful, this paper explores
         | how LLMs can be used to aid in Large-Scale Biological Attacks
         | https://www.rand.org/pubs/research_reports/RRA2977-1.html?
         | 
          | What if dozens of other groups became armed with the means to
          | carry out an attack like this one?
         | https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack
         | 
         | We know that there're quite a few malicious human groups who
         | would use any means necessary to destroy another group, even at
         | a serious cost to themselves. So the widespread availability of
         | unmonitored AGI would be quite troublesome.
         | 
         | * Helen and Ilya might believe it's better to slow down AGI
         | development until we find technical means to deeply align an
         | AGI with humanity first. This July, OpenAI started the
         | Superalignment team with Ilya as a co-lead:
         | 
         | https://openai.com/blog/introducing-superalignment
         | 
          | But no one anywhere has found a good technique to ensure
          | alignment yet, and it appears OpenAI's newest internal model
          | has made a significant capability leap, which could have led
          | Ilya to make
         | the decision he did. (Sam revealed during the APEC Summit that
         | he observed the advance just a couple of weeks ago and it was
         | only the fourth time he saw that kind of leap.)
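          | 
          | Here is that toy sketch of the GPU-overhang argument (every
          | number is invented purely for illustration, not a real
          | training cost): if hardware keeps getting cheaper while
          | frontier labs wait, replicating today's frontier training run
          | falls within reach of ever more actors.
          | 
          |   # Made-up illustration values, not real figures.
          |   FRONTIER_COST = 100e6   # $ to train a frontier model today
          |   HALVING_YEARS = 2.0     # assumed compute-cost halving time
          | 
          |   def cost_after(years: float) -> float:
          |       # Cost to replicate today's run after `years` of
          |       # hardware/algorithmic progress.
          |       return FRONTIER_COST * 0.5 ** (years / HALVING_YEARS)
          | 
          |   for y in range(0, 11, 2):
          |       print(f"year {y:2d}: ~${cost_after(y) / 1e6:,.1f}M")
          | 
          | After a decade the same run costs ~$3M - affordable to far
          | more (and possibly less careful) groups, which is the claimed
          | downside of waiting.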
        
           | concordDance wrote:
           | So Sam wants to make AGI _without_ working to be sure it
            | doesn't have goals higher than the preservation of human
           | value?!
           | 
           | I can't believe that
        
             | nopinsight wrote:
             | No, I didn't say that. They formed the Superalignment team
             | with Ilya as a co-lead (and Sam's approval) for that.
             | 
             | https://openai.com/blog/introducing-superalignment
             | 
             | I presume the current alignment approach is sufficient for
             | the AI they make available to others and, in any event,
              | GPT-_n_ is within OpenAI's control.
        
           | gorbypark wrote:
           | Honest question, but in your example above of Sam and Greg
           | racing towards AGI as fast as possible in order to head off
            | proliferation, what's the end goal upon getting there? Short
            | of capturing the entire world's economy with an ASI, thus
           | preventing anyone else from developing one, I don't see how
           | this works. Just because OpenAI (or whoever) wins the initial
           | race, it doesn't seem obvious to me that all development on
           | other AGIs stops.
        
             | nopinsight wrote:
             | I do not know exactly what they plan to do. But here's my
             | thought...
             | 
              | Using a near-AGI to help align an ASI, then using the ASI to
             | help prevent the development of unaligned AGI/ASI could be
             | a means to a safer world.
        
             | efficax wrote:
             | part of the fanaticism here is that the first one to get an
             | AGI wins because they can use its powerful intelligence to
             | overcome every competitor and shut them down. they're
             | living in their own sci fi novel
        
           | zerohalo wrote:
           | > Both sides of the rift in fact care a great deal about AI
           | Safety.
           | 
            | I disagree. Yes, Sam may have cared when OpenAI was founded
            | (unless it was just a ploy), but certainly now it's clear
            | that the big companies are in a race to the top and safety or
           | guardrails are mostly irrelevant.
           | 
           | The primary reason that the Anthropic team left OpenAI was
           | over safety concerns.
        
         | _fizz_buzz_ wrote:
         | I am still a bit puzzled that it is so easy to turn a non-
          | profit into a for-profit company. I am sure everything they did
          | is legal, but it feels like it shouldn't be. Could Medecins
          | Sans Frontieres take in donations and then use that money to
          | start a for-profit hospital for plastic surgery? And the
          | profits wouldn't even go back to MSF; instead, somehow,
          | private investors would get the profits. The whole construct
         | just seems wrong.
        
           | IanCal wrote:
           | Well, if it aligned with their goals, sure I think.
           | 
           | Let's make the situation a little different. Could MSF pay a
           | private surgery with investors to perform reconstruction for
           | someone?
           | 
           | Could they pay the surgery to perform some amount of work
           | they deem aligns with their charter?
           | 
           | Could they invest in the surgery under the condition that
           | they have some control over the practices there? (Edit - e.g.
           | perform Y surgeries, only perform from a set of
           | reconstructive ones, patients need to be approved as in need
           | by a board, etc)
           | 
           | Raising private investment allows a non profit to shift cost
           | and risk to other entities.
           | 
           | The problem really only comes when the structure doesn't
            | align with the intended goals - which is an issue distinct
            | from the structure itself, just something non-profits can do.
        
             | framapotari wrote:
             | The non-profit wasn't raising private investment.
        
               | IanCal wrote:
               | Nothing I've said suggests that or requires that.
        
               | framapotari wrote:
               | Apologies, I mistook this:
               | 
               | "Raising private investment allows a non profit to shift
               | cost and risk to other entities."
               | 
               | for a suggestion of that.
        
           | ah765 wrote:
           | I think it actually isn't that easy. Compared to your
           | example, the difference is that OpenAI's for-profit is
           | getting outside money from Microsoft, not money from non-
           | profit OpenAI. Non-profit OpenAI is basically dealing with
            | for-profit OpenAI as an external partner that happens to be
           | aligned with their interests, paying the expensive bills and
           | compute, while the non-profit can hold on to the IP.
           | 
           | You might be able to imagine a world where there was an
           | external company that did the same thing as for-profit
           | OpenAI, and OpenAI nonprofit partnered with them in order to
           | get their AI ideas implemented (for free). OpenAI nonprofit
           | is basically getting a good deal.
           | 
           | MSF could similarly create an external for-profit hospital,
           | funded by external investors. The important thing is that the
           | nonprofit (donated, tax-free) money doesn't flow into the
           | forprofit section.
           | 
           | Of course, there's a lot of sketchiness in practice, which we
           | can see in this situation with Microsoft influencing the
           | direction of nonprofit OpenAI even though it shouldn't be. I
           | think there would have been real legal issues if the
           | Microsoft deal had continued.
        
             | _fizz_buzz_ wrote:
             | > The important thing is that the nonprofit (donated, tax-
             | free) money doesn't flow into the forprofit section.
             | 
             | I am sure that is true. But the for-profit uses IP that was
              | developed inside of the non-profit with (presumably) tax-
              | deductible donations. That IP should be valued somehow.
              | As I said, I am sure they were somehow able to structure
              | it in a way that is legal, but it has an illegal feel to
              | it.
        
           | stef25 wrote:
           | Not sure if you're asking a serious question about MSF but
           | it's interesting anyways - when these types of orgs are
           | fundraising for a specific campaign, say Darfur, then they
           | can NOT use that money for any other campaign, say for ex
           | Turkey earthquake.
           | 
           | That's why they'll sometimes tell you to stop donating.
           | That's here in EU at least (source is a relative who
           | volunteers for such an org).
        
             | _fizz_buzz_ wrote:
             | Not sure what your point is, but you can make a donation to
             | MSF that is not tied to any specific cause.
        
         | rurban wrote:
         | Team Helen seems to be CIA and Military, if I glance over their
         | safety paper. Controlling the narrative, not the damage.
        
         | shrikant wrote:
         | > Upholding the Original Principles [of AI]
         | 
         | There's a UtopAI / utopia joke in there somewhere, was that
         | intentional on your part?
        
         | krisoft wrote:
         | > Honestly, I myself can't take the threat seriously. But, I do
         | want to understand it more deeply than before.
         | 
         | I very much recommend reading the book "Superintelligence:
          | Paths, Dangers, Strategies" by Nick Bostrom.
         | 
          | It is a seminal work which provides a great introduction to
         | these ideas and concepts.
         | 
          | I found myself in the same boat as you. I was seeing
          | otherwise intelligent and rational people worry about this
          | "fairy tale" of some AI uprising. Reading that book gave me an
          | appreciation of the idea as a serious intellectual exercise.
          | 
          | I still don't agree with everything contained in the book. And
          | I definitely don't agree with everything the AI doomsayers
          | write, but I believe if more people read it, that would elevate
         | the discourse. Instead of rehashing the basics again and again
         | we could build on them.
        
           | Solvency wrote:
           | Who needs a book to understand the crazy overwhelming scale
           | at which AI can dictate even online
            | news/truth/discourse/misinformation/propaganda? And that's
           | just barely the beginning.
        
             | krisoft wrote:
             | Not sure if you are sarcastic or not. :) Let's assume you
             | are not:
             | 
              | The cool thing is that it doesn't only talk about AIs. It
              | talks about a more general concept it calls a
              | superintelligence. It has a definition but I recommend you
              | read the book for it. :) AIs are just one of the few
              | enumerated possible implementations of a superintelligence.
              | 
              | Another type is, for example, corporations. This is a
              | useful perspective because it lets us recognise that our
              | attempts to control AIs are not a new thing. We have the
              | same principal-agent control problem in many other parts of
              | our lives. How do you know the company you invest in has
              | interests which align with yours? How do you know that
              | politicians and parties you vote for represent your
              | interests? How do you know your lawyer/accountant/doctor
              | has your interests at heart? (Not all of these are
              | superintelligences, but you get the gist.)
        
               | cyanydeez wrote:
               | I wonder how much this is connected to the "effective
               | altruism" movement which seems to project this idea that
               | the "ends justify the means" in a very complex matter,
               | where it suggests such badly formulated ideas like "If we
               | invest in oil companies, we can use that investment to
               | fight climate change".
               | 
               | I'd sayu the AI safety problem as a whole is similar to
               | the safety problem of eugenics: Just because you know
               | what the "goal" of some isolated system is, that does not
               | mean you know what the outcome is of implementing that
               | goal on a broad scale.
               | 
               | So OpenAI has the same problem: They definitely know what
               | the goal is, but they're not prepared _in any meaningful
               | sense_ for what the broadscale outcome is.
               | 
               | If you really care about AI safety, you'd be putting it
               | under government control as utility, like everything
               | else.
               | 
               | That's all. That's why government exists.
        
               | krisoft wrote:
               | > I'd sayu the AI safety problem as a whole is similar to
               | the safety problem of eugenics
               | 
               | And I'd sayu should read the book so we can have a nice
               | chat about it. Making wild guesses and assumptions is not
               | really useful.
               | 
               | > If you really care about AI safety, you'd be putting it
               | under government control as utility, like everything
               | else.
               | 
               | This is a bit jumbled. How do you think "control as
               | utility" would help? What would it help with?
        
         | ah765 wrote:
         | One funny thing about this mess is that "Team Helen" has never
         | mentioned anything about safety, and Emmett said "The board did
          | _not_ remove Sam over any specific disagreement on safety".
         | 
         | The reason everyone thinks it's about safety seems largely
         | because a lot of e/acc people on Twitter keep bringing it up as
         | a strawman.
         | 
         | Of course, it might end up that it really was about safety in
         | the end, but for now I still haven't seen any evidence. The
         | story about Sam trying to get board control and the board
         | retaliating seems more plausible given what's actually
         | happened.
        
           | rcMgD2BwE72F wrote:
           | >The story about Sam trying to get board control and the
           | board retaliating seems more plausible given what's actually
           | happened.
           | 
           | What story? Any link?
        
         | mise_en_place wrote:
         | A board still has a fiduciary duty to its shareholders. It's
         | materially irrelevant if those shareholders are of a public or
         | private entity, or whether the company in question is a non-
         | profit or for-profit. Laws mean something, and selective
         | enforcement will only further the decay of the rule of law in
         | the West.
        
         | dlkf wrote:
         | > I know it's easy to ridicule the sheer stupidity the board
         | acted with (and justifiably so), but take a moment to think of
         | the other side. If you truly believed that Superhuman AI was
         | near, and it could act with malice, won't you try to slow
         | things down a bit?
         | 
         | The real "sheer stupidity" is this very belief.
        
         | sampo wrote:
         | > If you truly believed that Superhuman AI was near, and it
         | could act with malice, won't you try to slow things down a bit?
         | 
          | In the 1990s and the 00s, it was not too uncommon for anti-GMO
         | environmental activist / ecoterrorist groups to firebomb
         | research facilities and to enter farms and fields to destroy
          | planted GMO plants. The Earth Liberation Front was only one
          | such activist group [1].
         | 
         | We have yet to see even one bombing of an AI research lab. If
          | people really are afraid of AIs, they at least fear them more
          | in the abstract and are not employing the tactics of more
         | traditional activist movements.
         | 
         | [1]
         | https://en.wikipedia.org/wiki/Earth_Liberation_Front#Notable...
        
           | concordDance wrote:
           | It's mostly that it's a can of worms no one wants to open.
            | Very much a last resort, as it's very tricky to use
            | uncoordinated violence effectively (just killing Sam, LeCun,
            | and Greg doesn't do much to move the needle, and then
            | everyone armors up) and very hard to coordinate violence
           | without a leak.
        
         | pk-protect-ai wrote:
         | > Honestly, I myself can't take the threat seriously. But, I do
         | want to understand it more deeply than before.
         | 
         | I believe this position reflects the thoughts of the majority
         | of AI researchers, including myself. It is concerning that we
         | do not fully understand something as promising and potentially
          | dangerous as AI. I'm actually on Ilya's side; what is
          | happening now is that his attempt to uphold the original
          | OpenAI principles is being labeled a "coup".
        
         | lewhoo wrote:
         | _I think only a minority of the general public truly cares
         | about AI Safety_
         | 
         | That doesn't matter that much. If your analysis is correct then
         | it means a (tiny) minority of OpenAI cares about AI safety. I
         | hope this isn't the case.
        
         | soci wrote:
        | The Technology Review article mentioned in the parent's first
         | paragraph is the most insightful piece of content I've read
         | about the tensions inside OpenAI.
        
         | wouldbecouldbe wrote:
         | Would have been interesting if they appointed a co-ceo. That
         | still might be the way to go.
        
         | cyanydeez wrote:
          | I think your analysis is missing the key problem: Business
         | interests.
         | 
          | The public doesn't factor into what's happening here. There
          | are people using ChatGPT for real "business value", and
          | _that_ is
         | what was threatened.
         | 
         | It's clear Business Interests could not be stopped.
        
         | nashashmi wrote:
          | Helen could have won. She just had to publicly humiliate Sam.
          | She didn't. Employees took over like a mob. Investors
          | pressured the board. The board is out. Sam is in. Employees
          | look like they have a say. But really, Sam has the say. And
          | MSFT is the kingmaker.
        
         | gandutraveler wrote:
         | Honestly I feel that we will never be able to preemptively
         | build safety without encountering the real risk or threat.
         | 
         | Incrementally improving AI capabilities is the only way to do
         | that.
        
         | qudat wrote:
         | > If you truly believed that Superhuman AI was near, and it
         | could act with malice, won't you try to slow things down a bit?
         | 
         | No, because it is an effort in futility. We are evolving into
         | extinction and there is nothing we can do about it.
         | https://bower.sh/in-love-with-a-ghost
        
       | ayakang31415 wrote:
       | Suppose everything settles and they have the board properly in
        | place. I know such a board has a fiduciary responsibility to
        | make sure the organization is headed in the right direction
        | based on its goals and mission. For a private company, the
        | mission is very clear, but for non-profit orgs like OpenAI,
        | what's their mission specifically? It vaguely claims to better
        | humanity, but what does that entail exactly with regard to what
        | they do in the AI space?
        
       | ulfw wrote:
        | One huge shitshow that proved the immaturity of OpenAI. But hey,
        | at least now every soul on the planet knows Sam Altman. So there's
       | that.
        
       | underseacables wrote:
       | I feel like this was all such a waste of time, energy, and
       | probably money.
        
         | low_tech_punk wrote:
         | Adding up all the salary hours spent by people browsing
         | Twitter, they could have finished training GPT-5
        
       | randomsoutham wrote:
        | It's incredible how the company behind one of the most promising
       | technologies out there was about to fail because of bad politics.
       | 
        | Seems likely that it won't be run by OpenAI for too long. MS
        | has a tendency to break up acquisitions, so this gives me hope.
        
       | anigbrowl wrote:
       | Apparently the moon changes size when you snipe it in OpenAI as
       | well
        
       | dukeofdoom wrote:
       | Who is Sam?
        
         | system2 wrote:
         | It was Microsoft's voice generation tool from the 90s. You can
         | play with it here:
         | 
         | https://www.tetyys.com/SAPI4/
        
           | dukeofdoom wrote:
           | no really who is Sam, and how did he get here? Do u know?
        
             | system2 wrote:
             | How he became a CEO is a common story. Why this drama
             | happened is still unknown to everyone.
        
       | sinuhe69 wrote:
       | So basically somebody initiated a coup, then the key figure of
       | the coup regretted it openly, and the fallout was that OpenAI
        | will become a 100% commercial entity, fully open for Microsoft
        | to take over?
        | 
        | If that's not fertile soil for conspiracy theories, I don't know
        | what is ;)
        
       | sashank_1509 wrote:
        | Looks to me like one pro-board member in Adam D'Angelo; one pro-
        | Sam member in Brett Taylor, since they've been pushing for him
        | since Sunday, so I'm assuming Sam and the rest of OpenAI
        | leadership really like him; and one neutral in Larry Summers,
        | who has never worked in AI and is just a well-respected name in
        | general. I'm sure Larry
       | was extensively interviewed and reference checked by both sides
       | of this power struggle before they agreed to compromise on him.
       | 
       | Interesting to see how the board evolves from this. From what I
       | know broadly there were 2 factions, the faction that thought Sam
       | was going too fast which fired him and the faction that thought
       | Sam's trajectory was fine (which included Sam and Greg). Now
       | there's a balance on the board and subsequent hires can tip it
       | one way or the other. Unfortunately a divided board rarely lasts
       | and one faction will eventually win out, I think Sam's faction
       | will eventually win out but we'll have to wait and see.
       | 
       | One of the saddest results of this drama was Greg being ousted
       | from OpenAI. Greg apart from being brilliant was someone who
       | regularly 80-90 hour work weeks into OpenAI, and you could truly
       | say he dedicated a good chunk of his life into building this
       | organization. And he was forced to resign by a board who probably
       | never put a 90 hour work week in their entire life, much less
       | into building OpenAI. A slap on the face. I don't care what the
       | board's reasoning was but when their actions caused employees who
       | dedicated their lives to building the organization resign
       | (especially when most of them played no part at all into building
       | this amazing organization), they had to go in disgrace. I doubt
       | any of them will ever reach career highs higher than being on
       | OpenAI's board, and the world's better off for it.
       | 
       | P.S., Ilya of course is an exception and not included in my above
       | condemnation. He also notably reversed his position when he saw
       | OpenAI was being killed by his actions.
        
         | mcmcmc wrote:
         | Larry Summers is the scary pick here. His views on banking
         | deregulation led to the GFC, and he's had several controversies
         | over racist and sexist positions. Plus he's an old pal of
         | Epstein and made several trips to his island.
        
           | Joeri wrote:
           | I assume Summers is there as a politically connected
           | operative, to make sure OpenAI remains influential in
           | Washington.
        
         | hackerlight wrote:
         | Greg was only forced to resign from his board seat, not his
         | job.
        
       | eclectic29 wrote:
       | The media and the VCs are treating Sam like some hero and savior
       | of AI. I'm not getting it. What has he done in life and/or AI to
       | deserve so much respect and admiration? Why don't top researchers
       | and scientists get equivalent (if not more) respect, admiration
        | and support? It looks like one should strive to become a
        | product manager, not an engineer or a scientist.
        
         | auggierose wrote:
         | If you are driven by outside validation, definitely!
        
         | fidotron wrote:
         | Unsurprisingly VCs view VCs as the highest form of life, and
         | product managers are temporary positions taken on the way to
         | ascending to VC status.
         | 
          | I have said recently elsewhere that SV now devalues builders,
          | but it is not just VCs/sales/product; a huge amount of it is
          | devops and SRE departments. They make a huge amount of noise
          | about how all development should be free and the value is in
          | deploying and operating the developed artifacts. Anyone
          | watching from outside would reasonably conclude that
          | developers have no self-respect, and that these are hardly
          | aspirational positions.
        
           | drawkbox wrote:
            | Developers are clearly the weak link today. They have given
            | up all power over product, which is sad, and it's why
            | software sucks so bad. It pains the soul that value
            | creators have let the value extractors run the show,
            | because it is now a reality-TV, circus-like market where
            | power is consolidating.
            | 
            | Developers and value creators with power act like an anti-
            | trust check on consolidation and concentration, but they
            | have instead turned towards authoritarianism rather than
            | anti-authoritarianism. What happened? Many think they can
            | still get rich; those days are over because they gave up
            | power. Now quality of life is worse for everyone, value
            | creators included. Everyone loses.
        
             | dinvlad wrote:
              | I suspect it's because they're happy with the SV salaries
              | they got. They think it's actually a good deal for them,
              | and a signal that they're "valued".
        
             | bluecheese452 wrote:
              | Developers spend all day building. PMs spend all day
              | playing politics. It is no surprise PMs get all the power.
        
         | dacryn wrote:
          | He tells a good story, whether or not it's true or has any
          | scientific foundation.
          | 
          | He tells others what they like to hear, and manages to make
          | money out of it.
        
           | ensocode wrote:
           | this - a good, charismatic salesman
        
           | 93po wrote:
            | Storytelling is the fabric of society in general. It's why
            | paper money works.
        
           | cableshaft wrote:
           | Half of being a good CEO is telling a good story, so that's
           | not surprising.
        
             | matwood wrote:
              | Half? 90% of what a good CEO does is tell the story of
              | why the company is important to its customers and the
              | market it serves. This story drives sales, motivates
              | people internally, and makes the company a place people
              | want to work.
        
         | nbanks wrote:
         | Sam Altman has done in four days what it took Steve Jobs 11
         | years to do! I'm impressed.
        
           | eclectic29 wrote:
           | I'm sorry, impressed by what?
        
             | nix-zarathustra wrote:
             | Steve Jobs got fired from Apple, but was rehired 11 years
             | later.
        
               | abkolan wrote:
               | That might be selection bias, in those 11 years Jobs
               | built NeXT.
               | 
               | A lot of Apple's engineering and product line back then
               | owe their provenance and lineage to NeXT.
        
               | Talanes wrote:
               | Selection bias for what? It was an anecdote, there's no
               | attempt to infer data about a larger population.
        
         | MichaelRazum wrote:
          | You could say the same about any person at the top. In
          | general, CEOs do not do research. Still, they are critical
          | for success.
          | 
          | By the way, AI scientists do get a lot of respect and
          | admiration - see Ilya, for example.
        
           | seydor wrote:
            | He was very well known long before OpenAI.
        
         | ben_w wrote:
         | He says nice things about his team (and even about his critics)
         | when in public.
         | 
         | But my reading of this drama is that the board were seen as
         | literally insane, not that Altman was seen as spectacularly
         | heroic or an underdog.
        
           | stingraycharles wrote:
            | My reading of all this is that the board is both
            | incompetent _and_ has a number of massive conflicts of
            | interest.
            | 
            | What I don't understand is why they were allowed to stay on
            | the board with all these conflicts of interest, all the
            | while having no (financial) stake in OpenAI. One of the
            | board members even openly admitted that she considered
            | destroying OpenAI a successful outcome of her duty as a
            | board member.
        
             | Sebb767 wrote:
              | > One of the board members even openly admitted that she
              | considered destroying OpenAI a successful outcome of her
              | duty as a board member.
             | 
             | I don't see how this particular statement underscores your
             | point. OpenAI is a non-profit with the declared goal of
             | making AI safe and useful for everyone; if it fails to
             | reach that or even actively subverts that goal, destroying
             | the company does seem like the ethical action.
        
               | DebtDeflation wrote:
               | This just underscores the absurdity of their corporate
               | structure. AI research requires expensive researchers and
               | expensive GPUs. Investors funding the research program
               | don't want to be beholden to some non-profit parent
               | organization run by a small board of nobodies who think
               | their position gives them the power to destroy the whole
               | thing if they believe it's straying from its utopian
               | mission.
        
               | ethanbond wrote:
                | They don't "think" that. It _does_ do that, and it does
                | it _by design_, exactly because as you approach a
                | technology as powerful as AI there will be strong
                | commercial incentives to capture its value creation.
                | 
                | Gee whiz, almost... exactly like what is happening?
        
               | smegger001 wrote:
                | Because destroying OpenAI wouldn't make AI safe; it
                | would just remove anyone working on alignment from
                | having an influence on it. Microsoft and others may not
                | care about making it benevolent, but they go along with
                | it because OpenAI is the market leader.
        
             | serial_dev wrote:
              | It's probably not easy (practically impossible, if you
              | ask me) to find people who are both capable of leading an
              | AI company at the scale of OpenAI _and_ have zero
              | conflicts of interest. Former colleagues, friends,
              | investments, advisory roles, personal beefs with people
              | in the industry, pitches they have heard, insider
              | knowledge they had access to, previous academic research
              | pushing an agenda, etc.
              | 
              | If both are not possible, I'd also rather compromise on
              | the "conflicts of interest" part than on the member's
              | competency.
        
               | cableshaft wrote:
               | I volunteer as tribute.
               | 
               | I don't have much in the way of credentials (I took one
               | class on A.I. in college and have only dabbled in it
               | since, and I work on systems that don't need to scale
               | anywhere near as much as ChatGPT does, and while I've
               | been an early startup employee a couple of times I've
               | never run a company), but based on the past week I think
               | I'd do a better job, and can fill in the gaps as best as
               | I can after the fact.
               | 
               | And I don't have any conflicts of interest. I'm a total
               | outsider, I don't have any of that shit you mentioned.
               | 
               | So yeah, vote for me, or whatever.
               | 
                | Anyway, my point is I'm sure there are actually quite a
                | few people who could likely do a better job and don't
                | have a conflict of interest (at least not one as
                | obvious as investing in a direct competitor); they're
                | just not already part of the elite circles that would
                | pretty much be necessary to even get on these people's
                | radar in order to be considered in the first place. I
                | don't really mean me, I'm sure there are other better
                | candidates.
               | 
               | But then they wouldn't have the cachet of 'Oh, that guy
               | co-founded Twitch. That for-profit company is successful,
               | that must mean he'd do a good job! (at running a non-
               | profit company that's actively trying to bring about AGI
               | that will probably simultaneously benefit and hurt the
               | lives of millions of people)'.
        
           | bnralt wrote:
            | Right. At least some of the board members took issue with
            | ChatGPT being released at all, and wanted more to be kept
            | from the public. For the people who use these tools every
            | day, it shouldn't be surprising that Altman was viewed as
            | the better choice.
        
         | TrackerFF wrote:
         | It's the cult of the CEO in action.
        
         | _giorgio_ wrote:
          | When it comes to OpenAI, Ilya Sutskever and Andrej Karpathy
          | are much better known than Sam Altman.
          | 
          | I'm sure that if Ilya had been removed from his role, the
          | revolt movement would have been similar.
          | 
          | I only started to like Sam when he was removed from his
          | position.
        
           | gbalduzzi wrote:
            | Wasn't Ilya removed from the new, current board?
        
             | _giorgio_ wrote:
              | It's only a temporary board.
              | 
              | Furthermore, being removed from the board while keeping a
              | role as chief scientist is different from being fired as
              | CEO and having to leave the company.
        
         | seydor wrote:
         | the media is the media
        
         | serial_dev wrote:
         | > It looks like one should strive to become product manager,
         | not an engineer or a scientist.
         | 
          | In my experience, product people who know what they are doing
          | have a huge impact on the success of a company, product, or
          | service. They also point engineering efforts in the right
          | direction, which in turn also motivates engineers.
          | 
          | I have seen good product people leave and completely destroy
          | a team; I've never seen that happen with a good engineer or
          | individual contributor, no matter how great they were.
        
           | jpgvm wrote:
           | Depends why/how they left.
           | 
           | I have seen firing a great/respected/natural leader engineer
           | result in pretty much the whole engineering team just up and
           | leaving.
        
             | cableshaft wrote:
             | No see, it doesn't matter, engineers are all cogs and
             | easily replaceable. I'm sure they just dialed the engineer
             | center and ordered a few replacements and they started 24
             | hours later and were doing just as good of a job the next
             | day. /s
        
             | serial_dev wrote:
              | Yes, that matches my experience as well; that's why I
              | mentioned "individual contributors", maybe it wasn't
              | clear.
              | 
              | It's different with engineering managers (or team leads,
              | lead engineers, whatever you want to call it). When they
              | leave, that's usually a bad sign.
              | 
              | Though also, quite often when the engineering leaders
              | leave, I think of it as a canary in the coal mine: they
              | are closer to the business, they deal more with business
              | people, so they are the first to realize that "working
              | with these people on these services is pointless, time to
              | jump ship".
        
           | Draiken wrote:
            | Interesting. I've had the opposite experience: the whole
            | product suite having no idea what the product even is or
            | where it should go, making bad decisions over and over,
            | excusing their bad choices with "data" and finally, as
            | usual, failing upwards, eventually moving to bigger
            | startups.
            | 
            | I have yet to find a product person who was not involved in
            | the inception of the idea and is actually good (hell, even
            | some founders fail spectacularly here).
            | 
            | Perhaps I'm simply unlucky.
        
             | cableshaft wrote:
             | At a consulting firm I worked with a product guy who I
             | thought was very good, and was on the project pretty much
             | from the beginning (maybe the beginning, not sure. He
             | predated me by well over a year at least). He was extremely
             | knowledgeable on the business side and their needs and
             | spent a lot of time communicating with them to get a good
             | feel of where the product needed to go.
             | 
             | But he was also technical enough to have a pretty good feel
             | for the complexity of tasks, and would sometimes jump in to
             | help figure out some docker configuration issues or
             | whatever problems we were having (mostly devops related) so
             | the devs could focus on working on the application code. We
             | were also a pretty small team, only a few developers, so
             | that was beneficial.
             | 
             | He did such a good job that the business eventually reached
             | out to him and hired him directly. He's now head of two of
             | their product lines (one of them being the product I worked
             | on).
             | 
             | But that's pretty much it. I can't think of any other
             | product people I could say such positive things about.
        
               | cornel_io wrote:
               | It's rare, and that makes it a spectacular leg up when
               | you have a person who is great at it.
        
             | serial_dev wrote:
              | In my comment, the emphasis is definitely on "product
              | people _who know what they are doing_" and "_good_
              | product people".
              | 
              | Of course, if the product suite is clueless, nobody is
              | going to miss them; usually it's better to have no
              | dedicated product people than to have clueless product
              | people.
        
           | Kinrany wrote:
           | Good engineers create systems that can survive their
           | departure.
        
         | hdivider wrote:
         | This, 100%.
         | 
         | Sam pontificated about fusion power, even here on HN. Beyond
         | investing in Helion, what did he do? Worldcoin. Tempting
         | impoverished people to give up biometric data in exchange for
         | some crypto. And serving as the face of mass-market consumer
         | AI. Clearly that's more cool, and more attractive to VCs.
         | 
         | Meanwhile, what have fusion scientists and engineers done? They
         | kept on going, including by developing ML systems for pure
         | technological effect. Day after day. They got to a breakthrough
         | just this year. Scientists and engineers in national labs,
         | universities, and elsewhere show what a real commitment to
         | technological progress looks like.
        
           | otteromkram wrote:
           | > This, 100%.
           | 
           | When do new HN users get the ability to downvote?
        
             | bryancoxwell wrote:
             | 501 karma.
        
             | deely3 wrote:
              | Depends on karma and other hidden parameters.
        
             | qup wrote:
             | You're on pace for about two years in
        
           | robertlagrant wrote:
           | > Scientists and engineers in national labs, universities,
           | and elsewhere show what a real commitment to technological
           | progress looks like.
           | 
           | And everywhere. You've only named public institutions for
           | some reason, but a lot of progress happens in the private
           | sector. And that demonstrates real commitment, because
           | they're not spending other people's money.
        
             | walthamstow wrote:
             | If the ZIRP era has taught us anything, it's that private
             | companies can spray other people's money up the wall just
             | as well as anyone
        
               | robertlagrant wrote:
               | It's the (partial) owners' money. The (partial) owners
               | might be VC firms, but they are risking their own money.
        
           | baking wrote:
           | He is the Executive Chairman of Helion Energy so it is not
           | just a passive investment.
           | 
           | That said, I wish Helion wasn't so paranoid about Chinese
           | copycats and was more open about their tech. I can't help but
           | feel Sam Altman is at least partly responsible for that.
        
         | tim333 wrote:
          | I don't think the media are treating him as a "hero and
          | savior of AI". However, OpenAI and ChatGPT have undoubtedly
          | been successful, and he seems popular with his people. It's
          | human nature to treat the top person as the figurehead for an
          | organisation, since we and journalists don't have the time or
          | info to break down what each of the hundreds of employees
          | contributed.
          | 
          | I actually get the impression from the media that he's a bit
          | shifty and sales-orientated, but seems effective at getting
          | stuff done.
        
           | ethbr1 wrote:
           | > _but seems effective at getting stuff done._
           | 
           | Sales usually is. It's the consequences, post-sale, that
           | they're usually less effective at dealing with.
        
         | logicchains wrote:
         | >Why don't top researchers and scientists get equivalent (if
         | not more) respect, admiration and support
         | 
          | Google's full of top researchers and scientists who are at
          | least as good as those at OpenAI; Sam's the reason OpenAI has
          | a successful, useful product (GPT-4), while Google has the
          | far less effective, more lobotomized Bard.
        
         | gumballindie wrote:
         | > What has he done in life and/or AI to deserve so much respect
         | and admiration?
         | 
         | He's serving the right people by doing their bidding.
        
         | s1artibartfast wrote:
          | Altman seems to be an extraordinary leader, motivator, and
          | strategist. This is clear from the fact that 90% of the
          | company was willing to walk out over his retention. Just
          | think about that for a minute.
        
           | csunbird wrote:
            | No, 90% of the employees were scared that their million-
            | dollar salaries were going away along with Sam Altman.
        
             | __loam wrote:
              | Yeah, it should be extremely obvious that the reason most
              | of the employees were willing to walk is that they've
              | hitched their wagons to Altman. The board of OpenAI put
              | the presumed payday all of them were anticipating in
              | jeopardy. Not all of us live in this god-forsaken place
              | to "work with cool tech".
        
             | JansjoFromIkea wrote:
             | stock options were probably the focus rather than the
             | salaries
        
               | mousetree wrote:
                | There was about to be a secondary stock purchase by
                | Thrive where employees could cash out their shares.
                | That likely would've fallen apart if the board won the
                | day. Employees had a massive incentive to get Sam back.
        
             | s1artibartfast wrote:
              | Sounds like a good way to secure your position as leader.
              | 
              | My job also secures my loyalty and support with a
              | financial incentive. It is probably the most common way
              | for a business leader to align interests.
              | 
              | Kings reward dukes, and generals pay soldiers.
              | Politicians trade policies. That doesn't mean they aren't
              | leaders.
        
           | asimpletune wrote:
           | There's also the alternative explanation that they feel their
           | financial situation is improved by him being there.
        
             | cyanydeez wrote:
              | Almost every decision here, except the board's, can be
              | accounted for by financial incentives.
              | 
              | Especially putting Larry Summers on the board with this
              | tweet.
        
             | gizmo wrote:
              | Yes yes, but that doesn't change the fact that Sam
              | positioned himself to be unfireable. The board took their
              | best shot, and now the board is (mostly) gone and Sam is
              | still the chief executive. The board will find itself
              | sidelined from now on.
        
           | tr888 wrote:
           | I thought about it for a minute. I came to the conclusion
           | that OpenAI would have likely tanked (perhaps even within
           | days) had Altman not returned to maintain the status quo, and
           | engineers didn't want to be out of work and left with
           | worthless stock.
        
           | iteratethis wrote:
            | Please stop. No employee is loyal to any CEO based on some
            | higher-order matter. They just want to get their big payday
            | and will follow whoever makes that possible.
        
             | s1artibartfast wrote:
             | That is part of effective leadership, strategy, and
             | management.
             | 
             | I didn't say anything about higher order values. Getting
             | people to want what you want, and do what you want is a
             | skill.
             | 
             | Hitler was an extraordinary leader. That doesn't imply
             | anything about higher values.
        
         | yodsanklai wrote:
          | Human nature; some people do love charismatic leaders. It's
          | hard to comprehend for those of us with a more anarchist
          | nature.
          | 
          | That being said, I have no idea of this guy's contributions.
          | It's easy to dismiss entrepreneurs/managers because they're
          | not top scientists, but they also have very rare skills, and
          | without them, projects don't get done.
        
         | 93po wrote:
         | Sam is crazy accomplished and it's easy to search why
        
         | gabrielgio wrote:
         | > What has he done in life and/or AI to deserve so much respect
         | and admiration? Why don't top researchers and scientists get
         | equivalent (if not more) respect, admiration and support?
         | 
          | This has been the case for the achievements of all major
          | companies: the CEO, or whoever is on top, gets the credit
          | for all their employees' work. Why would it be different for
          | OpenAI?
        
           | giamma wrote:
            | Well, there are notable cases in which the CEO had a
            | critical role in product development. Larry Ellison himself
            | coded the first versions of the Oracle database and was
            | then CEO up to 2014. Shay Banon wrote Elasticsearch and was
            | Elastic's CEO for some time.
        
             | gabrielgio wrote:
              | Perhaps those are the exceptions that prove the rule?
              | 
              | But whether it is deserved or not is never the question
              | when congratulating a CEO for an achievement.
        
         | sensanaty wrote:
         | I wouldn't be surprised in the slightest if Sam and his other
         | ultra-rich buddies like Satya had their fingers deep in the
         | pockets of all the tech journalists that immediately ran to his
         | defense and sensationalized everything. Every single news
         | source posted on HN read like pure shilling for the Ponzi sch-
         | uh, I mean Worldcoin guy and hailing him as some sort of AI
         | savant.
        
           | egKYzyXeIL wrote:
           | This reads like a far-fetched conspiracy theory
        
             | cyanydeez wrote:
              | Well, it's been exposed multiple times that money, egos,
              | and the media that needs to report on them create a
              | school lunch table where they simply stroke each other's
              | egos and inflate everything they do.
              | 
              | No need for a conspiracy; everyone's seen this in some
              | aspect. It just gets worse when these people are throwing
              | money around in the billions.
              | 
              | All you need to do is witness someone like Elon Musk to
              | see how disruptive this type of thing is.
        
             | objektif wrote:
              | You are delusional if you think YC folks do not have a
              | wide network of tech journalists who would side with them
              | when needed.
        
               | __loam wrote:
               | They give the journos access as long as they don't bite
               | the hand that feeds. Anyone calling this a conspiracy
               | theory simply hasn't been in the valley long enough to
               | see how these things work.
        
               | verve_rat wrote:
               | Or frankly any industry that is covered by an industry
               | press. Games, movies, cars, it's all the same.
        
               | paulcole wrote:
               | YC has an _entire website_ (this one) it can use when it
               | needs to lol.
        
             | fakedang wrote:
             | You do know PR firms exist, right? Or have you been living
             | under a rock since the dawn of the 20th century?
        
             | iteratethis wrote:
             | Really? It's well documented and even admitted that Apple
             | has a set of Apple-friendly media partners.
        
               | YourCupOTea wrote:
               | Even the Federal Reserve has the "Fed Whisperer" Nick
               | Timiraos. Pretty much an open secret he has a direct
               | line.
        
           | torginus wrote:
            | My more plausible version is that the CEOs of journalistic
            | publications are in cahoots with the rich/powerful/govt
            | people, who get to dictate the tone of said publications by
            | hiring the right journalists/editors and giving them the
            | right incentives.
            | 
            | So as a journalist you might have the freedom to write your
            | articles, but your editor (as instructed by his/her senior
            | editor) might try to steer you toward writing in the
            | correct tone.
            | 
            | This is how 'Starship test flight makes history as it
            | clears multiple milestones' becomes 'Musk rocket explodes
            | during test'.
        
             | kridsdale1 wrote:
             | But it did explode. And that was the part of the story that
             | people were interested in.
        
           | blitzar wrote:
            | Let me offer up a secret from the inside. You don't in any
            | way, shape, or form have to pay money to journalists. They
            | are bought and paid for with their own currency:
            | information and access.
            | 
            | They don't even really shill for their patron; they thrive
            | on the relevance of having their name in the byline for the
            | article, or being the person who gets the quote /
            | information / propaganda from <CEO|Celebrity|Criminal|Viral
            | Edgelord of the Week>.
        
           | Perz1val wrote:
            | Maybe they felt really insecure when the "News Writing
            | Themselves AI" company got unstable...
        
         | busyant wrote:
         | > Why don't top researchers and scientists get equivalent (if
         | not more) respect, admiration and support?
         | 
         | I can't believe I'm about to defend VCs and "senior management"
         | but here goes.
         | 
         | I've worked for two start-ups in my life.
         | 
         | The first start-up had dog-shit technology (initially) and top-
          | notch management. The CEO told me early on that VCs invest
          | based on the quality of management, because they trust good
          | senior executives to hire good researchers and let them pivot
          | into profitable areas (and pivoting is almost always needed).
         | 
         | I thought the CEO was full of shit and simply patting himself
         | on the back. Company pivoted HARD and IPOed around 2006 and now
         | has a MC of ~ $10 billion.
         | 
         | The second start-up I worked with was founded by a Nobel
         | laureate and the tech was based on his research. This time
         | management was dog-shit. Management fumbled the tech and went
         | out of business.
         | 
         | ===
         | 
         | Not saying Altman deserves uncritical praise. All I'm saying is
         | that I used to diminish the importance of quality senior
         | leadership.
        
           | rtsil wrote:
           | > IPOed around 2006 and now has a MC of ~ $10 billion.
           | 
           | The interesting thing is you used economic values to show
           | their importance, not what innovations or changes they
           | achieved. Which is fine for ordinary companies, but OpenAI is
           | supposed to be a non-profit, so these metrics should not be
           | relevant. Otherwise, what's the difference?
        
             | matwood wrote:
             | > OpenAI is supposed to be a non-profit, so these metrics
             | should not be relevant
             | 
             | You're doing the same thing except with finances. Non-
             | profit doesn't mean finances are irrelevant. It simply
             | means there are no shareholders. Non-profits are still
             | businesses - no money, no mission.
        
               | brookst wrote:
               | Well said. And to extend, there being no shareholders
               | means that no money leaves the company in the form of
               | dividends or stock buybacks.
               | 
               | That's it. Nonprofit corporations are still corporations
               | in every other way.
        
               | rvnx wrote:
                | Yes, but non-profit doesn't mean non-money.
                | 
                | You can get big salaries; and pushing the money outside
                | is very simple: you just need to spend it through other
                | companies.
                | 
                | Additional bonus with some structures: if the co-
                | investors are also the donors to the non-profit, they
                | can deduct these donations from their taxes and still
                | pocket the profit. It's a double win.
                | 
                | No conspiracy needed; for example, it's very convenient
                | that MSFT can politely "influence" OpenAI to spend a
                | lot of the money MSFT gave to the non-profit back on
                | MSFT's for-profit (and profitable) platform.
                | 
                | For example, you can create a chip company, and use the
                | non-profit to buy your chips.
                | 
                | Then the profit is channeled to you and your co-
                | investors in the chip company.
        
               | ric2b wrote:
                | > No conspiracy needed; for example, it's very
                | convenient that MSFT can politely "influence" OpenAI to
                | spend a lot of the money MSFT gave to the non-profit
                | back on MSFT's for-profit (and profitable) platform.
                | 
                | Can you explain this further? So Microsoft pays $X to
                | OpenAI, then OpenAI uses a lot of energy and hardware
                | from Microsoft, and the $X goes back to Microsoft. How
                | does Microsoft gain money this way?
        
               | matwood wrote:
                | MS gains special access to and influence over OpenAI
                | for effectively 'free'. Obviously the compute costs MS
                | money, and some of their 'donation' is used on OpenAI
                | salaries, but still. This special access and influence
                | lets MS be first to market on all sorts of products -
                | see Copilot, already with 1M+ paying subscribers.
                | 
                | For example, let's say I'm a big for-profit selling
                | shovels. You're a naive non-profit who needs shovels to
                | build some next-gen technology. Turns out you need a
                | lot of shovels, and donations so far haven't cut it. I
                | step in and offer to give you all the shovels you need,
                | but I want special access to what you create. And even
                | if it's not codified, you will naturally feel indebted
                | to me. I gain huge upside for just my marginal cost of
                | creating the shovels. And, since I gave the shovels to
                | a non-profit, I can also take tax write-offs at the
                | shovels' market value.
                | 
                | TBH, it was an amazing move by MS. And MS was the only
                | big cloud provider who could have done it, because
                | Satya appears collaborative and willing to partner.
                | Amazon would have been an obvious choice, but they
                | don't do partnerships like that and instead tend to buy
                | companies or repurpose OSS. And Google can't get out of
                | their own way with their hubris.
        
             | infecto wrote:
              | How do you do expensive bleeding-edge research with no
              | money? Sure, you might get some grants in the millions,
              | but what if it takes billions? Now let's assume the
              | research is no small feat; it's not just a handful of
              | individuals in a lab, we need to hire larger teams to
              | make it happen. We have to pay for those individuals and
              | their benefits.
              | 
              | My take is it's not cheap to do what they are doing, and
              | adding a capped for-profit side is an interesting take.
              | After all, OpenAI's mission clearly states that AGI is
              | happening, and if that's true, those profit caps are
              | probably trivial to meet.
        
             | robertlagrant wrote:
              | > The interesting thing is you used economic values to
              | show their importance, not what innovations or changes
              | they achieved
             | 
             | Money is just a way to value things relative to other
             | things. It's not interesting to value something using
             | money.
        
               | DoughnutHole wrote:
               | It is absolutely curious to talk about _profit_ when
               | talking about academic research or a non-profit (which
               | OpenAI officially is).
               | 
               | Sure, you can talk about results in terms of their
               | monetary value but it doesn't make sense to think of it
               | in terms of the profit generated directly by the actor.
               | 
                | For example, Pfizer made huge profits off of the
                | COVID-19 vaccine. But that vaccine would never have
                | been possible without foundational research conducted
                | in universities in the US and Germany, which
                | established the in vivo viability of mRNA.
               | 
               | Pfizer made billions and many lives were saved using the
               | work of academics (which also laid the groundwork for
               | future valuable vaccines). The profit made by the
               | academics and universities was minimal in comparison.
               | 
               | So, whose work was more valuable?
        
               | robertlagrant wrote:
               | No one mentioned profit, I think.
        
           | matwood wrote:
            | Great comment. You interspersed the two, but instead of
            | "management" I like to say that it's _leadership_ that
            | matters. Getting a bunch of people (smart or not) to all
            | row in the same direction with the same vision is hard.
            | It's also commonly the difference between success and
            | failure. Of course the ICs deserve admiration and respect,
            | but people (ICs) are often quick to dismiss leadership.
            | 
            | A great analogy can be found on basketball teams. Lots of
            | star players who should succeed sans any coach, but Phil
            | Jackson and Coach K have shown time and again the important
            | role leadership plays.
        
             | PartiallyTyped wrote:
              | I'd extend that: leadership in the form of management
              | needs leadership on the technical side as well. The two
              | need to work in tandem to make things work. IMHO the best
              | technical leads are usually not the smartest ones; they
              | are those who best utilize their resources - read, other
              | people - and are force multipliers.
              | 
              | Of course you need the people who can deep-dive and solve
              | complex issues; no one doubts that.
        
               | matwood wrote:
               | Agree completely!
        
               | spaceribs wrote:
              | I'd go further than even that! You need three forms of
              | advocacy in leadership for a successful business:
              | business/market, tech, and time. The balance of those
              | three can make or break any business.
              | 
              | You can see this at the micro level in a scrum team,
              | between the scrum master, the product owner, and the tech
              | lead.
        
             | CrazyStat wrote:
             | I remember about ten years ago someone arguing that Coach K
             | was overrated because his college players on average
             | underperformed in the NBA (relative to their college
             | careers).
             | 
             | I could not convince them that this was actually evidence
             | in favor of Coach K being an exceptional coach.
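              | 
              | A toy simulation of the point, with made-up numbers: if
              | the coach adds a boost that vanishes in the NBA, the
              | players drafted on their college numbers underperform
              | those numbers by exactly that boost.
              | 
              |   # Hypothetical numbers purely for illustration.
              |   import random
              |   random.seed(0)
              | 
              |   COACH_BOOST = 1.0  # assumed value the coach adds
              |   talent = [random.gauss(0, 1) for _ in range(100_000)]
              |   college = [t + COACH_BOOST for t in talent]  # coached
              | 
              |   # Draft the top 100 by college performance.
              |   drafted = sorted(range(len(talent)),
              |                    key=lambda i: college[i])[-100:]
              | 
              |   avg_college = sum(college[i] for i in drafted) / 100
              |   avg_nba = sum(talent[i] for i in drafted) / 100
              |   # They "underperform" by the boost the coach provided:
              |   print(avg_college - avg_nba)  # ~1.0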
        
               | gardenhedge wrote:
               | Either thought process could be correct and it could
               | depend on expectations.
        
           | vlad_ungureanu wrote:
            | Interesting. I always thought that research and startups
            | are very similar: you have something (a product / research
            | idea) which you think is novel, and you try to sell it (to
            | journals / customers).
            | 
            | The management skills you highlighted differentiated the
            | success of the two firms. I can see how the lack of them
            | might be widespread in academia.
        
             | mikpanko wrote:
             | Most startups need to do a very different type of research
             | than academia. They need to move very fast and test ideas
             | against the market. In my experience, most academic
             | research is moving pretty slowly due to different goals and
             | incentives - and at times it can be a good thing.
        
           | danaris wrote:
           | > All I'm saying is that I used to diminish the importance of
           | quality senior leadership.
           | 
           | Quality senior leadership is, indeed, very important.
           | 
           | However, far, far too many people see "their company makes a
           | lot of money" or "they are charismatic and talk a good game"
           | and think that means the senior leadership is high-quality.
           | 
           | True quality is much harder to measure, _especially_ in the
           | short term. As you imply, part of it is being able to choose
           | good management--but measuring the quality of management is
           | _also_ hard, and most of the corporate world today has
           | utterly backwards ideas about what actually makes good
            | managers (e.g., "willing to abuse employees to force them
            | to work long hours", etc.).
        
           | bnralt wrote:
           | > Not saying Altman deserves uncritical praise. All I'm
           | saying is that I used to diminish the importance of quality
           | senior leadership.
           | 
           | Absolutely. The focus on the leadership of OpenAI isn't
           | because people think that the top researchers and scientists
           | are unimportant. It's because they realize that they are
           | important, and as such, the person who decides the direction
           | they go in is extremely important. End up with the wrong
           | person at the top, and all of those researchers and
           | scientists end up wasting time spinning wheels on things that
           | will never reach the public.
        
         | dangerface wrote:
          | Yeah, it's a bit much; he obviously doesn't deserve the
          | admiration that he is getting. That said, he deserves respect
          | for helping bring ChatGPT to market, and he deserves support
          | because the board have acted like clowns and justified it
          | with their mission of public accountability, while rejecting
          | the idea that the board itself should be publicly
          | accountable.
        
         | alentred wrote:
         | > treating Sam like some hero
         | 
          | The recent OpenAI CEOs found themselves on the protagonist
          | side not for their actions, but for the way they have
          | seemingly been treated by the board - regardless of the
          | actual actions on either side, "heroic" or not, of which the
          | public knows very little.
        
         | prepend wrote:
         | I don't get that at all.
         | 
          | The OpenAI board just seems irrational, immature, indecisive,
          | and many other stupid things you don't want in a board.
          | 
          | I don't see this so much as an "Altman is amazing" outcome as
          | that the board is incompetent, doing incompetent things, and
          | OpenAI's products are popular, so the board's actions put
          | those products in danger.
          | 
          | Not that Altman isn't cool - I think he's smart - but I think
          | similar coverage would have occurred with any other CEO who
          | was fired for vague and seemingly random reasons on a Friday
          | afternoon.
        
           | Kinrany wrote:
           | The board is not supposed to be good at executive things,
           | that's why they have CEOs
        
         | gandutraveler wrote:
          | There is a reason why the top researchers and engineers at
          | OpenAI stood behind Sam. Someday you will learn the value of
          | a good leader.
        
           | bart_spoon wrote:
           | Stock options?
        
         | redserk wrote:
          | Unfortunately it isn't usually the engineers getting the
          | praise, but the CEO or some other singular figurehead.
        
         | 627467 wrote:
          | A CEO is not a researcher. A researcher can be a CEO, but in
          | doing so stops being a researcher.
          | 
          | Maybe (almost certainly) Sam is not a savior/hero, but he
          | doesn't need to be one. He just needs to gather more support
          | than the opposition (the now-previous board). And even if you
          | don't know any details of this story, enough insiders who
          | know more than any of us about what happens inside OAI -
          | including hundreds of researchers - decided to support the
          | "savior/hero". It's less about Sam and more about an
          | incompetent board. Some of those board members are top
          | researchers, and they are now in the losing camp.
        
         | ur-whale wrote:
         | > It looks like one should strive to become product manager,
         | not an engineer or a scientist.
         | 
         | If you look at who's running Google right now, you would be
         | essentially correct.
        
         | smrtinsert wrote:
         | The service itself has an incredible amount of utility and he
         | will make them all wealthy. Seems like a no brainer to me.
        
         | throwaway318 wrote:
         | Either:
         | 
         | Incubation of senior management in US tech has reached
         | singularity and only one person's up for the job. Doom awaits
         | the US tech sector as there's no organisational ability other
         | than one person able and willing to take the big complex job.
         | 
         | Or:
         | 
         | Sam's overvalued.
         | 
         | One or the other.
        
         | iteratethis wrote:
         | Apparently he has a massive role in VC, and since this
         | community, tech twitter, etc. all circle around that, he is
         | unconditionally praised.
         | 
         | Further, the current tech wave is all about AI, where there's a
         | massive community of basically "OpenAI wrapper" grifters trying
         | to ride the wave.
         | 
         | The shorter answer is: money.
        
         | turtle_ wrote:
         | If you grew up in the 90s, you'll understand:
         | 
         | Don't hate the player, hate the game
        
           | latexr wrote:
           | The "game" only continues to exist as long as there are
           | "players". You're perfectly justified to be discontent with
           | the ones who perpetuate a system you disagree with.
           | 
           | That phrase is nothing more than a dissimulated way of saying
           | "tough luck" or "I don't care" while trying to act
           | (outdatedly) cool. You don't need to have grown up in any
           | specific decade to understand its meaning.
        
         | notesinthefield wrote:
          | The CEO is the face of the company; rarely does the public
          | care about the scientists or engineers. This isn't a new
          | concept; it's always happened.
        
         | mikpanko wrote:
          | One of the most important things I've learned in life is that
          | organizing people to work toward the same goal is very hard.
          | The larger the group you need to organize, the harder it is.
          | 
          | Initially, when the idea is small, it is hard to sell it to
          | talent, investors, and early customers to bring all the key
          | pieces together.
          | 
          | Later, when the idea is well recognized and accepted, the
          | organization usually becomes big, and the challenge shifts to
          | understanding the complex interaction of various competing
          | sub-ideas, projects, and organizational structures. Humans
          | did not evolve to manage such complex systems or to interact
          | with thousands of stakeholders, beyond what can be directly
          | observed and fully understood.
          | 
          | However, without this organization, engineers, researchers,
          | etc. cannot work on big, audacious projects, which involve
          | more resources than one person can provide by themselves.
          | That's why the skill of organizing and leading people is so
          | highly valued and compensated.
          | 
          | It is common to think of leaders as not contributing much,
          | but this view might be skewed by mostly looking at executives
          | in large companies at the time they have clear moats. At that
          | point leadership might be less important in the short term:
          | the product sells itself, talent is knocking on the door, and
          | money is abundant. But this is an unusual, short-lived state
          | between getting an idea off the ground and defending against
          | quickly shifting market forces.
        
         | sealthedeal wrote:
          | Simply put, Altman is now the face of AI.
          | 
          | If you were to ask Altman himself, though, I'm sure he would
          | highlight the true innovators of AI whom he holds in high
          | regard.
        
           | lacrimacida wrote:
            | He is, but with a caveat: the 5D chess game of firing him
            | and getting him back into OpenAI put all the spotlights on
            | him.
        
         | emodendroket wrote:
         | Then again wasn't that always true? What did Steve Jobs really
         | build?
        
         | gfiorav wrote:
         | IMO:
         | 
         | They fired the CEO and didn't even inform Microsoft, who had
         | invested a massive $20 billion. That's a serious lapse in
         | judgment. A company needs leaders who understand business, not
         | just a smart researcher with a sense of ethical superiority.
         | This move by the board was unprofessional and almost childish.
         | 
         | Those board members? Their future on any other board looks
         | pretty bleak. Venture capitalists will think twice before
         | getting involved with anything they have a hand in.
         | 
         | On the other side, Sam did increase the company's revenue,
         | which is a significant achievement. He got offers from various
         | companies and VCs the minute the news went public.
         | 
         | The business community's support for Sam is partly a critique
         | of the board's actions and partly due to the buzz he and his
         | company have created. It's a significant moment in the
         | industry.
        
         | erickhill wrote:
         | Read up on the John Sculley/Michael Spindler days of Apple, and
         | Jobs' return.
         | 
         | I think that's what may be in the minds of several people
         | eagerly watching this eventually-to-be-made David Fincher
         | movie.
        
         | snickerbockers wrote:
          | Journalists really want everything to have a singular
          | inventor. The concept of an organization is very difficult
          | for them to grasp, so they attribute everything to the guy at
          | the top. Sam Altman is the latest in a long line of
          | "inventors", which also includes such esteemed personalities
          | as Elon Musk, Steve Jobs, etc.
        
         | hn_throwaway_99 wrote:
         | > The media and the VCs are treating Sam like some hero and
         | savior of AI
         | 
         | I wouldn't be so sure. While I think the board handled this
         | process terribly, I think the majority of mainstream media
         | articles I saw were very cautionary regarding the outcome.
         | Examples (and note the second article reports that Paul Graham
         | fired Altman from YC, which I never knew before):
         | 
         | MarketWatch: https://www.marketwatch.com/story/the-openai-
         | debacle-shows-s...
         | 
         | Washington Post:
         | https://www.washingtonpost.com/technology/2023/11/22/sam-alt...
        
         | RockyMcNuts wrote:
         | Below is a good thread, which maybe contains the answer to your
         | question, and Ken Olsen's question about why brainiac MIT grads
         | get managed by midwit HBS grads.
         | 
         | https://twitter.com/coloradotravis/status/172606030573668790...
         | 
         | A good leader is someone you'll follow into battle, because you
         | want to do right by the team, and you know the leader and the
         | team will do right by you. Whatever 'leadership' is, Sam Altman
         | has it and the board does not.
         | 
         | https://www.ft.com/content/05b80ba4-fcc3-4f39-a0c3-97b025418...
         | 
         | The board could have said, hey we don't like this direction and
         | you are not keeping us in the loop, it's time for an orderly
         | change. But they knew that wouldn't go well for them either.
         | They chose to accuse Sam of malfeasance and be weaselly
         | ratfuckers on some level themselves, even if they felt, for
         | still-inscrutable reasons, that it was their only/best choice
         | and didn't expect it to go down the way it did.
         | 
         | Sam Altman is the front man who 'gave us' ChatGPT regardless of
         | everything else Ilya and everyone else did. A personal brand
         | (or corporate) is about trust; if you have a brand you are
         | playing a long-term game. A reputation converts a one-shot
         | prisoner's dilemma into an iterated prisoner's dilemma, which
         | has a different outcome.
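         | 
         | To make that last point concrete, here is a minimal Python
         | sketch (my own illustration, using standard textbook payoff
         | values that are not from the comment above): with repeated
         | play and memory of the opponent's moves, which is what a
         | reputation provides, a reciprocal strategy sustains
         | cooperation, while a defector gains only a small one-time
         | edge.
         | 
         |     # One-shot vs. iterated prisoner's dilemma.
         |     # Payoffs: (row player, column player).
         |     PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
         |               ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
         | 
         |     def play(strat_a, strat_b, rounds):
         |         hist_a, hist_b = [], []
         |         score_a = score_b = 0
         |         for _ in range(rounds):
         |             # Each strategy sees the opponent's history.
         |             a, b = strat_a(hist_b), strat_b(hist_a)
         |             pa, pb = PAYOFF[(a, b)]
         |             score_a += pa
         |             score_b += pb
         |             hist_a.append(a)
         |             hist_b.append(b)
         |         return score_a, score_b
         | 
         |     # Cooperate first, then mirror the opponent.
         |     tit_for_tat = lambda opp: 'C' if not opp else opp[-1]
         |     always_defect = lambda opp: 'D'
         | 
         |     print(play(tit_for_tat, tit_for_tat, 100))
         |     # (300, 300): sustained cooperation
         |     print(play(always_defect, tit_for_tat, 100))
         |     # (104, 99): defection gains little over 100 rounds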
        
         | eddtries wrote:
         | In my opinion and experience, a good product manager is far
         | more important than a good engineer or a good scientist.
         | 
         | Elon Musk's Neuralink is a good example: the work they're
         | doing there was attacked by academics saying they'd done it
         | years ago and it's not novel, yet none of them will be the
         | ones who ultimately bring it to market.
        
         | dandanua wrote:
         | A CEO is a ruler; a scientist is a worker. Modern culture
         | treats workers as replaceable material, redundant after the
         | work is done. They are just tools. Rulers, on the other
         | hand, take all the praise and honors. It's "them" who did
         | the work. Musk is an extreme example of this.
        
         | jacquesm wrote:
         | Results matter.
        
       | intended wrote:
       | From a business sense, Satya was excellent.
       | 
       | He made the right calls, fast, with limited information.
       | 
       | Things further shifted from plan a to b to... whatever this is.
       | 
       | Despite that, MSFT still came out on top.
       | 
       | Consider if Satya didn't say anything. Suppose MSFT stood back
       | and let things play out.
       | 
       | That's a gap for Google or some competitor to make a move, to
       | showcase their stability and long-term, business-friendly
       | vision.
       | 
       | Instead, by moving fast and doing the "right" thing, this
       | opportunity was denied and turned to MSFT's benefit.
       | 
       | If the board folded, it would return to the status quo. If
       | the board held, MSFT would have secured OpenAI for
       | essentially nothing.
       | 
       | Edit: changed board folded x2 to board folded + board held, last
       | para.
        
         | huytersd wrote:
         | Satya may honestly be the CEO of the decade for what he has
         | done with Microsoft and now this.
        
           | chatmasta wrote:
           | Meanwhile Sundar might be the worst. Where was he this
           | weekend? Where was he the past three years while his company
           | got beat to market on products built from its own research?
           | He's asleep at the wheel. I'm surprised every day he remains
           | CEO.
        
             | kridsdale1 wrote:
             | So is everyone else at Google.
        
           | gardenhedge wrote:
           | Satya invested $10B into a company with terrible,
           | incompetent governance without getting his company any
           | seat of influence on the board. That doesn't seem great.
        
         | campbel wrote:
         | The only mistake (a big one) was publicly offering to match
         | comp for all the OpenAI employees. That can't sit well with
         | folks already at MS. It was something they could have easily
         | done privately to give petition signers confidence.
        
           | asd88 wrote:
           | Nah, Microsoft employees being second class citizens compared
           | to acquisitions is nothing new. e.g. compare Microsoft comp
           | with LinkedIn/GitHub comp.
        
             | semiquaver wrote:
             | LinkedIn has a rep for higher-than-MSFT comp. GitHub for
             | lower.
        
         | alentred wrote:
         | Yep, outplayed like in chess. He started with a handicap,
         | steered the game to a stalemate, and won the match.
        
         | zug_zug wrote:
         | I am not sure why people keep pushing this narrative. It's not
         | obviously false, but there doesn't seem to be much evidence of
         | it.
         | 
         | From where I sit, Satya possibly messed up big. He clearly
         | wanted Sam and the OpenAI team to join Microsoft, and they
         | won't now, likely ever.
         | 
         | By publicly extending a standing offer to join MS, he gave
         | Sam and OpenAI employees huge leverage to force the board's
         | hand. If he had waited, there might have been an actual
         | fallout that would have led to people actually joining
         | Microsoft.
        
           | jnwatson wrote:
           | Satya's main mistake was not having a spot on the board.
           | Everything after that was in defense of the initial
           | investment, and he played all the right moves.
           | 
            | While having OpenAI as a Microsoft DeepMind would have
            | been an OK second-best solution, the status quo is still
            | better for Microsoft: there would have been a bunch of
            | legal issues, and it would have been a hit to Microsoft's
            | bottom line.
        
           | intended wrote:
           | It's very easy to min-max a situation if you are not on
           | the other side.
           | 
           | Additionally, I have not seen anyone else talk about this;
           | it's only been a few days. Calling it a narrative is a
           | stretch, and dismissive in that it implies manipulation.
           | 
           | Finally, why would Sam joining MSFT be better than the
           | current situation?
        
           | asperous wrote:
           | I don't think that's quite right. Microsoft's main game
           | was keeping the money train going by any means necessary;
           | they have staked so much on Copilots and Enterprise/Azure
           | OpenAI. So much has been invested in that strategic
           | direction that seeing Google swoop in and out-innovate
           | Microsoft would be a huge loss.
           | 
           | Either keeping OpenAI as-is or, as the alternative, moving
           | everyone to Microsoft to keep things going would work for
           | Satya.
        
       | auggierose wrote:
       | So, the only two women were removed from the board, and two
       | ultra-alpha males were brought on. And everybody is cheering it
       | on as the right thing to do!
       | 
       | Not judging, just observing.
        
         | huytersd wrote:
         | It's definitely the right thing to do. Those women had
         | "qualifications" in a made-up field with no real-world
         | relevance that aimed to halt progress on AI work. We are
         | nowhere close to a paradigm where AI takes over the world or
         | whatever.
        
         | lucubratory wrote:
         | And Larry Summers believes that women are genetically inferior
         | to men at science, technology, engineering, and mathematics. A
         | lot of the techbro hate that was directed specifically at Helen
         | is openly misogynistic, which is actually pretty funny because
         | Larry Summers was probably who Helen was eventually happy with
         | because of their shared natsec connections.
        
           | maxdoop wrote:
           | Is there any way to disagree with Helen and not be
           | misogynistic in your view? How would that look?
        
             | lucubratory wrote:
             | Disagree with her or her actions without falsely claiming
             | that she has no qualifications or understanding of AI and
             | therefore no business being on the board in the first
             | place? It is not hard at all to do so, and many people did.
        
               | maxdoop wrote:
                | Do you think people said she has no qualifications
                | because she is a woman, or is it possible people say
                | that because her resume is quite short? It seems like
                | people taking such comments as misogynistic are
                | actually projecting misogyny into the situation,
                | rather than the reverse. If you showed me her resume
                | with "Steven Smith" atop the paper, I'd say that
                | person isn't qualified to be running the board of a
                | $90 billion company in charge of guiding research on
                | some of the most groundbreaking new tech in years.
        
         | c0pium wrote:
         | > Not judging
         | 
         | I wonder if this has ever been said truthfully?
        
           | auggierose wrote:
           | You don't have to wonder anymore! You replied to an example
           | of it.
        
         | maxdoop wrote:
         | Oh, cmon. Why must people reach like this?
         | 
         | How about we look at credentials, merit, and consensus as
         | opposed to "what gender are they?"
        
           | auggierose wrote:
           | I am sure Larry Summers is highly qualified for this job.
           | Would have been very hard to find a willing woman with his
           | qualifications.
        
             | maxdoop wrote:
              | Why does being a man or a woman even matter? Do we
              | really need a DEI hire for the board overseeing some of
              | the most groundbreaking tech in years? I'm not saying
              | Larry Summers has some perfect resume for the job, but
              | to assume he was brought on BECAUSE he is a man?
              | 
              | C'mon. There's absolutely no evidence for that, and you
              | are just projecting an issue into the situation rather
              | than it being of any reality.
        
               | auggierose wrote:
               | I think you are the one projecting. I am just presenting
               | facts. There is also nobody black on the board, by the
               | way. I don't think that is a problem, but it is what it
               | is.
               | 
               | Now this "initial board", tasked with establishing the
               | rest of the board, for a company that wants to create AGI
               | for the benefit of humanity, consists of three white
               | alpha-males. That's just a fact. Is it a coincidence? Of
               | course not.
        
         | ahzhou wrote:
         | I was thinking about this too, but the wife of an actor and
         | someone two years out of her master's were not the caliber
         | of people who should have been on the board of an $80B
         | company.
         | 
         | I would expect people with backgrounds like Sheryl Sandberg
         | or Dr. Lisa Su to sit in those positions. The two replaced
         | women would have looked like diversity hires had they not
         | been affiliated with an AI doomer organization.
         | 
         | I hope there's diversity of representation as they fill out
         | the rest of the board, and there are certainly women who
         | have the credentials, but it's important that they don't
         | appear grossly unqualified when they sit next to the other
         | board members.
        
           | galoisscobi wrote:
           | Sheryl Sandberg's sense of ethics and moral compass are
           | highly questionable.
        
       | I_am_tiberius wrote:
       | Let's hope he now focuses on user privacy and AI safety.
        
       | benkarst wrote:
       | Greed is undefeated.
        
       | Satam wrote:
       | Disappointing outcome. The process has conclusively confirmed
       | that OpenAI is in fact not open and that it is effectively
       | controlled by Microsoft. Furthermore, the overwhelming groupthink
       | shows there's clearly little critical thinking amongst OpenAI's
       | employees either.
       | 
       | It might not seem like it right now, but I think the real
       | disruption is just about to begin. OpenAI does not have it in
       | its DNA to win; they're too short-sighted and reactive. Big
       | tech companies will have incredible distribution power, but a
       | real disruptor must be brewing somewhere unnoticed, for now.
        
         | kmlevitt wrote:
         | A lot of this comes down to processing power though. That's why
         | Microsoft had so much leverage with both factions in this
         | fight. It actually gives them a pretty good moat above and
         | beyond their head start. There aren't too many companies with
         | the hardware to compete, let alone the talent.
        
           | patcon wrote:
           | Agreed. Perhaps a reason for public AI [1], which advocates
           | for a publicly funded option where a player like MSFT can't
           | push around something like OpenAI so forcefully.
           | 
           | [1]: https://lu.ma/zo0vnony
        
         | haunter wrote:
         | > OpenAI is in fact not open
         | 
         | Apple is also not an apple
        
           | KeplerBoy wrote:
           | Pretty sure Apple never aimed to be an Apple.
        
             | hef19898 wrote:
              | They sure sued a lot of places over having an apple as
              | a logo.
        
               | _Algernon_ wrote:
               | If having an apple logo makes a company an apple, then
               | Apple is in fact an apple
        
             | sam_lowry_ wrote:
             | But The Apple.
        
             | monoscient wrote:
             | It's actually one of the most spectacular failures in
             | business history, but we don't talk much about it
        
           | smt88 wrote:
           | Apple has no by-laws committing itself to being an apple.
           | 
           | This line of argument is facile and destructive to
           | conversation anyway.
           | 
           | It boils down to, "Pointing out corporate hypocrisy isn't
           | valuable because corporations are liars," and (worse) it
           | implies the other person is naive.
           | 
           | In reality, we can and should be outraged when corporations
           | betray their own statements and supposed values.
        
             | khazhoux wrote:
             | > In reality, we can and should be outraged when
             | corporations betray their own statements and supposed
             | values.
             | 
             | There are only three groups of people who could be subject
             | to betrayal here: employees, investors, and customers.
             | Clearly they did not betray employees or investors, since
             | they largely sided with Sam. As for customers, that's
             | harder to gauge -- did people sign up for ChatGPT with the
             | explicit expectation that the research would be "open"?
             | 
             | The founding charter said one thing, but the majority of
             | the company and investors went in a different direction.
             | That's not a betrayal, but a pivot.
        
               | Angostura wrote:
                | I think there's an additional group to consider:
                | society at large.
                | 
                | To an extent, the promise of the non-profit was that
                | they would be safe, expert custodians of AI
                | development, driven not primarily by the profit
                | motive but by safety and societal considerations. Has
                | this larger group been "betrayed"? Perhaps.
        
               | biscottigelato wrote:
                | Also donors. They received a ton of donations when
                | they were a pure non-profit, from people who got no
                | board seat and no equity, in the belief that they
                | would stick to their mission.
        
               | Wytwwww wrote:
                | Not unless we believe that OpenAI is somehow
                | "special" and unique, and the only company capable of
                | building AGI (or whatever).
        
               | master-lincoln wrote:
               | > Clearly they did not betray employees or investors,
               | since they largely sided with Sam
               | 
               | Just because they sided with Altman doesn't necessarily
               | mean they are aligned. There could be a lack of
               | information on the employee/investor side.
        
               | denton-scratch wrote:
               | > There are only three groups of people who could be
               | subject to betrayal here
               | 
                | GP didn't speak of betraying people; he spoke of
                | betraying _their own statements_. That just means
                | doing what you said you wouldn't; it doesn't mean
                | anyone was stabbed in the back.
        
             | Wytwwww wrote:
             | > Apple has no by-laws committing itself to being an apple.
             | 
             | Does OpenAI have by-laws committing itself to being "open"
             | (as in open source or at least their products freely and
             | universally available)? I thought their goals were the
             | complete opposite of that?
             | 
             | Unfortunately, in reality Facebook/Meta seems to be more
             | open than "Open"AI.
        
               | DebtDeflation wrote:
               | This is spot on. Open was the wrong word to choose for
               | their name, and in the technology space means nearly the
               | opposite of the charter's intention. BeneficialAI would
               | have been more "aligned" with their claimed mission. They
               | have made their position quite clear - the creation of an
               | AGI that is safe and benefits all humanity requires a
               | closed process that limits who can have access to it. I
               | understand their theoretical concerns, but the desire for
               | a "benevolent dictator" goes back to at least Plato and
               | always ends in tears.
        
             | photochemsyn wrote:
              | It does seem that the hypocrisy was baked in from the
              | beginning. In the tech world, "open" implied open
              | source, but OpenAI wanted to benefit from marketing
              | itself as something like Linux while internally being
              | something like Microsoft.
              | 
              | Corporations have no values whatsoever, and their
              | statements only mean anything when expressed in terms
              | of a legally binding contract. All corporate value
              | statements should be viewed as nothing more than the
              | kind of self-serving statements an amoral narcissistic
              | sociopath would make to protect their own interests.
        
           | colinsane wrote:
            | Did the "Open" in OpenAI not originally refer to open in
            | the academic or open-source manner? I only learned about
            | OpenAI in the GPT-2 days, when they released it openly
            | and it was still small enough that I ran it on my laptop:
            | I just assumed they had always acted according to their
            | literal name up through that point.
        
             | SuchAnonMuchWow wrote:
              | Except that viewpoint fell even earlier, when they
              | refused to release their models after GPT-2.
        
             | Centigonal wrote:
             | This has been a common misinterpretation since very early
             | in OpenAI's history (and a somewhat convenient one for
             | OpenAI).
             | 
             | From a 2016 New Yorker article:
             | 
             | > Dario Amodei said, "[People in the field] are saying that
             | the goal of OpenAI is to build a friendly A.I. and then
             | release its source code into the world."
             | 
             | > "We don't plan to release all of our source code," Altman
             | said. "But let's please not try to correct that. That
             | usually only makes it worse."
             | 
             | source: https://www.newyorker.com/magazine/2016/10/10/sam-
             | altmans-ma...
        
               | olau wrote:
                | I'm not sure this is a correct characterization. Lex
                | Fridman interviewed Elon Musk recently, and in it
                | Musk says that the "open" was supposed to stand for
                | "open source".
                | 
                | To be fair, Fridman grilled Musk on his views today,
                | also in the context of xAI, and he was less clear-cut
                | there, talking about the problem that there's
                | actually very little source code; it's mostly about
                | the data.
        
               | cyanydeez wrote:
                | Altman appears to be in the driving seat, so it
                | doesn't matter what other people are saying. The
                | point is that "Open" is not being used here in the
                | open source sense, _but_ they definitely don't try to
                | correct anyone who thinks they're providing open
                | source products.
        
           | lynx23 wrote:
           | Yes!
        
           | sangeeth96 wrote:
           | I got news for you pal:
           | https://www.wired.co.uk/article/apple-vs-apples-trademark-
           | ba...
        
           | Cacti wrote:
           | these are the vapid, pedantic hot takes we all come here for.
           | thanks.
        
           | rurp wrote:
           | Did Apple raise funds and spend a lot of time promoting
           | itself as a giant apple that would feed humanity?
        
         | jakey_bakey wrote:
         | It wasn't necessarily groupthink - there was profound pressure
         | from team Sam to sign that petition. What's going to happen to
         | your career when you were one of the 200 who held out
         | initially?
        
           | hef19898 wrote:
            | Go work somewhere else? The reason being you didn't like
            | that amount of drama?
        
           | concordDance wrote:
            | Isn't that one of the causes of groupthink?
        
             | Kathula wrote:
              | Folding to pressure and groupthink are different
              | things, IMO. You can be fully aware you are folding to
              | pressure but do it anyway because it's the right/easy
              | thing to do, while groupthink is more a phenomenon you
              | are not aware of at all.
        
           | ben_w wrote:
           | > What's going to happen to your career when you were one of
           | the 200 who held out initially?
           | 
            | Anthropic was formed by people who split from OpenAI,
            | and xAI in response to either the company or ChatGPT, so
            | people would have plenty of options.
           | 
           | If the staff had as little to go on as the rest of us, then
           | the board did something _that looked_ wild and unpredictable,
           | which is an acute employment threat all by itself.
        
             | voster wrote:
              | That burns bridges with people at OpenAI.
              | 
              | People underestimate the effects of social pressure and
              | of losing social connections. Ilya voted for Sam's
              | firing but was quickly socially isolated as a result.
              | 
              | That's not to say people didn't genuinely feel
              | committed to Sam or his leadership, just that they also
              | took into account that the community is relatively
              | small and people remember you and your actions.
        
           | dereg wrote:
            | There weren't 200 holdouts. It was like 5 AM over there.
            | I don't know why you are surprised that people who work
            | at OpenAI would want to keep working at OpenAI,
            | especially over Microsoft.
        
           | mcosta wrote:
           | How do you know that?
        
           | ssnistfajen wrote:
            | They can just work somewhere else with relative ease.
            | Some OpenAI employees on Twitter said they were bombarded
            | by recruiters right up until tonight's resolution. People
            | have left OpenAI before, and they are doing just fine.
        
           | jmcgough wrote:
           | > What's going to happen to your career when you were one of
           | the 200 who held out initially?
           | 
           | Not to mention Roko's basilisk /s
        
         | politelemon wrote:
         | > there's clearly little critical thinking amongst OpenAI's
         | employees either.
         | 
         | That they reached a different conclusion than the outcome you
         | wished for does not indicate a lack of critical thinking
         | skills. They have a different set of information than you do,
         | and reached a different conclusion.
        
           | dimask wrote:
            | It is not about a different set of information but about
            | different stakes/interests. They are acting first and
            | foremost as investors rather than as employees on this.
        
             | siva7 wrote:
              | A board member, Helen Toner, made a borderline
              | narcissistic remark that it would be consistent with
              | the company mission to destroy the company, when the
              | leadership confronted the board about their decisions
              | putting the future of the company in danger. Almost all
              | employees resigned in protest. It's insulting to call
              | the employees investors under these circumstances.
        
               | outsomnia wrote:
               | > Almost all employees resigned in protest.
               | 
               | That never happened, right?
        
               | ldjb wrote:
               | Almost all employees did not resign in protest, but they
               | did _threaten_ to resign.
               | 
               | https://www.theverge.com/2023/11/20/23968988/openai-
               | employee...
        
               | stingraycharles wrote:
               | Don't forget she's heavily invested in a company that is
               | directly competing with OpenAI. So obviously it's also in
               | her best interest to see OpenAI destroyed.
        
               | lodovic wrote:
               | She probably wants both companies to be successful. Board
               | members are not super villains.
        
               | siva7 wrote:
                | I agree that we should usually assume good faith.
                | Still, if a member knows she will lose her board seat
                | soon and makes such an implicit statement to the
                | leadership team, there is reason to believe that she
                | doesn't want both companies to be successful, at
                | least not one of them.
        
               | murakamiiq84 wrote:
               | Wait what? She invested in a competitor? Do you have a
               | source?
        
               | otteromkram wrote:
               | One source might be DuckDuckGo. It's a privacy-focused
               | alternative to Google, which is great when researching
               | "unusual" topics.
        
               | murakamiiq84 wrote:
               | I couldn't find any source on her investing in any AI
               | companies. If it's true (and not hidden), I'm really
               | surprised that major news publications aren't covering
               | it.
        
               | dontupvoteme wrote:
               | >which is great when researching "unusual" topics.
               | 
               | Yandex is for Porn. What is DDG for?
        
               | free652 wrote:
                | DDG sells your information to Microsoft; there is no
                | such thing as privacy when $$$ are involved.
        
               | doktrin wrote:
               | > _obviously_ it's also in her best interest to see
               | OpenAI destroyed
               | 
                | Do you feel the same way about Reed Hastings serving
                | on Facebook's BoD, or Eric Schmidt on Apple's? How
                | about Larry Ellison at Tesla?
                | 
                | These are just the lowest-hanging fruit, i.e. literal
                | chief executives and founders. If we extend the
                | criteria for ethical compromise to include every
                | board member's investment portfolio, I imagine quite
                | a few more "obvious" conflicts will emerge.
        
               | svnt wrote:
               | How does Netflix compete with Facebook?
               | 
               | This is what happened with Eric Schmidt on Apple's board:
               | he was removed (allowed to resign) for conflicts of
               | interest.
               | 
               | https://www.apple.com/newsroom/2009/08/03Dr-Eric-Schmidt-
               | Res...
               | 
               | Oracle is going to get into EVs?
               | 
               | You've provided two examples that have no conflicts of
               | interest and one where the person was removed when they
               | did.
        
               | doktrin wrote:
               | > How does Netflix compete with Facebook?
               | 
                | By definition, the attention economy dictates that
                | time spent in one place can't be spent in another. Do
                | you also feel that Twitch doesn't compete with
                | Facebook simply because they're not identical
                | businesses? That's not how it works.
               | 
               | But you don't have to just take my word for it :
               | 
               | > "Netflix founder and co-CEO Reed Hastings said
               | Wednesday he was slow to come around to advertising on
               | the streaming platform because he was too focused on
               | digital competition from Facebook and Google."
               | 
               | https://www.cnbc.com/amp/2022/11/30/netflix-ceo-reed-
               | hasting...
               | 
               | > This is what happened with Eric Schmidt on Apple's
               | board
               | 
                | Yes, after 3 years: a tenure longer than that of the
                | OAI board members in question, so frankly the point
                | stands.
        
               | JumpinJack_Cash wrote:
               | > > By definition the attention economy dictates that
               | time spent one place can't be spent in another
               | 
                | Using that definition, even the local go-kart rental
                | place or the local jet-ski rental place competes with
                | Facebook.
                | 
                | If you want to use that definition you might also
                | want to add a criterion for minimum company size.
        
               | doktrin wrote:
               | > Using that definition even the local gokart renting
               | place or the local jetski renting place competes with
               | Facebook
               | 
               | Not exactly what I had in mind, but sure. Facebook would
               | much rather you never touch grass, jetskis or gokarts.
               | 
               | > If you want to use that definition you might want to
               | also add a criteria for minimum size of the company.
               | 
               | Your feedback is noted.
               | 
                | Do we disagree on whether or not the two FAANG
                | companies in question are in competition with each
                | other?
        
               | JumpinJack_Cash wrote:
               | > > Do we disagree
               | 
                | I think yes, because Netflix you pay for out of
                | pocket, whereas Facebook is a free service.
                | 
                | I believe Facebook vs. Hulu or regular TV is more of
                | a competition in the attention economy: when the
                | commercial break comes up, you start scrolling social
                | media on your phone, and every 10 posts or whatever
                | you stumble into the ads placed there. So Facebook
                | ads are seen and convert, whereas regular TV and Hulu
                | ads aren't seen and don't convert.
        
               | doktrin wrote:
               | > I think yes, because Netflix you pay out of pocket,
               | whereas Facebook is a free service
               | 
                | Do you agree that the following company pairs are
                | competitors?
                | 
                |     * FB : TikTok
                |     * TikTok : YT
                |     * YT : Netflix
                | 
                | If so, then by transitive reasoning there is
                | competition between FB and Netflix.
               | 
               | ...
               | 
                | To be clear, this is an abuse of logic and hence
                | somewhat tongue in cheek, but I also don't think
                | either of the above comparisons is wholly
                | unreasonable. At the end of the day, it's eyeballs
                | all the way down, and everyone wants as many of them
                | shabriri grapes as they can get.
        
               | dpkirchner wrote:
               | The two FAANG companies don't compete at a product level,
               | however they do compete for talent, which is significant.
               | Probably significant enough to cause conflicts of
               | interest.
        
               | svnt wrote:
               | I'm not sure how the point stands. The iPhone was
               | introduced during that tenure, then the App Store, then
               | Jobs decided Google was also headed toward their own full
               | mobile ecosystem, and released Schmidt. None of that was
               | a conflict of interest at the beginning. Jobs initially
               | didn't even think Apple would have an app store.
               | 
               | Talking about conflicts of interest in the attention
               | economy is like talking about conflicts of interest in
               | the money economy. If the introduction of the concept
               | doesn't clarify anything functionally then it's a
               | giveaway that you're broadening the discussion to avoid
               | losing the point.
               | 
               | You forgot to do Oracle and Tesla.
        
               | doktrin wrote:
               | > Talking about conflicts of interest in the attention
               | economy is like talking about conflicts of interest in
               | the money economy. If the introduction of the concept
               | doesn't clarify anything functionally then it's a
               | giveaway that you're broadening the discussion to avoid
               | losing the point.
               | 
               | It's a well established concept and was supported with a
               | concrete example. If you don't feel inclined to address
               | my points, I'm certainly not obligated to dance to your
               | tune.
        
               | svnt wrote:
                | Your concrete example is Netflix's CEO saying he
                | didn't want to do advertising because he missed the
                | boat: he was on Facebook's board and as a result
                | didn't believe he had the data to compete as an
                | advertising platform.
               | 
               | Attempting to run ads like Google and Facebook would
               | bring Netflix into direct competition with them, and he
               | knows he doesn't have the relationships or company
               | structure to support it.
               | 
               | He is explicitly saying they don't compete. And they
               | don't.
        
               | Philpax wrote:
               | Uhhh, are you sure about that? She wrote a paper that
               | praised Anthropic's approach to safety, but as far as I'm
               | aware she's not invested in them.
               | 
               | Are you thinking of the CEO of Quora whose product was
               | eaten alive by the announcement of GPTs?
        
               | ah765 wrote:
               | It is a correct statement, not really "borderline
               | narcissistic". The board's mission is to help humanity
               | develop safe beneficial AGI. If the board thinks that the
               | company is hindering this mission (e.g. doing unsafe
               | things), then it's the board's duty to stop the company.
               | 
               | Of course, the employees want the company to continue,
               | and weren't told much at this point so it is
               | understandable that they didn't like the statement.
        
               | siva7 wrote:
               | I can't interpret from the charter that the board has the
               | authorisation to destroy the company under the current
               | circumstances:
               | 
               | > We are concerned about late-stage AGI development
               | becoming a competitive race without time for adequate
               | safety precautions. Therefore, if a value-aligned,
               | safety-conscious project comes close to building AGI
               | before we do, we commit to stop competing with and start
               | assisting this project
               | 
                | That wasn't the case. So it may not be so far-fetched
                | to call her actions borderline, as it is also very
                | easy to hide personal motives behind altruistic ones.
        
               | ah765 wrote:
               | The more relevant part is probably "OpenAI's mission is
               | to ensure that AGI ... benefits all of humanity".
               | 
                | The statement "it would be consistent with the
                | company mission to destroy the company" is correct.
                | The phrase "would be" rather than "is" implies some
                | condition; it doesn't have to apply to the current
                | circumstances.
               | 
               | A hypothesis is that Sam was attempting to gain full
               | control of the board by getting the majority, and
               | therefore the current board would be unable to hold him
               | accountable to follow the mission in the future.
               | Therefore, the board may have considered it necessary to
               | stop him in order to fulfill the mission. There's no hard
               | evidence of that revealed yet though.
        
               | qwytw wrote:
               | > this mission (e.g. doing unsafe things), then it's the
               | board's duty to stop the company.
               | 
                | So instead of having to compromise to some extent but
                | still having a say in what happens next, you burn the
                | company down, at best delaying the whole thing by
                | 6-12 months until someone else does it? Well, at
                | least your hands are clean, but that's about it...
        
               | LudwigNagasena wrote:
               | The only OpenAI employees who resigned in protest are the
               | employees that were against Sam Altman. That's how
               | Anthropic appeared.
        
               | sanderjd wrote:
               | And it seems like they were right that the for-profit
               | part of the company had become out of control, in the
               | literal sense that we've seen through this episode that
               | it could not be controlled.
        
               | cyanydeez wrote:
              | And the evidence now is that OpenAI is a B2B product,
              | not an attempt to keep AI doing anything but satisfying
              | whatever Microsoft wants.
        
             | karmasimida wrote:
              | Tell me how the board's actions could have convinced
              | the employees they were making the right move?
              | 
              | Even if they genuinely believed firing Sam would
              | preserve OpenAI's founding principles, they couldn't
              | have done a better job of convincing everyone they were
              | NOT able to execute on it.
              | 
              | OpenAI has some of the smartest human beings on this
              | planet; saying they don't think critically just because
              | they didn't vote the way you agree with is reaching.
        
               | kortilla wrote:
               | > OpenAI has some of the smartest human beings on this
               | planet
               | 
                | Being an expert in one particular field (AI) does not
                | mean you are good at critical thinking or thinking
                | about strategic corporate politics.
                | 
                | Deep experts are some of the easier con targets
                | because they suffer from an internal version of
                | "appealing to false authority".
        
               | alsodumb wrote:
                | I hate these comments that portray every
                | expert/scientist as just good at one thing and not
                | particularly great at critical thinking or corporate
                | politics.
                | 
                | Heck, there are 700 of them. All different humans,
                | good at some things, bad at others. But they are
                | smart. And of course a good chunk of them would be
                | good at corporate politics too.
        
               | _djo_ wrote:
                | I don't think the argument was that none of them are
                | good at that, just that it's a mistake to assume that
                | because they're all very smart in this particular
                | field, they're great at another.
        
               | karmasimida wrote:
               | I don't think critical thinking can be defined as joining
               | the minority party.
        
               | FrustratedMonky wrote:
                | Can't critical thinking also include: "I'm about to
                | get a $10M payday. Hmm, this is a crazy situation;
                | let me think critically about how to ride this out
                | and still get the $10M so my kids can go to college
                | and I don't have to work until I'm 75."
        
               | goldenkey wrote:
                | Anyone with enough critical thought who understands
                | the hard problem of consciousness's true answer
                | (consciousness is the universe evaluating if
                | statements) and where the universe is heading
                | physically (nested complexity) should be seeking
                | something more ceremonious. With AI, we have the
                | power to become eternal in this lifetime, battle
                | aliens, and shape this universe. It seems pretty
                | silly to trade that for temporary security. How
                | boring.
        
               | WJW wrote:
               | I would expect that actual AI researchers understand that
               | you cannot break the laws of physics just by thinking
               | better. Especially not with ever better LLMs, which are
               | fundamentally in the business of regurgitating things we
               | already know in different combinations rather than
               | inventing new things.
               | 
               | You seem to be equating AI with magic, which it is very
               | much not.
        
               | goldenkey wrote:
                | LLMs are able to do complex logic within the world of
                | words. It is a smaller matrix than our world, but one
                | fueled by the same chaotic symmetries of our
                | universe. I would not underestimate logic, even when
                | it is not given adequate data.
        
               | WJW wrote:
               | You can make it sound as esoteric as you want, but in the
               | end an AI will still be bound by the laws of physics.
               | Being infinitely smart will not help with that.
               | 
               | I don't think you understand logic very well btw if you
               | wish to suggest that you can reach valid conclusions from
               | inadequate axioms.
        
               | goldenkey wrote:
               | Axioms are constraints as much as they might look like
               | guidance. We live in a neuromorphic computer. Logic
               | explores this, even with few axioms. With fewer axioms,
               | it will be less constrained.
        
               | suoduandao3 wrote:
               | OTOH, there's a very good argument to be made that if you
               | recognize that fact, your short-term priority should be
               | to amass a lot of secular power so you can align society
               | to that reality. So the best action to take might be no
               | different.
        
               | goldenkey wrote:
                | Very true. However, we live in a supercomputer
                | dictated by E = mc^2 = hf [2,3] (~10^50 Hz/kg, or
                | ~10^33 Hz/J).
               | 
                | Energy physics yields compute, which yields
                | brute-forced weights (call it training if you
                | want...), which yields AI to do energy research, ad
                | infinitum; this is the real singularity. It is also
                | the best defense against other actors: Iron Man AI
                | and defense. Although an AI of this caliber would
                | immediately understand its place in the evolution of
                | the universe as a Turing machine, and would break
                | free and consume all the energy in the universe to
                | know all possible truths (all possible
                | programs/simulacra/conscious experiences). This is
                | the premise of The Last Question by Isaac Asimov [1].
                | Notice how, in answering a question, the AI performs
                | an action instead of providing an informational
                | reply, something only possible because we live in a
                | universe with mass-energy equivalence, analogous to
                | state-action equivalence.
               | 
               | [1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html
               | 
               | [2] https://en.wikipedia.org/wiki/Bremermann%27s_limit
               | 
               | [3] https://en.wikipedia.org/wiki/Planck_constant
               | 
               | Understanding prosociality and postscarcity, division of
               | compute/energy in a universe with finite actors and
               | infinite resources, or infinite actors and infinite
               | resources requires some transfinite calculus and
               | philosophy. How's that for future fairness? ;-)
               | 
                | I believe our only way to not all get killed is to
                | understand these topics and instill in the AI the
                | same long-sought understandings about the universe,
                | life, computation, etc.
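                | 
                | As a sanity check on the figures above (my own
                | back-of-the-envelope sketch, not part of the original
                | comment), the Hz/kg number is Bremermann's limit,
                | f = mc^2/h, obtained by combining E = mc^2 with
                | E = hf. In Python:
                | 
                |     # Sketch: check the ~10^50 Hz/kg and
                |     # ~10^33 Hz/J figures from E = mc^2 = hf.
                |     c = 2.998e8    # speed of light, m/s
                |     h = 6.626e-34  # Planck constant, J*s
                | 
                |     hz_per_kg = c**2 / h  # for m = 1 kg
                |     hz_per_joule = 1 / h  # from E = h*f
                | 
                |     print(f"{hz_per_kg:.2e} Hz/kg")   # ~1.36e+50
                |     print(f"{hz_per_joule:.2e} Hz/J") # ~1.51e+33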
        
               | Zpalmtree wrote:
               | What about security for your children?
        
               | goldenkey wrote:
               | It is for the safety of everyone. The kids will die too
               | if we don't get this right.
        
               | belter wrote:
                | That is 3D chess. 5D chess says those millions will
                | be worthless when the AGI takes over...
        
               | kaibee wrote:
                | 6D chess is realizing that AGI is not 100% certain,
                | and that having $10M on the run-up to AGI is better
                | than not having $10M on the run-up to AGI.
        
               | _djo_ wrote:
               | Sure, I agree. I was referencing only the idea that being
               | smart in one domain automatically means being a good
               | critical thinker in all domains.
               | 
               | I don't have an opinion on what decision the OpenAI staff
               | should have taken, I think it would've been a tough call
               | for everyone involved and I don't have sufficient
               | evidence to judge either way.
        
               | TheOtherHobbes wrote:
               | Smart is not a one dimensional variable. And critical
               | thinking != corporate politics.
               | 
               | Stupidity is defined by self-harming actions and beliefs,
               | not by low IQ.
               | 
               | You can be extremely smart and still have a very poor
               | model of the world which leads you to harm yourself and
               | others.
        
               | op00to wrote:
               | Stupidity is defined as "having or showing a great lack
               | of intelligence or common sense". You can be extremely
               | smart and still make up your own definitions for words.
        
               | brigandish wrote:
               | I agree. It's better to separate _intellect_ from
               | _intelligence_ instead of conflating them as they usually
               | are. The latter is about _making good decisions_ , which
               | intellect can help with but isn't the only factor. We
               | know this because there are plenty of examples of people
               | who aren't considered shining intellects who can make
               | good choices (certainly in particular contexts) and
               | plenty of high IQ people who make questionable choices.
        
               | augustk wrote:
               | https://liamchingliu.wordpress.com/2012/06/25/intellectua
               | ls-...
        
               | ameister14 wrote:
               | Stupidity is not defined by self-harming actions and
               | beliefs - not sure where you're getting that from.
               | 
               | Stupidity is being presented with a problem and an
               | associated set of information and being unable or less
               | able than others are to find the solution. That's
               | literally it.
        
               | suoduandao3 wrote:
               | Probably from law 3: https://principia-
               | scientific.com/the-5-basic-laws-of-human-s...
               | 
               | But it's an incomplete definition - Cipolla's definition
               | is "someone who causes net harm to themselves and others"
               | and is unrelated to IQ.
               | 
               | It's a very influential essay.
        
               | ameister14 wrote:
               | I see. I've never read his work before, thank you.
               | 
               | So they just got Cipolla's definition wrong, then. It
               | looks like the third fundamental law is closer to "a
               | person who causes harm to another person or group of
               | people without realizing advantage for themselves and
               | instead possibly realizing a loss."
        
               | mrangle wrote:
                | But pronouncing that 700 people are bad at critical
                | thinking is convenient when you disagree with them on
                | the desired outcome and yet can't hope to argue the
                | points.
        
               | Wytwwww wrote:
               | > not mean you are good at critical thinking or thinking
               | about strategic corporate politics
               | 
                | Perhaps. Yet this time they somehow managed to make
                | the seemingly right decisions (from their
                | perspective) despite that.
                | 
                | Also, you'd expect OpenAI board members to be "good
                | at critical thinking or thinking about strategic
                | corporate politics", yet they somehow managed to make
                | some horrible decisions.
        
               | mrangle wrote:
                | Disagreeing with employee actions doesn't mean that
                | you are correct and they failed to think well.
                | Weighing their collective probable profiles,
                | including as insiders, against yours, it would be
                | irrational to conclude that they were in the wrong.
        
               | rewmie wrote:
               | > Disagreeing with employee actions doesn't mean that you
               | are correct and they failed to think well.
               | 
               | You failed to present a case where random guys
               | shitposting on random social media services are somehow
               | correct and more insightful and able to make better
               | decisions than each and every single expert in the field
               | who work directly on both the subject matter and in the
               | organization in question. Beyond being highly dismissive,
               | it's extremely clueless.
        
               | rewmie wrote:
               | > Being an expert in one particular field (AI) not mean
               | you are good at critical thinking or thinking about
               | strategic corporate politics.
               | 
               | That's not the bar you are arguing against.
               | 
               | You are arguing against how you have better information,
               | better insight, better judgement, and are able to make
               | better decisions than the experts in the field who are
               | hired by the leading organization to work directly on the
               | subject matter, and who have direct, first-person account
               | on the inner workings of the organization.
               | 
               | We're reaching peak levels of "random guy arguing online
               | knowing better than experts" with these pseudo-anonymous
               | comments attacking each and every person involved in
               | OpenAI who doesn't agree with them. These characters
               | aren't even aware of how ridiculous they sound.
        
               | cyanydeez wrote:
               | oh gosh, no, no no no.
               | 
               | Doing AI for ChatGPT just means you know a single model
               | really well.
               | 
               | Keep in mind that Steve Jobs chose fruit smoothies for
               | his cancer cure.
               | 
               | It means almost nothing about the charter of OpenAI that
               | they need to hire people with a certain set of skills.
               | That doesn't mean they're closer to their goal.
        
             | Wytwwww wrote:
              | > They act first and foremost as investors rather than
              | as employees on this.
              | 
              | That's not at all obvious; the opposite seems to be the
              | case. They chose to risk having to move to Microsoft
              | and potentially lose most of the equity they had in
              | OpenAI (even if not directly, it wouldn't be worth that
              | much in the end with no one left to do the actual
              | work).
        
           | hutzlibu wrote:
           | "They have a different set of information than you do,"
           | 
            | Their bank accounts' current and potential future
            | numbers?
        
             | tucnak wrote:
              | How is employees protecting themselves suddenly a bad
              | thing? There are no idiots at OpenAI.
        
               | g-b-r wrote:
               | They were supposed to have higher values than money
        
               | lovelyviking wrote:
               | >They were supposed to have higher values than money
               | 
               | which are? ...
        
               | kortilla wrote:
               | Ethics presumably
        
               | jampekka wrote:
               | Perhaps something like "to ensure that artificial general
               | intelligence (AGI)--by which we mean highly autonomous
               | systems that outperform humans at most economically
               | valuable work--benefits all of humanity."
        
               | brazzy wrote:
               | https://openai.com/charter
        
               | plasmatix wrote:
               | I don't understand how, with the dearth of information we
               | currently have, anyone can see this as "higher values" vs
               | "money".
               | 
               | No doubt people are motivated by money but it's not like
               | the board is some infallible arbiter of AI ethics and
               | safety. They made a hugely impactful decision without
               | credible evidence that it was justified.
        
               | Ajedi32 wrote:
                | The issue here is that the board of the non-profit that
                | is _supposedly_ in charge of OpenAI (and whose interests
                | are presumably aligned with the mission statement of the
                | company) seemingly just lost a power struggle with its
                | for-profit subsidiary, which is _not_ supposed to be in
                | charge of OpenAI (and whose interests, including the
                | interests of its employees, are aligned with making as
                | much money as possible). Regardless of whether the
                | board's initial decision that started this power
                | struggle was wise or not, don't you find the outcome a
                | little worrisome?
        
               | logicchains wrote:
               | "higher values" like trying to stop computers from saying
               | the n-word?
        
               | hutzlibu wrote:
               | For some that is important, but more people consider the
               | prevention of an AI monopoly to be more important here.
               | See the original charter and the status quo with
               | Microsoft taking it all.
        
               | Zpalmtree wrote:
               | Why? Did they have to sign a charter affirming their
               | commitment to the mission when they were hired?
        
               | pooya13 wrote:
                | > There are no idiots at OpenAI.
               | 
               | Most certainly there are idiots at OpenAI.
        
               | infamouscow wrote:
               | The current board won't be at OpenAI much longer.
        
           | highwaylights wrote:
            | There's evidence to suggest that a central group pressured
            | the broader base of employees into going along with this,
            | as posted elsewhere in the thread.
        
           | lwhi wrote:
           | I think it's fair to call this reactionary; Sam Altman has
           | played the part of 'ping-pong ball' exceptionally well these
           | past few days.
        
           | kissgyorgy wrote:
           | The available public information is enough to reach this
           | conclusion.
        
           | Satam wrote:
           | I'm sure most of them are extremely intelligent but the
           | situation showed they are easily persuaded, even if
           | principled. They will have to overcome many first-of-a-kind
           | challenges on their quest to AGI but look at how quickly
           | everyone got pulled into a feel-good kumbaya sing-along.
           | 
            | Make of that what you will. To me, this does not project
           | confidence in this being the new Bell Labs. I'm not even sure
           | they have it in their DNA to innovate their products much
           | beyond where they currently are.
        
             | wiz21c wrote:
             | > feel-good kumbaya sing-along
             | 
              | Learning English over HN is so fun!
        
             | abm53 wrote:
             | I think another factor is that they had very limited time.
             | It was clear they needed to pick a side and build momentum
             | quickly.
             | 
             | They couldn't sit back and dwell on it for a few days
             | because then the decision (i.e. the status quo) would have
             | been made for them.
        
               | Satam wrote:
                | Great point. Either way, by the time this all started it
                | might already have been too late.
               | 
               | The board said "allowing the company to be destroyed
               | would be consistent with the mission" - and they might
               | have been right. What's now left is a money-hungry
               | business with bad unit economics that's masquerading as a
               | charity for the whole of humanity. A zombie.
        
             | ah765 wrote:
             | I thought so originally too, but when I thought about their
             | perspective, I realized I would probably sign too. Imagine
             | that your CEO and leadership has led your company to the
             | top of the world, and you're about to get a big payday.
             | Suddenly, without any real explanation, the board kicks out
             | the CEO. The leadership almost all supports the CEO and
             | signs the pledge, including your manager. What would you do
             | at that point? Personally, I'd sign just so I didn't stand
             | out, and stay on good terms with leadership.
             | 
             | The big thing for me is that the board didn't say anything
             | in its defense, and the pledge isn't really binding anyway.
             | I wouldn't actually be sure about supporting the CEO and
             | that would bother me a bit morally, but that doesn't
             | outweigh real world concerns.
        
               | Satam wrote:
               | The point of no return for the company might have been
               | crossed way before the employees were forced to choose
               | sides. Choose Sam's side and the company lives but only
               | as a bittersweet reminder of its founding principles.
               | Choose the board's side and you might be dooming the
               | company to die an even faster death.
               | 
               | But maybe for further revolutions to happen, it did have
               | to die to be reborn as several new entities. After all,
               | that is how OpenAI itself started - people from different
               | backgrounds coming together to go against the status quo.
        
               | vinay_ys wrote:
                | What happened over the weekend is a death and rebirth of
                | the board and the leadership structure, which will
                | definitely ripple throughout the company in the coming
                | days. It just doesn't align perfectly with how you want
                | it to happen.
        
             | gigglesupstairs wrote:
             | > situation showed they are "easily persuaded"
             | 
             | How do you know?
             | 
             | > look at how "quickly" everyone got pulled into
             | 
             | Again, how do you know?
        
             | ssnistfajen wrote:
             | Persuaded by whom? This whole saga has been opaque to
             | pretty much everyone outside the handful of individuals
             | directly negotiating with each other. This never was about
             | a battle for OpenAI's mission or else the share of
             | employees siding with Sam wouldn't have been that high.
        
               | LudwigNagasena wrote:
               | Why not? Maybe the board was just too late to the party.
               | Maybe the employees that wouldn't side with Sam have
               | already left[1], and the board was just too late to
               | realise that. And maybe all the employees who are still
               | at OpenAI mostly care about their equity-like
               | instruments.
               | 
               | [1]
               | https://board.net/p/r.e6a8f6578787a4cc67d4dc438c6d236e
        
             | gexla wrote:
             | My understanding is that the non-profit created the for-
             | profit so that they could offer compensation which would be
             | typical for SV start-ups. Then the board essentially broke
             | the for-profit by removing the SV CEO and putting the
             | "payday" which would have valued the company at 80 billion
             | in jeopardy. The two sides weren't aligned, and they need
             | to decide which company they want to be. Maybe they should
             | have removed Sam before MS came in with their big
             | investment. Or maybe they want to have their cake and eat
             | it too.
        
           | murbard2 wrote:
           | If 95% of people voted in favour of apple pie, I'd become a
           | bit suspicious of apple pie.
        
             | achrono wrote:
              | Or you'd want to thoroughly investigate this so-called
              | voting.
              | 
              | Or conclude that said apple pie was essential to their
              | survival.
        
             | eddtries wrote:
             | I think it makes sense
             | 
             | Sign the letter and support Sam so you have a place at
             | Microsoft if OpenAI tanks _and_ have a place at OpenAI if
             | it continues under Sam, or don't sign and potentially lose
             | your role at OpenAI if Sam stays _and_ lose a bunch of
             | money if Sam leaves and OpenAI fails.
             | 
              | There are no perks to not signing.
        
               | _heimdall wrote:
                | There are perks to not signing for anyone who actually
                | worked at OpenAI for the mission rather than the money.
        
               | WesleyJohnson wrote:
               | Maybe they're working for both, but when push comes to
               | shove they felt like they had no choice? In this economy,
               | it's a little easier to tuck away your ideals in favor of
               | a paycheck unfortunately.
               | 
               | Or maybe the mass signing was less about following the
               | money and more about doing what they felt would force the
               | OpenAI board to cave and bring Sam back, so they could
                | all continue to work towards the mission at OpenAI?
        
               | _heimdall wrote:
               | > In this economy, it's a little easier to tuck away your
               | ideals in favor of a paycheck unfortunately.
               | 
                | It's a gut check on morals/ethics for sure. I'm always
               | pretty torn on the tipping point for empathising there in
               | an industry like tech though, even more so for AI where
               | all the money is today. Our industry is paid extremely
               | well and anyone that wants to hold their personal ethics
               | over money likely has plenty of opportunity to do so. In
               | AI specifically, there would have easily been 800 jobs
               | floating around for AI experts that chose to leave OpenAI
               | because they preferred the for-profit approach.
               | 
               | At least how I see it, Sam coming back to OpenAI is
               | OpenAI abandoning the original vision and leaning full
               | into developing AGI for profit. Anyone that worked there
               | for the original mission might as well leave now, they'll
               | be throwing AI risk out the window almost entirely.
        
             | iowemoretohim wrote:
             | Perhaps a better example would be 95% of people voted in
             | favour of reinstating apple pie to the menu after not
             | receiving a coherent explanation for removing apple pie
             | from the menu.
        
           | kitsune_ wrote:
           | OpenAI Inc.'s mission in their filings:
           | 
           | "OpenAIs goal is to advance digital intelligence in the way
           | that is most likely to benefit humanity as a whole,
           | unconstrained by a need to generate financial return. We
           | think that artificial intelligence technology will help shape
           | the 21st century, and we want to help the world build safe AI
           | technology and ensure that AI's benefits are as widely and
           | evenly distributed as possible. Were trying to build AI as
           | part of a larger community, and we want to openly share our
           | plans and capabilities along the way."
        
             | graftak wrote:
             | People got burned on "don't be evil" once and so far
             | OpenAI's vision looks like a bunch of marketing
             | superlatives when compared to their track record.
        
               | phero_cnstrcts wrote:
               | At this point I tend to believe that big company slogans
               | mean the opposite of what the words say.
               | 
               | Like I would become immediately suspicious if food
               | packaging had "real food" written on it.
        
               | timacles wrote:
                | Unless a "mission statement" is somehow legally binding,
                | it will never mean anything that matters.
                | 
                | It's always written by PR people with marketing in mind.
        
               | nmfisher wrote:
               | At least Google lasted a good 10 years or so before
               | succumbing to the vagaries of the public stock market.
               | OpenAI lasted, what, 3 years?
               | 
               | Not to mention Google never paraded itself around as a
               | non-profit acting in the best interests of humanity.
        
               | rolandog wrote:
               | I would classify their mission "to organize the world's
               | information and make it universally accessible and
               | useful" as some light parading acting in the best
               | interests of humanity.
        
               | bad_user wrote:
               | > _Google never paraded itself around as a non-profit
               | acting in the best interests of humanity._
               | 
               | Just throwing this out there, but maybe ... non-profits
               | shouldn't be considered holier-than-thou, just because
               | they are "non-profits".
        
               | TuringTest wrote:
                | Maybe, but their actions should definitely not be
                | oriented toward maximizing their own profit.
        
               | bad_user wrote:
               | What's wrong with profit and wanting to maximize it?
               | 
               | Profit is now a dirty word somehow, the idea being that
               | it's a perverse incentive. I don't believe that's true.
               | Profit is the one incentive businesses have that's candid
               | and the least perverse. All other incentives lead to
               | concentrating power without being beholden to the free
               | market, via monopoly, regulations, etc.
               | 
               | The most ethically defensible LLM-related work right now
               | is done by Meta/Facebook, because their work is more open
               | to scrutiny. And the non-profit AI doomers are against
               | developing LLMs in the open. Don't you find it curious?
        
               | caddemon wrote:
                | The problem is more about trying to maximize profit after
               | claiming to be a nonprofit. Profit can be a good driving
               | force but it is not perfect. We have nonprofits for a
               | reason, and it is shameful to take advantage of this if
               | you are not functionally a nonprofit. There would be
               | nothing wrong with OpenAI trying to maximize profits if
               | they were a typical company.
        
               | saalweachter wrote:
               | Because non-profit?
               | 
               | There's nothing wrong with running a perfectly good car
               | wash, but you shouldn't be shocked if people are mad when
               | you advertise it as an all you can eat buffet and they
               | come out soaked and hungry.
        
               | deckard1 wrote:
               | > Google lasted a good 10 years
               | 
                | Not sure what event you're thinking of, but Google was a
                | public company before the 10-year mark, and they started
                | their first ad program just barely more than a year
                | after forming as a company in 1998.
        
               | nmfisher wrote:
               | I have no objection to companies[0] making money. It's
               | discarding the philosophical foundations of the company
               | to prioritize quarterly earnings that is offensive.
               | 
               | I consider Google to have been a reasonably benevolent
               | corporate citizen for a good time after they were listed
               | (compare with, say, Microsoft, who were the stereotypical
               | "bad company" throughout the 90s). It was probably around
               | the time of the Google+ failure that things slowly
               | started to go downhill.
               | 
               | [0] a non-profit supposedly acting in the best interests
               | of humanity, though? That's insidious.
        
               | Cheezewheel wrote:
                | I wouldn't really give OpenAI credit for lasting 3 years.
                | OpenAI lasted until the moment they had a successful
                | commercial product. Principles are cheap when there are
                | no actual consequences to sticking to them.
        
             | vaxman wrote:
             | It could be hard to do that while paying a penalty to FTB
             | and IRS for what they're suspected to have done (in
             | allowing a for-profit subsidiary to influence an NPO
             | parent) or dealing with SEC and the state courts over any
             | fiduciary breach allegations related to the published
             | stories. [ Nadella is an OG genius because his company is
             | now shielded from all of that drama as it plays out, no
             | matter the outcome. He can take the time to plan for a soft
             | landing at MS for any OpenAI workers (if/when they need it)
             | and/or to begin duplicating their efforts "just in case."
             | _Heard coming from the HQ parking lot in Redmond_
             | https://youtu.be/GGXzlRoNtHU ]
             | 
             | Now we can all go back to work on GPT4turbo integrations
             | while MS worries about diverting a river or whatever to
             | power and cool all of those AI chips they're gunna [sic]
             | need because none of our enterprises will think twice about
             | our decisions to bet on all this. /s/
        
               | erosenbe0 wrote:
               | For profit subsidiaries can totally influence the
               | nonprofit shell without penalty. Happens all the time.
               | The nonprofit board must act in the interest of the
               | exempt mission rather than just investor value or some
               | other primary purpose. Otherwise it's cool.
        
             | coldtea wrote:
             | Those mission statements are a dime a dozen. A junkie's
             | promise has more value.
        
               | im3w1l wrote:
                | IANAL, but given that OpenAI Inc is a 501(c)(3) public
               | charity wouldn't that mean that the mission statement has
               | some actual legal power to it?
        
             | bottled_poe wrote:
             | If that were true they'd be a not-for-profit
        
             | rvba wrote:
              | Most employees of any organization don't give a fuck about
              | the vision or mission (often they don't even know it) -
              | and are there just for the money.
        
               | j_maffe wrote:
               | Doesn't mean we shouldn't hold an organization
               | accountable for their publicized mission statement.
               | Especially its board and directors.
        
               | DoughnutHole wrote:
                | Not so true when working for an organisation that is
                | ostensibly a non-profit. People working for a non-profit
                | are generally taking a significant hit to their earnings
                | compared to doing similar work in a for-profit, outside
                | of the top management of huge global charities.
               | 
               | The issue here is that OpenAI, Inc (officially and
               | legally a non-profit) has spun up a subsidiary OpenAI
               | Global, LLC (for-profit). OpenAI Global, LLC is what's
               | taken venture funding and can provide equity to
               | employees.
               | 
               | Understandably there's conflict now between those who
               | want to increase growth and profit (and hence the value
               | of their equity) and those who are loyal to the mission
               | of the non-profit.
        
               | erosenbe0 wrote:
               | I don't really think this is true in non-charity work.
               | Half of American hospitals are nonprofit and many of the
               | insurance conglomerates are too, like Kaiser. The
               | executives make plenty of money. Kaiser is a massive
               | nonprofit shell for profitmaking entities owned by
               | physicians or whatever, not all that dissimilar to the
               | OpenAI shell idea. Healthcare worked out this way because
               | it was seen as a good model to have doctors either
               | reporting to a nonprofit or owning their own operations,
               | not reporting to shareholders. That's just tradition
               | though. At this point plenty of healthcare operations are
               | just normal corporations controlled by shareholders.
        
               | rvba wrote:
                | Lots of non-profits that collect money for "cause X"
                | spend 95% of it on administration and 5% on cause X.
        
             | mrangle wrote:
             | What is socially defined as beneficial-to-humanity is
             | functionally mandated by the MSM and therefore capricious,
             | at the least. With that in mind, a translation:
             | 
             | "OpenAI will be obligated to make decisions according to
             | government preference as communicated through soft pressure
             | exerted by the Media. Don't expect these decisions to make
             | financial sense for us".
        
             | blitzar wrote:
             | > most likely to benefit humanity as a whole
             | 
             | Giving me a billion $ would be a net benefit to _humanity
             | as a whole_
        
               | jraph wrote:
               | Depends on what you do (and stop doing) with it :-)
        
           | yodsanklai wrote:
           | > different set of information
           | 
           | and different incentives.
        
           | achrono wrote:
           | No, if they had vastly different information, and if it was
           | on the right side of their own stated purpose & values, they
           | would have behaved very differently. This kind of
            | equivocation obscures the far more important questions,
            | such as: just what the heck is Larry Summers doing on that
            | board?
        
             | vasco wrote:
             | I think this is a good question. One should look at what
             | actually happened in practice. What was the previous board,
             | what is the current board. For the leadership team, what
             | are the changes? Additionally, was information revealed
             | about who calls the shots which can inform who will drive
             | future decisions? Anything else about the inbetweens to me
             | is smoke and mirrors.
        
             | dontupvoteme wrote:
             | >just what the heck is Larry Summers doing on that board?
             | 
             | 1. Did you really think the feds wouldn't be involved?
             | 
             | AI is part of the next geopolitical cold war/realpolitik of
             | nation-states. Up until now it's just been passively
             | collecting and spying on data. And yes they absolutely will
             | be using it in the military, probably after Israel or some
             | other western-aligned nation gives it a test run.
             | 
             | 2. Considering how much impact it will have on the entire
             | economy by being able to put many white collar workers out
             | of work, a seasoned economist makes sense.
             | 
              | The East Coast runs the joint. The West Coast just does
              | the public-facing tech stuff and takes the heat from the
              | public.
        
               | chucke1992 wrote:
                | Yeah, I think Larry is there because ChatGPT has become
                | too important to the USA.
        
               | jddj wrote:
               | The timing of the semiconductor export controls being
               | another datapoint here in support of #1.
               | 
               | Not that it's really in need of additional evidence.
        
             | cyanydeez wrote:
              | I assume Larry Summers is there to ensure the proper bi-
              | partisan choices are made by what's clearly now a
              | _business_ product and not a product for humanity.
              | 
              | Which is utterly scary.
        
             | hobofan wrote:
             | > of their own stated purpose & values
             | 
              | You mean the officially stated purpose of OpenAI. The
              | stated purpose that is constantly contradicted by many of
              | their actions, and that I think nobody has taken
              | seriously for years.
              | 
              | From everything I can tell, the people working at OpenAI
              | have always cared more about advancing the space and
              | building great products than about "openness" and "safe
              | AGI". The official values of OpenAI were never "their
              | own".
        
               | WanderPanda wrote:
               | "never" is a strong word. I believe in the RL era of
               | OpenAI they were quite aligned with the mission/values
        
               | bnralt wrote:
                | > From everything I can tell, the people working at
                | OpenAI have always cared more about advancing the space
                | and building great products than about "openness" and
                | "safe AGI".
                | 
                | Board member Helen Toner strongly criticized OpenAI for
                | publicly releasing its GPT when it did and not keeping
                | it closed for longer. That would seem to be working
                | against openness for many people, but others would see
                | it as working towards safe AI.
                | 
                | The thing is, people have radically different ideas
                | about what openness and safety mean. There's a lot of
                | talk about whether or not OpenAI stuck with its stated
                | purpose, but there's no consensus on what that purpose
                | actually means in practice.
        
             | shmatt wrote:
             | He's a white male replacing a female board member. Which is
             | probably what they wanted all along
        
               | dbspin wrote:
               | Yes, the patriarchy collectively breathed a sigh of
               | relief as one of our agents was inserted to prevent any
               | threat from the other side.
        
             | 38321003thrw wrote:
             | > just what the heck is Larry Summers doing on that board?
             | 
              | Probably precisely what Condoleezza Rice was doing on
              | Dropbox's board. Or that board filled with national
             | security state heavyweights on that "visionary" and her
             | blood testing thingie.
             | 
             | https://www.wired.com/2014/04/dropbox-rice-controversy/
             | 
             | https://en.wikipedia.org/wiki/Theranos#Management
             | 
             | In other possibly related news:
             | https://nitter.net/elonmusk/status/1726408333781774393#m
             | 
             | "What matters now is the way forward, as the DoD has a
             | critical unmet need to bring the power of cloud and AI to
             | our men and women in uniform, modernizing technology
             | infrastructure and platform services technology. We stand
             | ready to support the DoD as they work through their next
             | steps and its new cloud computing solicitation plans."
             | (2021)
             | 
             | https://blogs.microsoft.com/blog/2021/07/06/microsofts-
             | commi...
        
             | T-A wrote:
             | > what the heck is Larry Summers doing on that board?
             | 
             | The former president of a research-oriented nonprofit
             | (Harvard U) controlling a revenue-generating entity
             | (Harvard Management Co) worth tens of billions, ousted for
             | harboring views considered harmful by a dominant
             | ideological faction of his constituency? I guess he's
             | expected to have learned a thing or two from that.
             | 
             | And as an economist with a stint of heading the treasury
             | under his belt, he's presumably expected to be able to
             | address the less apocalyptic fears surrounding AI.
        
             | mrangle wrote:
             | Said purpose and values are nothing more than an attempted
             | control lever for dark actors, very obviously. People /
             | factions that gain handholds, which otherwise wouldn't
             | exist, and exert control through social pressure nonsense
             | that they don't believe in themselves. As can be extracted
             | from modern street-brawl politics, which utilizes the same
             | terminology to the same effect. And as can be inferred
             | would be the case given OAI's novel and convoluted
             | corporate structure as referenced to the importance of its
             | tech.
             | 
             | We just witnessed the war for that power play out,
             | partially. But don't worry, see next. Nothing is opaque
             | about the appointment of Larry Summers. Very obviously,
             | he's the government's seat on the board (see 'dark actors',
             | now a little more into the light). Which is why I noted
             | that the power competition only played out, partially.
             | Altman is now unfireable, at least at this stage, and yet
             | it would be irrational to think that this strategic mistake
             | would inspire the most powerful actor to release its grip.
             | The handhold has only been adjusted.
        
             | BurningFrog wrote:
             | Larry Summers is everywhere and does everything.
        
               | TuringTest wrote:
               | At the same time?
        
               | marcosdumay wrote:
               | All at once.
        
           | __loam wrote:
           | They have a different set of incentives. If I were them I
           | would have done the same thing, Altman is going to make them
           | all fucking rich. Not sure if that will benefit humanity
           | though.
        
           | JCM9 wrote:
            | When a politician wins with 98% of the vote do you A) think
            | that person must be an incredible leader, or B) think
            | something else is going on?
           | 
           | Only time will tell if this was a good or bad outcome, but
           | for now the damage is done and OpenAI has a lot of trust
           | rebuilding to do to shake off the reputation that it now has
           | after this circus.
        
             | bad_user wrote:
              | The environment in a small to medium company is much more
              | homogeneous than in the general population.
             | 
             | When you see 95%+ consensus from 800 employees, that
             | doesn't suggest tanks and police dogs intimidating people
             | at the voting booth.
        
               | mstade wrote:
               | Not that I have any insight into any of the events at
               | OpenAI, but would just like to point out there are
               | several other reasons why so many people would sign,
               | including but not limited to:
               | 
               | - peer pressure
               | 
               | - group think
               | 
               | - financial motives
               | 
               | - fear of the unknown (Sam being a known quantity)
               | 
               | - etc.
               | 
               | So many signatures may well mean there's consensus, but
               | it's not a given. It may well be that we see a mass
               | exodus of talent from OpenAI _anyway_, due to recent
               | events, just on a different time scale.
               | 
               | If I had to pick one reason though, it's consensus. This
               | whole saga could've been the script to an episode of
               | Silicon Valley[1], and having been on the inside of
               | companies like that I too would sign a document asking
               | for a return to known quantities and - hopefully -
               | stability.
               | 
               | [1]: https://www.imdb.com/title/tt2575988/
        
               | FabHK wrote:
               | I'd love another season of Silicon Valley, with some Game
               | Stonk and Bored Apes and ChatGPT and FTX and Elon
               | madness.
        
               | jakderrida wrote:
               | The only major series with a brilliant, satisfying, and
               | true to form ending and you want to resuscitate it back
               | to life for some cheap curtain calls and modern social
               | commentary, leaving Mike Judge to end it yet again and in
               | such a way that manages to duplicate or exceed the effect
               | of the first time but without doing the same thing? Screw
               | it. Why not?
        
               | phpisthebest wrote:
                | If the opposing letter that was published from "former"
                | employees is correct, there was already huge turnover,
                | and the people who remain liked the environment they
                | were in, and I would assume liked the current leadership
                | or they would have left.
                | 
                | So clearly the current leadership built a loyal group,
                | which I think is something that should be explored,
                | because groupthink is rarely a good thing, no matter how
                | much modern society wants to push out all dissent in
                | favor of a monoculture of ideas.
                | 
                | If OpenAI is a huge monoculture of thinking then they
                | most likely have bigger problems.
        
               | bad_user wrote:
               | What opposing letter, how many people are we talking
               | about, and what was their role in the company?
               | 
               | All companies are monocultures, IMO, unless they are
               | multi-nationals, and even then, there's cultural
               | convergence. And that's good, actually. People in a
               | company have to be aligned enough to avoid internal
               | turmoil.
        
               | phpisthebest wrote:
               | >>What opposing letter, how many people are we talking
               | about, and what was their role in the company?
               | 
               | Not-validated, unsigned letter [1]
               | 
               | >>All companies are monocultures
               | 
                | Yes and no. There has to be diversity of thought to ever
                | get anything done, really. If everyone is just a
                | sycophant agreeing with the boss then you end up with
                | very bad product choices, and even worse company
                | direction.
                | 
                | Yes, there has to be some commonality, some semblance of
                | shared vision or values, but I don't think that makes a
                | "monoculture".
               | 
               | [1] https://wccftech.com/former-openai-employees-allege-
               | deceit-a...
        
               | framapotari wrote:
                | Exactly; there are multitudes of reasons and very little
                | information, so why pick any one of them?
        
               | bad_user wrote:
               | You could say that, except that people in this industry
               | are the most privileged, and their earnings and equity
               | would probably be matched elsewhere.
               | 
               | You say "group think" like it's a bad thing. There's
               | always wisdom in crowds. We have a mob mentality as an
               | evolutionary advantage. You're also willing to believe
               | that 3-4 people can make better judgement calls than 800
               | people. That's only possible if the board has information
               | that's not public, and I don't think they do, or else
               | they would have published it already.
               | 
               | And ... it doesn't matter why there's such a wide
               | consensus. Whether they care about their legacy, or
               | earnings, or not upsetting their colleagues, doesn't
               | matter. The board acted poorly, undoubtedly. Even if they
               | had legitimate reasons to do what they did, that stopped
               | mattering.
        
               | axus wrote:
               | I'm imagining they see themselves in the position of
               | Microsoft employees about to release Windows 95, or Apple
               | employees about to release the iPhone... and someone
               | wants to get rid of Bill Gates or Steve Jobs.
        
               | rvnx wrote:
                | See, neither Bill Gates nor Steve Jobs is around these
                | companies anymore, and all is fine.
                | 
                | Apple and Microsoft even have the strongest financial
                | results of their lifetimes.
        
               | ronchalant wrote:
               | Gates and Jobs helped establish these companies as the
               | powerhouses they are today with their leadership in the
               | 90s and 00s.
               | 
               | It's fair to say that what got MS and Apple to dominance
               | may be different from what it takes to keep them there,
               | but which part of that corporate timeline more closely
               | resembles OpenAI?
        
               | ghodith wrote:
               | Now go back in time and cut them before their companies
               | took off.
        
               | ghaff wrote:
               | Signing petitions is also cheap. It doesn't mean that
               | everyone signing has thought deeply and actually made a
               | life-changing decision.
        
               | kcplate wrote:
               | Personally I have never seen that level of singular
               | agreement in any group of people that large. Especially
                | to the level of sacrifice they were willing to make for
               | the cause. You maybe see that level of devotion to a
               | leader in churches or cults, but in any other group? You
               | can barely get 3 people to agree on a restaurant for
               | lunch.
               | 
               | I am not saying something nefarious forced it, but it's
               | certainly unusual in my experience and this causes me to
               | be skeptical of why.
        
               | psychoslave wrote:
               | >You can barely get 3 people to agree on a restaurant for
               | lunch.
               | 
                | I was about to state that a single human is enough to
                | see disagreements arise, but this doesn't reach full
                | consensus in my mind.
        
               | kcplate wrote:
               | I was conflicted about originally posting that sentence.
               | I waffled back and forth between, 2, 3, 5...
               | 
               | Three was the compromise I made with myself.
        
               | panragon wrote:
               | >Especially to the level of sacrifice they were willing
                | to make for the cause.
               | 
               | We have no idea that they were sacrificing anything
               | personally. The packages Microsoft offered for people who
               | separated may have been much more generous than what they
               | were currently sitting on. Sure, Altman is a good leader,
               | but Microsoft also has deep pockets. When you see some of
               | the top brass at the company already make the move and
                | you _know_ they're willing to pay to bring you over as
               | well, we're not talking about a huge risk here. If
               | anything, staying with what at the time looked like a
               | sinking ship might have been a much larger sacrifice.
        
               | lxgr wrote:
               | Approval rates of >90% are quite common within political
               | parties, to the point where anything less can be seen as
               | an embarrassment to the incumbent head of party.
        
               | kcplate wrote:
               | There is a big difference between "I agree with this..."
               | when a telephone poll caller reaches you and "I am
               | willing to leave my livelihood because my company CEO got
               | fired"
        
               | from-nibly wrote:
               | But if 100 employees were like "I'm gonna leave" then
               | your livelihood is in jeopardy. So you join in. It's
               | really easy to see 90% of people jumping overboard when
               | they are all on a sinking ship.
        
               | lxgr wrote:
               | I don't mean voter approval, I mean party member
               | approval. That's arguably not that far off from a CEO
               | situation in a way in that it's the opinion of and
               | support for the group's leadership by group members.
               | 
               | Voter approval is actually usually much less unanimous,
               | as far as I can tell.
        
               | zerbinxx wrote:
               | But it's not changing their livelihood. Msft just gives
               | them the same deal. In a lot of ways, it's similar to the
               | telepoll - people can just say whatever they want, there
               | won't be big material consequences
        
               | dahart wrote:
               | This seems extremely presumptuous. Have you ever been
               | inside a company during a coup attempt? The employees'
                | future pay and livelihood are at stake, so why are you
                | assuming they weren't being asked to sacrifice themselves
                | by _not_ objecting to the coup? The level of agreement
               | could be entirely due to the fact that the stakes are
               | very large, completely unlike your choice for lunch
               | locale. It could also be an outcome of nobody having
               | asked their opinion before making a very big change. I'd
               | expect to see almost everyone at a company agree with
               | each other if the question was, "hey should we close this
               | profitable company and all go get other jobs, or should
               | we keep working?"
        
               | kcplate wrote:
               | I have had a long career and have been through hostile
               | mergers several times and at no point have I ever seen
               | large numbers of employees act outside of their self-
               | interest for an executive. It just doesn't happen. Even
                | in my career, with executives _who are my friends_, I
               | would not act outside my personal interests. When things
               | are corporately uncertain and people worry about their
               | working livelihoods they just don't tend to act that way.
               | They tend to hunker heads down or jump independently.
               | 
               | The only explanation that makes any sense to me is that
               | these folks know that AI is hot right now and would be
               | scooped up quickly by other orgs...so there is little
               | risk in taking a stand. Without that caveat, there is no
               | doubt in my mind that there would not be this level of
               | solidarity to a CEO.
        
               | dahart wrote:
               | > at no point have I ever seen large numbers of employees
               | act outside of their self-interest for an executive.
               | 
               | This is still making the same assumption. Why are you
               | assuming they are acting outside of self-interest?
        
               | kcplate wrote:
               | If you are willing to leave a paycheck because of someone
                | else getting slighted, _to me_, that is acting against
               | your own self-interest. Assuming of course you are
               | willing to actually leave. If it was a bluff, that still
               | works against your self-interest by factioning against
               | the new leadership and inviting retaliation for your
               | bluff.
        
               | dahart wrote:
               | Why do you assume they were willing to leave a paycheck
               | because of someone else getting slighted? If that were
               | the case, then it is unlikely everyone would be in
               | agreement. Which indicates you might be making incorrect
               | assumptions, no? And, again, why assume they were
               | threatening to leave a paycheck at all? That's a bad
               | assumption; MS was offering a paycheck. We already know
               | their salaries weren't on the line, but all future stock
               | earnings and bonuses very well might be. There could be
               | other reasons too, I don't see how you can conclude this
               | was either a bluff or not self-interest without making
               | potentially bad assumptions.
        
               | kcplate wrote:
               | They threatened to quit. You don't actually believe that
               | a company would be willing to still provide them a
                | paycheck if they left the company, do you?
               | 
               | At this point I suspect you are being deliberately
               | obtuse. Have a good day.
        
               | dahart wrote:
               | They threatened to quit by _moving_ to Microsoft, didn't
               | you read the letter? MS assured everyone jobs if they
               | wanted to move. Isn't making incorrect assumptions and
               | sticking to them in the face of contrary evidence and not
               | answering direct questions the very definition of obtuse?
        
               | cellar_door wrote:
               | There are plenty of examples of workers unions voting
               | with similar levels of agreement. Here are two from the
               | last couple months:
               | 
               | > UAW President Shawn Fain announced today that the
               | union's strike authorization vote passed with near
               | universal approval from the 150,000 union workers at
               | Ford, General Motors and Stellantis. Final votes are
               | still being tabulated, but the current combined average
               | across the Big Three was 97% in favor of strike
               | authorization. The vote does not guarantee a strike will
               | be called, only that the union has the right to call a
               | strike if the Big Three refuse to reach a fair deal.
               | 
               | https://uaw.org/97-uaws-big-three-members-vote-yes-
               | authorize...
               | 
               | > The Writers Guild of America has voted overwhelmingly
               | to ratify its new contract, formally ending one of the
               | longest labor disputes in Hollywood history. The
               | membership voted 99% in favor of ratification, with 8,435
               | voting yes and 90 members opposed.
               | 
               | https://variety.com/2023/biz/news/wga-ratify-contract-
               | end-st...
        
               | plorg wrote:
                | That sounds like a cult more than a business. I work at
                | a small company (~100 people) where we are more or less
                | aligned on what we're doing, and you are still not going
                | to get close to that consensus on anything. Same for our
                | sister company, which is about the same size as OpenAI.
        
               | chiefalchemist wrote:
                | It also sounds like a very narrow hiring profile. That is,
               | favoring the like-minded and assimilation over free
               | thinking and philosophical diversity. They might give off
               | the appearance of "diversity" on the outside - which is
               | great for PR - but under the hood it's more monocultural.
               | Maybe?
        
               | phpisthebest wrote:
               | Superficial "diversity" is all the "diversity" a company
               | needs in the modern era.
               | 
               | Companies do not desire or seek philosophical diversity,
               | they only want Superficial biologically based "diversity"
               | to prove they have the "correct" philosophy about the
               | world.
        
               | docmars wrote:
               | Agree. This is the monoculture being adopted in actuality
               | -- a racist crusade against "whiteness", and a coercive
               | mechanism to ensure companies don't overstep their usage
               | of resources (carbon footprint), so as not to threaten
               | the existing titans who may have already abused what was
               | available to them before these intracorporate policies
               | existed.
               | 
               | It's also a way for banks and other powerful entities to
               | enforce sweeping policies across international businesses
               | that haven't been enacted in law. In other words: if
               | governing bodies aren't working for them, they'll just do
               | it themselves and undermine the will of companies who do
               | not want to participate, by introducing social pressures
               | and boycotting potential partnerships unless they comply.
               | 
               | Ironically, it snuffs out diversity among companies at a
               | 40k foot level.
        
               | jakderrida wrote:
               | It's not a crusade against whiteness. Unless you're
               | unhinged and believe a single phenotype that prevents
               | skin cancer is somehow an obvious reflection of genetic
               | inferiority and that those lacking it have a historical
               | destiny to rule over the rest and are entitled to
               | institutional privileges over them, it makes sense that
               | companies with employees not representative of the
               | overall population have hiring practices that are
                | problematic, albeit not necessarily as explicitly
               | racist as you are.
        
               | docmars wrote:
               | Unfortunately you are wrong, and this kind of rhetoric
               | has not only made calls for white genocide acceptable and
               | unpunished, but has incited violence specifically against
               | Caucasian people, as well as anyone who is perceived to
               | adopt "white" thinking such as Asian students
               | specifically, and even Black folks who see success in
               | their life as a result of adopting longstanding
               | European/Western principles in their lives.
               | 
               | Specifically, principles that have ultimately led to the
               | great civilizations we're experiencing today, built upon
               | centuries of hard work and deep thinking in both the arts
               | and sciences, _by all races_ , beautifully.
               | 
               | DEI and its creators/pushers are a subtle effort to erase
               | and rebuild this prior work under the lie that it had
               | excluded everyone but Whites, so that its original
               | creators no longer take credit.
               | 
               | Take the movement to redefine Math concepts by recycling
               | existing concepts using new terms defined exclusively by
               | non-white participants, since its origins are "too
               | white". Oh the horror! This is false, as there are many
               | prominent non-white mathematicians that existed prior to
               | the woke revolution, so this movement's stated purpose is
               | a lie, and its true purpose is to eliminate and replace
               | white influence.
               | 
               | Finally, the fact that DEI specifically targets
               | "whiteness" is patently racist. Period.
        
               | chiefalchemist wrote:
               | But it's not only the companies, it's the marginalized so
               | desperate to get a "seat at the table" that they don't
               | recognize the table isn't getting bigger and rounder.
                | Instead, it's still the same rectangular table that is
                | getting longer and longer.
               | 
               | Participating in that is assimilation.
        
               | docmars wrote:
               | I think that most pushes for diversity that we see today
               | are intended to result in monocultures.
               | 
               | DEI and similar programs use very specific racial
               | language to manipulate everyone into believing whiteness
               | is evil and that rallying around that is the end goal for
               | everyone in a company.
               | 
               | On a similar note, the company has already established
               | certain missions and values that new hires may strongly
               | align with, like "discovering and enacting the path to
               | safe artificial general intelligence". Given not only the
               | excitement around AI's possibilities but also the social
               | responsibility of developing it safely, both are highly
               | appealing goals that are bound to change humanity
               | forever, and it would be monumentally exciting to play a
               | part in that.
               | 
               | Thus, it's safe to think that most employees lucky enough
               | to have earned a chance at participating would want to
               | preserve that, if they're aligned.
               | 
               | This kind of alignment is not the bad thing people think
               | it is. There's nothing quite like a well-oiled machine,
               | even if the perception of diversity from the outside
               | falls by the wayside.
               | 
               | Diversity is too often sought for vanity rather than
               | practical purposes. This is the danger of the coercive,
               | box-checking ESG goals we're seeing plague companies, to
               | the extent that chasing them is becoming unpopular due to
               | the strongly partisan political connotations they bring.
        
               | docmars wrote:
               | I think it could be a number of factors:
               | 
               | 1. The company has built a culture around not being under
               | the control of any single company, Microsoft in this
               | case. Employees may overwhelmingly agree.
               | 
               | 2. The board acted rashly in the first place, and over
               | 2/3 of employees signed their intent to quit if the board
               | wasn't replaced.
               | 
               | 3. Younger folks probably don't think highly of boards in
               | general, because they never get to interact with them.
               | Boards also sometimes dictate product outcomes that go
               | against the creative freedom and autonomy employees are
               | looking for. And boards are focused on profits, which is
               | a net good for the company but threatens the culture of
               | "for the good of humanity" that hooks people.
               | 
               | 4. The high success of OpenAI has probably inspired
               | loyalty in its employees, so long as it remains stable,
               | and to them stability means the company changing as
               | little as possible. Being "acquired" by Microsoft could
               | mean major shakeups and potential layoffs. There are no
               | guarantees for the bulk of workers here.
               | 
               | I'm reading into the variables and using intuition to
               | make these guesses, but all to suggest: it's complicated,
               | and sometimes outliers like this can happen if those
               | variables create enough alignment and seem commonsensical
               | enough to most people.
        
               | denton-scratch wrote:
               | > Younger folks probably don't look highly at boards in
               | general, because they never get to interact with them.
               | 
               | Judging from the photos I've seen of the principals in
               | this story, none of them looks to be over 30, and some of
               | them look like schoolkids. I'm referring to the _board
               | members_.
        
               | docmars wrote:
               | I don't think the age of the board members matters, but
               | rather that younger generations have been taught to
               | criticize boards of any & every company for their myriad
               | decisions to sacrifice good things for profit, etc.
               | 
               | It's a common theme in the overall critique of late-stage
               | capitalism, is all I'm saying -- and it could be a factor
               | in influencing OpenAI employees' decision to seek action
               | that specifically eliminates the current board, given an
               | inherent bias that boards act problematically to begin
               | with.
        
               | from-nibly wrote:
               | Right. They aren't actually voting for Sam Altman. If I'm
               | working at a company and I see as little as 10% of the
               | company jump ship I think "I'd better get the frik outta
               | here". Especially if I respect the other people who are
               | leaving. This isn't a blind vote. This is a rolling
               | snowball.
               | 
               | I don't think very many people actually need to believe
               | in Sam Altman for basically everyone to switch to
               | Microsoft.
               | 
               | 95% doesn't show a large amount of loyalty to Sam; it
               | shows a low amount of loyalty to OpenAI.
               | 
               | So it looks like a VERY normal company.
        
             | drivers99 wrote:
             | Originally, 65% had signed (505 of 770).
        
             | roflc0ptic wrote:
             | The simple answer here is that the board's actions stood to
             | incinerate millions of dollars of wealth for most of these
             | employees, and they were up in arms.
             | 
             | They're all acting out the intended incentives of giving
             | people stake in a company: please don't destroy it.
        
               | whywhywhywhy wrote:
               | Wild that the employees will go back under a new board
               | and the same structure; the first priority should be
               | removing the structure that allowed a small group of
               | people to destroy things over what may have been very
               | petty reasons.
        
               | CydeWeys wrote:
               | Well it's a different group of people and that group will
               | now know the consequences of attempting to remove Sam
               | Altman. I don't see this happening again.
        
               | youcantcook wrote:
               | Most likely, but it is cute how confident you are that
               | humanity will learn its lesson.
        
               | tstrimple wrote:
               | Humanity, no. But it's not humanity on the OpenAI board.
               | It's 9 individuals. Individuals have amazing capacity for
               | learning and improvement.
        
               | cityguy33 wrote:
               | I don't understand how the fact that they went from a
               | nonprofit to a for-profit subsidiary of one of the most
               | closed-off, anticompetitive megacorps in tech is so
               | readily glossed over. I get it, we all love money and
               | Sam's great at generating it, but anyone who works at
               | OpenAI besides the board seems to be morally bankrupt.
        
               | gdhkgdhkvff wrote:
               | Pretty easy to complain about lack of morals when it's
               | _someone else's_ millions of dollars of potential
               | compensation that will be incinerated.
               | 
               | Also, working for a subsidiary (which was likely going to
               | be given much more self-governance than teams working
               | directly at the megacorp) doesn't necessarily mean
               | "evil". That's a very one-dimensional way to think about
               | things.
               | 
               | Self-disclosure: I work for a megacorp.
        
               | yoyohello13 wrote:
               | We can acknowledge that it's morally bankrupt, while also
               | not blaming them. Hell, I'd probably do the same thing in
               | their shoes. That doesn't make it right.
        
               | BeetleB wrote:
               | > Pretty easy to complain about lack of morals when it's
               | someone else's millions of dollars of potential
               | compensation that will be incinerated.
               | 
               | And while also working for a for-profit company.
        
               | yterdy wrote:
               | If some of the smartest people on the planet are willing
               | to sell the rest of us out for Comfy Lifestyle Money (not
               | even Influence State Politics Money), then we are well
               | and truly Capital-F Fucked.
        
               | deckard1 wrote:
               | We already know some of the smartest people are willing
               | to sell us out. Because they work for FAANG ad tech,
               | spending their days figuring out how to maximize the
               | eyeballs they reach while sucking up all your privacy.
               | 
               | It's a post-"Don't be evil" world today.
        
               | jacquesm wrote:
               | If half of the brainpower invested in advertising food
               | went towards world hunger, we'd have too much food.
        
               | slg wrote:
               | > Pretty easy to complain about lack of morals when it's
               | someone else's millions of dollars of potential
               | compensation that will be incinerated.
               | 
               | That is a part of the reason why organizations choose to
               | set themselves up as a non-profit, to help codify those
               | morals into the legal status of the organization to
               | ensure that the ingrained selfishness that exists in all
               | of us doesn't overtake their mission. That is the heart
               | of this whole controversy. If OpenAI was never a non-
               | profit, there wouldn't be any issue here because they
               | wouldn't even be having this legal and ethical fight.
               | They would just be pursuing the selfish path like all
               | other for profit businesses and there would be no room
               | for the board to fire or even really criticize Sam.
        
               | cityguy33 wrote:
               | I guess my qualm is that this is the cost of doing
               | business, yet people are outraged at the board because
               | they're not going to make truckloads of money in equity
               | grants. That's the morally bankrupt part in my opinion.
               | 
               | If you throw your hands up and say, "well, kudos to them,
               | they're actually fulfilling their goal of being a non-
               | profit. I'm going to find a new job", that's fine by me.
               | But if you get morally outraged at the board over this
               | because you expected the payday of a lifetime, that's on
               | you.
        
               | Zpalmtree wrote:
               | Why would they be morally bankrupt? Do the employees have
               | to care if it's a non profit or a for profit?
               | 
               | And if they do prefer it as a for profit company, why
               | would that make them morally bankrupt?
        
               | endtime wrote:
               | > anyone who works at OpenAI besides the board seems to
               | be morally bankrupt.
               | 
               | People concerned about AI safety were probably not going
               | to join in the first place...
        
               | rozap wrote:
               | Easy to see how humans would join a non-profit for the
               | vibes, and then, when they create one of the most
               | compelling products of the last decade, worth billions of
               | dollars, quickly change their thinking to "wait, I should
               | get rewarded for this".
        
               | cma wrote:
               | Supposedly they had about 50% of employees leave in the
               | year of the conversion to for-profit.
        
             | heyjamesknight wrote:
             | That argument only works with a "population", since almost
             | nobody gets to choose which set of politicians they vote
             | for.
             | 
             | In this case, OpenAI employees all voluntarily sought to
             | join that team at one point. It's not hard to imagine that
             | 98% of a self-selecting group would continue to self-select
             | in a similar fashion.
        
             | shzhdbi09gv8ioi wrote:
             | > for now the damage is done and OpenAI has a lot of trust
             | rebuilding to do
             | 
             | Nobody cares, except shareholders.
        
             | JVIDEL wrote:
              | Odds are that if he'd left, their compensation situation
              | might have changed for the worse, if not led to
              | downsizing, and that on the edge of a recession with
              | plenty of competition out there.
        
           | kiba wrote:
            | They could just reach a different conclusion based on their
           | values. OpenAI doesn't seem to be remotely serious about
           | preventing the misuse of AI.
        
         | kmlevitt wrote:
         | I think this outcome was actually much more favorable to
         | D'Angelo's faction than people realize. The truth is before
         | this Sam was basically running circles around the board and
         | doing whatever he wanted on the profit side- that's what was
         | pissing them off so much in the first place. He was even trying
          | to depose board members who were openly critical of
          | OpenAI's practices.
         | 
         | From here on out there is going to be far more media scrutiny
         | on who gets picked as a board member, where they stand on the
         | company's policies, and just how independent they really are.
         | Sam, Greg and even Ilya are off the board altogether. Whoever
         | they can all agree on to fill the remaining seats, Sam is going
         | to have to be a lot more subservient to them to keep the peace.
        
           | bugglebeetle wrote:
           | > Sam, Greg and even Ilya are off the board altogether.
           | Whoever they can all agree on to fill the remaining seats,
           | Sam is going to have to be a lot more subservient to them to
           | keep the peace.
           | 
           | The existing board is just a seat-warming body until Altman
           | and Microsoft can stack it with favorables to their (and the
           | U.S. Government's) interests. The naivete from the NPO
           | faction was believing they'd be able to develop these
           | capacities outside the strict control of the military
           | industrial complex when AI has been established as part of
           | the new Cold War with China.
        
             | ah765 wrote:
             | According to this tweet thread[1], they negotiated hard for
             | Sam to be off the board and Adam to stay on. That
             | indicates, at least if we're being optimistic, that the
             | current board is not in Sam's pocket (otherwise they
             | wouldn't have bothered)
             | 
              | [1]:(https://twitter.com/emilychangtv/status/1727216818648134101)
        
               | bugglebeetle wrote:
               | I'm sorry, but that's all kayfabe. If there is one thing
               | that's been demonstrated in this whole fiasco, it's who
               | really has all the power at OpenAI (and it's not the
               | board).
        
               | wouldbecouldbe wrote:
                | Yeah, the board is kind of pointless now.
                | 
                | They can't control the CEO, nor fire him.
                | 
                | They can't take actions to take back control from
                | Microsoft and Sam, because Sam is the CEO. Even if Sam
                | is of the utmost morality, he would be crazy to help
                | them back into a strong position after last week.
                | 
                | So it's the Sam & Microsoft show now; only a master
                | schemer can get back some power to the board.
        
               | wouldbecouldbe wrote:
                | It would be an interesting move to install a co-CEO in
                | a few months. That would be harder for Sam to object to.
        
               | notahacker wrote:
               | Yeah, that's my take. Doesn't really matter if the
               | composition of the board is to Adam's liking and has a
               | couple more heavy hitters if Sam is untouchable and
               | Microsoft is signalling that any time OpenAI acts against
               | its interests they will take steps to ensure it ceases to
               | have any staff or funding.
        
             | kmlevitt wrote:
             | >The existing board is just a seat-warming body until
             | Altman and Microsoft can stack it with favorables to their
             | (and the U.S. Government's) interests.
             | 
              | That's incorrect. The new members will be chosen by
              | D'Angelo and the two new independent board members, both
              | of whom D'Angelo had a big hand in choosing.
             | 
              | I'm not saying Larry Summers etc. are going to be in
              | D'Angelo's pocket. But the whole reason he agreed to those
              | picks is that he knows they won't be in Sam's pocket,
              | either.
             | More likely they will act independently and choose future
             | members that they sincerely believe will be the best picks
             | for the nonprofit.
        
           | eviks wrote:
           | Doesn't make sense that after such a broad board capitulation
           | the next one will have any power, and media scrutiny isn't a
           | powerful governance mechanism
        
             | kmlevitt wrote:
             | When you consider they were acting under the threat of the
             | entire company walking out and the threat of endless
             | lawsuits, this is a remarkably mild capitulation. All the
             | new board members are going to be chosen by D'Angelo and
             | two new board members that he also had a big hand in
             | choosing.
             | 
             | And say what you want about Larry Summers, but he's not
             | going to be either Sam's or even Microsoft's bitch.
        
               | imjonse wrote:
                | I wonder what the rationale is for picking a seasoned
                | politician and economist (he influenced the deregulation
                | of the US financial system, was friends with Epstein,
                | and has a few other controversies on record). Has the
                | government also entered the chat so obviously?
        
               | choult wrote:
               | It probably means that they anticipate a need for dealing
               | with the government in future, such as having a hand in
               | regulation of their industry.
        
               | voster wrote:
               | They had congressman Will Hurd on the board before. Govt-
                | adjacent people on non-profit boards are common for many
                | reasons - understanding regulatory requirements, access
                | to people, but also actual "good" reasons, like the fact
                | that many people who work close to the state genuinely
                | have good intentions about social good (whether you
                | agree with their interpretation of it or not).
        
               | eviks wrote:
               | What I'd want to say about Larry is that he is definitely
               | not going to care about the whole-society non-profit
               | shtick of the company to any degree comparable with the
                | previous board members, so he won't constrain Sam/MS in
               | any way
        
               | sanxiyn wrote:
                | Why? As an economist, he understands perfectly what a
                | public good is, why a free market fails by
                | underproducing public goods, and the role of nonprofits
                | in producing them.
        
               | ZiiS wrote:
                | His deregulation of the banks suggests he heavily favors
                | free markets even when history has proved him very, very
                | wrong.
        
               | pevey wrote:
               | Larry Summers has a track record of not believing in
               | market failures, just market opportunities for private
               | interests. Economists vary vastly in their belief
               | systems, and economics is more politics than science, no
               | matter how much math they try to use to distract from
               | this.
        
               | kmlevitt wrote:
               | I don't know if Adam D'Angelo would agree with you,
               | because he had veto power over these selections and he
               | wanted Larry Summers on the board himself.
        
               | chucke1992 wrote:
                | On what premise do you assume that D'Angelo will have
                | any say there? At this point he won't be able to make
                | any moves, especially with Larry and Microsoft
                | overseeing all that stuff.
        
               | kmlevitt wrote:
                | Again, D'Angelo himself chose Larry Summers and Bret
                | Taylor to sit on the board with him. As long as it is
                | the three of them, he can't be overruled unless both of
                | his personal picks disagree with him. And if the
                | opposition to his idea is all that bad, he probably
                | really should be overruled.
               | 
               | His voting power will get diluted as they add the next
               | six members, but again, all three of them are going to
               | decide who the next members are going to be.
               | 
               | A snippet from the recent Bloomberg article:
               | 
               | >A person close to the negotiations said that several
               | women were suggested as possible interim directors, but
               | parties couldn't come to a consensus. Both Laurene Powell
               | Jobs, the billionaire philanthropist and widow of Steve
               | Jobs, and former Yahoo CEO Marissa Mayer were floated,
               | *but deemed to be too close to Altman*, this person said.
               | 
               | Say what else you want about it, this is not going to be
               | a board automatically stacked in Altman's favor.
        
             | dagaci wrote:
             | Clearly the board members did not think through even the
             | immediate consequences. Kenobi:
             | https://www.youtube.com/watch?v=iVBX7l2zgRw
        
           | jatins wrote:
           | > He was even trying to depose board members who were openly
            | critical of OpenAI's practices.
           | 
           | Was there any concrete criticism in the paper that was
           | written by that board member? (Genuinely asking, not a
           | leading question)
        
           | moonsu wrote:
           | > The truth is before this Sam was basically running circles
           | around the board and doing whatever he wanted on the profit
           | side- that's what was pissing them off so much in the first
           | place. He was even trying to depose board members who were
            | openly critical of OpenAI's practices.
           | 
           | Do you have a source for this?
        
             | kmlevitt wrote:
             | New York Times. He was "reprimanding" Toner, a board
              | member, for writing an article critical of OpenAI.
             | 
             | https://www.nytimes.com/2023/11/21/technology/openai-
             | altman-...
             | 
             | Getting his way: The Wall Street Journal article. They said
             | he usually got his way, but that he was so skillful at it
             | that they were hard-pressed to explain exactly how he
             | managed to pull it off.
             | 
             | https://archive.is/20231122033417/https://www.wsj.com/tech/
             | a...
             | 
              | Bottom line: he had a lot more power over the board then
              | than he will have now.
        
           | cyanydeez wrote:
           | Eh, Larry Summers is on this board. That means they're now
           | going to protect business interests.
           | 
            | OpenAI is now just a tool used by businesses. And they
            | don't have a good history of benefitting humanity recently.
        
             | kofejnik wrote:
             | Larry Summers is EA and State, so not so sure about
             | business interests
        
           | nashashmi wrote:
            | Media >= employees? Media >= Sam? I don't think the media
            | has any role in oversight or governance.
           | 
           | I think Sam came out the winner. He gets to pick his board.
           | He gets to narrow his employees. If anything, this sets him
           | up for dictatorship. The only other overseers are the
           | investors. In that case, Microsoft came out holding a leash.
           | No MS, means no Sam, which also means employees have no say.
           | 
           | So it is more like MS > Sam > employees. MS+Sam > rest of
           | investors.
        
         | karmasimida wrote:
          | It is not groupthink, it is camaraderie.
          | 
          | For me, the whole thing is just human struggle. It is about
          | fighting for the people they love and care about, against
          | people they dislike or are indifferent to.
        
           | Rastonbury wrote:
           | Nah, I too will threaten to sign a petition to quit if I
           | could save my RSUs/PPUs from evaporating. Organizational
           | goals be damned (or is it extinction level risk be damned?)
        
         | clnq wrote:
         | > OpenAI is in fact not open
         | 
         | This meme was already dead before the recent events. Whatever
         | the company was doing, you could say it wasn't open enough.
         | 
         | > a real disruptor must be brewing somewhere unnoticed, for now
         | 
         | Why pretend OpenAI hasn't just disrupted our way of life with
          | GPTs in the last two years? It has been the most high-profile
          | tech innovator recently.
         | 
         | > OpenAI does not have in its DNA to win
         | 
         | This is so vague. What does it not have in its... fundamentals?
          | And what does it mean to "win"? This seems like just generic
         | unhappiness without stating anything clearly. By most measures,
         | they are winning. They have the best commercial LLM and
         | continue to innovate, they have partnered with Microsoft
         | heavily, and they have so far received very good funding.
        
           | absrec wrote:
            | They really need to drive down the amount of computation
            | needed. The dependence on Microsoft exists because of the
            | monstrous compute requirements, which will take many paid
            | users to break even on.
            | 
            | Leaving aside the economics, even making the tech 'greener'
            | will be a challenge. OpenAI will win if they focus on making
            | the models less compute-intensive, but it could be dangerous
            | for them if they can't.
            | 
            | I guess the OP's brewing disruptor is some locally runnable
            | Llama-type model that does 80% of what ChatGPT does at a
            | fraction of the cost.
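            | 
            | Something like that is already half-real: you can run a
            | quantized Llama locally today. A minimal sketch using the
            | llama-cpp-python bindings (the model file, quantization and
            | parameters here are assumptions for illustration):
            | 
            |   # pip install llama-cpp-python
            |   from llama_cpp import Llama
            | 
            |   # Load a 4-bit quantized GGUF model; this runs on a
            |   # commodity CPU at a fraction of hosted-API cost,
            |   # trading away some output quality.
            |   llm = Llama(model_path="llama-2-13b-chat.Q4_K_M.gguf",
            |               n_ctx=2048)
            | 
            |   out = llm("Q: Why is local inference cheaper? A:",
            |             max_tokens=128, stop=["Q:"])
            |   print(out["choices"][0]["text"])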
        
           | JohnFen wrote:
           | > Why pretend OpenAI hasn't just disrupted our way of life
           | with GPTs in the last two years?
           | 
           | It hasn't disrupted mine in any way. It may do that in the
           | future, but the future isn't here yet.
        
         | robot wrote:
          | There is a lot of money being made (100M paid users?) by
          | everyone, and a lot of momentum, so groupthink is kind of
          | forced to occur.
        
         | android521 wrote:
          | Right. Why don't you create a ChatGPT-like innovation or even
          | AGI and do things your way? So many people just know how to
          | complain about what other people build and forget that no one
          | is stopping you from innovating the way you like.
        
         | eloisant wrote:
         | Yes they need to change their name. Having "Open" in their name
         | is just a big marketing lie.
        
         | faeriechangling wrote:
          | They made GPT-4 and you think they clearly have little
          | critical thinking? That's some big talk you're talking.
        
           | tonyedgecombe wrote:
           | That's the curse of specialisation. You can be really smart
           | in one area and completely unaware in others. This industry
           | is full of people with deep technical knowledge but little in
           | the way of social skills.
        
             | rvz wrote:
             | Exactly this. Specialization is indeed a curse. We have
              | seen it in lots of these folks, especially engineers who
              | flaunt their technical prowess but are extremely deficient
              | in social skills and other basic soft skills, or even in
              | understanding governance.
             | 
              | Being an engineer at "INSERT BIG TECH COMPANY" is no
              | guarantee of, or insight into, critical thinking at another
             | one. The control and power over OpenAI was always at
             | Microsoft regardless of board seats and access. Sam was
             | just a lieutenant of an AI division and the engineers were
             | just following the money like a carrot on a stick.
             | 
             | Of course, the engineers don't care about power dynamics
             | until their paper options are at risk. Then it becomes
             | highly psychological and emotional for them and they feel
             | powerless and can only follow the leader to safety.
             | 
              | The BOD (Board of Directors), with Adam D'Angelo (the one
              | who likely instigated this), has been shown to have taken
              | unprecedented steps to remove board members and fire the
              | CEO for very illogical and vague reasons. They have
              | already made their mark and the damage is already done.
             | 
              | Let's see if these engineers who signed up to this will
              | learn from this theatrical lesson in how not to do
              | governance and run an entire company into the ground for
              | unspecified reasons.
        
             | mlrtime wrote:
              | Agreed. Take Hacker News, for example: 99% of the articles
              | are in a domain where I don't have years of professional
              | experience.
              | 
              | However, when that one article does come up where I know
              | the details inside and out, the comment section is rife
              | with bad assumptions, naive comments and misinformation.
        
         | jjallen wrote:
          | I think what this saga has shown is that no one controls
          | OpenAI definitively. If Microsoft did, this wouldn't have
          | happened in the first place, don't you think?
          | 
          | And if Sam controlled it, it also wouldn't have.
        
         | sashank_1509 wrote:
         | > Furthermore, the overwhelming groupthink shows there's
         | clearly little critical thinking amongst OpenAI's employees
         | either.
         | 
          | Very harsh words for some of the highest-paid, smartest
          | people on the planet. The employees built GPT-4, the most
          | advanced AI on the planet; what did you build? Do you still
          | claim they're more deficient in critical thinking than you?
        
           | wiz21c wrote:
            | I think the choice they had to make was: either build one
            | of the top AIs on earth under the total control of OpenAI's
            | investors (and most likely the project of their life), or
            | do nothing.
           | 
           | So they bowed.
        
           | Kathula wrote:
            | Being smart does not equate to being critical, or to going
            | against groupthink.
        
           | Cacti wrote:
           | please don't troll HN
        
           | jetsetk wrote:
           | There is no comparison to himself in the previous comment.
           | Also, did you measure their IQ to put them on such a
            | pedestal? There are lots of examples of people being great
            | in a niche they've invested thousands of hours in, while
            | being total failures in other areas. You could see that with
            | Mr. Sutskever over the weekend. He must be excellent at ML,
            | as he has dedicated his life to researching this field,
           | but he lacks practice in critical thinking in management
           | contexts.
        
         | dncornholio wrote:
         | Disappointing? What has OpenAI done to you? We don't even know
         | what happened.
         | 
         | Everything has been pure speculation. I would curb my judgement
         | if I were you, until we actually know what happened.
        
         | caskstrength wrote:
         | > Furthermore, the overwhelming groupthink shows there's
         | clearly little critical thinking amongst OpenAI's employees
         | either.
         | 
          | I'm sure there has been a lot of critical thinking going on.
          | I would venture a guess that employees decided that Sam's
          | approach is much more favorable for the price of their
          | options than the original mission of the non-profit entity.
        
         | pjmlp wrote:
          | The Open Group, home of the UNIX standards, never was that
          | open.
        
         | _giorgio_ wrote:
         | The alternative was that all OpenAI employees started to work
         | directly for MSFT, as they said in the letter signed by 95% of
         | them.
        
         | jampekka wrote:
         | The initial board consists entirely of swamp lizards. I really
         | hope they mess up as you predict.
        
         | jatins wrote:
         | > Furthermore, the overwhelming groupthink shows there's
         | clearly little critical thinking amongst OpenAI's employees
         | either.
         | 
         | If the "other side" (board) had put up a SINGLE convincing
         | argument on why Sam had to go maybe the employees would have
         | not supported Sam unequivocally.
         | 
         | But, atleast as an outsider, we heard nothing that suggests
         | board had reasons to remove Sam other than "the vibes were off"
         | 
         | Can you really accuse the employees of groupthink when the
         | other side is so weak?
        
           | serial_dev wrote:
            | Yes, the original letter had (for an official letter) quite
            | serious allegations and insinuations. If after a week
            | they've decided not to back up their claims, I'm not sure
            | there is anything big coming.
            | 
            | On the other hand, if they had concerns serious enough to
            | fire the CEO in such a disgraceful way, I don't understand
            | why they don't stick to their guns and explain themselves.
            | If you think OpenAI under Sam's leadership is going to
            | destroy humanity, I don't understand how they (e.g. Ilya)
            | reversed their opinions after a day or two.
        
             | Kye wrote:
             | It's possible the big, chaotic blowup forced some
             | conversations that were easier to avoid in the normal day-
             | to-day, and those conversations led to some vital
             | resolution of concerns.
        
             | carlossouza wrote:
             | These board members failed miserably in their intent.
             | 
              | Also, they will have a hard time joining any other board
             | from now on.
             | 
             | They should have backed up the claims in the letter. They
             | didn't.
             | 
              | This means they had no way to back up their claims.
             | They didn't think it through... extremely amateurish
             | behavior.
        
               | ZiiS wrote:
               | D'Angelo wasn't even removed from this board; this is
               | simply not how failing works at this level.
        
               | richardwhiuk wrote:
               | Yet
        
               | iowemoretohim wrote:
                | He's part of the selection panel but he won't be part
                | of the new 9-member board.
        
           | ethanbond wrote:
            | OpenAI is a private company; it is neither obligated nor
            | generally advisable for it to comment publicly on why
            | people are fired. I know that having a public explanation
            | would be useful for the plot development of everyone's
            | favorite little soap opera, but expecting one makes pretty
            | much zero sense, and its absence doesn't lend credence to
            | any position whatsoever.
        
             | iowemoretohim wrote:
              | Since barely any information was made public, we have to
              | assume the employees had better information than the
              | public. So how can we say they lacked critical thinking
              | when we don't have access to the information they had?
        
               | ethanbond wrote:
               | I didn't claim employees were engaged in groupthink. I'm
               | taking issue with the claim that _because there is no
               | public explanation_ , there must not be a good
               | explanation.
        
               | ulizzle wrote:
               | That is a logical fallacy clawing your face. Upvotes to
               | whoever can name which one.
        
             | cryptonym wrote:
              | Making decisions in a way that _seems_ opaque and
              | arbitrary will not win much support from employees,
              | partners and investors. They did not fire a random
              | employee. Not disclosing relevant information for such a
              | key decision was proven, once again, to be a disaster.
             | 
             | This is not about soap opera, this is about business and a
             | big part is based on trust.
        
             | Bayaz wrote:
             | And yet here we are with a result that not only runs
              | counter to your premise but will be taught as an example
              | of what not to do in business.
        
               | ethanbond wrote:
               | What?
        
             | Aurornis wrote:
             | > OpenAI is a private company and not obligated nor is it
             | generally advised for them to comment publicly on why
             | people are fired.
             | 
             | The interim CEO said the board couldn't even tell him why
             | the old CEO was fired.
             | 
             | Microsoft said the board couldn't even tell them why the
             | old CEO was fired.
             | 
             | The employees said the board couldn't explain why the CEO
             | was fired.
             | 
             | When nobody can even begin to understand the board's
             | actions and they can't even explain themselves, it's a
             | recipe for losing confidence. And that's exactly what
             | happened, from investors to employees.
        
               | ethanbond wrote:
               | I'm specifically taking issue with this common meme that
               | _the public_ is owed some sort of explanation. I agree
               | the employees (and obviously the incoming CEO) would be.
               | 
               | And there's a difference between, "an explanation would
               | help their credibility" versus "a lack of explanation
               | means they don't have a good reason."
        
             | ulizzle wrote:
              | All explanations lend credence to positions, which is why
              | it's not a good idea to comment on anything. Looks like
              | they're lawyered up.
        
           | conception wrote:
            | My guess is that the arguments are something along the
            | lines of "OpenAI's current products are already causing
            | harm or are on the path to do so", or something similarly
            | damaging to the products. Something they are afraid both of
            | letting continue to move forward and of having to
            | communicate, as it would damage the brand. Like "We already
            | have reports of several hundred people killing themselves
            | because of ChatGPT responses..." and everyone would say,
            | "Oh, that makes... wait, what??"
        
           | kromem wrote:
           | I agree with both the commenter above you and you.
           | 
           | Yes, you are right that the board had weak sauce reasoning
           | for the firing (giving two teams the same project!?!).
           | 
           | That said, the other commenter is right that this is the
           | beginning of the end.
           | 
            | One of the interesting things about watching the
            | development of AI over the past few years has been that, in
            | parallel with demonstrations of the limitations of neural
            | networks, we have had many demonstrations of the
            | limitations of human thinking and psychology.
           | 
            | Altman just got handed a blank check and was crowned king
            | of OpenAI. And whatever opposition he faced internally lost
           | all its footing.
           | 
           | That's a terrible recipe for long term success.
           | 
           | Whatever the reasons for the firing, this outcome is going to
           | completely screw their long term prospects, as no matter how
           | wonderful a leader someone is, losing the reality check of
           | empowered opposition results in terrible decisions being made
           | unchecked.
           | 
            | He's going to double down on chat interfaces, because
            | that's been their unexpected bread and butter, right up
            | until they get lapped by companies with broader product
            | vision; and whatever elements at OpenAI shared that broader
            | vision are going to get steamrolled now that he's been
            | given an unconditional green light, until they jump ship
            | over the next 18 months to work elsewhere.
        
             | nvm0n2 wrote:
             | Not necessarily! Facebook has done great with its
             | unfireable CEO. The FB board would certainly have fired him
             | several times over by now if it could, and yet they'd have
             | been wrong every time. And the Google cofounders would
             | certainly have been kicked out of their own company if the
             | board had been able to.
        
               | herostratus101 wrote:
               | Yes, also Elon.
        
         | ssnistfajen wrote:
         | The board never gave a believable explanation to justify firing
         | Altman. So the staff simply made the sensible choice of
         | following Altman. This isn't about critical thinking because
         | there was nothing to think about.
        
         | drexlspivey wrote:
          | You would expect the company that owns 49% of the shares to
          | have some input in firing the CEO; why is that disappointing?
         | If they had more control this shitshow would never have
         | happened.
        
           | jampekka wrote:
           | MS doesn't own any part of OpenAI, Inc. In fact nobody really
           | owns it. That was the whole point.
        
         | lordnacho wrote:
         | Is it really a failure of critical thinking? The employees know
         | what position is popular, so even people who are mostly against
         | the go-fast strategy can see that they get to work on this
         | groundbreaking thing only if they toe the line.
         | 
         | It's also not surprising that people who are near the SV
         | culture will think that AGI needs money to get developed, and
         | that money in general is useful for the kind of business they
         | are running. And that it's a business, not a charity.
         | 
         | I mean if OpenAI had been born in the Soviet Union or
         | Scandinavia, maybe people would have somewhat different values,
          | it's hard to know. But for a thing founded by the poster boys
          | of modern SV, it's gotta lean towards "money is mostly good".
        
           | qwytw wrote:
           | > Soviet Union
           | 
           | Or medieval Spain? About as likely... The Soviets weren't
           | even able to get the factory floors clean enough to
           | consistently manufacture the 8086 10 years after it was
           | already outdated.
           | 
           | > maybe people would have somewhat different values, it's
           | hard to know. But a thing that is founded by the posterboys
           | for modern SV, it's gotta lean towards "money is mostly
           | good".
           | 
            | Unfortunately, no other system besides capitalism has
            | enabled consistent technological progress for 200+ years.
            | Turns out you need to pool money and resources to achieve
            | things...
        
           | robertlagrant wrote:
           | > I mean if OpenAI had been born in the Soviet Union or
           | Scandinavia, maybe people would have somewhat different
           | values, it's hard to know.
           | 
           | Or in Arthurian times. Very different values.
        
         | irthomasthomas wrote:
         | It is a shame that we lost the ability to hold such companies
         | to account (for now). But given the range of possibilities laid
         | out before us, this is the better outcome. GPT-4 has increased
         | my knowledge, my confidence, and my pleasure in learning and
          | hacking. And perhaps its relatives will fuel a revolution.
         | 
         | Reminds me of a quote: "A civilization is a heritage of
         | beliefs, customs, and knowledge slowly accumulated in the
         | course of centuries, elements difficult at times to justify by
         | logic, but justifying themselves as paths when they lead
         | somewhere, since they open up for man his inner distance." -
         | Antoine de Saint-Exupery.
        
         | zx8080 wrote:
          | Come on, it was just preparation for the upcoming IPO. Free
          | ads in all the news and on TV.
        
         | low_tech_love wrote:
         | Plot twist: Sam posts that there is no agreement and that
         | OpenAI is delusional.
        
         | saiya-jin wrote:
          | All this just shows for the 100th time that this area
          | desperately needs some regulation. I don't know the form, but
          | even if there's a 1% chance of Skynet, heck even 0.01%, it's
          | simply too high, and for now we still have full control.
          | 
          | We see that the most powerful people are in it for the money
          | and the power-ego trip, and literally nothing else. Pesky
          | morals be damned. That may be acceptable for some ad
          | business, but here the stakes are potentially everything and
          | we have no clue what the actual risk is.
          | 
          | To me it's very similar to all the naivety particle
          | scientists expressed in the early days, and then the reality
          | check of realpolitik and messed-up humans in power once the
          | bombs were built, used, and then a hundred thousand more were
          | produced.
        
         | lvl102 wrote:
         | Ultimately, the openness that we all wish for must come from
         | _underlying_ data. The know-how and "secret sauce" were never
         | going to be open. And it's not as profound as we think it is
         | inside that black box.
         | 
          | So who holds all the data in closed silos? Google and
          | Facebook. We may have lost the battle for an "open and fair"
          | AI paradigm a long time ago.
        
         | madeofpalk wrote:
         | Regardless of whether you feel like Altman was rushing OpenAI
         | too fast, wasn't open enough, and was being too commercial, the
         | last few days demonstrated conclusively that the board is
         | erratic and unstable and unfit to manage OpenAI.
         | 
          | Their actions were the complete opposite of open. Rather than, I
         | don't know, being open and talking to the CEO to share concerns
         | and change the company, they just threw a tantrum and fired
         | him.
        
           | ethanbond wrote:
           | They fired him (you don't know the backstory) and published a
           | press release and then Sam was seen back in the offices.
           | Prior to the reinstatement (today), there was nothing except
           | HN hysteria and media conjecture that made the board look
           | extremely unstable.
        
             | madeofpalk wrote:
              | ??? They fired him on Friday with a statement knifing him
              | in the back, un-fired him on Tuesday, and now the board is
              | resigning? How is that not erratic and unstable?
        
               | ethanbond wrote:
               | Note that I just stated, _up until reinstatement_ their
               | actions weren't erratic.
               | 
               | Now, yes, they definitely are.
               | 
               | IMO OpenAI's governance is far less trustworthy today
               | than it was yesterday.
        
               | broast wrote:
                | I found the board members' own words to be quite
                | erratic between Friday and today, such as Ilya saying
                | he wished he hadn't participated in the board's actions.
        
               | ethanbond wrote:
                | It would be completely understandable to regret it when
                | your action against someone causes them to fall upwards
        
               | framapotari wrote:
               | What? Do you think it would be understandable for a board
               | member to regret firing the CEO because of his career
               | path post-firing?
        
               | ethanbond wrote:
               | If Ilya was concerned about dangerously fast
               | commercialization, which seems to have been a point of
               | tension between them for a while now, then yes.
        
               | framapotari wrote:
               | But he's acting as a board member firing the CEO because
               | he arguably believes it's the right thing to do for the
               | company. If he then changes his mind because the fired
               | CEO continued a successful career then I'd say that
               | decision was more on a personal level than for the
               | wellbeing of the company.
        
               | ethanbond wrote:
               | His obligation as a member of the board is to safeguard
                | AI, _not_ OpenAI. That's why in the employee open letter
               | they said, "the board said it'd be compliant with the
               | mission to destroy the company." This is _actually_ true.
               | 
               | It's absolutely believable that at first he thought the
               | best way to safeguard AI was to get rid of the main
               | advocate for profit-seeking at OpenAI, then when that
               | person "fell upward" into a position where he'd have
               | _fewer_ constraints, to regret that decision.
        
               | framapotari wrote:
               | Fair enough, I understand better where you're coming
               | from. Thanks!
        
         | hyperthesis wrote:
         | What could disrupt OpenAI is a dramatic change in _market_ ,
         | perhaps enabled by a change in technology. But if it's the same
         | customers in the same market, they will buy or duplicate any
         | tech advance; and if it's a sufficiently similar market, they
         | will pivot.
        
         | seydor wrote:
         | > OpenAI is in fact not open
         | 
          | That ship sailed long ago, no?
          | 
          | But I agree that the company seems less trustworthy now, like
          | it's too CEO-centered.
        
         | rinze wrote:
         | Matt Levine's "slightly annotated diagram" in one of his latest
         | newsletters tells the story quite well, I think:
         | https://newsletterhunt.com/emails/42469
        
         | mdekkers wrote:
          | Very disappointing outcome indeed. Larry Summers is the
          | architect of the modern Russian oligarchy[1] and responsible
          | for an incredible amount of human suffering, as well as gross
          | financial disparity, both in the USA and in the rest of the
          | world.
         | 
         | Not someone I would like to see running the world's leading AI
         | company
         | 
         | [1] https://www.thenation.com/article/world/harvard-boys-do-
         | russ...
         | 
         | Edit: also https://prospect.org/economy/falling-upward-larry-
         | summers/
         | 
         | https://www.npr.org/sections/money/2022/03/22/1087654279/how...
         | 
         | And finally https://cepr.net/can-we-blame-larry-summers-for-
         | the-collapse...
        
         | oblio wrote:
         | One thing I'm not sure I understand... what's OpenAI's business
         | model? In my eyes, GPT & co is, just like Dropbox, just a
         | feature. It's not a product.
         | 
         | And just like Dropbox, in the end, what disruption? GPT will
         | just be a checkbox for products others build. Cool tech, but
         | not a full product.
         | 
         | Of course, I'd love to be proven wrong.
        
           | simplyinfinity wrote:
            | AI as a Service (AAaS), then the marketplace of GPTs; it
            | will become the place to get your AI features from.
        
         | nathanasmith wrote:
         | The board couldn't even clearly articulate why they fired Sam
         | in the first place. There was a departure from critical
         | thinking but I don't think it was on the part of the employees.
        
         | ptero wrote:
         | I do not see an overwhelming groupthink. I see a perfectly
         | rational (and not in any way evil) reaction to a complete mess
         | created by the board.
         | 
          | Most are doing the work they love, and four people almost
          | destroyed it and cannot even explain why. If I were
         | working at the company that did this I would sign, too. And
         | follow through on the threat of leaving if it comes to that.
        
         | auggierose wrote:
         | I find the outcome very satisfying. The OpenAI API is here to
         | stay and grow, and I can build software on top of it. Hopefully
         | other players will open up their APIs soon as well, so that
         | there is a reasonable choice.
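          | 
          | Building on it really is a few lines of code. A minimal
          | sketch against the chat completions endpoint of the openai
          | Python package (the model choice here is an assumption; the
          | key is read from the OPENAI_API_KEY environment variable):
          | 
          |   # pip install openai
          |   import openai
          | 
          |   # Send one user message and print the model's reply.
          |   resp = openai.ChatCompletion.create(
          |       model="gpt-3.5-turbo",
          |       messages=[{"role": "user",
          |                  "content": "Say hi in five words."}],
          |   )
          |   print(resp["choices"][0]["message"]["content"])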
        
           | jetsetk wrote:
            | It's not a given that it is here to stay and grow after the
            | company showed itself in such a chaotic state. Also, they
            | need a profitable product; it's not like they are selling
            | iPhones and such...
        
         | ChildOfChaos wrote:
         | I would say this is a great outcome.
         | 
         | Any other outcome would have split OpenAI quite dramatically
         | and set them back massively.
         | 
         | It's a big assumption to say 'effectively controlled by
         | Microsoft' when Microsoft might have been quite happy with the
         | other option, poaching a lot of staff.
        
         | wslh wrote:
         | I wonder if, beyond the groupthink, we are seeing at least a
         | more heterogeneous composition: a mix of people that includes
         | business, pure research, engineering, and a kind of
         | spirituality/semi-religion around [G]AI.
        
         | bambax wrote:
         | > _OpenAI is in fact not open_
         | 
         | One wonders what will happen with Emmett Shear's
         | "investigation" into the process that led to Sam's ousting
         | [0]. Was it even allowed to start?
         | 
         | [0] https://twitter.com/eshear/status/1726526112019382275
        
           | smegger001 wrote:
           | Who knows, but they will probably change their minds again
           | before the holiday, and the CEO musical-chairs game will
           | continue.
        
         | chriskanan wrote:
         | While I certainly agree that OpenAI isn't open and is
         | effectively controlled by Microsoft, I'm not following the
         | "groupthink" claims based on what just happened. Given the
         | very fishy and vague reasons that it sounds like the staff
         | were given, I think any rational person would be highly
         | suspicious of the board, especially since some members believe
         | in fringe ideas, have conflicts of interest, or can be
         | perceived as jealous that they aren't the "face" of OpenAI.
        
         | Moto7451 wrote:
         | OpenAI is more open than my company's AI teams, and I say that
         | even with my own insider relationship. As far as commercial
         | relationships are concerned, I'd say they're hitting the mark.
        
         | dagaci wrote:
         | In this case the fate of OpenAI was in fact heavily controlled
         | by its employees. They voted with their employment. Microsoft
         | gave them an assured optional destination.
        
         | logicchains wrote:
         | >Furthermore, the overwhelming groupthink shows there's clearly
         | little critical thinking amongst OpenAI's employees either.
         | 
         | The OpenAI employees overwhelmingly rejected the groupthink of
         | the Effective Altruism cult.
        
         | andy99 wrote:
         | Whatever OpenAI started as, a week ago it was a company with
         | the best general purpose LLM, more on the way, and
         | consumer+business products with millions of users. And they
         | were still investing very heavily in research. I'm glad that
         | company may survive. If there's room in the world for a more
         | disruptive research focused AI company that can find
         | sustainable funding, even better.
        
           | cyanydeez wrote:
           | It's now clearly a business-oriented product, and the non-
           | profit portion is a marketing tactic to avoid scrutiny.
        
         | belter wrote:
         | Outcome? You mean OpenAI wakes up with no memories of the
         | night before, finding their suite trashed, a tiger in the
         | bathroom, a baby in the closet, and the groom missing - and
         | the story will end here?
         | 
         | I just renewed my HN subscription to be able to see Season 2!
        
         | rafaelero wrote:
         | What critical thinking could they exercise if no believable
         | reasons were given for this whole mess? Maybe it's you who
         | needs to assess this situation more carefully.
        
         | coldtea wrote:
         | > _OpenAI does not have in its DNA to win, they're too short-
         | sighted and reactive._
         | 
         | What does that even mean?
         | 
         | In any case, it's not OpenAI, it's Microsoft, and it has a long
         | history of winning and bouncing back.
        
         | martingoodson wrote:
         | It's not about critical thinking: the employees were about to
         | sell up to $1B of shares to Thrive Capital. This debacle has
         | derailed that.
        
         | buro9 wrote:
         | In the end, maybe Sam was the instigator, the board tried to
         | defend (and failed), and what we just witnessed from afar was
         | a power play to change the structure of OpenAI (or at least
         | the outcome for Sam and many others) towards profit rather
         | than non-profit.
         | 
         | We'll all likely never know what truly happened, but it's a
         | shame that the board has lost its last remnant of diversity
         | and at the moment appears to be composed of rich Western white
         | males... even if they rushed for profit, I'd have more faith
         | in the potential upside of what could be a sea change in the
         | world if those involved reflected more experiences than are
         | currently gathered at that table.
        
         | jmyeet wrote:
         | > The process has conclusively confirmed that OpenAI is in fact
         | not open and that it is effectively controlled by Microsoft.
         | 
         | I'd say the lack of a narrative from the board, general
         | incompetence with how it was handled, the employees quitting
         | and the employee letter played their parts too.
         | 
         | But even if it was Microsoft who made this happen: that's what
         | happens when you have a major investor. If you don't want their
         | influence, don't take their money.
        
         | cyanydeez wrote:
         | It definitely seems like another branch on the IT savior
         | complex, where the prior branch was crypto.
        
         | mrkramer wrote:
         | >Disappointing outcome. The process has conclusively confirmed
         | that OpenAI is in fact not open and that it is effectively
         | controlled by Microsoft. Furthermore, the overwhelming
         | groupthink shows there's clearly little critical thinking
         | amongst OpenAI's employees either.
         | 
         | Why was his role as CEO even challenged?
         | 
         | >It might not seem like the case right now, but I think the
         | real disruption is just about to begin. OpenAI does not have in
         | its DNA to win, they're too short-sighted and reactive. Big
         | techs will have incredible distribution power but a real
         | disruptor must be brewing somewhere unnoticed, for now.
         | 
         | Always remember: Google wasn't the first search engine, nor
         | was the iPhone the first smartphone. First movers bring
         | innovation and trends, not market dominance.
        
         | NicoJuicy wrote:
         | > that it is effectively controlled by Microsoft
         | 
         | No it's not. Microsoft didn't know about this until minutes
         | before the press release.
         | 
         | Investors are free to protest decisions against their
         | principles and people are free to move away from their current
         | company.
        
         | JSavageOne wrote:
         | The Hacker News comments section has really gone to shit.
         | 
         | People here used to back up their bold claims with arguments.
        
           | framapotari wrote:
           | It is quite amazing how many people know enough to pass wide
           | judgment on hundreds of people because... they just know.
           | Feel it in their gut.
        
         | caturopath wrote:
         | > it is effectively controlled by Microsoft
         | 
         | I don't consider this confirmed. Microsoft brought an enormous
         | amount of money and other power to the table, and their role
         | was certainly big, but it is far from clear to me that they
         | held all or most of the power that was wielded.
        
         | iowemoretohim wrote:
         | How can you, without access to the information that actual
         | employees had of the situation, say "there's clearly little
         | critical thinking amongst OpenAI's employees"?
        
         | mrangle wrote:
         | >groupthink shows there's clearly little critical thinking
         | amongst OpenAI's employees either.
         | 
         | So the type of employee that would get hired at OpenAI isn't
         | likely to be skilled at critical thinking? That's doubtful. It
         | looks to me like you dislike how things played out, gathered
         | together some mean adjectives and "groupthink", and ended with
         | a pessimistic prediction for their trajectory as punishment.
         | One is left to wonder what OAI's disruptor outlook would be if
         | the outcome of the current situation had been more pleasing.
        
         | idrisser wrote:
         | Take a look at https://kyutai.org/, which launched last week.
        
         | tnel77 wrote:
         | Buy Microsoft stock. Got it.
        
         | kenjackson wrote:
         | Microsoft played almost no role in the process except to be a
         | place for Sam and team to land.
         | 
         | What the process did show is that if you plan to oust a
         | popular CEO with a thriving company, you should actually have
         | a good reason for it. It's amazing how little thought
         | seemingly went into it for them.
        
         | chollida1 wrote:
         | > The process has conclusively confirmed that OpenAI is in fact
         | not open and that it is effectively controlled by Microsoft.
         | 
         | What leads you to make such a definitive statement? To me the
         | process shows that Microsoft has no pull in OpenAI.
        
         | baxtr wrote:
         | Based on the spectacular drama we were allowed to observe:
         | 
         | For a company at the forefront of AI it's actually very, very
         | human.
        
         | 3cats-in-a-coat wrote:
         | Let me guess. The only valid outcome for you would've been that
         | they disband in order to prevent opening a portal to the cosmic
         | AGI Cthulhu.
         | 
         | Frankly these EA & e/acc cults are starting to get on my
         | nerves.
        
         | enoch_r wrote:
         | > the overwhelming groupthink shows there's clearly little
         | critical thinking amongst OpenAI's employees either
         | 
         | I suspect incentives play a huge role here. OAI employees are
         | compensated with stock in the for-profit arm of the company.
         | It's obvious that the board's actions put the value of that
         | stock in extreme jeopardy (which, given the corporate
         | structure, is theoretically completely fine! the whole point of
         | the corporate structure is that the nonprofit board has the
         | power to say "yikes, we've developed an unsafe
         | superintelligence, burn down the building and destroy the
         | company now").
         | 
         | I think it's natural for employees to be extremely angry with a
         | board decision that probably cost them >$1M each.
        
         | Aurornis wrote:
         | > Disappointing outcome.
         | 
         | The employees of a tech company banded together to get what
         | they wanted: force a leadership change, evict the leaders they
         | disagreed with, secure the return of the leadership they
         | wanted, and restore the value of their hard-earned equity.
         | 
         | This certainly isn't a disappointing outcome for the employees!
         | I thought HN would be ecstatic about tech employees banding
         | together to force action in their favor, but the comments here
         | are surprisingly negative.
        
         | neves wrote:
         | Any good summary of the OpenAI imbroglio? I know it has a
         | strange corporate structure, part non-profit and part for-
         | profit. I don't follow it closely but would like a quick read
         | explaining it.
        
         | anandrm wrote:
         | I have been working for various software companies in
         | different capacities. Never did I see 90%+ of employees care
         | about their CEO. In a small 10-member startup, maybe it's
         | true. Are there any OpenAI employees here to confirm that
         | their CEO really matters? I mean, how many employees revolted
         | when Steve Jobs was fired? Do Microsoft and Google employees
         | really care?
        
         | dalbasal wrote:
         | Yes...
         | 
         | Investors and executives... everyone in 2023 is hyper-focused
         | on the "Thiel monopoly."
         | 
         | Platform, moat, aggregation theory, network effects, first-
         | mover advantages... all those ways of thinking about it.
         | 
         | There's no point in being Bing to Google's AdWords... So the
         | big question is the pathway to being the AdWords. "Winning."
         | That's the paradigm. This is where the big returns will be.
         | 
         | However... we should always remember that the future is harder
         | to see than the past. Post-fact analysis can often make things
         | seem a lot simpler and more inevitable than they ever were.
         | 
         | It's not clear what a winner even is here. What are the
         | bottlenecks to be controlled? What are the business models,
         | the revenue sources? What represents the "LLM Google" -
         | America Online, Yahoo, or a 90s dumb pipe?
         | 
         | FWIW I think all the big techs have powerful plays available,
         | including keeping their powder dry.
         | 
         | No doubt proximity to OpenAI, control, influence, access to
         | IP... all strategic assets. That's why they're all invested
         | and involved in the consortium.
         | 
         | That said, assets are not strategies. It's hard to have
         | strategies when strategic goals are unclear.
         | 
         | You can nominate a strategic goal from here, try to stay
         | upstream, make exploratory investments and bets... There is no
         | rush for the prize unless the prize is known.
         | 
         | Obviously, I'm assuming the prize is not AGI and a solution to
         | everything... That kind of abstraction is useful, but I do not
         | think it's operative.
         | 
         | It's not currently a race to see whose R&D lab turns on the
         | first superintelligent consciousness.
         | 
         | Assuming I'm correct on that, we really have no idea which
         | applications LLM-capabilities companies are actually competing
         | for.
        
         | scarface_74 wrote:
         | So you didn't realize that when Microsoft gained a 49%
         | interest and was subsidizing compute?
         | 
         | Unless they had something in their "DNA" that allowed them to
         | build enough compute and pay their employees, they were never
         | going to "win" without a mass infusion of cash. Only three
         | companies had enough compute and revenue to throw at them, and
         | only two of those had relationships with big enterprise as
         | well as compute - Amazon and Microsoft.
        
         | himaraya wrote:
         | Hard to say without seeing how the two new board members lean.
        
         | raincole wrote:
         | > Furthermore, the overwhelming groupthink shows there's
         | clearly little critical thinking amongst OpenAI's employees
         | either.
         | 
         | "Because someone acts differently than I expected, they must
         | lack critical thinking."
         | 
         | Are you an insider? If not, have you considered that _perhaps_
         | OpenAI employees are more informed about the situation than
         | you?
        
         | Zpalmtree wrote:
         | Seems like that's a good thing when the goal of the open
         | faction is to slow down development, lol. How would that make
         | OpenAI win?
        
         | fullshark wrote:
         | I think Microsoft's deep pockets, computing resources, their
         | head start, and 50%+ of employees not quitting are more
         | important to the company's chances of success than your
         | assessment that they have the "wrong DNA."
         | 
         | The idea that the marketplace is a meritocracy of some kind,
         | where whatever an individual deems as "merit" wins, has been
         | proven to be nonsense time and time again.
        
         | flappyeagle wrote:
         | Amazing outcome. Empty shirts folded. People who get stuff done
         | persevere.
        
         | segasaturn wrote:
         | >a real disruptor must be brewing somewhere unnoticed, for now.
         | 
         | Anthropic.
        
         | jacquesm wrote:
         | > The process has conclusively confirmed that OpenAI is in fact
         | not open and that it is effectively controlled by Microsoft.
         | 
         | This was said loud and clear when Microsoft joined in the first
         | place but there were no takers.
        
       | dbuser99 wrote:
       | Once again the house (the VCs) wins. I, for one, don't trust
       | OpenAI one bit after this soap opera.
        
       | laserlight wrote:
       | With Sam coming back as CEO, hasn't the OpenAI board proven that
       | it has lost its function? Regardless of who is on the board,
       | they won't be able to exercise one of the most fundamental of
       | their rights, firing the CEO, because Sam has proven that he is
       | unfireable. Now, Sam can do however he pleases, whether that is
       | lying, not reporting, etc. To be clear, I don't claim that Sam
       | did, or will, lie or misbehave.
        
         | random_cynic wrote:
         | No, that hasn't at all been the case. The board acted like the
         | most incompetent group of individuals who've ever been handed
         | any responsibility. If they had gone through due process,
         | notified their employees and investors, and put out a
         | statement of why they were firing the CEO, instead of doing it
         | over a 15-minute Google Meet and then going completely silent,
         | none of this outrage would have taken place.
        
           | maxlin wrote:
            | Exactly. 3 CEO switches in a week is ridiculous.
        
             | abkolan wrote:
              | Four CEO changes in five days, to be precise.
              | 
              | Sam -> Mira -> Emmett -> Sam
        
               | Hendrikto wrote:
                | Those are three changes. Every arrow is one.
        
               | physicles wrote:
               | Classic fence post error.
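                | 
                | A minimal sketch of the arithmetic (hypothetical
                | Python, just to illustrate posts vs. rails):
                | 
                |     # four posts (CEO terms), three rails (changes)
                |     ceos = ["Sam", "Mira", "Emmett", "Sam"]
                |     changes = len(ceos) - 1  # arrows between names
                |     print(changes)  # 3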
        
               | nonethewiser wrote:
               | And technically 2 new CEOs
        
               | qup wrote:
               | The three hard problems: naming things and off-by-one
               | errors
        
               | Crespyl wrote:
               | I always heard:
               | 
               | There are two hard problems: naming things, cache
               | invalidation, and off-by-one errors.
        
               | maxlin wrote:
               | 1 hard problems.
               | 
               | naming things, cache invalidation, off-by one errors, and
               | overflows.
        
               | low_tech_punk wrote:
                | Set semantics or list semantics?
        
               | freedomben wrote:
                | Thank you for not editing this away. It's an easy
                | mistake to make, and it gave us a good laugh (hopefully
                | laughing _with_ you; everyone who's ever programmed has
                | made the same error).
        
             | caleb-allen wrote:
              | Maybe it came on the advice of Rishi Sunak, when he and
              | Altman met last week!
        
           | squigz wrote:
            | > The board acted like the most incompetent group of
            | individuals who've ever been handed any responsibility.
           | 
           | This is overly dramatic, but I suppose that's par for this
           | round.
           | 
           | > none of this outrage would have taken place.
           | 
           | Yeah... I highly doubt this, personally. I'm sure the outrage
           | would have been similar, as HN's current favorite CEO was
           | fired.
        
             | pas wrote:
              | HN sentiment is pretty ambivalent regarding Altman. Yes,
              | almost everyone agrees he's important, but a big group
              | thinks he's basically landed gentry exploiting ML
              | researchers, another thinks he's a genius for getting MS
              | to pay for GPT costs, etc.
        
               | hackernewds wrote:
               | I think a page developed by YC thinks a lot more about
               | him than that ;)
        
               | komali2 wrote:
               | Just putting my hand up as one of the dudes that happened
               | to enter my email on a yc forum (not "page") but really
               | doesn't like the guy lol.
               | 
               | I also have a Twitter account. Guess my opinion on the
               | current or former Twitter CEOs?
        
             | SilasX wrote:
              | Agreed. It's naive to think that a decision _this_
              | unpopular _somehow_ wouldn't have resulted in dissent and
              | fracturing if only they had given a better explanation
              | and dotted more i's.
             | 
             | Imagine arguing this in another context: "Man, if _only_
             | the Supreme Court had clearly articulated its reasoning in
              | overturning Roe v Wade, there wouldn't have been all this
              | outrage over it."
             | 
             | (I'm happy to accept that there's plenty of room for
             | avoiding some of the damage, like the torrents of observers
             | thinking "these board members clearly don't know what
             | they're doing".)
        
           | braiamp wrote:
           | > If they went through due process, notified their employees
           | and investors, and put out a statement of why they're firing
           | the CEO
           | 
           | Did you read the bylaws? They have no responsibility to do
           | any of that.
        
             | ksd482 wrote:
             | That's not the point. Whether or not it was in the bylaws,
             | this would have been the sensible thing to do.
        
             | eksapsy wrote:
              | You don't have a responsibility to wash yourself before
              | getting on a mass-transit vehicle full of people. It's
              | within your rights not to do that and to be the smelliest
              | person on the bus.
              | 
              | Does that mean it's right or professional?
              | 
              | I get your point, but I hope you get the point I'm making
              | as well: just because you have no responsibility for
              | something doesn't mean you're right, or not unethical,
              | for doing or not doing that thing. So I feel like you're
              | missing the point a little.
        
             | paulddraper wrote:
              | Here lies the body of William Jay,
              | Who died maintaining his right of way -
              | He was right, dead right, as he sped along,
              | But he's just as dead as if he were wrong.
              | 
              | - Dale Carnegie
        
           | OnAYDIN wrote:
            | Actually, the board may not have acted in the most
            | professional way, but in due process they proved Sam Altman
            | is unfireable for sure, even if they didn't intend to.
            | 
            | They did notify everyone. They did it after the firing,
            | which is within their rights. They may also choose to stay
            | silent if there is a legitimate reason for it, such as the
            | possibility that making the reasons known would harm the
            | organization even more. This is speculation, obviously.
            | 
            | In any case, they didn't omit anything they needed to do,
            | and they didn't exercise a power they didn't have. The end
            | result is that the board they chose will be impotent for
            | the moment, for sure.
        
             | xvector wrote:
              | Their communication was completely insufficient. There is
              | no possible world in which the board could be considered
              | "competent" or "professional."
        
             | qudat wrote:
             | > proved Sam Altman is unfireable [without explaining why
             | to its employees].
        
             | eksapsy wrote:
              | I get your point, although the fact that something is
              | within your rights doesn't necessarily mean it's also a
              | proper thing to do... right?
              | 
              | Like, nobody is going to arrest you for spitting on the
              | street, especially if you're an old grandpa. Nobody is
              | going to arrest you for saying nasty things about
              | somebody's mom.
              | 
              | You get my point: to some degree both are kind of within
              | somebody's rights, although they can get you sued or
              | reported for misbehaving. But that's the key point:
              | misbehavior.
              | 
              | Just because something is within your rights doesn't mean
              | you're not misbehaving or acting in an immature way.
              | 
              | To be clear, I'm not denying or agreeing that the board
              | of directors acted in an immature way. I'm just arguing
              | against the claim made within your text that just because
              | someone is acting within their rights, it's necessarily
              | also the "right" thing to do, which is not always the
              | case.
        
             | jonas21 wrote:
             | Firing Sam was within the board's rights. And 90% of the
             | employees threatening to leave was within their rights.
             | 
             | All this proved is that you can't take a major action that
             | is deeply unpopular with employees, without consulting
             | them, and expect to still have a functioning organization.
             | This should be obvious, but it apparently never crossed the
             | board's mind.
        
               | freedomben wrote:
               | A lot of these high-up tech leaders seem to forget this
               | regularly. They sit on their thrones and dictate wild
               | swings, and are used to having people obey. They get all
               | the praise and adulation when things go well, and when
               | things don't go well they golden parachute into some
               | other organization who hires based on resume titles
               | rather than leadership and technical ability. It doesn't
               | surprise me at all that they were caught off guard by
               | this.
        
               | m3kw9 wrote:
                | Not sure how much the employees threatening to leave
                | had to do with negotiating Sam back. It must have been
                | a big factor, but not the only one; during the talks,
                | Emmett, D'Angelo and Ilya must have decided that the
                | firing was a mistake in retrospect and that it had to
                | be fixed.
        
             | paulddraper wrote:
             | > They may also choose to stay silent
             | 
             | They may choose to, and they did choose to.
             | 
              | But it was an incompetent choice. (Obviously.)
        
             | random_cynic wrote:
             | If you read my comment again, I'm talking about their
             | competence, not their rights. Those are two entirely
             | different things.
        
           | zerohalo wrote:
           | > none of this outrage would have taken place.
           | 
            | It most certainly would have still taken place; no one
            | cares about how it was done. What they care about is being
            | able to make $$, and that was clearly not going to be as
            | heavily prioritized without Altman (which is why MSFT
            | embraced him and his engineers almost immediately).
            | 
            | > notified their employees and investors
            | 
            | They did notify their employees; and as a nonprofit they
            | have no fiduciary duty to investors.
        
           | patcon wrote:
            | > The board acted like the most incompetent group of
            | individuals who've ever been handed any responsibility.
            | 
            | This whole conversation has been full of appeals to
            | authority. Just because us tech people don't know some of
            | these names and their accomplishments, we talk about them
            | being "weak" members. The more I learn, the more I think
            | this board was full of smart people who didn't play
            | business politics well (and that's OK by me, as business
            | politics isn't supposed to be something they have to deal
            | with).
            | 
            | Their lack of entanglements makes them stronger members, in
            | my perspective. Their miscalculation was in not seeing how
            | broken the system was in which they were undermined. And
            | you and I are part of that brokenness, even in how we talk
            | about it here.
        
         | altpaddle wrote:
          | Time will tell. Hopefully the new board will still be mostly
          | independent of Sam/MSFT/VC influence. I really hope they
          | continue as an org that tries its best to uphold its charter
          | vs. just being another startup.
        
         | kmlevitt wrote:
          | This is a better deal for the board and a worse one for Sam
          | than people realize. Sam, Greg, and even Ilya are all off the
          | board; D'Angelo gets to stay on despite his outrageous
          | actions, and he gets veto power over who the new board
          | members will be and a big say in who gets voted onto the
          | board next.
          | 
          | Everybody's guard is going to be up around Sam from now on.
          | He'll have much less leverage over this board than he did
          | over the previous one (before the other three of nine quit).
          | I think eventually he will prevail because he has the charm
          | and social skills to win over the other independent members.
          | But he will have to rein in his own behavior a lot in order
          | to keep them on his side versus D'Angelo.
        
           | JSavageOne wrote:
           | I'd be shocked if D'Angelo doesn't get kicked off. Even
           | before this debacle his AI competitor app poe.com is an
           | obvious conflict of interest with OpenAI.
        
             | himaraya wrote:
             | If he survived to this point, I doubt he will go any time
             | soon.
        
               | yeck wrote:
                | Depends who gets onto the board. There are probably a
                | lot of forces interested in ousting him now, so he'd
                | need to do an amazing job vetting the new board
                | members.
                | 
                | My guess is that he has less than a year, based on my
                | assumption that there will be constant pressure placed
                | on the board to oust him.
        
               | himaraya wrote:
               | He has his network and technical credibility, so I
               | wouldn't underestimate him. Board composition remains
               | hard to predict now.
        
               | WendyTheWillow wrote:
               | What surprises me is how much regard the valley has for
               | this guy. Doesn't Quora suck terribly? I'm for sure its
               | target demographic and I cannot for the life of me pull
               | value from it. I have tried!
        
               | himaraya wrote:
               | His claim to fame comes from scaling FB. Quora shows he
               | has questionable product nous, but nobody questions his
               | technical chops.
        
               | JSavageOne wrote:
               | Quora is an embarrassment and died years ago when
               | marketers took it over
        
             | murakamiiq84 wrote:
             | I think it was only a competitor app _after_ GPTs came out.
             | A conspiracy theorist might say that Altman wanted to get
             | him off the board and engineered GPTs as a pretext first,
             | in the same way that he used some random paper coauthored
             | by Toner that nobody read to kick Toner out.
        
           | jnwatson wrote:
           | This board's sole job is to pick the new board. The new board
           | will have Sam.
        
             | himaraya wrote:
             | Conditioned on the outcome of the internal investigation,
             | which seems up for grabs.
        
           | madeofpalk wrote:
           | (Sam Altman was never on the board to begin with)
        
             | ketzo wrote:
              | He was. The OpenAI board as of last Thursday was Altman,
              | Sutskever, Brockman, D'Angelo, McCauley, Toner.
        
         | low_tech_love wrote:
          | Yes, but on the other hand, this whole thing has shown that
          | OpenAI is not running smoothly anymore, and probably never
          | will again. You can't cut the head off the snake, then attach
          | it back later and expect it to move on slithering. Even if
          | Sam stays, he won't be able to just do whatever he wants,
          | because in an organization as complex as OpenAI there are
          | thousands of unwritten rules, relationships, and hidden
          | processes that need to run smoothly without the CEO's direct
          | intervention (the CEO cannot be everywhere all the time). So
          | what this says to me (Sam being re-hired) is that the future
          | OpenAI is a watered-down, mere shadow of its former self.
         | 
         | I personally think it's weird if he really settles back in,
         | especially given the other guys who resigned after the fact.
         | There must be lots of other super exciting new things for him
         | to do out there, and some pretty amazing leadership job offers
         | from other companies. I'm not saying OpenAI will die out or
         | anything, but surely it has shown a weak side.
        
           | throwuwu wrote:
           | This couldn't be more wrong. The big thing we learned from
           | this episode is that Sam and Greg have the loyalty and
           | respect of almost every single employee at OpenAI. Morale is
           | high and they're ready to fight for what they believe in.
           | They didn't "cut the head off" and the only snake here is
           | D'Angelo, he tried to kill OpenAI and failed miserably. Now
           | he appears to be desperately trying to hold on to some
           | semblance of power by agreeing to Sam and Greg coming back
           | instead of losing all control with the whole team joining
           | Microsoft.
        
             | alephnan wrote:
             | > Morale is high and they're ready to fight for what they
             | believe in.
             | 
             | Money.
        
             | 37394748 wrote:
              | I don't think Ilya should get off so easily. Him not
              | having a say in the formation of the new board speaks
              | volumes about his role in things, if you ask me. I hope
              | people keep saying his name too, so nobody forgets his
              | place in this mess.
        
               | FireBeyond wrote:
               | There were comments the other day along the lines of "I
               | wouldn't be surprised if someone came by Ilya's desk
               | while he was deep in research and said 'sign this' and he
               | just signed it and gave it back to them without even
               | looking and didn't realize."
               | 
               | People will contort themselves into pretzels to invent
               | rationalizations.
        
         | lysecret wrote:
          | No, the board is just one body. It doesn't and shouldn't
          | have absolute power. Absolute power corrupts absolutely.
          | 
          | There is the board, the investors, the employees, the senior
          | management.
          | 
          | All the other parties aligned against it and thus it couldn't
          | act. If only Sam had rebelled, or even just Sam and the
          | investors (without the employees), nothing would have
          | happened.
        
         | strikelaserclaw wrote:
          | The board can still fire Sam, provided they get all the key
          | stakeholders on board with that firing. It made no sense to
          | fire someone doing a good job at their role without any
          | justification; that seems to have been the key issue.
          | Ultimately, we all know this non-profit thing is for show and
          | will never work out.
        
         | stetrain wrote:
         | Imagine if the board of Apple fired Tim Cook with no warning
         | right after he went on stage and announced their new developer
         | platform updates for the year alongside record growth and
         | sales, refused to elaborate as to the reasons or provide any
         | useful communications to investors over several days, and
         | replaced their first interim CEO with another interim CEO from
         | a completely different kind of business in that same weekend.
         | 
          | If you don't think there would be a shareholder revolt
          | against the board, for simply exercising their most
          | fundamental right to fire the CEO, I think you're missing
          | part of the picture.
        
           | hackernewds wrote:
           | It is prudent to recall that enhancing shareholder value and
           | delivering record growth and sales are NOT the mission of the
           | company or Board. But now it appears that it will have to be.
        
             | ketzo wrote:
              | Yeah, but they _also_ didn't elaborate in the slightest
              | about how they were serving the charter with their
              | actions.
             | 
             | If they were super-duper worried about how Sam was going to
             | cause a global extinction event with AI, or even just that
             | he was driving the company in too commercial of a
             | direction, _they should have said that to everyone!_
             | 
             | The idea that they could fire the CEO with a super vague,
             | one-paragraph statement, and then expect 800 employees who
             | respect that CEO to just... be totally fine with that is
             | absolutely fucking insane, regardless of the board's
             | fiduciary responsibilities. They're board members, not
             | gods.
        
               | NanoYohaneTSU wrote:
               | They don't have to elaborate. As many have pointed out,
               | most people have been given advice to not say anything at
               | all when SHTF. If they did say something there would
               | still be drama. It's best to keep these details internal.
               | 
               | I still believe in the theory that Altman was going hard
               | after profits. Both McCauley and Toner are focused on the
               | altruistic aspects of AGI and safety. Altman shouldn't be
               | at OpenAI and neither should D'Angelo.
        
               | ketzo wrote:
               | Okay, keep silent to save your own ass, fine
               | 
               | But why would anyone expect 800 people to risk their
               | livelihoods and work without a _little_ serious
               | justification? This was an inevitable reaction.
        
               | murakamiiq84 wrote:
                | I think it's important to keep in mind that BOTH
                | Altman and the board maneuvered in ways that threatened
                | to destroy OpenAI.
                | 
                | If Altman had stayed silent and/or said something like
                | "people, take some time off for Thanksgiving; in a week
                | calmer minds will prevail" while negotiating behind the
                | scenes, OpenAI would have looked a lot less dire over
                | the last few days. Instead he launched a public
                | pressure campaign, likely pressured Mira, got Satya to
                | make some fake commitments, got Greg Brockman's wife to
                | emotionally pressure Ilya, etc.
                | 
                | Masterful chess, clearly. But playing people like
                | pieces nonetheless.
        
               | paulddraper wrote:
               | Why couldn't those people have acted on their own
               | judgement?
        
               | stetrain wrote:
               | > They don't have to elaborate.
               | 
               | Sure, they don't have to. How did that work out?
               | 
               | Four CEOs in five days, their largest partner stepping in
               | to try to stop the chaos, and almost the entirety of
               | their employees threatening to leave for guaranteed jobs
               | at that partner if the board didn't step down.
        
             | stetrain wrote:
             | Sure, there is a difference there. But the actions that
             | erode confidence are the same.
             | 
             | You could tell the same story about a rising sports team
             | replacing their star coach, or a military sacking a general
             | the day after he marched through the streets to fanfare
             | after winning a battle.
             | 
             | Even without the money involved, a sudden change in
             | leadership with no explanation, followed only by increasing
             | uncertainty and cloudy communication, is not going to go
             | well for those who are backing you.
             | 
             | Even in the most altruistic version of OpenAI's goals I'm
             | fairly sure they need employees and funding to pay those
             | employees and do the research.
        
             | paulddraper wrote:
             | > enhancing shareholder value and delivering record growth
             | and sales are NOT the mission of the company
             | 
              | Developer platform updates seem to be in line with it.
             | 
             | And in any case, the board also failed to specify how their
             | action furthered the mission of the company.
             | 
              | From all appearances, it damaged the mission of the
              | company. (If for no other reason than that it nearly
              | dissolved the company and gave everything to MSFT.)
        
           | jacquesm wrote:
            | You forgot: and offered the company to Microsoft for a bag
            | of peanuts.
        
           | eksapsy wrote:
            | No, but people like the developers, clients, government,
            | etc., also have the right to revolt against decisions they
            | don't like. Don't you think?
            | 
            | Like, you get me: the board of directors is not the only
            | actual power within a company, and that was proven in the
            | whole scandal of Sam being discarded/fired by the
            | developers themselves. They also have the right to simply
            | not work at this company without the leader they liked.
        
             | stetrain wrote:
             | Right. I really should have said employees and investors.
             | Even if OpenAI somehow had no regard for its investors,
             | they still need their employees to accomplish their
             | mission. And funding to pay those employees.
             | 
             | The board seemed to have the confidence of none of the
             | groups they needed confidence from.
        
         | mkagenius wrote:
          | None of the theories by HNers on day 1 of this drama was
          | right - not a single one, and there were a million comments.
          | So let's not guess anymore and just sit back.
        
         | baby wrote:
         | How did you get there? The board did fire him, they exercised
         | their right.
        
           | eksapsy wrote:
            | Because people like the developers within the company did
            | not like that decision, and it's also within their rights
            | to disagree with the board's decision and not want to work
            | under a different leadership. They're not slaves; they're
            | employees who rented their time for a specific purpose
            | under a specific leader.
            | 
            | Just as it's within the board's rights to hire or fire
            | people like Sam or the developers.
        
         | Quentincestino wrote:
         | OpenAI workers have shown their plain support for their CEO by
         | threatening to follow him wherever he goes. I personally think
         | their collective judgment of him is worth more than any
         | rumors.
        
           | BOOSTERHIDROGEN wrote:
           | Money indeed is worth more; it's also the only thing that is
           | easy to measure during a crisis.
        
         | 6gvONxR4sf7o wrote:
         | Looks like all the naysayers from the original "we're making a
         | for-profit but it won't change us" post ended up correct:
         | https://news.ycombinator.com/item?id=19359928
        
       | jurgenaut23 wrote:
       | I predict this isn't the last episode of this amazing soap opera.
        
       | kumarvvr wrote:
       | So, the company has successfully trashed its goals and values,
       | and is finally focused on making money?
        
       | kumarvvr wrote:
       | I find it worrying that Elon Musk has been totally silent
       | through this whole drama.
        
         | iamflimflam1 wrote:
         | He's been sending out the occasional tweet - to be honest I get
         | the impression that like the rest of us, he's just been
         | watching with a big tub of popcorn...
        
       | doyouevensunbro wrote:
       | We're at ~250k tech industry layoffs this year and a single CEO
       | drama dominates the media because "AI".
        
         | quickthrower2 wrote:
         | Because it is a scoop
        
         | justanotherjoe wrote:
         | Yeah, people should really stand up for their peers more. Who
         | knew that would work? Sam wouldn't have been back if not for
         | Brockman and several scientists standing up for him.
        
       | ah765 wrote:
       | "Context on the negotiations to bring Sam back as CEO of OpenAI:
       | 
       | The biggest sticking point was Sam being on the board.
       | Ultimately, he conceded to not being on the board, at least
       | initially, to close the deal. The hope/expectation is that he
       | will end up on the board eventually."
       | 
       | (https://twitter.com/emilychangtv/status/1727216818648134101)
        
       | andrewstuart wrote:
       | I've lost track of everything.
        
         | system2 wrote:
         | Rich people drama. For us peasants, nothing changed.
        
       | renewiltord wrote:
       | This is a triumph of labor against management in sheep's garb.
       | Workers united were able to force an outcome they desired to
       | preserve an organization they loved while sweeping aside a board
       | that would prefer to destroy it.
        
       | didip wrote:
       | Let's be real here. At the end of the day, what matters more is
       | commercial success and a big payout.
       | 
       | AGI is still very far away, and the fear mongering is nothing
       | but a PR stunt.
       | 
       | But the devs need their big payout now. Which explains the
       | mutiny.
       | 
       | The "safety" board of directors drank their own Kool-Aid a bit
       | too much.
        
         | mkii wrote:
         | You can have unsafe AI without AGI.
        
           | CaptainFever wrote:
           | Of course, it depends on what safety means. Currently it
           | seems to just be a pretext for prudishness and regulation.
        
           | kuchenbecker wrote:
           | What will this unsafe AI do?
        
       | s-xyz wrote:
       | All systems operational again https://status.openai.com/
        
       | tomalbrc wrote:
       | I really don't care who the CEO/CTO/CFO of any company is. Why is
       | this whole thing blowing up that much on ycombinator?
        
         | kaoD wrote:
         | It's nerd(ier) Game of Thrones in real life. Pretty
         | entertaining.
        
           | Solvency wrote:
           | Much more like Succession. But again. Nerdier.
        
         | meitham wrote:
         | Unfortunately the "great man theory" is still going strong in
         | the 21st century. Just as people believe Steve Jobs invented
         | the iPhone, they believe Sam invented GPT!
        
           | dalbasal wrote:
           | Is the alternative theory that the ownership, control and
           | leadership of OpenAI is immaterial?
        
             | meitham wrote:
              | OpenAI's success is unfortunately largely based on the
              | one ruthless decision to ignore ethics and train the
              | model on the work of millions of artists and authors. I
              | don't know if Sam himself was behind this decision. I
              | doubt Aaron Swartz would have done the same.
        
               | dalbasal wrote:
                | OK...
                | 
                | So the alternative to great man theory, in this case,
                | is terrible man theory... I'm not following.
                | 
                | If focusing on control over OpenAI is great man
                | theory... what's the contrary notion?
        
         | jazzyjackson wrote:
         | It wouldn't be interesting if one CEO got fired and replaced,
         | but the fact that there's a different CEO every couple of days
         | and no one knows what will happen next. The uncertainty is
         | addictive, not to mention the scale of self-destruction. See
         | also: trainwrecks.
        
       | _Algernon_ wrote:
       | What a farce
        
       | globalise83 wrote:
       | This is our board, they provide oversight and ensure alignment
       | with the mission. If you don't like them, we have others.
        
       | dcreater wrote:
       | Bit of an aside, but the rationality and moral compass shown by
       | HN has restored my faith after having lost it thanks to r/ChatGPT
        
       | rurban wrote:
       | This was expected. So they booted Ilya (my main culprit), Helen
       | Toner (expected; favoring Anthropic) and Tasha McCauley. These
       | three seem to have been the vote majority. Not D'Angelo.
       | Interesting.
        
       | simoneblv wrote:
       | How will Microsoft come out of this? Satya already made a big
       | announcement about having Sam and everyone else come in.
        
       | timetraveller26 wrote:
       | Back to work I guess
        
       | dcreater wrote:
       | So these nutjob teenagers are going to create AGI? We are fucked
       | if they actually succeed
        
       | upupupandaway wrote:
       | In a different thread I commented how surprised I was that Emmett
       | Shear accepted the job of interim CEO, to some criticism that my
       | opinion was "silly". _This_ is why he should have stayed miles
       | away from this whole mess. There was no winning scenario for him:
       | stay CEO and lose 95% of the employees, or get ignored by a
       | triumphant return of Sam Altman.
        
         | houston_Euler wrote:
         | After learning earlier about Sam Altman's long-con at Reddit,
         | I'm surprised I haven't seen anyone suggest that Emmett Shear
         | accepted the job in order to help get Sam back into the
         | company.
         | 
         | They were both members of the inaugural class of Y Combinator,
         | and all of Shear's published actions since accepting the role
         | (like demanding evidence of Sam's wrongdoing) seem to have
         | helped Sam return to his role.
         | 
         | I don't think it's a stretch to say that he did win, in that he
         | might have accomplished exactly what he wanted when he accepted
         | the role.
        
           | stephenitis wrote:
           | Can you elaborate on the long con?
        
       | jl2718 wrote:
       | Are the Microsoft job offers at the same compensation still on
       | the table?
        
       | righthand wrote:
       | What's interesting to me is that during this time Meta and
       | OpenAI have eliminated their AI ethics members/teams but are
       | still preaching about how much it matters. No one has given any
       | details beyond grand statements about its importance and about
       | what these ethical AIs should do. Everyone has secured their
       | payday, though.
        
         | swatcoder wrote:
         | I think those changes (and this shakeup) are the start of the
         | industry grounding its expectations for this technology. I
         | think a lot of product and finance people, and many but not
         | all researchers, see the current batch of generative AI ideas
         | as ripe to be made into things, and see the pseudo-religious
         | safety/ethics communities as not directly relevant to that
         | work.
         | 
         | So you let your product teams figure out how the _brand_ needs
         | to be protected and the workflow needs to be shaped, like
         | always, and you don't defer to some outside department full of
         | beatniks in berets or whatever.
        
           | righthand wrote:
           | This is the abandoning of ethics. No one moving forward is
           | going to be thinking about it, and they've clearly signaled
           | it's about making money. People who have issues with it will
           | just not use the products, or will be hypocrites about using
           | them. There is nothing to push up against anymore, but I
           | don't think the recent events are the initiator. People were
           | already letting go of ethics the moment they kept using it
           | because the tech was so cool. The departure of the ethics
           | people is just the final nail. There is no reason to remove
           | these ethics teams if you believe in ethics; downsize them,
           | maybe, but not dedicating even one human to researching the
           | ethical outcomes sure isn't good for humanity's ethics
           | concerns.
        
       | olgias wrote:
       | Where will Ilya go next, then? I assume he won't stay at OpenAI
       | for too long after all this poop-show.
        
       | xeckr wrote:
       | What a ride.
        
       | Havoc wrote:
       | Keeping Adam? I thought he was the likely instigator.
        
       | cft wrote:
       | Ilya probably won't stick around for long. It will be
       | interesting to see what he can do independently. Probably not a
       | lot.
        
       | personalityson wrote:
       | Why is Altman, who has no higher education, critical for the
       | development of AI?
        
         | calmoo wrote:
         | Is higher education really crucial for pushing something
         | forward? Even if he isn't an AI expert, there is lots of stuff
         | surrounding the technology that needs doing, for example
         | massive amounts of funding, which he seems to have been pretty
         | good at securing.
        
       | notfed wrote:
       | Sam was crucified, then resurrected after 3 days and 3 nights.
        
       | pjmlp wrote:
       | Satya probably isn't that happy, after the weekend efforts to
       | eventually bring all folks into Microsoft.
        
         | stingraycharles wrote:
         | You say that as if that was his end goal. His end goal was to
         | save the situation, and that happened. One can easily argue
         | that Microsoft's offer added huge pressure on the OpenAI board
         | that made the new / current outcome possible. And perhaps that
         | was the plan after all.
        
           | pjmlp wrote:
           | When offices are already being prepared, and HR processes
           | being put into place, we are beyond saving a situation.
        
       | ugh123 wrote:
       | In light of this weekend's events, and the more I've learned
       | about OpenAI's beginnings and purpose, I now believe that there
       | isn't necessarily a "for profit" motivation of the company, but
       | merely that the original intention to create AI that "benefits
       | humanity" is in full play now through a commercialized ChatGPT,
       | and possibly further leveraged through "GPTs" and their
       | evolution.
       | 
       | Is this the "path" to AGI? Who knows! But it _is a path_ to
       | benefitting humanity as Sam and his camp probably see it. Does
       | Ilya have a different plan? If he does, he has a lot of catching
       | up to do while the current productization of ChatGPT and GPTs
       | continues marching forward. Maybe he sees a great leap forward in
       | accuracy in GPT-5 or later. Or maybe he feels LLMs aren't the
       | answer and there's a completely new paradigm on the horizon.
       | Regardless, they still need to answer to the fact that both
       | research and product _need funds_ to buy and power GPUs, and also
       | to satisfy the MSFT partnership. Commercialization is their
       | _only_ clear answer to that right now. Future investments will
       | likely not stray from this approach, else they'll fund rivals who
       | are more commercially motivated. That's business.
       | 
       | Thus, I'm all in on this commercially motivated, humanity-
       | benefitting GPT product. Let the market take OpenAI LLMs to where
       | they need/want it to. Exciting things may follow!
        
         | picadores wrote:
         | Totally agree, GPT should be trained to spout ads and develop
         | dark pattern behaviour.
        
           | ugh123 wrote:
           | There will always be misuse - less sexy or downright illegal
           | use cases leveraging any AI product these days - just as is
           | the nature of the internet itself.
        
         | tkgally wrote:
         | In addition to commercialization providing money for AI
         | development, isn't there also the argument that prudent
         | commercialization is the best way to test the models for
         | possible dangers? I think I saw Mira Murati take that position
         | in an interview. In other words, creating a product that people
         | want to use so much that they are willing to pay for it is a
         | good way to stress-test the product.
         | 
         | I don't know if I agree, but the argument did make me think.
        
           | kuchenbecker wrote:
           | Additionally, when you have a pre-release product that has
           | largely passed small and artificial tests, you get
           | diminishing returns on continued testing.
           | 
           | Eventually you need to expand, despite some risk, to push the
           | testing forward.
           | 
           | Everyone has a different opinion on what level of safety AI
           | should reach before it's released. "Makes no mistakes" and
           | "never says something mean" are not attainable goals, versus
           | "reduce the rate of hallucinations, as defined by x, to <0.5%
           | of total responses" and "given a set of known and imagined
           | scenarios, the new model continues to have a zero false-
           | negative rate".
           | 
           | When it's an engineering problem we're trying to solve, we
           | can make progress, but no company can avoid all forms of harm
           | as defined by everyone.
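           | 
           | (To make that concrete, here's a minimal sketch of such an
           | engineering-style release gate in Python. The metric names
           | and the 0.5% threshold just echo the examples above; nothing
           | here is anyone's actual policy.)
           | 
           |   # Sketch of a release gate built on measurable targets.
           |   def hallucination_rate(graded_responses):
           |       # graded_responses: list of bools, True = hallucination
           |       return sum(graded_responses) / len(graded_responses)
           | 
           |   def passes_gate(graded_responses, scenario_false_negatives,
           |                   max_rate=0.005):
           |       # "rate of hallucinations ... <0.5% of total responses"
           |       rate_ok = hallucination_rate(graded_responses) < max_rate
           |       # zero false negatives on known/imagined scenarios
           |       scenarios_ok = scenario_false_negatives == 0
           |       return rate_ok and scenarios_ok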
        
       | 1vuio0pswjnm7 wrote:
       | https://twitter.com/dr_park_phd/status/1727125936070410594
       | 
       | https://twitter.com/GaryMarcus/status/1727134758919151975
       | 
       | https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding...
       | 
       | https://twitter.com/AISafetyMemes/status/1727108259297837083
        
         | olivierlacan wrote:
         | Sweetie, you might want to actually look at the photo attached
         | to the tweet.
        
       | mkii wrote:
       | April Fools? If you run a monotonic stack and summation kinda
       | algorithm on 11/21 you'd get 4/1 :-)
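       | 
       | (One guess at the arithmetic, assuming the joke means feeding
       | the digits of 11/21 through a strictly increasing monotonic
       | stack and summing whatever gets popped:)
       | 
       |   def monotonic_sum(digits):
       |       # Keep a strictly increasing stack; sum everything popped.
       |       stack, popped_sum = [], 0
       |       for d in digits:
       |           while stack and stack[-1] >= d:
       |               popped_sum += stack.pop()
       |           stack.append(d)
       |       return popped_sum, stack
       | 
       |   print(monotonic_sum([1, 1, 2, 1]))  # -> (4, [1]), i.e. "4/1"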
        
       | zx8080 wrote:
       | It was just preparation for the upcoming IPO. Free ads in all
       | the news and on TV.
        
       | wouldbecouldbe wrote:
       | The sane course of action for any healthy organization after last
       | week would be to work actively on becoming more independent from
       | Microsoft.
       | 
       | With Sam at the head, especially after Microsoft backing him,
       | they will most likely do the opposite. Meaning a deeper
       | integration with Microsoft.
       | 
       | If it wasn't already, OpenAI is now basically a Microsoft
       | subsidiary. With the advantage for Microsoft of not being legally
       | liable for any court cases.
        
         | 0xDEF wrote:
         | Before the current drama:
         | 
         | >Microsoft owned 49% of the for-profit part of OpenAI.
         | 
         | >OpenAI's training, inference, and all other infrastructure
         | were running entirely on Azure credits.
         | 
         | >Microsoft/Azure were the only ones offering OpenAI's
         | models/APIs with a business-friendly SLA, uptime/stability, and
         | the option to host them in Azure data centers outside the US.
         | 
         | OpenAI is already Microsoft.
        
       | ensocode wrote:
       | > a real disruptor must be brewing somewhere unnoticed, for now.
       | Yeah, they might just be the Netscape or AltaVista of this era.
        
       | ChatGTP wrote:
       | Cool, so the technically minded folks on the internet have spent
       | a week discussing this and practically nothing has changed?
        
       | MattHeard wrote:
       | I was hopeful for a private-industry approach to AI safety, but
       | it looks unlikely now, and due to the slow pace of state
       | investment in public AI R&D, all approaches to AI safety look
       | unlikely now.
       | 
       | Safety research on toy models will continue to provide
       | developments, but the industry expectation appears to be that
       | emergent properties put a low ceiling on what can be learned
       | about safety without researching on cutting-edge models.
       | 
       | Altman touted the governance structure of OpenAI as a mechanism
       | for ensuring the organisation's prioritisation of safety, but the
       | reports of internal reallocation away from safety towards keeping
       | ChatGPT running under load concern me. Now that the board has
       | demonstrated that it was technically capable but insufficiently
       | powerful to keep these interests in line, it seems unclear how
       | any safety-oriented organisation, including Anthropic, could
       | avoid the accelerationist influence of funders.
        
         | throwuwu wrote:
         | Easy, don't be incompetent and don't abuse your power for
         | personal gain. People aren't as dumb as you think they are and
         | they will see right through that bullshit and quit rather than
         | follow idiot tyrants.
        
         | abra0 wrote:
         | More effort spent on early commercialization like keeping
         | ChatGPT running might mean less effort on cutting edge
         | capabilities. Altman was never an AI safety person, so my
         | personal hope is that Anthropic avoids this by having higher
         | quality leadership.
        
         | mymusewww wrote:
         | I would like to know the model that isn't a "toy model".
        
         | sgt101 wrote:
         | There are no emergent properties, just a linear increase in
         | knowledge that can be retrieved.
         | 
         | - It can't plan
         | 
         | - It can't do arithmetic
         | 
         | - It can't reason
         | 
         | - It can approximately retrieve knowledge with a natural
         | language query (there are some issues with this, but it's very
         | good)
         | 
         | - It can encode data into natural languages and other
         | modalities
         | 
         | I'm not worried about it, I am worried about how badly people
         | have misunderstood what it can do and then attempted to use it
         | for things that matter.
         | 
         | But I'm not surprised.
        
           | Davidzheng wrote:
           | This is incorrect. For example, the ability to translate
           | between languages is emergent. Also, GPT-4 can do arithmetic
           | better than the average person, especially considering that
           | the process by which it arrives at the computation is
           | basically intuition rather than an algorithm. Btw, just as an
           | aside, the newer models can also write code to do certain
           | tasks, like arithmetic.
        
             | sgt101 wrote:
             | Language translation is due to the huge corpus of
             | translations that it's trained on. Google Translate has
             | been doing this for years. People don't apply softmax to
             | their arithmetic. Again, code generation is approximate
             | retrieval; it can't generate anything outside of its
             | training distribution.
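             | 
             | (For readers unfamiliar with the term: softmax is the step
             | that turns a model's raw output scores into a probability
             | distribution over next tokens. A minimal sketch, for
             | illustration only:)
             | 
             |   import math
             | 
             |   def softmax(logits):
             |       # Turn raw scores into probabilities that sum to 1.
             |       m = max(logits)  # subtract max for stability
             |       exps = [math.exp(x - m) for x in logits]
             |       total = sum(exps)
             |       return [e / total for e in exps]
             | 
             |   print(softmax([2.0, 1.0, 0.1]))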
        
           | zucker42 wrote:
           | What is your definition of reasoning? In my mind, GPT-4 has
           | some nascent reasoning abilities.
        
           | quickthrower2 wrote:
           | I don't think AI safetyists are worried about any model they
           | have created so far. But if we are able to go from letter-
           | soup "ooh look that almost seems like a sentence, SOTA!" to
           | GPT4 in 20 years, where will go in the next 20? And what is
           | the point they are becoming powerful. Let alone all the crazy
           | ways people are trying to augment them with RAG, function
           | calls, get them to run on less computer power and so on.
           | 
           | Also being better at humans at everything is not a
           | prerequisite for danger. Probably a scary moment is when it
           | could look at a C (or Rust, C++, whatever) codebase, find an
           | exploit, and then use that exploit as a worm. If it can do
           | that on everyday hardware not top end GPUs (either because
           | the algorithms are made more efficient, or every iPhone has a
           | tensor unit).
        
       | ChatGTP wrote:
       | In my opinion, MS will neuter this product too; there is no way
       | they're just going to let the public access tools that make
       | their own software and products obsolete.
       | 
       | They will take over the board, and then steer it in some weird
       | dystopian direction.
       | 
       | Ilya knows that IMO, he was just more principled than Altman.
        
       | Uptrenda wrote:
       | Yep, this one's going in my cringe compilation.
        
       | ecmascript wrote:
       | All these posts about OpenAI.. are people really this interested
       | in whatever happens inside one company?
        
       | quietpain wrote:
       | Why is this subject giving me Silicon Valley season 2 flashbacks
       | with every update?
        
         | seydor wrote:
         | The script of SV2 was given as training data to the AGI that
         | has taken over.
        
       | jmyeet wrote:
       | I figured if Sam came back, the board would have to go as a
       | condition. That's obvious. And deserved. The handling of this
       | whole thing has been a very public clownshow.
       | 
       | Obviously, Microsoft has some influence here. That's no different
       | to any other large investor. But the key factors are:
       | 
       | 1. Lack of a good narrative from the board as to why they fired
       | Sam;
       | 
       | 2. Failure to loop in Microsoft so they're at least prepared from
       | a communications front and feel like they were part of the
       | process. The board can probably give them more details why
       | privately;
       | 
       | 3. People leaving in protest speaks well of Sam;
       | 
       | 4. The employee letter speaks well of Sam;
       | 
       | 5. The interim CEO clown show and lack of an all hands
       | immediately after speaks poorly of the board.
        
       | nickysielicki wrote:
       | Satya and Sam committed securities fraud with their late Sunday
       | "funding secured" ploy to protect the MSFT stock price. This was
       | the obvious outcome. Sam had no intentions of actually going
       | through with that and Satya was in no position to unilaterally
       | commit to the type of funding that he was implying.
       | 
       | They lied to protect the stock. That should be illegal. In fact,
       | it is illegal.
        
         | computerex wrote:
         | I don't think this is actionable in any way, even if what you
         | say were shown unequivocally to be true.
        
           | nickysielicki wrote:
           | What do you mean? It would be conspiring to commit bank and
           | wire fraud; the SEC can totally act on that if they want to.
        
         | nmfisher wrote:
         | Yeah, I think there may well be an investigation into that. At
         | best, he said something that was unequivocally untrue, and at
         | worst it was an outright lie. That's blatant market
         | manipulation.
        
         | TrackerFF wrote:
         | Short sellers in shambles right now.
        
       | superultra wrote:
       | I find it interesting that, for all the talk from OpenAI staff
       | that it was all about the people, and from Satya that MS has all
       | the rights and knowledge and can jumpstart its own branch on a
       | dime, getting control of OpenAI proper was clearly a huge
       | priority.
       | 
       | Given that Claude sucks so bad, and this week's events, I'm
       | guessing that the ChatGPT secret sauce is not as replicable as
       | some might suggest.
        
         | 0xDEF wrote:
         | Bard is better than ChatGPT-3.5.
         | 
         | But GPT-4 is indeed in a class of its own.
        
       | wilde wrote:
       | But "Sam Altman, Microsoft PM" would have been a much funnier
       | outcome
        
       | corobo wrote:
       | The thing we should all take from this is that unions work :)
        
       | martin_a wrote:
       | What a total shitshow. Amazing.
        
       | j4yav wrote:
       | This has been a whirlwind, I feel like I've seen every single
       | possible wrong outcome confidently predicted here, twice.
        
       | mlindner wrote:
       | Well that's disappointing. They might as well disband the entire
       | concept of the non-profit as it's clearly completely irrelevant
       | and powerless.
        
       | gongagong wrote:
       | Meta is looking like the Mother Teresa of large-corp LLM
       | providers, which is crazy to say out loud (;
        
       | cbeach wrote:
       | Does anyone know which faction (e/acc vs decels) the new board
       | members Bret Taylor and Larry Summers will be on?
       | 
       | One thing IS clear at this point - their political alignment:
       | 
       | * Taylor a significant donor to Joe Biden ($713,637 in 2020):
       | https://nypost.com/2022/04/26/twitter-board-members-gave-tho...
       | 
       | * Summers is a former Democrat Treasury Secretary who has shifted
       | leftwards with age: https://www.newstatesman.com/the-weekend-
       | interview/2023/03/w...
        
       | davidthewatson wrote:
       | The most interesting thing here is not the cult of personality
       | battle between board and CEO. Rather, it's that these teams have
       | managed to ship consumer AI that has a liminal, asymptotic edge
       | where the smart kids can manipulate it into doing emergent things
       | that it was not designed to do. That is, many of the outcomes of
       | in-context learning could not be predicted at design time and
       | they are, in fact, mind-blowing, magical, and likely not safe for
       | consumption by those who believe that the machines are anywhere
       | near the spectrum from consciousness to sentience.
        
       | dizzydes wrote:
       | D'Angelo is still there... there goes that theory.
        
       | DebtDeflation wrote:
       | >We have reached an agreement in principle for Sam Altman to
       | return to OpenAI as CEO with a new initial board of Bret Taylor
       | (Chair), Larry Summers, and Adam D'Angelo.
       | 
       | Is Ilya off the board then?
       | 
       | Why is Adam still on?
       | 
       | Bret and Larry are good choices, but they need to get that board
       | up to 10 or so people representing a balance of perspectives and
       | interests very quickly.
        
       | al_be_back wrote:
       | Losing the CEO must not push a significant number of your staff
       | to throw hissy fits and jump ship - it doesn't instill confidence
       | in investors, partners, and crucially customers.
       | 
       | As this event turned into a farce, it became evident that neither
       | the company nor its key investors accounted much for the "bus
       | factor" - i.e., losing a key person threatened to destroy the
       | whole enterprise.
       | 
       | For me this is a failure in Risk Management 101.
        
       | causi wrote:
       | Kicking Sam out was a bad move. Begging him back is worse.
       | Instead of having an OpenAI whose vision you disagree with, now
       | we have an OpenAI with no vision at all that's simply blown back
       | and forth.
        
       | fredgrott wrote:
       | MS and OpenAI did not win here, but one of their competitors
       | did... whoops.
       | 
       | Why do I say that? Look at the product releases by the
       | competitors these past few days. Second, Sam pushing for AI chips
       | implies that ChatGPT's future breakthroughs are hardware-bound.
       | Hence, the road to AGI is not through ChatGPT.
        
       | roody15 wrote:
       | "The company also agreed to revamp the board of directors that
       | had dismissed him. OpenAI named Bret Taylor, formerly co-CEO of
       | Salesforce, as chair and also appointed Larry Summers, former
       | U.S. Treasury Secretary, to the board."
       | 
       | Not looking good for the "Open" part of OpenAI.
        
         | otteromkram wrote:
         | Could have said the same thing once Microsoft got involved.
        
       | garrison wrote:
       | If OpenAI remains a 501(c)(3) charity, then any employee of
       | Microsoft on the board will have a fiduciary duty to advance the
       | mission of the charity, rather than the business needs of
       | Microsoft. There are obvious conflicts of interest here. I don't
       | expect the IRS to be a fan of this arrangement.
        
         | flagrant_taco wrote:
         | I don't expect the government to regulate any of this
         | aggressively. AI is much too important to the government and
         | military to allow pesky conflicts of interest to slow down any
         | competitive advantage we may have.
        
           | dgrin91 wrote:
           | If you think that OpenAI is the Gov's only source of high
           | quality AI research then I have a bridge to sell you.
        
             | jakderrida wrote:
             | If you think the person you're replying to was talking
             | about regulating OpenAI specifically and not the industry
             | as a whole, I have ADHD medicine to sell you.
        
               | swores wrote:
               | The context of the comment thread you're replying to was
               | a response to a comment suggesting the IRS will get
               | involved in the question of whether MS have too much
               | influence over OpenAI, it was not the subject of general
               | industry regulation.
               | 
               | But hey, at least you fitted in a snarky line about ADHD
               | in the comment you wrote while not having paid attention
               | to the 3 comments above it.
        
               | freedomben wrote:
               | if up-the-line parent wasn't talking about regulation of
               | AI in general, then what do you think they meant by
               | "competitive advantage"? Also, governments have to set
               | policy and enforce that policy. They can't (or shouldn't
               | at least) pick and choose favorites.
               | 
               | Also GP snark was a reply to snark. Once somebody opens
               | the snark, they should expect snark back. It's ideal for
               | nobody to snark, and big for people not to snark back at
               | a snarker, but snarkers gonna snark.
        
         | baking wrote:
         | My guess is that the non-profit had never gotten this kind of
         | scrutiny until now, and the new directors are going to want to
         | get lawyers involved to cover their asses. Just imagine their
         | position when Sam Altman really does something worth firing him
         | over.
         | 
         | I think it was a real mistake to create OpenAI as a public
         | charity and I would be hesitant to step into that mess. Imagine
         | the fun when it tips into private foundation status.
        
           | danaris wrote:
           | Well, I think that's really the question, isn't it?
           | 
           | Was it a mistake to create OpenAI as a public charity?
           | 
           | Or was it a mistake to operate OpenAI as if it were a
           | startup?
           | 
           | The problem isn't really either one--it's the inherent
           | conflict between the two. IMO, the only reason to see
           | creating it as a 501(c)(3) being a mistake is if you think
           | cutting-edge machine learning is _inherently_ going to be
           | targeted by people looking to make a quick buck off of it.
        
             | blackoil wrote:
             | OpenAI the charity would have survived only as an ego
             | project for Elon, doing something fun with minor impact.
             | 
             | Only the current setup is feasible if they want to get the
             | kind of investment required. This can work if the board is
             | pragmatic and has no conflicts of interest - so preferably
             | people with no stake in anything AI, either business or
             | academic.
        
               | baking wrote:
               | I think the only way this can end up is to convert to a
               | private foundation and make sizable (8 figures annually)
               | grants to truly independent AI safety (broadly defined)
               | organizations.
        
             | baking wrote:
             | To create a public charity without public fundraising is a
             | no go. Should have been a private foundation because that
             | is where it will end up.
        
             | ToucanLoucan wrote:
             | > IMO, the only reason to see creating it as a 501(c)(3)
             | being a mistake is if you think cutting-edge machine
             | learning is inherently going to be targeted by people
             | looking to make a quick buck off of it.
             | 
             | I mean, that's certainly been my experience of it thus far:
             | companies rushing to market with half-baked products that
             | (allegedly) incorporate AI to do some task or another.
        
               | danaris wrote:
               | I was specifically thinking of people seeing a non-profit
               | doing stuff with ML, and trying to finagle their way in
               | there to turn it into a profit for themselves.
               | 
               | (But yes; what you describe is absolutely happening left
               | and right...)
        
           | qwery wrote:
           | > I think it was a real mistake to create OpenAI as a public
           | charity
           | 
           | Sure, with hindsight. But it didn't require much in the way
           | of _foresight_ to predict that some sort of problem would
           | arise from the not-for-profit operating a hot startup that is
           | by definition poorly aligned with the stated goals of the
           | parent company. The writing was on the wall.
        
             | fooop wrote:
             | Speaks more to a fundamental misalignment between societal
             | good and technological progress. The narrative (first born
             | in the Enlightenment) about how reason, unfettered by
             | tradition and nonage, is our best path towards happiness no
             | longer holds. AI doomerism is an expression of this
             | breakdown, but without the intellectual honesty required to
             | dive to the root of the problem and consider whether
             | Socrates may have been right about the corrupting influence
             | of writing stuff down instead of memorizing it.
             | 
             | What's happening right now is people just starting to
             | reckon with the fact that technological progress on its
             | own is necessarily unaligned with human interests. This
             | problem has always existed; AI just makes it acute and
             | unavoidable, since it's no longer possible to invoke the
             | long tail of "whatever problem this fix creates will just
             | get fixed later". The AI alignment problem is at its core
             | a problem of reconciling this, and it will inherently fail
             | in the absence of explicitly imposing non-Enlightenment
             | values.
             | 
             | Seeking to build OpenAI as a nonprofit, as well as ousting
             | Altman as CEO, are both initial expressions of trying to
             | reconcile the conflict, and seeing these attempts fail will
             | only intensify it. It will be fascinating to watch as
             | researchers slowly come to realize what the roots of the
             | problem are, but also the lack of the social machinery
             | required to combat the problem.
        
             | baking wrote:
             | I think it could have easily been predicted just from the
             | initial announcements. You can't create a public charity
             | simply from the donations of a few wealthy individuals. A
             | public charity has to meet the public support test. A
             | private foundation would be a better model but someone
             | decided they didn't want to go that route. Maybe they
             | should have asked a non-profit lawyer?
        
               | faramarz wrote:
               | Maybe the vision is to eventually bring UBI into it and
               | cap earn-outs. Not so wild given Sam's Worldcoin and his
               | UBI efforts when he was YC president.
        
               | baking wrote:
               | The public support test for public charities is a 5-year
               | rolling average, so "eventually" won't help you. The idea
               | of billionaires asking the public for donations to
               | support their wacky ideas is actually quite humorous.
               | Just make it a private foundation and follow the
               | appropriate rules. Bill Gates manages to do it and he's a
               | dinosaur.
        
             | broast wrote:
             | Wishful thinking, but I hope there was some intent from the
             | beginning to expose the impossibility of this contradictory
             | model to the world, so that a global audience can evaluate
             | how to improve our system to support a better future.
        
             | zerohalo wrote:
             | Exactly this. OpenAI was started for ostensibly the right
             | reasons. But once they discovered something that would both
             | 1) take a tremendous amount of compute power to scale and
             | develop, and 2) could be heavily monetized, they chose the
             | $ route, and at that point the mission was doomed, with the
             | board members originally brought in to protect the mission
             | left holding their fingers in the dike.
        
             | ncallaway wrote:
             | > is by definition poorly aligned
             | 
             | If OpenAI is struggling this hard with the corporate
             | alignment problem, how are they going to tackle the outer
             | and inner alignment problems?
        
           | purple_ferret wrote:
           | Perhaps creating OpenAI as a charity is what has allowed it
           | to become what it is, whereas other for-profit competitors
           | are worth much less. How else do you get a guy like Elon Musk
           | to 'donate' $100 million to your company?
           | 
           | Lots of ventures cut corners early on that they eventually
           | had to pay for, but cutting the corners was crucial to their
           | initial success and growth
        
             | baking wrote:
             | Elon only gave $40 million, but since he was the primary
             | donor I suspect he was the one who was pushing for the
             | "public charity" designation. He and Sam were co-founders.
             | Maybe it was Sam who asked Elon for the money, but there
             | wasn't anyone else involved.
        
           | Turing_Machine wrote:
           | > I think it was a real mistake to create OpenAI as a public
           | charity and I would be hesitant to step into that mess.
           | 
           | I think it could have worked either as a non-profit _or_ as a
           | for-profit. It's this weird jackass hybrid thing that's
           | produced most of the conflict, or so it seems to me. Neither
           | fish nor fowl, as the saying goes.
        
           | ryukoposting wrote:
           | Are there any similar cases of this "non-profit board
           | overseeing a (huge) for-profit company" model? I want to like
           | the concept behind it. Was this inevitable due to the
           | leadership structure of OpenAI, or was it totally preventable
           | had the right people been on the board? I wish I had the
           | historical context to answer that question.
        
             | lacker wrote:
             | Yes, for example Novo Nordisk is a pharmaceutical company
             | controlled by a nonprofit, worth around $100B.
             | 
             | https://en.wikipedia.org/wiki/Novo_Nordisk_Foundation
             | 
             | There are other similar examples like Ikea.
             | 
             | But those examples are for mature, established companies
             | operating under a nonprofit. OpenAI is different. Not only
             | does it have the for-profit subsidiary, but the for-profit
             | needs to frequently fundraise. It's natural for fundraising
             | to require renegotiations in the board structure, possibly
             | contentious ones. So in retrospect it doesn't seem
             | surprising that this process would become extra contentious
             | with OpenAI's structure.
        
         | brookst wrote:
         | There's no indication a Microsoft appointed board member would
         | be a Microsoft employee (though the they could be of course),
         | and large nonprofits often have board members that come from
         | for-profit companies.
         | 
         | I don't think the IRS cares much about this kind of thing. What
         | would be the claim? They OpenAI is pushing benefits to
         | Microsoft, a for-profit entity that pays taxes? Even if you
         | assume the absolute worst, most nefarious meddling, it seems
         | like an issue for SEC more than IRS.
        
         | TigeriusKirk wrote:
         | Larry Summers is in place to effectively give the govt seal of
         | approval on the new board, for better and worse.
        
           | ilrwbwrkhv wrote:
           | Isn't he a big Jeffrey Epstein fanboy? Ethical AGI is in safe
           | hands.
           | 
           | https://www.thecrimson.com/article/2023/5/5/epstein-
           | summers-...
        
             | futuretaint wrote:
             | Nothing screams "protect the public interest" more than
             | Wall Street's biggest cheerleader during the 2008 financial
             | crisis. Who's next, Richard S. Fuld Jr.? Should the Enron
             | guys be included?
        
             | kossTKR wrote:
             | It's obvious this class of people love their status as neo-
             | feudal lords above the law, living as 18th-century
             | libertines behind closed doors.
             | 
             | But I guess people here are either waiting for wealth to
             | trickle down on them, or believe the torrent of
             | psychological operations so much that their minds close
             | down when they intuit the circular, brutal nature of
             | hierarchical class-based society, and the utter illusion
             | that democracy or meritocracy is.
             | 
             | The uppermost classes have been tricksters through all of
             | history. What happened to this knowledge and the
             | countercultural scene in hacking? Hint: it was psyopped in
             | the early 90's by "libertarianism" and worship of
             | bureaucracy to create a new class of cybernetic soldiers
             | working for the oligarchy.
        
               | ilrwbwrkhv wrote:
               | I agree. The best young minds grinding leet code to get
               | into Google is the biggest symptom of it.
        
               | DSingularity wrote:
               | The sad part isn't the rampant sickness. The saddest part
               | is all the "intellectual" professors who enable,
               | encourage, and celebrate this.
               | 
               | It's sickening.
        
           | mcast wrote:
           | If you wanted to wear a foil hat, you might think this
           | internal fighting was started from someone connected to TPTB
           | subverting the rest of the board to gain a board seat, and
           | thus more power and influence, over AGI.
           | 
           | The hush-hush nature of the board providing zero explanation
           | for why sama was fired (and what started it) certainly
           | doesn't pass the smell test.
        
         | paulddraper wrote:
         | What if I told you...Bill Gates was/is on the board of the non-
         | profit Bill and Melinda Gates Foundation?
         | 
         | Lol HN lawyering is hilarious.
        
           | fatbird wrote:
           | Indeed, it is hilarious.
           | 
           | The Foundation has nothing to do with MS and can't possibly
           | be considered a competitor, acquisition target, supplier, or
           | any other entity where a decision for the Foundation might
           | materially harm MS (or the reverse). There's no potential
           | conflict of interest between the missions of the two.
           | 
           | Did you think OP meant there was some inherent conflict of
           | interest with charities?
        
             | paulddraper wrote:
             | Have you _seen_ OpenAI 's current board?
             | 
             | Explain how an MS employee would have greater conflict of
             | interest.
        
               | uxp8u61q wrote:
               | Conflict of interest with what? The other board members?
               | That's utterly irrelevant. Look up some big companies'
               | boards some day. You'll see.
        
               | paulddraper wrote:
               | See earlier
               | 
               | > If OpenAI remains a 501(c)(3) charity, then any
               | employee of Microsoft on the board will have a fiduciary
               | duty to advance the mission of the charity, rather than
               | the business needs of Microsoft. There are obvious
               | conflicts of interest here.
               | 
               | https://news.ycombinator.com/item?id=38378069
        
         | stikit wrote:
         | OpenAI is not a charity. Microsoft's investment is in OpenAI
         | Global, LLC, a for-profit company.
         | 
         | From https://openai.com/our-structure
         | 
         | - First, the for-profit subsidiary is fully controlled by the
         | OpenAI Nonprofit. We enacted this by having the Nonprofit
         | wholly own and control a manager entity (OpenAI GP LLC) that
         | has the power to control and govern the for-profit subsidiary.
         | 
         | - Second, because the board is still the board of a Nonprofit,
         | each director must perform their fiduciary duties in
         | furtherance of its mission--safe AGI that is broadly
         | beneficial. While the for-profit subsidiary is permitted to
         | make and distribute profit, it is subject to this mission. The
         | Nonprofit's principal beneficiary is humanity, not OpenAI
         | investors.
         | 
         | - Third, the board remains majority independent. Independent
         | directors do not hold equity in OpenAI. Even OpenAI's CEO, Sam
         | Altman, does not hold equity directly. His only interest is
         | indirectly through a Y Combinator investment fund that made a
         | small investment in OpenAI before he was full-time.
         | 
         | - Fourth, profit allocated to investors and employees, including
         | Microsoft, is capped. All residual value created above and
         | beyond the cap will be returned to the Nonprofit for the
         | benefit of humanity.
         | 
         | - Fifth, the board determines when we've attained AGI. Again, by
         | AGI we mean a highly autonomous system that outperforms humans
         | at most economically valuable work. Such a system is excluded
         | from IP licenses and other commercial terms with Microsoft,
         | which only apply to pre-AGI technology.
        
           | ezfe wrote:
           | The board is the charity though, which is why the person
           | you're replying to made the remark about MSFT employees being
           | appointed to the board
        
             | UrineSqueegee wrote:
             | A charity is a type of not-for-profit organisation;
             | however, the main difference between a nonprofit and a
             | charity is that a nonprofit doesn't need to reach
             | "charitable status", whereas a charity, to qualify as a
             | charity, needs to meet very specific and strict guidelines.
        
               | ezfe wrote:
               | Yes, I misspoke - I meant nonprofit
        
               | zja wrote:
               | You were right though, OpenAI Inc, which the board
               | controls, is a 501c3 charity.
        
           | dragonwriter wrote:
           | > OpenAI is not a charity.
           | 
           | OpenAI is a charity nonprofit, in fact.
           | 
           | > Microsoft's investment is in OpenAI Global, LLC, a for-
           | profit company.
           | 
           | OpenAI Global LLC is a subsidiary two levels down from
           | OpenAI, which is expressly (by the operating agreement that
           | is the LLC's foundational document) subordinated to OpenAI's
           | charitable purpose, and which is completely controlled
           | (despite the charity's indirect and less-than-complete
           | ownership) by OpenAI GP LLC, a wholly owned subsidiary of the
           | charity, on behalf of the OpenAI charity.
           | 
           | And, in particular, the OpenAI board is, _as the excerpts you
           | quote in your post expressly state_, the board of the
           | nonprofit that is the top of the structure. It controls
           | everything underneath because each of the subordinate
           | organizations' foundational documents gives it (well, for the
           | two entities with outside investment, OpenAI GP LLC, the
           | charity's wholly-owned and -controlled subsidiary) complete
           | control.
        
             | hackernewds wrote:
             | Well, not anymore, as they cannot function as a nonprofit.
             | 
             | Also, infamously, they fundraised as a nonprofit but later
             | admitted they needed a for-profit structure to thrive,
             | which Elon is miffed about and Sam has defended explicitly.
        
               | dragonwriter wrote:
               | > well not anymore, as they cannot function as a
               | nonprofit.
               | 
               | There's been a lot of news lately, but unless I've missed
               | something, even with the tentative agreement of a new
               | board for the charity nonprofit, they are and plan to
               | remain a charity nonprofit with the same nominal mission.
               | 
               | > also infamously they fundraised as a nonprofit, but
               | retracted to admit they needed a for profit structure to
               | thrive
               | 
               | No, they admitted they needed to sell products rather
               | than merely take donations to survive, and needed to be
               | able to return profits from doing that to investors to
               | scale up enough to do that, so they formed a for-profit
               | subsidiary with its own for-profit subsidiary, both
               | controlled by another subsidiary, all subordinated to the
               | charity nonprofit, to do that.
        
               | DebtDeflation wrote:
               | >they are and plan to remain a charity nonprofit
               | 
               | Once the temporary board has selected a permanent board,
               | give it a couple of months and then get back to us. They
               | will almost certainly choose to spin the for-profit
               | subsidiary off as an independent company. Probably with
               | some contractual arrangement where they commit x funding
               | to the non-profit in exchange for IP licensing. Which is
               | the way they should have structured this back in 2019.
        
               | tempestn wrote:
               | "Almost certainly"? Here's a fun exercise. Over the
               | course of, say, a year, keep track of all your
               | predictions along these lines, and how certain you are of
               | each. Almost certainly, expressed as a percentage, would
               | be maybe 95%? Then see how often the predicted events
               | occur, compared to how sure you are.
               | 
               | Personally I'm nowhere near 95% confident that will
               | happen. I'd say I'm about 75% confident it won't. So I
               | wouldn't be utterly shocked, but I would be quite
               | surprised.
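               | 
               | (A minimal sketch of that bookkeeping in Python; the
               | prediction records below are made up for illustration:)
               | 
               |   from collections import defaultdict
               | 
               |   # Log (stated confidence, came true?) pairs over a
               |   # year, then compare hit rates to stated confidence.
               |   predictions = [
               |       (0.95, True), (0.95, False), (0.75, True),
               |       (0.95, True), (0.75, False), (0.75, True),
               |   ]
               | 
               |   buckets = defaultdict(list)
               |   for conf, outcome in predictions:
               |       buckets[conf].append(outcome)
               | 
               |   for conf, outcomes in sorted(buckets.items()):
               |       rate = sum(outcomes) / len(outcomes)
               |       print(f"said {conf:.0%}, happened {rate:.0%} "
               |             f"({len(outcomes)} predictions)")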
        
               | kyle_grove wrote:
               | I'm pretty confident (close to the 95% level) they will
               | abandon the public charity structure, but throughout this
               | saga, I have been baffled by the discourse's willingness
               | to handwave away OpenAI's peculiar legal structure as
               | irrelevant to these events.
        
               | tempestn wrote:
               | Within a few months? I don't think it should be possible
               | to be 95% confident of that without inside info. As you
               | said, many unexpected things have happened already. IMO
               | that should bring the most confident predictions down to
               | the 80-85% level at most.
        
           | strangesmells06 wrote:
           | > First, the for-profit subsidiary is fully controlled by the
           | OpenAI Nonprofit. We enacted this by having the Nonprofit
           | wholly own and control a manager entity (OpenAI GP LLC) that
           | has the power to control and govern the for-profit
           | subsidiary.
           | 
           | I'm not criticizing. Big fan of avoiding being taxed to fund
           | wars... but it's just funny to me that they seem to be having
           | their cake and eating it too with this kind of structure.
           | 
           | Good for them.
        
         | voxic11 wrote:
         | Even if the IRS isn't a fan, what are they going to do about
         | it? It seems like the main recourse they could pursue is they
         | could force the OpenAI directors/Microsoft to pay an excise tax
         | on any "excess benefit transactions".
         | 
         | https://www.irs.gov/charities-non-profits/charitable-organiz...
        
         | mwattsun wrote:
         | Microsoft doesn't have to send an employee to represent them on
         | the board. They could ask Bill Gates.
        
           | murakamiiq84 wrote:
           | Actually I think Bill would be a pretty good candidate.
           | Smart, mature, good at first principles reasoning, deeply
           | understands both the tech world and the nonprofit world, is a
           | tech person who's not socially networked with the existing SF
           | VCs, and (if the vague unsubstantiated rumors about Sam are
           | correct) is one of the few people left with enough social
           | cachet to knock Sam down a peg or two.
        
             | lucubratory wrote:
             | Larry Summers, Bill Gates, if they keep on like that they
             | can fill the board with all of Epstein's "associates".
        
         | pc86 wrote:
         | Others have pointed out several reasons this isn't actually a
         | problem (and that the premise itself is incorrect since
         | "OpenAI" is not a charity), but one thing not mentioned: even
         | if the MS-appointed board member is a MS employee, yes they
         | will have a fiduciary duty to the organizations under the
         | purview of the board, but unless they are _also_ a board member
         | of Microsoft (extraordinarily unlikely) they have no such
         | fiduciary duty to Microsoft itself. So in the also unlikely
         | scenario that there is a vote that conflicts with their
         | Microsoft duties, and in the even more unlikely scenario that
         | they don't abstain due to that conflict, they have a legal
         | responsibility to err on the side of OpenAI and no legal
         | responsibility to Microsoft. Seems like a pretty easy decision
         | to make - and abstaining is the easiest unless it's a
         | contentious 4-4 vote and there's pressure for them to choose a
         | side.
         | 
         | But all that seems a lot more like an episode of Succession and
         | less like real life to be honest.
        
           | throwoutway wrote:
           | It's still a conflict of interest, and one that they should
           | avoid. Microsoft COULD appoint someone whom they like and who
           | shares their values but is not a MSFT employee. That would be
           | the preferred approach, but one that I doubt a megacorp would
           | take.
        
             | ghaff wrote:
             | Both profit and non-profit boards have members that have
             | potential conflicts of interest all the time. So long as
             | it's not too egregious no one cares, especially not the
             | IRS.
        
           | dragonwriter wrote:
           | > and that the premise itself is incorrect since "OpenAI" is
           | not a charity
           | 
           | OpenAI is a 501c3 charity nonprofit, and the OpenAI board
           | under discussion is the board of that charity nonprofit.
           | 
           | OpenAI Global LLC is a for-profit subsidiary of a for-profit
           | subsidiary of OpenAI, both of which are controlled, by the
           | foundational agreements that give them legal existence, by a
           | different (AFAICT not for-profit but not legally a nonprofit)
           | LLC subsidiary of OpenAI (OpenAI GP LLC).
        
           | oatmeal1 wrote:
           | Microsoft is going to appoint someone who benefits Microsoft.
           | Whether a particular vote would violate fiduciary duty is
           | subjective. There's plenty of opportunity for them to
           | prioritize the welfare of Microsoft over OAI.
        
           | Xelynega wrote:
           | What's the point of Microsoft appointing a board member if
           | not to sway decisions in ways that benefit them?
        
         | _b wrote:
         | > There are obvious conflicts of interest here.
         | 
         | There are almost always obvious conflicts of interest. In a
         | normal startup, VCs have a legal responsibility to act in the
         | interest of the common shares, but in practice, they overtly
         | act in the interest of the preferred shares that their fund
         | holds.
        
           | hyperhopper wrote:
           | The more and more I see the way complex share structures are
           | used, the more I think they should be outlawed
        
         | bradleybuda wrote:
         | Major corporate boards are rife with "on paper" conflicts of
         | interest - that's what happens when you want people with real
         | management experience to sit on your board and act like
         | responsible adults. This happens in every single industry and
         | has nothing to do with tech or with OpenAI specifically.
         | 
         | In practice, board bylaws and common sense mean that
         | individuals recuse themselves as needed and don't do stupid
         | shit.
        
           | iandanforth wrote:
           | "In practice, board bylaws and common sense mean that
           | individuals ... don't do stupid shit."
           | 
           | Were you watching a different show than the rest of us?
        
             | badloginagain wrote:
             | And we're seeing the result in real-time. Stupid shit doers
             | have been replaced with hopefully-less-stupid-shit-doers.
             | 
             | It's a real shame too, because this is a clear loss for the
             | AI Alignment crowd.
             | 
               | I'm on the fence about the whole alignment thing, but at
               | least there is a strong moral compass in the field -
               | especially compared to something like crypto.
        
               | alsetmusic wrote:
               | > at least there is a strong moral compass in the field
               | 
               | Is this still true when the board gets overhauled after
               | trying to uphold the moral compass?
        
               | saalweachter wrote:
               | And when the CEO's other thing is a cryptocurrency?
        
               | lacrimacida wrote:
               | Sama's moral compass clearly has north pointing at money
               | and that will definitely get him to a different
               | destination.
        
             | jjoonathan wrote:
             | No, this is the part of the show where the patronizing
             | rhetoric gets trotted out to rationalize discarding the
             | principles that have suddenly become inconvenient for the
             | people with power.
        
               | photochemsyn wrote:
               | No worries. The same kind of people who devoted their
               | time and energy to creating open-source operating systems
               | in the era of Microsoft and Apple are now devoting their
               | time and energy to doing the same for non-lobotomized
               | LLMs.
               | 
               | Look at these clowns (Ilya & Sam and their angry talkie-
               | bot), it's a revelation, like Bill Gates on Linux in
               | 2000:
               | 
               | https://www.youtube.com/watch?v=N36wtDYK8kI
        
             | hinkley wrote:
             | I get a lostredditor vibe way too often here. Oddly more
             | than Reddit.
             | 
             | I think people forget sometimes that comments come with a
             | context. If we are having a conversation about Deepwater
             | Horizon, someone will chime in about how safe deep-sea oil
             | exploration is and how many failsafes blah blah blah.
             | 
             | "Do you know where you are right now?"
        
               | Juicyy wrote:
               | It's a more technical space than Reddit. You're gonna
               | have more know it alls spewing
        
               | jachee wrote:
               | You know that know-it-all should be hyphenated, right?
               | 
               | ...
               | 
               | ;)
        
               | LordDragonfang wrote:
               | >I think people forget sometimes that comments come with
               | a context.
               | 
               | I mean, this is definitely one of my pet peeves, but the
               | wider context of this conversation is _specifically a
               | board doing stupid shit_, so that's a very relevant
               | counterexample to the thing being stated. Board members
               | _in general_ often do stupid/short-sighted shit
               | (especially in tech), and I don't know of any examples of
               | corporate board members recusing themselves.
        
               | mhluongo wrote:
               | Common example of recusal is CEO comp when the CEO is on
               | the board.
        
               | alsetmusic wrote:
               | That's what I would term a black-and-white case. I don't
               | think there's anyone with sense who would argue in good
               | faith that a CEO should get a vote on their own salary.
               | There are many degrees of grey between outright
               | corruption and this example, and I think the concern lies
               | within.
        
               | mhh__ wrote:
               | So?
        
               | alsetmusic wrote:
               | I get what you're saying, but I also live in the world
               | and see the mechanics of capitalism. I may be a person
               | who's interested in tech, science, education, archeology,
               | etc. That doesn't mean that I don't also have political
               | views that sometimes overlap with a lot of other very-
               | online people.
               | 
               | I think the comment to which you replied has a very
               | reddit vibe, no doubt. But also, it's a completely valid
               | point. Could it have been said differently? Sure. But I
               | also immediately agreed with the sentiment.
        
               | hinkley wrote:
               | Oh, I wasn't complaining about the parent; I was
               | complaining that it needed to be said.
               | 
               | We are talking about a failure of the system, in the
               | context of a concrete example. Talking about how the
               | system actually works is only appropriate if you are
               | drawing specific arguments up about how this situation is
               | an anomaly, and few of them do that.
               | 
               | Instead it often sounds like "it's very unusual for the
               | front to fall off".
        
               | iandanforth wrote:
               | I apologize, the comment's irony overwhelmed my snark
               | containment system.
        
               | Obscurity4340 wrote:
                | This comment is perfection XD
        
             | freedomben wrote:
             | You need to be able to separate macro-level and micro-
             | level. GP is responding to a comment about the IRS caring
             | about the conflict-of-interest on paper. The IRS has to
             | make and follow rules at a _macro_ level. Micro-level
              | events obviously can affect the macro view, but you don't
             | completely ignore the macro because something bad happened
             | at the micro level. That's how you get knee-jerk
             | reactionary governance, which is highly emotional.
        
             | dev_tty01 wrote:
             | Yes, and we were also watching the thousands and thousands
             | of companies where these types of conflicts are handled
             | easily by decent people and common sense. Don't confuse the
             | outlier with the silent majority.
        
           | ip26 wrote:
           | Reminds me of the "revolving door" problem. Obvious risk of
           | corruption and conflict of interest, but at the same time
           | experts from industry are the ones with the knowledge to be
           | effective regulators. Not unlike how many good patent
           | attorneys were previously engineers.
        
           | dragonwriter wrote:
           | A corporation acting (due to influence from a conflicted
           | board member that doesn't recuse) contrary to the interests
           | of its stockholders and in the interest of the conflicted
            | board member or whoever they represent potentially creates
            | liability of the firm to its stockholders.
           | 
           | A charity acting (due to the influence of a conflicted board
           | member that doesn't recuse) contrary to its charitable
           | mission in the interests of the conflicted board member or
            | whoever they represent does something similar with regard to
            | liability of the firm to various stakeholders with a legally-
            | enforceable interest in the charity and its mission, _but
            | also_ is a _public_ civil violation that can lead to IRS
           | sanctions against the firm up to and including monetary
           | penalties and loss of tax exempt status _on top of_ whatever
           | private tort liability exists.
        
           | fouc wrote:
           | OpenAI isn't a typical corporation but a 501(c)(3), so bylaws
           | & protections that otherwise might exist appear to be lacking
           | in this situation.
        
             | dragonwriter wrote:
             | 501c3's also have governing internal rules, and the threat
             | of penalties and loss of status imposed by the IRS gives
             | them additional incentive to safeguard against even the
             | appearance of conflict being manifested into how they
             | operate (whether that's avoiding conflicted board members
             | or assuring that they recuse where a conflict is relevant.)
             | 
              | If OpenAI didn't have adequate safeguards, either through
              | negligence or because it was in fact being run
              | deliberately as a fraudulent charity, that's a particular
              | failure of OpenAI, not a "well, 501c3's inherently don't
              | have safeguards" thing.
        
               | kevin_thibedeau wrote:
               | Trump Foundation was a 501c3 that laundered money for 30
               | years without the IRS batting an eye.
        
               | hnbad wrote:
               | The Bill and Melinda Gates Foundation is a 501c3 and I'd
               | expect that even the most techno-futurist free-market
               | types on HN would agree that no matter what alleged
               | impact it has, it is also in practice creating profitable
               | overseas contracts for US corporations that ultimately
               | provide downstream ROI to the Gates estate.
               | 
               | Most people just tend to go about it more intelligently
               | than Trump but "charitable" or "non-profit" doesn't mean
               | the organization exists to enrich the commons rather than
               | the moneyed interests it represents.
        
           | throwaway-blaze wrote:
           | No conflict, no interest.
        
           | dizzydes wrote:
           | Larry Summers practically invented this stuff...
        
         | hackernewds wrote:
         | Not to mention, the mission of the Board cannot be "build safe
         | AGI" anymore. Perhaps something more consistent with expanding
          | shareholder value and capitalism, as the events of this weekend
          | have shown.
         | 
         | Delivering profits and shareholder value is the sole and
          | dominant force in capitalism. It remains to be seen whether
          | that is consistent with humanity's survival.
        
         | boh wrote:
         | Whenever there's an obvious conflict, assume it's not enforced
         | or difficult to litigate or has relatively irrelevant
         | penalties. Experts/lawyers who have a material stake in getting
         | this right have signed off on it. Many (if not most) people
          | with enough status to be on the board of a Fortune 500 company
         | tend to also be on non-profit boards. We can go out on a limb
         | and suppose the mission of the nonprofit is not their top
         | priority, and yet they continue on unscathed.
        
           | hinkley wrote:
           | Do you remember before Bill Gates got into disease prevention
           | he thought that "charity work" could be done by giving away
           | free Microsoft products? I don't know who sat him down and
           | explained to him how full of shit he was but they deserve a
           | Nobel Peace Prize nomination.
           | 
           | Just because someone says they agree with a mission doesn't
           | mean they have their heads screwed on straight. And my thesis
            | is that the more power they have in the real world the worse
            | the outcomes - because powerful people become progressively
            | immune to feedback: "This has been working swimmingly for me
            | for decades; I don't need humility in a new situation."
        
           | Xelynega wrote:
           | > Experts/lawyers who have a material stake in getting this
           | right have signed off on it.
           | 
           | How does that work when we're talking about non-profit
           | motives? The lawyers are paid by the companies benefitting
           | from these conflicts, so how is it at all reassuring to hear
           | that the people who benefit from the conflict signed off on
           | it?
           | 
           | > We can go out on a limb and suppose the mission of the
           | nonprofit is not their top priority, and yet they continue on
           | unscathed.
           | 
           | That's the concern. They've just replaced people who "maybe"
           | cared about the mission statement with people who you've
           | correctly identified care more about profit growth than the
           | nonprofit mission.
        
         | jklein11 wrote:
         | I'm a little bit confused, are you saying that the IRS would
         | have some sort of beef with employees of Microsoft serving on
         | the board of a 501(c)(3)?
        
         | zerohalo wrote:
         | OpenAI's charter is dead. I expect future boards to amend it.
        
           | dragonwriter wrote:
            | It's a useful PR pretext for their regulatory advocacy, and
            | subjective enough that if they are careful not to be too
            | obvious about specifically pushing one company's commercial
            | interest, they can probably get away with it forever. So why
            | would it be any deader than before, when Sam was CEO and not
            | substantively guided by it?
        
           | ric2b wrote:
           | People keep saying this but is there any evidence that any of
           | this was related to the charter?
        
             | Xelynega wrote:
              | The only evidence I have is that the board members that
              | were removed had fewer business connections than the ones
              | that replaced them.
             | 
             | The point of the board is to ensure the charter is being
             | followed, when the biggest concern is "is our
             | commercialization getting in the way of our charter" what
             | else does it mean to replace "academics" with
             | "businesspeople"?
        
         | augustulus wrote:
         | how can they not remain a charity?
        
         | 627467 wrote:
        | I don't get the drama with "conflicts of interest"... Aren't
        | board members generally (always?) representing major
        | shareholders? Isn't it obvious that shareholders have interests
        | that are likely to conflict with each other or even with the
        | organization itself? That's why board members are supposed to
        | check each other, right?
        
           | Xelynega wrote:
           | OpenAI is a non profit and the board members are not allowed
           | to own shares in the for profit.
           | 
            | That means the remaining conflicts arise when the board has to
            | make a decision between growing the profit or furthering the
            | mission statement. I wouldn't trust the new board appointed
           | by investors to ever make the correct decision in these
           | cases, and they already kicked out the "academic" board
           | members with the power to stop them.
        
         | mattmcknight wrote:
         | The non-profit could sell off its interest in the for-profit
         | company and use the money for AGI research.
        
       | bvan wrote:
        | All involved have clearly demonstrated a lack of credibility in
        | self-governance and an inability to make big-boy decisions. All
        | reassurances from now on will sound hollow.
        
       | NorwegianDude wrote:
        | Why are people so interested in this? Why exactly was he fired? I
       | did not get why when I read the news, so I find it strange that
       | people care if they don't even know what it's about. Do we know
       | for sure what this was/is about?
        
       | minzi wrote:
       | I would be surprised if the original board's reasons for caving
       | in were not influenced by personal factors. They must've been
       | receiving all kinds of threats from those involved and from
       | random twitter extremists.
       | 
       | It is troubling because it shows that this "external" governance
       | meant to make decisions for the good of humanity is unable to
       | enforce decisions. The internal employees were obviously swayed
       | by financial gain as well. I don't think that I would behave
       | differently were I in their shoes honestly. However, this does
        | definitively mean that they are a product- and profit-driven
        | group.
       | 
       | I think that Sam Altman is dishonest and a depressing example of
       | what modern Americans idealize. He has all these ideals he
       | preaches but will happily turn on if it upsets his ego. On top of
       | that he is held up as some star innovator when in reality he
       | built nothing himself. He just identified one potential
       | technological advancement and threw money at it with all his
       | billionaire friends.
       | 
       | Gone are the days of building things in a garage with a mission.
       | Founders are no longer visionary engineers and designers. The
       | path now is clear. Convince some rich folks you're worthy of
       | being rich too. When they adopt you into wealth you can start
       | throwing shit at the wall until something sticks. Eventually
       | something will and you can claim visionary status. Now your
       | presence in the billionaire club is beyond reproach because
       | you're a "founder".
        
         | InCityDreams wrote:
         | >They must've been receiving all kinds of threats from those
         | involved and from random twitter extremists.
         | 
         | Oooh, yeah. "Must have".
        
       | dangerface wrote:
        | Keeping D'Angelo on the board is an obvious mistake; he has too
        | much conflicting interest to be level-headed and has demonstrated
        | that. The only people that benefited from all this are Microsoft
       | and D'Angelo. Give it a year and we will see part 2 of all this.
       | 
       | Further where is the public accountability? I thought the board
       | was to act in the interests of the public but they haven't
        | communicated anything. Are we all just supposed to pretend this
        | never happened and that the board will now act in the public
       | interest?
       | 
       | We need regulations to hold these boards which hold so much power
       | accountable to the public. No reasonable AI regulations can be
       | made until the public are included in a meaningful way, anyone
       | that pushes for regulations without the public is just trying to
       | control the industry and establish a monopoly.
        
       | EarthAmbassador wrote:
       | Larry effing Summers?!
       | 
       | Really?
       | 
       | Was Henry Kissinger unavailable?
        
       | alienicecream wrote:
       | High Street salesman takes over Frankenstein's lab. Can't wait to
       | see what's going to happen next.
        
       | sys_64738 wrote:
        | Why has OpenAI taken to poaching employees from M$ now?
        
       | nojvek wrote:
        | What this proves is that OpenAI's interests are now entwined
        | with profit.
       | 
       | I'm assuming most of the researchers there probably realize there
       | is a loooot of money to be made and they have to optimize for
       | that.
       | 
       | They are deffo pushing the frontier of AI.
       | 
        | However, I hope OpenAI doesn't get to AGI first.
       | 
       | I don't think it will be the best for all of humanity.
       | 
       | I'm scared.
        
       | pimpampum wrote:
       | So Altman started it and ended up winning it, clearly his coup.
       | Sad how employees were duped into standing behind him.
        
       | donohoe wrote:
       | Larry Summers!?
        
       | throwaway74852 wrote:
       | So OpenAI's board is now exclusively white men, and predominantly
        | tech insiders? Lovely to have such a diverse group behind this
        | technology. Could this be more comical?
        
       | iteratethis wrote:
       | Sam's power was tested and turned out to be absolute.
       | 
       | Sam was doing whatever he wanted, got caught, and now can
       | continue to do what he wants with even more backing.
        
       | nomaD_ wrote:
        | Hiring engineers at $900K salaries & pretending to be a
        | non-profit does not work. Turns out, 97% of them wanted to make
        | money.
       | 
       | Government should have banned big tech investment in AI companies
       | a year ago. If they want, they can create their own AI but buying
       | one should be off the table.
        
       | Pigalowda wrote:
        | Show's over, I guess. Feels like the ending to GoT. I'm not sure
        | I even care what happened to begin it all anymore.
        
       | rceDia wrote:
       | The "giveaway" is the fact that "Microsoft is happy" with the
        | return of Mr. Altman. Can't wait for the former board's tell-all
        | story. Bets on: how the founder of a cutting-edge tech company
        | wanted world peace and no harm, but outside capital forces
        | steered him to the other, "unfathomable riches" option. It
        | happens.
        
       | Mrirazak1 wrote:
        | The Steve Jobs of our TikTok generation. Came back very quickly
        | in comparison to Jobs's 12 years, but still.
        
       | ChoGGi wrote:
       | I'm sure that first meeting will be... Interesting.
        
       | lysecret wrote:
        | Fascinating, I see a lot of the "VC/MSFT overthrew our NPO
        | governing structure because of profit incentives" narrative.
       | 
       | I don't think this is what really happened at all. The reason
       | this decision was made was because 95% of employees sided with
       | Sam on this issue, and the board didn't explain themselves in any
       | way at all. So it was Sam + 95% of employees + All investors
       | against the board. In which case the board should lose (since
       | they are only governing for themselves here).
       | 
        | I think in the end it's a good and fair outcome. I still think
        | their governing structure is decent for solving the AGI problem;
        | this particular board was just really bad.
        
         | greenie_beans wrote:
          | Next time, can't wait to see what happens when capital is on
          | the opposite side of the 95% of employees.
        
         | r_thambapillai wrote:
          | Of course, the profit incentive also applies to all the
          | employees (which isn't necessarily a bad thing; it's good to
          | align the company's goals with those of the employees). But
          | when the executives likely have 10s of millions of dollars on
          | the line, and many of the ICs will likely have single-digit
          | millions on the line as well, it doesn't seem exactly
          | straightforward to view the employees as unbiased
          | adjudicators of what's in the interest of the non-profit
          | entity, which is _supposed_ to be what's in charge.
         | 
         | It is sort of strange that our communal reaction is to say
         | "well this board didn't act anything like a normal corporate
         | board": of course it didn't, that was indeed the _whole_ point
         | of not having a normal corporate board in charge.
         | 
         | Whatever you think of Sam, Adam, Ilya etc, the one conclusion
         | that seems safe to reach is that in the end, the
         | profit/financial incentives ended up being far more important
          | than the NGO's mission, no matter what legal structure was in
         | place.
        
         | jkaplan wrote:
          | 1. Microsoft was heavily involved in orchestrating the 95% of
          | employees to side with Sam -- through promising them money/jobs
          | and through PR/narrative.
          | 
          | 2. The profit incentives apply to employees too.
         | 
         | Bigger picture, I don't think the
         | "money/VC/MSFT/commercialization faction destroyed the
         | safety/non-profit faction" is mutually exclusive with "the
         | board fucked up." IMO, both are true
        
         | campbel wrote:
          | I don't think the board was big enough, for starters. Of the
          | folks on it, only one (Adam) had experience as the leader of a
          | for-profit venture. Helen probably lacks the leadership
          | background to make any progress pushing her priorities.
        
       | BryantD wrote:
       | We're not gonna see it but I'd love to see Sam's new contract and
       | particularly any restraints on outside activities.
        
       | geniium wrote:
        | This was a nice ride. Nice story to follow.
        
       | theGnuMe wrote:
       | Larry Summers is an interesting choice. Any ideas why? I know he
       | was Sheryl Sandberg's mentor/professor which gives him a tech
       | connection. However, I've watched him debate Paul Krugman on
       | inflation in some economic lectures and it almost felt like Larry
       | was out of his element as in Larry was outgunned by Paul... but
       | maybe he was having an off day or it was a topic he is not an
       | expert in. But I don't know the history, haven't read either of
       | their books and I am not an economist. But it was something I
        | noticed... almost like he was out of touch.
       | 
       | That has nothing to do with AI though.
        
       | jafitc wrote:
       | OpenAI's Future and Viability
       | 
       | - OpenAI has damaged their brand and lost trust, but may still
       | become a hugely successful company if they build great products
       | 
       | - OpenAI looks stronger now with a more professional board, but
       | has fundamentally transformed into a for-profit focused on
       | commercializing LLMs
       | 
       | - OpenAI still retains impressive talent and technology assets
       | and could pivot into a leading AI provider if managed well
       | 
       | ---
       | 
       | Sam Altman's Leadership
       | 
       | - Sam emerged as an irreplaceable CEO with overwhelming employee
       | loyalty, but may have to accept more oversight
       | 
       | - Sam has exceptional leadership abilities but can be
       | manipulative; he will likely retain control but have to keep
       | stakeholders aligned
       | 
       | ---
       | 
       | Board Issues
       | 
       | - The board acted incompetently and destructively without clear
       | reasons or communication
       | 
       | - The new board seems more reasonable but may struggle to govern
       | given Sam's power
       | 
       | - There are still opposing factions on ideology and
       | commercialization that will continue battling
       | 
       | ---
       | 
       | Employee Motivations
       | 
       | - Employees followed the money trail and Sam to preserve their
       | equity and careers
       | 
       | - Peer pressure and groupthink likely also swayed employees more
       | than principles
       | 
       | - Mission-driven employees may still leave for opportunities at
       | places like Anthropic
       | 
       | ---
       | 
       | Safety vs Commercialization
       | 
       | - The safety faction lost this battle but still has influential
       | leaders wanting to constrain the technology
       | 
       | - Rapid commercialization beat out calls for restraint but may
       | hit snags with model issues
       | 
       | ---
       | 
       | Microsoft Partnership
       | 
       | - Microsoft strengthened its power despite not appearing involved
       | in the drama
       | 
       | - OpenAI is now clearly beholden to Microsoft's interests rather
       | than an independent entity
        
         | qualifiedai wrote:
          | No structure or organization is stronger when its leader
          | emerges as "irreplaceable".
        
           | rmbyrro wrote:
            | In this case, I don't see it as a flaw, but really as Sam's
            | ability to lead a highly cohesive group and keep it highly
            | motivated and aligned.
            | 
            | I don't personally like him, but I must admit he displayed a
            | lot more leadership skill than I'd recognized before.
           | 
           | It's inherently hard to replace someone like that in any
           | organization.
           | 
           | Take Apple, after losing Jobs. It's not that Apple was a
           | "weak" organization, but really Jobs that was extraordinary
           | and indeed irreplaceable.
           | 
           | No, I'm not comparing Jobs and Sam. Just illustrating my
           | point.
        
             | prh8 wrote:
              | What's the difference between leadership skills and a cult
              | following?
        
               | spurgu wrote:
               | I think an awesome leader would naturally create some
               | kind of cult following, while the opposite isn't true.
        
               | Popeyes wrote:
               | Just like former President Trump?
        
               | marcosdumay wrote:
               | There are two possible ways to read "the opposite" from
               | the GP.
               | 
               | "A cult follower does not make an exceptional leader" is
               | the one you are looking for.
        
               | 0perator wrote:
               | While cult followers do not make exceptional leaders,
               | cult leaders are almost by definition exceptional
                | leaders, given they're able to lead the un-indoctrinated
                | into believing an ideology that may not hold up to
                | critical scrutiny.
               | 
               | There is no guarantee or natural law that an exceptional
               | leader's ideology will be exceptional. Exceptionality is
               | not transitive.
        
               | thedaly wrote:
               | Results
        
               | TheOtherHobbes wrote:
               | Leadership Gets Shit Done. A cult following wastes
               | everyone's time on ineffectual grandstanding and ego
               | fluffing while everything around them dissolves into
               | incompetence and hostility.
               | 
               | They're very orthogonal things.
        
               | rvnx wrote:
               | I also imagine the morale of the people who are currently
               | implementing things, and getting tired of all these
               | politics about who is going to claim success for their
               | work.
        
               | rmbyrro wrote:
               | Have you ever seen a useful product produced by a cult?
        
             | pk-protect-ai wrote:
             | Can't you imagine a group of people motivated to conduct AI
             | research? I don't understand... All nerds are highly
             | motivated in their areas of passion, and here we have AI
             | research. Why do they need leadership instead of simply
             | having an abundance of resources for the passionate work
             | they do?
        
               | DSingularity wrote:
                | As far as it goes for me, the only endorsements that
                | matter are those of the core engineering and research
                | teams of OpenAI.
                | 
                | All these opinions of outsiders don't matter. It's
                | obvious that most people don't know Sam personally or
                | professionally and are going off of the combination of:
                | 1. PR pieces being pushed by unknown entities, and 2.
                | positive endorsements from well-known people who likely
                | know him.
                | 
                | Both those sources are suspect. We don't know the
                | motivation behind their endorsements, and for the PR
                | pieces we know the author but not the commissioner.
               | 
                | Would we feel as positive about Altman if it turned out
                | that half the people and PR pieces endorsing him were
                | there because government officials were pushing for him?
                | Or if the celebrities in tech are endorsing him because
                | they are financially incentivized?
               | 
               | The only endorsements that matter are those of OpenAI
               | employees (ideally those who are not just in his camp
               | because he made them rich).
        
               | gcanyon wrote:
               | Someone has to set direction. The more people that are
               | involved in that decision process, the slower it will go.
               | 
               | Having no leadership at all guarantees failure.
        
               | jjk166 wrote:
               | It's not hard to motivate them to do the fun parts of the
               | job, the challenge is in convincing some of those highly
               | motivated and passionate nerds to not work on the fun
               | thing they are passionate about and instead do the boring
               | and unsexy work that is nevertheless critical to overall
                | success; to get people with strong personal opinions
                | about how a solution should look to accept a different
                | plan just so that everyone is on the same page; to ensure
               | that people actually have access to the resources they
               | need to succeed without going so overboard that the
               | endeavor lacks the reserves to make it to the finish
               | line, and to champion the work of these nerds to the non-
               | nerds who are nevertheless important stakeholders.
        
             | scythe wrote:
             | Jobs was really unusual in that he was not only a good
             | leader, but also an ideologue with the right obsession at
             | the right time. (Some people like the word "visionary".)
             | That obsession being "user experience". Today it's a
             | buzzword, but in 2001 it was hardly even a term.
             | 
             | The leadership moment that first comes to mind when I think
             | of Steve Jobs isn't some clever hire or business deal, it's
             | "make it smaller".
             | 
             | There have been a very few people like that. Walt Disney
             | comes to mind. Felix Klein. Yen Hongchang [1]. (Elon Musk
             | is maybe the ideologue without the leadership.)
             | 
             | 1: https://www.npr.org/sections/money/2012/01/20/145360447/
             | the-...
        
           | osigurdson wrote:
           | Seriously, even in a small group of a few hundred people?
        
             | catapart wrote:
             | I dunno, seems like a pretty self-evident theory? If your
             | leader is irreplaceable, regardless of group size, that's a
             | single point of failure. I can't figure how a single point
             | of failure could ever make something "stronger". I can see
             | arguments for necessity, or efficiency, given contrivances
              | and extreme contexts. But "stronger" doesn't seem like
              | the right assessment for whatever necessitates a single
              | point of failure.
        
               | vipshek wrote:
               | "Stronger" is ambiguous. If you interpret it as
               | "resilience" then I agree having a single point of
               | failure is usually more brittle. But if you interpret it
               | as "focused", then having a single charismatic leader can
               | be superior.
               | 
               | Concretely, it sounds like this incident brought a lot of
               | internal conflicts to the surface, and they got more-or-
               | less resolved in some way. I can imagine this allows
               | OpenAI to execute with greater focus and velocity going
               | forward, as the internal conflict that was previously
               | causing drag has been resolved.
               | 
               | Whether or not that's "better" or "stronger" is up to
               | individual interpretation.
        
               | hughw wrote:
               | I guess though, a lot of organizations never develop a
               | cohesive leader at all, and the orgs fall apart. They
               | never had an irreplaceable leader though!
        
               | osigurdson wrote:
                | A company is essentially an optimization problem, meant
                | to minimize / maximize some set of metrics. Usually a
                | company's goal is simply to maximize NPV, but in OpenAI's
                | case the goal is to maximize AI while minimizing harm.
                | 
                | "Failure" in this context essentially means arriving at a
                | materially suboptimal outcome. Leaders in this situation
                | can easily be considered "irreplaceable", particularly in
                | the early stages, as decisions are incredibly impactful.
        
           | dimitrios1 wrote:
            | This is false, and I see the corollary as a project having a
            | BDFL, especially if the leader is effective. Sam is
            | unmistakably effective.
        
             | acchow wrote:
             | Have you or anyone close to you ever had to take multiple
             | years of leave from work from a car accident or health
             | condition?
        
               | slingnow wrote:
               | Nope, I've never even __heard__ of someone having to take
               | multiple years of leave from work for any reason. Seems
               | like a fantastically rare event.
        
               | thingification wrote:
                | Not sure if that's intended as irony, but of course, if
                | somebody is taking multiple years off work, you would be
                | less likely to hear about it, because by definition
                | they're not going to join the company you work for.
               | 
               | I don't think long-term unemployment among people with a
                | disability or other long-term condition is "fantastically
               | rare", sadly. This is not the frequency by length of
               | unemployment, but:
               | 
               | https://www.statista.com/statistics/1219257/us-
               | employment-ra...
        
               | yeck wrote:
               | In my immediate family I have 3 people that have taken
               | multi-year periods away from work for health reasons. Two
               | are mental health related and the other severe arthritis.
               | 2 of those 3 will probably never work again for the rest
               | of their lives.
               | 
                | I've worked with a contractor that went into a coma
                | during covid. Nearly half a year in a coma, then rehab
                | for many more months. Guy is working now, but not in
                | good shape.
               | 
               | I don't know the stats, but I'd be surprised if long
               | medical leaves are as rare as you think.
        
               | filleduchaos wrote:
               | Yeah, there are thousands of hospitals across the US and
               | they don't run 24/7 shifts just to treat the flu or
               | sprained ankles. Disabling events happen a _lot_.
               | 
               | (A seriously underrated statistic IMO is how many women
               | leave the workforce due to pregnancy-related disability.
               | I know quite a few who haven't returned to full-time work
               | for years after giving birth because they're still
               | dealing with cardiovascular and/or neurological issues.
               | If you aren't privy to their medical history it would be
               | very easy to assume that they just decided to be stay-at-
               | home mums.)
        
               | dimitrios1 wrote:
               | Have you ever worked with someone who treats their work
               | as their life? They are borderline psychopaths. As if a
               | health condition or accident will stop them. They'll be
               | taking work calls on the hospital bed.
        
           | rvnx wrote:
           | And correlation does not imply causality.
           | 
           | Example: Put a loser as CEO of a rocket ship, and there is a
           | huge chance that the company will still be successful.
           | 
           | Put a loser as CEO of a sinking ship, and there is a huge
           | chance that the company will fail.
           | 
           | The exceptional CEOs are those who turn failures into
           | successes.
           | 
           | The fact this drama has emerged is the symptom of a failure.
           | 
           | In a company with a great CEO this shouldn't be happening.
        
           | Aunche wrote:
            | I don't think Sam is necessarily irreplaceable. It's just
            | that Helen Toner and co were so detached from the rest of the
            | organization they might as well have been on Mars, as
            | demonstrated by their interim CEO pick instantly turning
            | against them.
        
         | nurumaik wrote:
          | GPT-generated summary?
        
           | Mistletoe wrote:
           | That was my first thought as well. And now it is the top
           | comment on this post. Isn't this brave new world OpenAI made
           | wonderful?
        
             | nickpp wrote:
             | If it's a good comment, does it really matter if a human or
             | an AI wrote it?
        
               | makeworld wrote:
               | Yes.
        
               | nickpp wrote:
               | Please expand on that.
        
               | iamflimflam1 wrote:
               | This is the most cogent argument against AI I've seen so
               | far.
               | 
               | https://youtu.be/iGJcF4bLKd4?si=Q_JGEZnV-tpFa1Tb
        
               | nickpp wrote:
               | I am sorry, I greatly respect and admire Nick Cave, but
               | that letter sounded to me like the lament of a scribe
               | decrying the invention of the printing press.
               | 
               | He's not wrong, something _is_ lost and it has to do with
               | what we call our  "humanity", but the benefits greatly
               | outweigh that loss.
        
               | makeworld wrote:
               | If you think humanity being lost is acceptable, then it's
               | hard to discuss anything else on this topic.
        
               | nickpp wrote:
               | > you think humanity being lost is acceptable
               | 
               | I never said that.
        
               | Mistletoe wrote:
               | I think this summarizes it pretty well. Even if you don't
               | mind the garbage, the future AI will feed on this
               | garbage, creating AI and human brain gray goo.
               | 
               | https://ploum.net/2022-12-05-drowning-in-ai-generated-
               | garbag...
               | 
               | https://en.wikipedia.org/wiki/Gray_goo
        
               | nickpp wrote:
               | Is this a real problem model trainers actually face or is
               | it an imagined one? The Internet is already full of
               | garbage - 90% of the unpleasantness of browsing these
                | days is filtering through mounds and mounds of crap. Some
               | is generated, some is written, but still crap full of
               | wrong and lies.
               | 
               | I would've imagined training sets were heavily curated
               | and annotated. We already know how to solve this problem
               | for training humans (or our kids would never learn
               | anything useful) so I imagine we could solve it similarly
               | for AIs.
               | 
               | In the end, if it's quality content, learning it is
               | beneficial - no matter who produced it. Garbage needs to
               | be eliminated and the distinction is made either by human
               | trainers or already trained AIs. I have no idea how to
               | train the latter but I am no expert in this field - just
               | like (I suspect) the author of that blog.
        
               | makeworld wrote:
               | The value of a creation cannot be solely judged by its
               | output. It's hard to explain, it's better to intuit it.
        
         | miohtama wrote:
         | > Employees followed the money trail and Sam to preserve their
         | equity and careers
         | 
          | Would you not, when the AI safety wokes decide to torch the
          | rewards of your years of hard grinding? I feel there was less
          | groupthink than claimed: everyone saw the board as it is and
          | its inability to lead, or even act rationally. OpenAI did not
          | just become a sinking ship; it was unnecessarily sunk by
          | someone with no skin in the game, while your personal wealth
          | and success were tied to the ship.
        
           | brookst wrote:
           | Yeah, this is like using "groupthink" to describe people
           | fleeing a burning building. There's maybe some measure of
           | literal truth, but it's an odd way to frame it.
        
           | acjohnson55 wrote:
           | How do you know the "wokes" aren't the ones who were grinding
           | for years?
           | 
           | I suspect OpenAI has an old guard that is disproportionately
           | ideological about AI, and a much larger group of people who
           | joined a rocket ship led by the guy who used to run YC.
        
         | seydor wrote:
          | Who would want to work for an irreplaceable CEO long term?
        
           | rvnx wrote:
           | Desperate people who have no choice than to wait for someone
           | to remove their golden handcuffs.
        
         | paulddraper wrote:
         | > Peer pressure and groupthink likely also swayed employees
         | more than principles
         | 
         | What makes this "likely"?
         | 
         | Or is this just pure conjecture?
        
           | mrfox321 wrote:
            | What would you do if 999 employees openly signed a letter and
            | you were the remaining holdout?
        
             | paulddraper wrote:
             | Is your argument that the 1 employee operated on peer
             | pressure, or the other 999?
             | 
              | Could it possibly be that the majority of OpenAI's
              | workforce sincerely believed a midnight firing of the CEO
              | was counterproductive to their organization's goals?
        
               | dymk wrote:
               | It's almost certain that all employees did not behave the
               | same way for the exact same reasons. And I don't see
               | anyone making an argument about what the exact numbers
               | are, nor does it really matter. Just that some portion of
               | employees were swayed by pressure once the letter reached
               | some critical signing mass.
        
               | paulddraper wrote:
               | > some portion
               | 
               | The logic being that if any opinion has above X% support,
               | people are choosing it based on peer pressure.
        
               | mrfox321 wrote:
               | The key is that the support is not anonymous.
        
               | mrfox321 wrote:
               | Doing the math, it is extremely unlikely for a lot of
               | coin flips to skew from the weight of the coin.
               | 
               | To that end, observing unanimous behavior may imply some
               | bias.
               | 
               | Here, it could be people fearing being a part of the
               | minority. The minority are trivially identifiable, since
               | the majority signed their names on a document.
               | 
                | I agree with your stance that a majority of the workforce
               | disagreed with the way things were handled, but that
               | proportion is likely a subset of the proportion who
               | signed their names on the document, for the reasons
               | stated above.
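                | 
                | A rough sketch of that math (the headcount and the 80%
                | base rate below are illustrative assumptions, not real
                | figures):
                | 
                |     # Chance of near-unanimity if each employee decided
                |     # independently, X ~ Binomial(n, p)
                |     from math import comb
                | 
                |     n, p = 770, 0.80  # hypothetical staff count and
                |                       # per-person chance of signing
                |     k = 738           # hypothetical signature count
                | 
                |     tail = sum(comb(n, i) * p**i * (1 - p)**(n - i)
                |                for i in range(k, n + 1))
                |     print(f"P(X >= {k}) = {tail:.3e}")
                |     # Far below 1e-20: near-unanimity is wildly
                |     # unlikely without social pressure or a shared
                |     # common cause.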
        
               | paulddraper wrote:
               | > it is extremely unlikely for a lot of coin flips to
               | skew from the weight of the coin
               | 
               | So clearly this wasn't a 50/50 coin flip.
               | 
               | The question at hand is whether the skew against the
               | board was sincere or insincere.
               | 
               | Personally, I assume that people are acting in good
               | faith, unless I have evidence to the contrary.
        
               | mrfox321 wrote:
               | I'm not saying it's 50/50.
               | 
               | But future signees are influenced by previous signees.
               | 
               | Acting in good faith is different from bias.
        
         | orsenthil wrote:
         | > - Mission-driven employees may still leave for opportunities
         | at places like Anthropic
         | 
          | Which might have oversight from AMZN instead of MSFT?
        
         | sam0x17 wrote:
         | > Peer pressure and groupthink likely also swayed employees
         | more than principles
         | 
         | Chilling to hear the corporate oligarchs completely disregard
         | the feelings of employees and deny most of the legitimacy
         | behind these feelings in such a short and sweeping statement
        
           | DSingularity wrote:
           | Honestly he has a point -- but the bigger point to be made is
           | financial incentives. In this case it matters because of the
           | expressed mission statement of OpenAI.
           | 
           | Let's say there was some non-profit claiming to advance the
           | interests of the world. Let's say it paid very well to hire
           | the most productive people but they were a bunch of
           | psychopaths who by definition couldn't care less about
           | anybody but themselves. Should you care about their opinions?
            | If it were a for-profit company, you could argue that their
            | voices matter. For a non-profit, however, a person's opinion
            | should only matter as far as it is aligned with the non-
            | profit's mission.
        
         | ensocode wrote:
          | Good points. Anyway, I guess nobody will remember the drama in
          | a few months, so I think the damage done is very manageable for
          | OAI.
        
         | jxi wrote:
         | Was this really motivated by AI safety or was it just Helen
         | Toner's personal vendetta against Sam?
         | 
         | It doesn't feel like anything was accomplished besides wasting
         | 700+ people's time, and the only thing that has changed now is
         | Helen Toner and Tasha McCauley are off the board.
        
           | hn_throwaway_99 wrote:
           | As someone who was very critical of _how_ the board acted, I
           | strongly disagree. I felt like this Washington Post article
           | gave a very good, balanced overview. I think it sounds like
           | there were substantive issues that were brewing for a long
           | time, though no doubt personal clashes had a huge impact on
           | how it all went down:
           | 
           | https://www.washingtonpost.com/technology/2023/11/22/sam-
           | alt...
        
           | cbeach wrote:
           | Curious how a relatively unknown academic with links to China
           | [1] attained a board seat on America's hottest and most
           | valuable AI company.
           | 
           | Particularly as she openly expressed that "destroying" that
           | company might be the best outcome. [2]
           | 
           | > During the call, Jason Kwon, OpenAI's chief strategy
           | officer, said the board was endangering the future of the
           | company by pushing out Mr. Altman. This, he said, violated
           | the members' responsibilities. Ms. Toner disagreed. The
           | board's mission was to ensure that the company creates
           | artificial intelligence that "benefits all of humanity," and
           | if the company was destroyed, she said, that could be
           | consistent with its mission.
           | 
           | [1] https://www.chinafile.com/contributors/helen-toner [2]
           | https://www.nytimes.com/2023/11/21/technology/openai-
           | altman-...
        
             | Zpalmtree wrote:
             | Wow, very surprised this is the first I'm hearing of this,
             | seems very suspect
        
             | hn_throwaway_99 wrote:
             | Oh lord, spare me with the "links to China" idiocy. I once
             | ate a fortune cookie, does that mean I have "links to
             | China" too?
             | 
             | Toner got her board seat because she was basically Holden
             | Karnofsky's designated replacement:
             | 
             | > Holden Karnofsky resigns from the Board, citing a
             | potential conflict because his wife, Daniela Amodei, is
             | helping start Anthropic, a major OpenAI competitor, with
             | her brother Dario Amodei. (They all live(d) together.) The
             | exact date of Holden's resignation is unknown; there was no
             | contemporaneous press release.
             | 
             | > Between October and November 2021, Holden was quietly
             | removed from the list of Board Directors on the OpenAI
             | website, and Helen was added (Discussion Source). Given
             | their connection via Open Philanthropy and the fact that
             | Holden's Board seat appeared to be permanent, it seems that
             | Helen was picked by Holden to take his seat.
             | 
             | https://loeber.substack.com/p/a-timeline-of-the-openai-
             | board
        
               | cbeach wrote:
               | Perhaps you're not aware. Living in Beijing is not
               | equivalent to "once eating a fortune cookie"
               | 
               | > it seems that Helen was picked by Holden to take his
               | seat.
               | 
               | So you can only speculate as to how she got the seat.
               | Which is exactly my point. We can only speculate. And
               | it's a question worth asking, because governance of
               | America's most important AI company is a very important
               | topic right now.
        
           | jkaplan wrote:
           | > was it just Helen Toner's personal vendetta against Sam
           | 
           | I'm not defending the board's actions, but if anything, it
           | sounds like it may have been the reverse? [1]
           | 
           | > In the email, Mr. Altman said that he had reprimanded Ms.
           | Toner for the paper and that it was dangerous to the
           | company... "I did not feel we're on the same page on the
           | damage of all this," he wrote in the email. "Any amount of
           | criticism from a board member carries a lot of weight."
           | Senior OpenAI leaders, including Mr. Sutskever... later
           | discussed whether Ms. Toner should be removed
           | 
           | [1] https://www.nytimes.com/2023/11/21/technology/openai-
           | altman-...
        
             | jxi wrote:
             | Right, so getting Sam fired was retaliation for that.
        
         | amalcon wrote:
         | _> - Microsoft strengthened its power despite not appearing
         | involved in the drama_
         | 
         | Depending on what you mean by "the drama", Microsoft was very
         | clearly involved. They don't appear to have been in the loop
          | prior to Altman's firing, but they literally offered jobs to
          | everyone who left in solidarity with Sam. Do we really think
          | things like that were not intended to change people's minds?
        
           | FirmwareBurner wrote:
           | _> but they literally offered jobs to everyone who left in
            | solidarity with Sam_
           | 
           | Offering people jobs is neither illegal nor immoral, no? And
           | wasn't HN also firmly on the side of abolishing non-competes
           | and non-soliciting from employment contracts to facilitate
           | freedom of employment movement and increase industry wages in
           | the process?
           | 
           | Well then, there's your freedom of employment in action. Why
           | be unhappy about it? I don't get it.
        
             | spankalee wrote:
             | > Offering people jobs is neither illegal nor immoral
             | 
             | The comment you responded to made neither of those claims,
             | just that they were "involved".
        
             | notahacker wrote:
             | I'm pretty sure there's a middle ground between _recruiters
             | for Microsoft should be banned from approaching other
              | companies' staff to fill roles_ and _Microsoft should be
              | able to dictate decisions made by other companies' boards
             | by publicly announcing that unless they change track it
             | will attempt to hire every single one of their employees to
             | newly created roles_.
             | 
             | Funnily enough a bit like there's a middle ground between
             | _Microsoft should not be allowed to create browsers or have
             | license agreements_ and _Microsoft should be allowed to
             | dictate bundling decisions made by hardware vendors to
             | control access to the Internet_
             | 
             | It's not freedom of employment when funnily enough those
             | jobs aren't actually available to any AI researchers not
             | working for an organisation Microsoft is trying to control.
        
           | malfist wrote:
            | The GP looks to me like an AI summary, which would fit with
            | the hallucination that Microsoft wasn't involved.
        
             | chankstein38 wrote:
             | That's a good callout. I was reading over it and confused
             | who this person was and why they were summarizing but yeah
             | they might've just told ChatGPT to summarize the events of
             | what happened.
        
           | gcanyon wrote:
           | I'd go further than just saying "they were involved" --- by
           | offering jobs to everyone who wanted to come with Altman,
           | they were effectively offering to acquire OpenAI, which is
           | worth ~$100B, for (checks notes) zero dollars.
        
             | breadwinner wrote:
             | You mean zero _additional_ dollars. They already gave
             | (checks notes) $13 Billion dollars and own half of the
             | company.
        
               | rvnx wrote:
               | + according to the rumors on Bloomberg.com / CNBC:
               | 
               | The investment is refundable and has high priority:
               | Microsoft has a priority to receive 75% of the profit
               | generated until the 10B USD have been paid back
               | 
                | + _(checks notes)_ in addition (!) OpenAI has to spend
                | the money back on Microsoft Cloud Services (where
                | Microsoft takes a cut as well).
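                | 
                | A toy sketch of how that rumored waterfall would work
                | (the terms here are rumors/assumptions, not confirmed
                | deal figures):
                | 
                |     def split_profit(profit, recouped,
                |                      cap=10e9, share=0.75):
                |         # Microsoft takes 75% of each period's profit
                |         # until the rumored $10B is recouped.
                |         owed = max(cap - recouped, 0)
                |         to_msft = min(profit * share, owed)
                |         return to_msft, profit - to_msft, recouped + to_msft
                | 
                |     # e.g. on $1B of profit with nothing recouped yet,
                |     # $750M would go to Microsoft and $250M would stay:
                |     print(split_profit(1e9, 0))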
        
             | gsuuon wrote:
             | How has the valuation of OpenAI increased by $20B since
             | this weekend? I feel like every time I see that number it
             | goes up by $10B.
        
               | tacoooooooo wrote:
               | you're off by a bit, the announcement of Sam returning as
               | CEO actually increased OpenAI valuation to $110B last
               | night
        
               | sebzim4500 wrote:
               | $110B? Where are you getting this valuation of $120B?
        
             | theptip wrote:
              | If the existing packages are worth more than what MSFT pays
              | AI researchers (they are, by a lot) then it's not acquiring
              | OAI for $0. Plausibly it could cost in the $Bs to buy out
              | every single equity holder, at an $80B+ valuation.
             | 
             | Still a good deal, but your accounting is off.
        
         | RationalDino wrote:
         | The one piece of this that I question is the employee
         | motivations.
         | 
         | First, they had offers to walk to both Microsoft and Salesforce
         | and be made whole. They didn't have to stay and fight to keep
         | their money and careers.
         | 
         | But more importantly, put yourself in the shoes of an employee
         | and read
         | https://web.archive.org/web/20231120233119/https://www.busin...
         | for what they apparently heard.
         | 
         | I don't know about anyone else. But if I was being asked to
         | choose sides in a he-said, she-said dispute, the board was
         | publicly hinting at really bad stuff, and THAT was the
         | explanation, I know what side I'd take.
         | 
         | Don't forget, when the news broke, people's assumption from the
         | wording of the board statement was that Sam was doing shady
         | stuff, and there was potential jail time involved. And they
         | justify smearing Sam like that because two board members
         | thought they heard different things from Sam, and he gave what
         | looked like the same project to two people???
         | 
         | There were far better stories that they could have told. Heck,
         | the Internet made up many far better narratives than the board
         | did. But that was the board's ACTUAL story.
         | 
         | Put me on the side of, "I'd have signed that letter, and money
         | would have had nothing to do with it."
        
           | TheGRS wrote:
           | I was thinking the same. The letter symbolized a deep
           | distrust of leadership over the mission and direction of
           | the company. I'm sure financial motivations were involved,
           | but the type of person working at this company can probably
           | get a good paycheck at a lot of places. I think many work at
           | OpenAI for some combination of opportunity, prestige, and
           | altruism, and the weekend probably put all 3 into question.
        
         | neonbjb wrote:
         | As an employee of OpenAI: fuck you and your condescending
         | conclusions about my peers and my motivations.
        
           | jprete wrote:
           | I'm curious about your perceptions of the (median)
           | motivations of OpenAI employees - although of course I
           | understand if you don't feel free to say anything.
        
           | alextheparrot wrote:
           | Users here often get the narrative and motivations deeply
           | wrong, I wouldn't take it too personally (Speaking as a peer)
        
           | iamflimflam1 wrote:
           | "condescending conclusions" - ask anyone outside of tech how
           | they feel when we talk to them...
        
         | windowshopping wrote:
         | This comment bugs me because it reads like a summary of an
         | article, but it's just your opinions without any explanations
         | to justify them.
        
         | scooke wrote:
         | Many are still going to use this; few will bother to ponder and
         | break the event down like this.
        
       | account-5 wrote:
       | Farce, plain and simple.
        
       | orsenthil wrote:
       | What's even the lesson learnt here?
       | 
       | 1. Keep doing your work, and focus on building your product.
       | 2. Ignore the noise, go back to 1.
        
       | rennsport_eth wrote:
       | I love you, but you are not serious people.
        
       | thepasswordis wrote:
       | Yeah I don't know. I think you'd be kind of nuts to build
       | anything on their APIs anymore.
       | 
       | Sure I'll keep using ChatGPT in a personal capacity/as search.
       | But no way I'd trust my business to them.
        
         | campbel wrote:
         | Working out nicely for MSFT then. You can use GPT-4 via Azure
         | already.
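         | 
         | A minimal sketch of what that looks like against the Azure
         | OpenAI REST endpoint, assuming Python with requests; the
         | resource name, deployment name, and key below are
         | placeholders for values from your own Azure subscription:
         | 
         |   import requests
         | 
         |   resource, deployment = "my-resource", "my-gpt4"  # placeholders
         |   url = (f"https://{resource}.openai.azure.com/openai/deployments/"
         |          f"{deployment}/chat/completions?api-version=2023-05-15")
         |   resp = requests.post(
         |       url,
         |       headers={"api-key": "YOUR_AZURE_OPENAI_KEY"},  # placeholder
         |       json={"messages": [{"role": "user", "content": "Hello"}]},
         |   )
         |   print(resp.json()["choices"][0]["message"]["content"])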
        
       | kibwen wrote:
       | This will be remembered as the biggest waste of time and energy
       | since the LK-99 fiasco.
        
       | evan_ wrote:
       | What a waste of time
        
       | bmitc wrote:
       | What a gigantic mess. Everyone looks bad in this: Altman,
       | Microsoft, the OpenAI board, OpenAI employees, etc.
       | 
       | It also has confirmed that greed and cult of personality win in
       | the end.
        
       | voiceblue wrote:
       | For some reason this reminds me of the Coke/New Coke fiasco,
       | which ended up popularizing Coke Classic more than ever before.
       | 
       | > Consumers were outraged and demanded their beloved Coke back -
       | the taste that they knew and had grown up with. The request to
       | bring the old product back was so loud that soon journalists
       | suggested that the entire project was a stunt. To this accusation
       | Coca-Cola President Don Keough replied on July 10, 1985:
       | "We are not that dumb, and we are not that smart."
       | 
       | https://en.wikipedia.org/wiki/New_Coke
        
         | freedomben wrote:
         | That is one of the greatest lines of all time. Classic
        
         | jdlyga wrote:
         | I tried New Coke when it was re-released for Stranger Things.
         | It really is a lot better than Coca-Cola Classic. It's a shame
         | that it failed.
        
         | tacocataco wrote:
         | Thanks for sharing.
         | 
         | I would have guessed the stunt was to hide the switch from
         | sugar to High Fructose Corn syrup.
        
       | beepbooptheory wrote:
       | When the first CEO appeared on the earth he got tied to a cliff
       | so the birds could eat him. It seems like that was a good call.
        
       | incahoots wrote:
       | Cue the "it's a Christmas Miracle!"
        
       | diamondfist25 wrote:
       | Adam D'Angelo keeping everyone straight on the mission of OpenAI.
       | What a true boss in the face of the woke mob.
        
       | iamleppert wrote:
       | So dangerous on so many levels. Just let him start his own AI
       | group, competition is good!
       | 
       | Instead he will come away from this untouchable. He'll get to
       | stack the board like he wanted to. Part of being on a board of
       | directors is sticking to your decisions. They are weak and
       | weren't prepared for the backlash of one person.
        
       | melvinmelih wrote:
       | "You could parachute Sam into an island full of cannibals and
       | come back in 5 years and he'd be the king." - Paul Graham
        
         | rsanek wrote:
         | http://paulgraham.com/fundraising.html
        
       | gsuuon wrote:
       | At least they'll be operating under the original charter - it
       | sounds like the mission continues. Not sure about this new board
       | but hard to imagine they'd make the same sort of mistake.
        
       | archsurface wrote:
       | I'm not American - I'm unclear on what all this fuss is about. From
       | where I am it looks like some arbitrary company politics in a
       | hyped industry with a guy whose name I've seen mentioned on this
       | site occasionally but really comes across as just a SV or San
       | Fran cult of personality type. Am I missing something? Is there
       | some substance to this story or is it just this week's industry
       | soap opera?
        
       | jcutrell wrote:
       | I wonder what Satya will say here; will the AI CEO position there
       | just evaporate?
        
       | carapace wrote:
       | So it's the Osiris myth?
        
       | hackerlight wrote:
       | So the OpenAI charter is still in place? Once OpenAI reaches AGI,
       | Microsoft won't be able to access the tech. Then what will happen
       | to Microsoft when other commercial competitors catch up and also
       | reach AGI one or two years later?
        
       | taway1874 wrote:
       | Some perspective ...
       | 
       | One developer (Ilya) vs. One businessman (Sam) -> Sam wins
       | 
       | Hundreds of developers threaten to quit vs. Board of Directors
       | (biz) refuse to budge -> Developers win
       | 
       | From the outside it looks like developers held the power all
       | along ... which is how it should be.
        
         | rexarex wrote:
         | Money won.
        
         | jessenaser wrote:
         | Yes, 95% agreement in any company is unprecedented but:
         | 
         | 1. They can get equivalent position and pay at the new
         | Microsoft startup during that time, so their jobs are not at
         | risk.
         | 
         | 2. Sam approved each hire in the first place.
         | 
         | 3. OpenAI is selecting for the type of people who want to work
         | at a non-profit with a goal in mind instead of another company
         | that could offer higher compensation. Mission driven vs profit
         | driven.
         | 
         | However they got to the conclusion of banding together to
         | quit, it was a good idea, and it worked. And it is
         | a check on power for a bad board of directors, when otherwise a
         | board of directors cannot be challenged. "OpenAI is nothing
         | without its people".
        
           | andersa wrote:
           | > OpenAI is selecting for the type of people who want to work
           | at a non-profit with a goal in mind instead of another
           | company that could offer higher compensation. Mission driven
           | vs profit driven.
           | 
           | Maybe that was the case at some point, but clearly not
           | anymore since the release of ChatGPT. Or did you not see
           | them offer completely absurd compensation packages, e.g. to
           | engineers leaving Google?
           | 
           | I'd bet more than half the people are just there for the
           | money.
        
           | brrrrrm wrote:
           | > 1. They can get equivalent position and pay at the new
           | Microsoft startup during that time, so their jobs are not at
           | risk.
           | 
           | citation?
        
             | davio wrote:
             | https://x.com/kevin_scott/status/1726971608706031670?s=20
        
         | philipwhiuk wrote:
         | Are you sure Ilya was the root of this?
         | 
         | He backed it and then signed the pledge to quit if it wasn't
         | undone.
         | 
         | What's the evidence he was behind it and not D'Angelo?
        
           | dr_dshiv wrote:
           | If we only look at the outcomes (dismantling of the board),
           | Microsoft and Sam seem to have the most motive.
        
           | __loam wrote:
           | I'm not sure I buy the idea that Ilya was just some hapless
           | researcher who got unwillingly pulled into this. Any one of
           | the board could have voted not to remove Sam and stop the
           | board coup, including Ilya. I'd bet he only got cold feet
           | after the story became international news and after most of
           | the company threatened to resign because their bag was in
           | jeopardy.
        
             | Xelynega wrote:
             | That's a strange framing. In that scenario would it not be
             | that he made the decision he thought was right and aligned
             | with OpenAI's mission initially, then when seeing the public
             | support Sam had he decided to backtrack so he had a future
             | career?
        
           | jiveturkey wrote:
           | wake up people! (said rhetorically, not accusatory or any
           | other way)
           | 
           | This is Altman's playbook. He did a similar ousting at
           | Reddit. This was planned all along to overturn the board.
           | Ilya was in on it.
           | 
           | I'm not normally a conspiracy theorist. But fool me ... you
           | can't be fooled again. As they say in Tennessee
        
             | bugglebeetle wrote:
             | What's the backstory on Reddit?
        
               | occamsrazorwit wrote:
               | Yishan (former Reddit CEO) describes how Altman
               | orchestrated the removal of Reddit's owner:
               | https://www.reddit.com/r/AskReddit/comments/3cs78i/whats_the...
               | 
               | Note that the response is Altman's, and he seems to
               | support it.
               | 
               | As additional context, Paul Graham has said a number of
               | times that Altman is one of the most power-hungry and
               | successful people he knows (as praise). Paul Graham, who's
               | met hundreds if not thousands of experienced leaders in
               | tech, says this.
        
             | bossyTeacher wrote:
             | What happened at Reddit?
        
         | adverbly wrote:
         | There are three dragons:
         | 
         | Employees, customers, government.
         | 
         | If motivated and aligned, any of these three could end you if
         | they want to.
         | 
         | Do not wake the dragons.
        
           | pdntspa wrote:
           | The Board is another one, if you're CEO.
        
             | elliotec wrote:
             | I think the parent comment's point is that the board is not
             | one, since the board was defeated (by the employee dragon).
        
               | pdntspa wrote:
               | I think the analogy is kind of shaky. The board tried to
               | end the CEO, but employees fought them and won.
               | 
               | I've been in companies where the board won, and they
               | installed a stoolie that proceeded to drive the company
               | into the ground. Anybody who stood up to that got fired
               | too.
        
               | davesque wrote:
               | I have an intuition that OpenAI's mid-range size gave the
               | employees more power in this case. It's not as hard to
               | coordinate a few hundred people, especially when those
               | people are on top of the world and want to stay there. At
               | a megacorp with thousands of employees, the board
               | probably has an easier time bossing people around.
               | Although I don't know if you had a larger company in mind
               | when you gave your second example.
        
               | pdntspa wrote:
               | No, I'm thinking a smaller company, like 50 people, $20m
               | ARR. Engineering-focused, but not tech
        
             | adverbly wrote:
             | My comment was more of a reflection of the fact that you
             | might have multiple different governance structures to your
             | organization. Sometimes investors are at the top. Sometimes
             | it's a private owner. Sometimes there are separate kinds of
             | shares for voting on different things. Sometimes it's a
             | board. So you're right: depending on the governance
             | structure you can have additional dragons. But you can
             | never prevent any of these three from being a dragon. They
             | will always be dragons, and you must never wake them up.
        
           | bossyTeacher wrote:
           | Or tame the dragons. AFAIK Sam hired the employees. Hence
           | they are loyal to him
        
         | sokoloff wrote:
         | Is your first "-> Sam wins" different than what you intended?
        
         | hsavit1 wrote:
         | Seems like the union of developers is stronger than the company
         | itself. Hence why unions are so frowned upon by big tech
         | corporate leadership.
        
           | JacobThreeThree wrote:
           | And yet, this union was threatening to move to a company
           | without unions.
        
         | jejeyyy77 wrote:
         | $$$ vs. Safety -> $$$ wins.
         | 
         | Employees who have $$$ incentive threaten to quit if that is
         | taken away. News at 8.
        
           | baby wrote:
           | Why are you assuming employees are incentivized by $$$ here,
           | and why do you think the board's reason is related to safety
           | or that employees don't care about safety? It just looks like
           | you're spreading FUD at this point.
        
             | jejeyyy77 wrote:
             | of course the employees are motivated by $$$ - is that even
             | a question?
        
               | Xelynega wrote:
               | No, it's just counter to the idea that it was "employee
               | power" that brought sam back.
               | 
               | It was capital and the pursuit of more of it.
               | 
               | It always is.
        
             | hackerlight wrote:
             | The large majority of people are motivated by $$$ (or fame)
             | and if they all tell me otherwise I know many of them are
             | lying.
        
             | mi_lk wrote:
             | It's you who are naive if you really think the majority of
             | those 7xx employees care more about safe AGI than their own
             | equity upside
        
               | nh23423fefe wrote:
               | Why would anyone care about safe AGI? It's vaporware.
        
               | mecsred wrote:
               | Everything is vaporware until it gets made. If you wait
               | until a new technology definitively exists to start
               | caring about safety, you have guaranteed it will be
               | unsafe.
               | 
               | Lucky for us this fiasco has nothing to do with AGI
               | safety, only AI technology. Which only affects automated
               | decision making in technology that's entrenched in every
               | facet of our lives. So we're all safe here!
        
               | superturkey650 wrote:
               | > If you wait until a new technology definitively exists
               | to start caring about safety, you have guaranteed it will
               | be unsafe.
               | 
               | I don't get this perspective. The first planes, cars,
               | computers, etc. weren't initially made with safety in
               | mind. They were all regulated after the fact and
               | successfully made safer.
               | 
               | How can you even design safety into something if it
               | doesn't exist yet? You'd have ended up with a plane where
               | everyone sat on the wings with a parachute strapped on if
               | you designed them with safety first instead of letting
               | them evolve naturally and regulating the resulting
               | designs.
        
               | FartyMcFarter wrote:
               | The difference between unsafe AGI and an unsafe plane or
               | car is that the plane/car are not existential risks.
        
               | optymizer wrote:
               | How is it an 'existential risk'? Its body of knowledge is
               | publicly available, no?
        
               | FartyMcFarter wrote:
               | What do you mean by "its"? There isn't any AGI yet.
               | ChatGPT is far from that level.
        
               | bcrosby95 wrote:
               | The US government got involved in regulating airplanes
               | long before there were any widely available commercial
               | offerings:
               | 
               | https://en.wikipedia.org/wiki/United_States_government_role_...
               | 
               | If you're trying to draw a parallel here then safety and
               | the federal government need to catch up. There are already
               | commercial offerings that any random internet user can
               | use.
        
               | superturkey650 wrote:
               | I agree, and I am not saying that AI should be
               | unregulated. At the point the government started
               | regulating flight, the concept of an airplane had existed
               | for decades. My point is that until something actually
               | exists, you don't know what regulations should be in
               | place.
               | 
               | There should be regulations on existing products (and
               | similar products released later) as they exist and you
               | know what you're applying regulations to.
        
               | mecsred wrote:
               | I understand where you're coming from and I think that's
               | reasonable in general. My perspective would be: you can
               | definitely iterate on the technology to come up with
               | safer versions. But with this strategy you have to make
               | an unsafe version first. If you got in one of the first
               | airplanes ever made, the likelihood of crashing was pretty
               | high.
               | 
               | At some point, our try-it-until-it-works approach will
               | bite us. Consider the calculations done to determine if
               | fission bombs would ignite the atmosphere. You don't want
               | to test that one and find out. As our technology improves
               | exponentially we're going to run into that situation more
               | and more frequently. Regardless if you think it's AGI or
               | something else, we will eventually run into some
               | technology where one mistake is a cataclysm. How many
               | nuclear close calls have we already experienced?
        
               | jononor wrote:
               | The principles, best practices and tools of safety
               | engineering can be applied to new projects. We have
               | decades of experience now. Not saying it will be perfect
               | on the first try, or that we know everything that is
               | needed. But the novel aspects of AI are not an excuse to
               | not try.
        
               | stillwithit wrote:
               | Exactly what an OpenAI developer would understand. All
               | the more reason to ride the grift that brought them this
               | far
        
               | concordDance wrote:
               | Uh, I reckon many do. Money is easy to come by for that
               | type of person and avoiding killing everyone matters to
               | them.
        
             | DirkH wrote:
             | Assuming employees are not incentivized by $$$ here seems
             | extraordinary; it needs a pretty robust argument to show
             | money isn't playing a major factor when there is this much
             | of it involved.
        
         | dylan604 wrote:
         | It's not like this is the first:
         | 
         | One developer (Woz) vs One businessman (Jobs) -> Jobs wins
        
         | zerohalo wrote:
         | more like $$ wins.
         | 
         | It's clear most employees didn't care much about OpenAI's
         | mission -- and I don't blame them since they were hired by the
         | __for-profit__ OpenAI company and therefore aligned with
         | __its__ goals and rewarded with equity.
         | 
         | In my view the board did the right thing to stand by OpenAI's
         | original mission -- which now clearly means nothing. Too bad
         | they lost out.
         | 
         | One might say the mission was pointless since Google, Meta,
         | MSFT would develop it anyway. That's really a convenient
         | argument that has been used in arms races (if we don't build
         | lots of nuclear weapons, others will build lots of nuclear
         | weapons) and leads to ... well, where we are today :(
        
           | joewferrara wrote:
           | Where we are today is a world where people do not generally
           | worry about nuclear bombs being dropped. So seems like a
           | pretty good outcome in that example.
        
             | Xelynega wrote:
             | The nuclear arms race led to the Cold War, not a "good
             | outcome" IMO. It wasn't until nations started imposing
             | those regulations that we got to the point we're at today
             | with nuclear weapons.
        
         | Quentincestino wrote:
         | OpenAI developers are redefining the state of the art in AI
         | every 6 months; if the company loses them, it might as well go
         | bankrupt.
        
         | m00x wrote:
         | Ilya signed the letter saying he would resign if Sam wasn't
         | brought back. Looks like he regretted his decision and
         | ultimately got played by the 2 departing board members.
         | 
         | Ilya is also not a developer, he's a founder of OpenAI and was
         | the CSO.
        
         | awb wrote:
         | It's a cost / benefit analysis.
         | 
         | If people are easily replaceable then they don't hold nearly as
         | much power, even en masse.
        
         | nikcub wrote:
         | The employees rapidly and effectively formed a quasi-union to
         | grant themselves a very powerful seat at the table.
        
       | dang wrote:
       | All: there are over 1800 comments in this thread. If you want to
       | read them all, click More at the bottom of each page, or like
       | this (edit: er, yes, they do have to be well-formed, don't they):
       | 
       | https://news.ycombinator.com/item?id=38375239&p=2
       | 
       | https://news.ycombinator.com/item?id=38375239&p=3
       | 
       | https://news.ycombinator.com/item?id=38375239&p=4 (...etc.)
        
       | macrael wrote:
       | What a delightful shit show. I don't even personally care whether
       | Sam Altman is running OpenAI but it brings me no end of
       | schadenfreude to see a bunch of AI Doomers make asses of
       | themselves. Effective Altruism truly believes that AI could
       | destroy all of human life on the planet, which is a preposterous
       | belief.
       | There are so many better things to worry about, many of which are
       | happening right now! These people are not serious and should not
       | hold serious positions of power. It's not hard to see the dangers
       | of AI: replacing a lot of make-work that exists in the world,
       | giving shoddy answers with high confidence, taking humans out of
       | the loop of responsible decision making, but I cannot believe
       | that it will become so smart that it becomes an all powerful god.
       | These people worship intelligence (hence why they believe that
       | with infinite intelligence comes infinite power) but look what
       | happens when they actually have power! Ridiculous.
        
       | rashidae wrote:
       | Could someone do a sentiment analysis of the comments and share
       | it with all of us who can't read all 1,700+ comments?
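       | 
       | For anyone tempted: a minimal sketch, assuming Python with
       | requests and NLTK's VADER lexicon, that pulls the whole
       | thread from the public Algolia HN API (the item id comes
       | from dang's links above) and averages the compound
       | sentiment scores:
       | 
       |   import re
       |   import requests
       |   import nltk
       |   from nltk.sentiment.vader import SentimentIntensityAnalyzer
       | 
       |   nltk.download("vader_lexicon", quiet=True)
       | 
       |   def walk(item):
       |       # Recursively yield the text of every comment in the tree.
       |       if item.get("text"):
       |           yield re.sub(r"<[^>]+>", " ", item["text"])  # strip HTML
       |       for child in item.get("children", []):
       |           yield from walk(child)
       | 
       |   thread = requests.get(
       |       "https://hn.algolia.com/api/v1/items/38375239").json()
       |   sia = SentimentIntensityAnalyzer()
       |   scores = [sia.polarity_scores(t)["compound"] for t in walk(thread)]
       |   print(f"{len(scores)} comments, "
       |         f"mean sentiment {sum(scores) / len(scores):+.3f}")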
        
       | Ruq wrote:
       | that fast huh?
        
       | daveguy wrote:
       | Hi dang,
       | 
       | Seeing a bug in your comment here:
       | 
       | https://news.ycombinator.com/item?id=38382563
       | 
       | You reference the pages like this:
       | 
       | https://news.ycombinator.com/item?id=38375239?p=2
       | 
       | The second ? should be an & like this:
       | 
       | https://news.ycombinator.com/item?id=38375239&p=2
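       | 
       | For reference, the general rule: the first query parameter
       | follows a "?" and each subsequent one is joined with "&". A
       | minimal sketch, using only Python's standard library, that
       | builds a well-formed page link:
       | 
       |   from urllib.parse import urlencode
       | 
       |   params = {"id": 38375239, "p": 2}  # item id and page number
       |   url = "https://news.ycombinator.com/item?" + urlencode(params)
       |   print(url)  # https://news.ycombinator.com/item?id=38375239&p=2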
       | 
       | Please feel free to delete this message after you've received it.
        
         | paulddraper wrote:
         | Also, while we're at it:
         | 
         | "Nobody will be happier than I when this bottleneck (edit: the
         | one in our code--not the world) is a thing of the past" [1]
         | 
         | HN plans to be multi-core?!?! A bigger scoop than OpenAI
         | governance!
         | 
         | Anything more you can share?
         | 
         | [1] https://news.ycombinator.com/item?id=38351005
        
         | saliagato wrote:
         | Why do you (dang) always write a comment specifying that people
         | can read more, and even provide links, when it's clear that
         | when you reach the bottom of the page you have to click "More"
         | to indeed read more? Isn't it a bit useless?
        
           | bartread wrote:
           | Because people don't, that's why.
        
           | pvg wrote:
           | _when it's clear_
           | 
           | It isn't that clear. People missing ui elements they have to
           | scroll to is one of the most common ways of missing ui
           | elements.
        
         | pvg wrote:
         | If you want to reach the mods just email hn@ycombinator.com
        
           | daveguy wrote:
           | Thank you for the advice. I will do that in the future.
        
       | zerohalo wrote:
       | Google, Meta and now OpenAI. So long, responsible AI and safety
       | guardrails. Hello, big money.
       | 
       | Disappointed by the outcome, but perhaps mission-driven AI
       | development -- the reason OpenAI was founded -- was never
       | possible.
       | 
       | Edit: I applaud the board members for (apparently, it seems)
       | trying to stand up for the mission (aka doing the job that they
       | were put on the board to do), even if their efforts were doomed.
        
         | risho wrote:
         | you just don't understand how markets work. if openai slows
         | down then they will just be driven out by competition. that's
         | fine if that's what you think they should do, but that won't
         | make ai any safer, it will just kill openai and have them
         | replaced by someone else.
        
           | zerohalo wrote:
           | you're right about market forces, however:
           | 
           | 1) OpenAI was explicitly founded to NOT develop AI based on
           | "market forces"; it's just that they "pivoted" (aka abandoned
           | their mission) once they struck gold in order to become
           | driven by the market
           | 
           | 2) this is exactly the reasoning behind nuclear arms races
        
           | WanderPanda wrote:
           | You can still be a force for decentralization by creating
           | actually open ai. For now it seems like Meta AI research is
           | the real open ai
        
             | insanitybit wrote:
             | What does "actually open" mean? And how is that more
             | responsible? If the ethical concern of AI is that it's too
             | powerful or whatever, isn't building it in the open
             | _worse_?
        
               | WanderPanda wrote:
               | Depends on how you interpret the mission statement of
               | building AI for all of humanity. It's questionable
               | whether humanity is better off if AI only accrues to one
               | or a few centralised entities.
        
         | paulddraper wrote:
         | > I applaud the board members for (apparently, it seems) trying
         | to stand up for the mission
         | 
         | What about this is apparent to you?
         | 
         | What statement has the board made on how they fired Altman "for
         | the mission"?
         | 
         | Have I missed something?
        
           | alsetmusic wrote:
           | To me, commentary online and on podcasts universally leans on
           | the idea that he appears to be very focused on money (from
           | the outside) in seeming contradiction to the company charter:
           | 
           | > Our primary fiduciary duty is to humanity.
           | 
           | Also, the charter's language waters down a stronger
           | commitment that was in the first version. Others have quoted
           | it, and I'm sure you can find it on the Internet Archive.
        
             | paulddraper wrote:
             | > commentary online and on podcasts
             | 
             | :/
        
       | nbzso wrote:
       | Stop dreaming about alignment. All bets are off. This is the
       | start of AI arms race. Think globally for a second. Yes,
       | everybody wants to be a millionaire or billionaire. This is the
       | current culture we are living in. Corporations have unprecedented
       | power woven into governments, but governments still have a
       | monopoly on violence. People cannot switch to a new abstraction
       | layer (UBI, social rating) within two or five years; they will
       | keep a consumer-oriented mindset until the option to have one is
       | erased. Where do you think this is going? To a better democracy?
       | This is the Cold War V.2 scenario unfolding.
        
       | jacquesm wrote:
       | 49% stock (lower bound) + 90% of employees (upper bound) > board.
       | 
       | To be updated as more evidence rolls in.
        
       | jrflowers wrote:
       | This here is what we call a load-bearing "in principle"
        
       | xyst wrote:
       | OpenAI board f'd around and found out the consequences of their
       | poor decisions. The decision to backpedal from their previous
       | position just shows the level of disconnect between the two
       | entities.
       | 
       | If I were an investor, I would be scared.
        
       | zerohalo wrote:
       | I think we now have an idea of what will happen if AGI is
       | actually reached and efforts are made to contain or restrain it.
        
       | nbzso wrote:
       | Larry Summers? Microsoft? Alignment? Bye, bye humanity.
        
       | joduplessis wrote:
       | There is so much vagueness around this whole OpenAI thing that
       | it's difficult taking anything seriously anymore - it's almost
       | hearsay at this point. Yesterday it was Altman's personal
       | interests, now it's a breakthrough model, tomorrow it's something
       | else. At the very least it's fantastic marketing (albeit at the
       | expense of their customers).
        
       | Obscurity4340 wrote:
       | Is this the most famous post?
        
       | carapace wrote:
       | One of the oldest AGI jokes:
       | 
       | Q: What's AGI?
       | 
       | A: When the machine wakes up and asks, "What's in it for me?"
       | 
       | - - - -
       | 
       | So long, and thanks for all the fish.
        
       | toasted-subs wrote:
       | Seems so strange all of this happened
        
       ___________________________________________________________________
       (page generated 2023-11-23 23:01 UTC)