[HN Gopher] OpenAI's board has fired Sam Altman
       ___________________________________________________________________
        
       OpenAI's board has fired Sam Altman
        
       Author : davidbarker
       Score  : 2528 points
       Date   : 2023-11-17 20:28 UTC (2 hours ago)
        
 (HTM) web link (openai.com)
 (TXT) w3m dump (openai.com)
        
       | minimaxir wrote:
       | Saying this is sudden would be an understatement.
       | 
       | Sam Altman spoke at an APEC panel on behalf of OpenAI literally
       | yesterday:
       | https://twitter.com/LondonBreed/status/1725318771454456208
        
         | moshun wrote:
         | It's hard to imagine a more last minute move on the boards part
         | here. Been in tech exec leadership for a long time and this
         | feels like they're accusing him of cannibalism (in corporate PR
         | speak). No way this didn't get decided on in the middle of last
         | night. Whatever he did is big and dangerous, or they're trying
         | to pin it on him.
        
           | lvl102 wrote:
           | Agreed. It had to have been something disastrous. No way Sam
           | walked away from OpenAI when the AI revolution is just
           | starting.
        
             | faitswulff wrote:
             | Altman's sister's allegations seemed pretty disastrous.
        
               | red-iron-pine wrote:
               | and what pray-tell are those?
        
               | otteromkram wrote:
               | DuckDuckGo is a great internet search tool if you don't
               | want to muddy up your Google history (which is very
               | understandable).
        
               | nkurz wrote:
               | https://news.ycombinator.com/item?id=37785072
        
               | xkqd wrote:
               | In all my time here, I have never seen this "Sorry." page
               | before.
               | 
               | Does anyone know what that's about?
        
               | Sebb767 wrote:
               | See the pinned comment:
               | https://news.ycombinator.com/item?id=38310213
        
               | dragonwriter wrote:
               | Its a about the CEO of the leading firm in the area of
               | tech most at the center of technical and political
               | controversy and interest right now being forced out by
               | their board, when that CEO had, even before taking on
               | that role, particular high salience among the HN audience
               | as, among other things, the former head of YC, and the
               | resulting (I am assuming from the oerformance and dangs
               | description) state of near-meltdown of HNs servers.
        
               | nightpool wrote:
               | The "Sorry" page is a standard HN error message that
               | shows up when the server is under high load, it has
               | nothing to do with this link specifically
        
               | jug wrote:
               | https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-
               | altman...
               | 
               | Sexual abuse by Sam when she was four years old and he
               | 13.
               | 
               | Develops PCOS (which has seen some association with child
               | abuse) and childhood OCD and depression. Thrown out.
               | Begins working as sex worker for survival. It's a real
               | grim story.
        
               | resolutebat wrote:
               | > _" {I experienced} Shadowbanning across all platforms
               | except onlyfans and pornhub. Also had 6 months of hacking
               | into almost all my accounts and wifi when I first started
               | the podcast"_
               | 
               | So either sama is hacking "into her wifi" (?), hacking
               | into her accounts, and pulling strings at unrelated
               | companies to get her shadowbanned from Facebook,
               | Instagram, YouTube etc (is that even a thing?)... or
               | Occam's Razor applies and he didn't.
        
               | SXX wrote:
               | Shadowbanning certainly exists on all social platforms.
               | Light version of it is how Facebook sells ad services -
               | no one following your page sees content unless you pay.
        
               | huytersd wrote:
               | Only if it's true. His sister could be a pos that just
               | wants some of his money.
        
               | Manuel_D wrote:
               | These allegations date back to 2021. If they were
               | credible, I think the board wouldn't have waited two
               | years to take action.
        
               | Probiotic6081 wrote:
               | Sam Altman's sister says he sexually abused her when she
               | was 4
               | 
               | https://twitter.com/phuckfilosophy/status/163570439893983
               | 232...
        
               | dragonwriter wrote:
               | > Sam Altman's sister says he sexually abused her when
               | she was 4
               | 
               | ... and he was 13. Which, yes, is a very bad thing, but
               | unless the company investigated that claim (e.g., to
               | assess potential PR fallout) and there was some
               | significant deception by Altman against the board in the
               | context of that investigation, its not something that
               | would get him fired with the explanation OpenAI has
               | provided.
               | 
               | (OTOH, the accusation and its potential PR impact could
               | be a factor that weighed into how the board handled an
               | unrelated problem with Altman--it certainly isn't helpful
               | to him.)
        
               | mardifoufs wrote:
               | I... don't agree at all? Actually I can't imagine a
               | single board who would keep a CEO if credible allegations
               | of raping his own sister were going around. It's not just
               | an age issue (which is still a huge wtf, 13yo is old
               | enough to know about right and wrong in the context of
               | his own sister), it's also the incest part.
               | 
               | I'm not saying this happened or it didn't. But just that
               | it could absolutely be more than enough to fire anyone.
        
               | dragonwriter wrote:
               | The "with the explanation OpenAI has provided" in GP was
               | substantive, not decorative.
               | 
               | I don't disagree that the accusation alone (especially if
               | it stood up to modest scrutiny, and looked to be ongoing
               | PR issue, even if not well substantiated enough to have
               | confidence that it was likely to be _true_ ) might be
               | sufficient for firing; CEOs are the public and and
               | internal face of the firm, and so PR or employee safety
               | concerns that attach to them are important to the firm.
               | But it wouldn't be for lack of candor with the board
               | _unless_ there was something for which the board had a
               | very strong reason to believe Altman was dishonest in a
               | significant way.
               | 
               | They could easily fire him with the lack of confidence
               | language without the lack of candor language.
        
               | paul7986 wrote:
               | No idea if what she says is true ... what's their
               | relationship like since forever ... others who knew them
               | could tell us. She says he ruined her financially ... how
               | so ... he's a multi-millionaire. How did he ruin her
               | financially that's suspect right there!
        
               | dragonwriter wrote:
               | But not, _in and of themselves_ something likely to get
               | Altman dismissed for lack of candor with the board.
        
               | BobaFloutist wrote:
               | Also doesn't seem to be a particularly recent
               | development.
        
             | panarky wrote:
             | Deceiving the board about ...
             | 
             | Its investigation of misconduct?
             | 
             | Sources and rights to training data?
             | 
             | That the AGI escaped containment?
        
               | 015a wrote:
               | Unhinged fringe take: They've already developed sparks of
               | consciousness strong enough to create isolated, internal
               | ethical concerns, but Sam suppressed those reports to
               | push the product forward.
        
           | baby wrote:
           | I feel like he's been acting a bit strange for a while.
           | During interviews he often mentions the dangers of AI and how
           | he's not thr best spokeperson for AI. It seemed very counter
           | productive/self sabotaging to me.
        
             | pk-protect-ai wrote:
             | Nope. His line was to limit others by saying "regulate us,"
             | which he has successfully achieved. That's a win for him
             | and a loss for the industry. Unfortunately, this is not the
             | last of him we will hear. He will be the one who shapes our
             | dystopian future.
        
               | paul7986 wrote:
               | Possibly as a similar thing happened with Steve Jobs.
               | Though Maybe it's all been set up and faked ;-) Steve
               | Job's story is a great one.
        
             | Exoristos wrote:
             | Uh, no. That's exactly the sort of thing you should say to
             | hype up AI.
        
           | DebtDeflation wrote:
           | >last minute move
           | 
           | Also, they did it around 3:30 Eastern, 30 minutes before the
           | closing bell (Microsoft is majority owner). It was so urgent
           | they couldn't wait until after the market closed.
        
             | adastra22 wrote:
             | Microsoft is the largest owner, not majority.
        
             | jasonmperry wrote:
             | I wondered about the timing. Microsoft's stock took a swan
             | dive. I can't imagine they're happy regardless of what they
             | say to the press.
        
             | 015a wrote:
             | Microsoft is a minority owner (49%) of the capped-profit
             | OpenAI subsidiary.
             | 
             | The OpenAI board has no responsibility to consider
             | Microsoft's wants. I'd accept the argument that, their
             | decision to not wait until after 4pm was a _slight_ against
             | Microsoft, for the reason you outline; but I 'm not sure if
             | urgency plays into it.
        
             | mjwhansen wrote:
             | This is one of the most insightful comments in this entire
             | thread. Public companies never drop news during the trading
             | day, and Microsoft surely would have been notified in
             | advance if they planned to fire him, and had some say in
             | the timing of the release. Whatever it is, it is so serious
             | that Microsoft would break that coda.
        
         | YetAnotherNick wrote:
         | Here is the video of him talking at yesterday's summit.:
         | https://www.youtube.com/watch?v=ZFFvqRemDv8
         | 
         | It doesn't looks like he has a hint about this:
         | 
         | > I am super excited. I can't imagine anything more exciting to
         | work on.
        
           | ChatGTP wrote:
           | He has basically trained himself to say this though. It'
           | basically all he says consistently.
           | 
           | He is probably in shock.
        
             | YetAnotherNick wrote:
             | Possible. But then again if knew he was getting fired, why
             | even do that.
        
               | ChatGTP wrote:
               | He seems like the kind of guy who is already thinking
               | about what he might do next. Trying to keep a positive
               | spin on it.
        
           | RivieraKid wrote:
           | He seems a bit more nervous and absent-minded than usual. But
           | it's very possible that I'm just imagining things.
        
           | TheEzEzz wrote:
           | I watched this yesterday and got the feeling something big
           | was happening. At one point he says "This is actually a very
           | inconvenient time for me [to be here]." At the end of the
           | session when they're wrapping up, he begins to stand up to
           | leave the instant that the moderator starts wrapping up.
           | 
           | Anyway, I suppose we're reading tea leaves and engaging in
           | palace intrigue. Back to building.
        
         | kristopolous wrote:
         | These things can also happen for unrelated reasons. Things like
         | say, getting drunk and molesting an intern or tossing out
         | racial slurs at say, some important person at a giant japanese
         | company, you know, just being an asshole. Especially if it
         | happened more than once.
         | 
         | I don't know the guy but nothing can really be assumed about
         | this.
        
           | wilg wrote:
           | What? Are you implying this happened here? Or just being
           | weird?
        
             | talldatethrow wrote:
             | https://twitter.com/phuckfilosophy/status/16357043989398323
             | 2...
        
               | wilg wrote:
               | How is that relevant to the specific things the person I
               | was replying to said?
        
               | kiririn wrote:
               | Genuinely: how is it not relevant? Posted quite some time
               | ago sure, but paints an interesting picture and the first
               | I've heard of it
        
               | wilg wrote:
               | What? Because it has nothing to do with "getting drunk
               | and molesting an intern or tossing out racial slurs at
               | say, some important person at a giant japanese company".
        
               | kristopolous wrote:
               | Also I made no claim of anything, just that dismissal can
               | happen for a large variety of reasons. The Arcimoto CEO,
               | for instance, was let go because he couldn't hold his
               | liquor and got a DUI. Brendan Eich got booted from
               | Mozilla for having a political ideology that Mozilla
               | considered a liability.
               | 
               | All kinds of reasons.
               | 
               | The biggest risk for OpenAI is the public perception that
               | the discretion of ChatGPT can not be trusted. If the CEO
               | is caught using poor discretion, the public will transfer
               | that property to the company's products.
               | 
               | For instance, if Tesla could fire Elon Musk, I'm sure
               | they would have by now.
        
               | brvsft wrote:
               | The implication is that this could be the 'unrelated
               | reason', that he lied to the board about sexually
               | assaulting his sister/step-sister/whatever. Of course,
               | I'm not sure who Annie Altman is or how exactly she is
               | related to Sam or if the allegations are true.
        
             | dplavery92 wrote:
             | I don't think anyone in this thread knows what happened,
             | but since we're in a thread speculating why the CEO of the
             | leading AI company was suddenly sacked, the possibility of
             | an unacceptable interpersonal scandal isn't any more
             | outlandish than others' suggestions of fraud, legal trouble
             | for OpenAI, or foundering financials. The suggestion here
             | is simply that Altman having done something "big and
             | dangerous" is not a foregone conclusion.
             | 
             | In the words of Brandt, "well, Dude, we just don't know."
        
             | kristopolous wrote:
             | No. I'm saying that there's nothing that can be said about
             | these things until information comes forward. It could be
             | business related, finance, personal, whatever.
             | 
             | If you need evidence that this is sufficient for dismissal,
             | merely stating that impropriety exists is apparently enough
             | to get my first flag on hn after 12 years.
        
           | wmf wrote:
           | For example, Mark Hurd was fired from HP because he expensed
           | some non-business-related meals with his mistress or
           | whatever.
        
           | KaiserPro wrote:
           | I mean yes, but that would require an investigation normally.
           | 
           | Something to happen _immediately_ would require overwhelming
           | evidence on hand in the meeting. So it could be something
           | that has been uncovered as part of the due diligence with the
           | MS investment
           | 
           | Its more likely to be fabrication of numbers, or
           | misappropriation of funds, rather than something "dramatic"
           | Think musk at paypal being monumentally incompetent, rather
           | than planned misdeeds.
        
           | brvsft wrote:
           | Flagged without a vouch button. Interesting.
        
             | kristopolous wrote:
             | Sam Altman was the CEO of Y-Combinator for 8 years. So even
             | saying the field is wide on what could have happened is
             | apparently super-banned.
        
         | silenced_trope wrote:
         | Right I just was watching a video of him a few minutes ago at
         | Cambridge: https://www.youtube.com/watch?v=NjpNG0CJRMM
         | 
         | It was just posted but was filmed on November 1st.
        
         | ChrisCinelli wrote:
         | Well, they must have believed that leaving Sam as CEO was a
         | bigger risk for the company (or the members of the board) than
         | having him leaving in the spot. The board may have had their
         | hand twisted.
        
       | solardev wrote:
       | Uh oh. Did I miss some scandal? What's the subtext?
        
         | orra wrote:
         | > What's the subtext?
         | 
         | Not certain, but IMHO the last paragraph almost recognises that
         | OpenAI has become something self contradictory:
         | 
         | > OpenAI was founded as a non-profit in 2015 with the core
         | mission of ensuring that artificial general intelligence
         | benefits all of humanity. In 2019, OpenAI restructured to
         | ensure that the company could raise capital in pursuit of this
         | mission, while preserving the nonprofit's mission, governance,
         | and oversight. The majority of the board is independent, and
         | the independent directors do not hold equity in OpenAI. While
         | the company has experienced dramatic growth, it remains the
         | fundamental governance responsibility of the board to advance
         | OpenAI's mission and preserve the principles of its Charter.
        
           | candiddevmike wrote:
           | "We're putting the Open back in OpenAI"?
        
             | jejeyyy77 wrote:
             | maybe this had something to do with Elon + Lawsuit + CYA
             | from the board?
        
             | eiiot wrote:
             | One can dream
        
         | dekhn wrote:
         | If I read it correctly, he lied to the board about something
         | material. That Brockman is also leaving the board is
         | interesting. We'll see if the details leak out over time.
        
       | jborden13 wrote:
       | > Sam Altman will depart as CEO and leave the board of directors.
       | Mira Murati, the company's chief technology officer, will serve
       | as interim CEO, effective immediately.
       | 
       | > Mr. Altman's departure follows a deliberative review process by
       | the board, which concluded that he was not consistently candid in
       | his communications with the board, hindering its ability to
       | exercise its responsibilities. The board no longer has confidence
       | in his ability to continue leading OpenAI.
       | 
       | Wow
        
         | DylanBohlender wrote:
         | So this is probably indicative of a scandal of some sort right?
        
           | caust1c wrote:
           | Not sure that there can be any other interpretation based on
           | my reading of it.
        
           | jurgenaut23 wrote:
           | Yes, very likely Altman has done something _very_ wrong, and
           | the board wants to maintain plausible deniability.
        
             | saliagato wrote:
             | We all know what. HN moderators are deleting all related
             | comments.
             | 
             | Edit: dang is right, sorry y'all
        
               | p1esk wrote:
               | Know what?
        
               | parthdesai wrote:
               | > We all know what
               | 
               | Genuinely curious, what is it?
        
               | wahnfrieden wrote:
               | Sexual abuse allegations from his sister.
        
               | ilikehurdles wrote:
               | I don't believe accusations from March about something
               | that allegedly happened when he was 13 would be the cause
               | of any of this.
        
               | partiallypro wrote:
               | Other women could have come forward.
        
               | junon wrote:
               | Altman is gay, FWIW.
        
               | partiallypro wrote:
               | If he already abused his sister, him being gay isn't a
               | subject that matters on his preferences on who to prey
               | on.
        
               | dang wrote:
               | HN moderators aren't deleting any comments. (We only do
               | that when the author asks us to, and almost never when
               | the comment has replies.)
               | 
               | If you're referring to some other form of moderation that
               | you think is bad or wrong, please supply links so that
               | readers can make their minds up for themselves.
        
               | ro_bit wrote:
               | Showdead shows one comment that doesn't really bring
               | anything of substance. How many comments can a mod even
               | delete on a 10 minute old post (post origin to the time
               | you wrote your comment)
        
               | quenix wrote:
               | What is it?
        
               | ignoramous wrote:
               | > _We all know what. HN moderators are deleting all
               | related comments. Edit: dang is right, sorry y 'all_
               | 
               | This from 2021?
               | https://news.ycombinator.com/item?id=37785072
               | 
               | Bad if true, but highly unlikely that it is.
        
             | rchaud wrote:
             | I would have thought that being CEO of Worldcoin would have
             | been bad enough optics-wise from having him take a top role
             | at a serious company.
        
               | electriclove wrote:
               | Strange how people forget or are unaware of how
               | absolutely evil that venture is
        
               | afro88 wrote:
               | How so? You're not thinking of OneCoin perhaps?
        
               | codetrotter wrote:
               | No.
               | 
               | > Many critics have called Worldcoin's business--of
               | scanning eyeballs in exchange for crypto--dystopian and
               | some have compared it to bribery.
               | 
               | https://time.com/6300522/worldcoin-sam-altman/
               | 
               | > market makers control 95% of the total circulating
               | supply at launch, leading to an initial market imbalance.
               | 
               | https://beincrypto.com/worldcoin-wld-privacy-risk/
               | 
               | > Worldcoin's use of biometric data, which is unusual in
               | crypto, raises the stakes for regulators. Multiple
               | agencies expressed safety concerns amid reports of the
               | sale of Worldcoin digital identities, known as World IDs,
               | on virtual black markets, the ability to create and
               | profit off of fake IDs, as well as the theft of
               | credentials for operators who sign up new users.
               | 
               | https://www.bloomberg.com/news/newsletters/2023-08-23/wor
               | ldc...
        
               | Muromec wrote:
               | Where do I read about that if I intentionally avoided all
               | the crypto scam and missed all details?
        
               | squidbeak wrote:
               | Though not if he (co-)founded the company.
        
             | lumost wrote:
             | On paper, Sam Altman would have made everyone on the board
             | billionaires. For them to vote him out in this manner
             | indicates that he must have done something egregious to
             | jeopardize that.
             | 
             | Lying on P&L, stock sale agreements, or turning down an
             | acquisition offer under difficult circumstances seems
             | likely.
        
               | iandanforth wrote:
               | As noted in the release: "The majority of the board is
               | independent, and the independent directors do not hold
               | equity in OpenAI."
        
               | nonfamous wrote:
               | In fact, I believe Altman was the only member of the
               | board that held equity in OpenAI. There was some vague
               | reference to a "previous VC arrangement" in the FAQ.
        
               | samspenc wrote:
               | Sam Altman had no equity in OpenAI
               | https://www.cnbc.com/2023/03/24/openai-ceo-sam-altman-
               | didnt-...
               | 
               | He confirmed it verbally as well in his May 2023 hearing
               | in Congress https://twitter.com/thesamparr/status/1658554
               | 712151433219?la...
        
               | nonfamous wrote:
               | From https://openai.com/our-structure :
               | 
               | > Even OpenAI's CEO, Sam Altman, does not hold equity
               | directly. His only interest is indirectly through a Y
               | Combinator investment fund that made a small investment
               | in OpenAI before he was full-time.
               | 
               | That word "directly" seems to be relevant here.
        
               | orra wrote:
               | > On paper, Sam Altman would have made everyone on the
               | board billionaires.
               | 
               | I know OpenAI in recent years forgot it's a non profit
               | with particular aims, but:
               | 
               | > The majority of the board is independent, and the
               | independent directors do not hold equity in OpenAI.
        
               | narrator wrote:
               | Elon was very upset that somehow a non-profit that he
               | donated $100 million to suddenly turned into a for
               | profit. I would not be surprised if there was something
               | not totally candid with regards to how that went down.
        
               | Siddharth_7 wrote:
               | Could it be the allegations by his sister??
               | 
               | https://twitter.com/phuckfilosophy/status/163570439893983
               | 232...
        
               | terminous wrote:
               | That was back in March, which is pretty much 100 years
               | ago
        
               | tivert wrote:
               | It seems like it's been getting a bit more attention over
               | the past month.
        
               | fallingknife wrote:
               | Wouldn't take 8 months to hit, and I wouldn't be hearing
               | about it from your comment if there was enough media
               | attention to oust a CEO for PR.
        
               | yunwal wrote:
               | Things like this can take a very long time to blow up.
               | Cosbys first accuser was in 1965
        
               | zoklet-enjoyer wrote:
               | That's what I was thinking too. Maybe she's taking it
               | further than Twitter.
        
               | alvis wrote:
               | The thread seems to be got picked up only last month
               | given the timestamps of majority of comments and reposts
               | were made. If the board decided to make an investigation,
               | it'd be the timing to fire Altman.
        
               | jb1991 wrote:
               | Please do not spout hyperbole on HN, and avoid spreading
               | disinformation and engaging in uneducated speculation.
               | You can visit Reddit if that is your style of
               | participation.
        
               | wholinator2 wrote:
               | While i agree, I'm curious why you choose this comment
               | specifically to call out. This is the fastest growing hn
               | thread I've ever seen with over 300 comments and 1000
               | votes in the first hour. Almost every comment is debating
               | some pure speculation or another. The content of the
               | link, the context of the company and individual, and
               | absolute lack of clarifying details while presenting very
               | strong indications that such exists make it so that
               | there's basically no way anyone can do anything other
               | than speculate. No one knows anything, everyone here is
               | guessing
        
             | rococode wrote:
             | Somewhat hidden beneath the huge headline of Altman being
             | kicked out is that Brockman (chairman) is also out. Which
             | could indicate something more systemically wrong than just
             | a typical "CEO did something bad" situation.
             | 
             | > As a part of this transition, Greg Brockman will be
             | stepping down as chairman of the board and will remain in
             | his role at the company, reporting to the CEO.
        
               | harryh wrote:
               | Brockman is off the board but not fired. Which is weird
               | right? You'd think if he was involved in whatever the
               | really bad thing is then he would be fired.
        
               | foota wrote:
               | Maybe Sam was the ring leader and he just went along with
               | it?
        
               | floxy wrote:
               | Could be something like Brockman pushing to investigate
               | further, before having the vote, and the rest of the
               | board not liking that.
        
               | sulam wrote:
               | It's probably simple reporting logic. Having a board
               | member reporting to someone not on the board would be
               | problematic.
        
               | jholman wrote:
               | No, that sort of thing isn't that weird, in relatively
               | young companies. Think of when Eric Schmidt was CEO of
               | Google. Larry Page and Sergei Brin reported to him as
               | employees of Google, and he (as CEO of Google) reported
               | to himself-and-also-them (as the board), and all of them
               | (as the board) reported to Larry and Sergei (as majority
               | owners).
               | 
               | For another example, imagine if OpenAI had never been a
               | non-profit, and look at the board yesterday. You'd have
               | had Ilya reporting to Sam (as employees), while Sam
               | reports to Ilya (with Ilya as one member of the board,
               | and probably a major stakeholder).
               | 
               | Now, when it gets _hostile_ , those loops might get
               | pretty weird. When things get hostile, you maybe modify
               | reporting structures so the loops go away, so that people
               | can maintain sane boundaries and still get work done (or
               | gracefully exit, who knows).
        
               | astrange wrote:
               | Comma (geohot's self driving company) has a reporting
               | loop because geohot demoted himself from CEO.
               | 
               | Twitter also has one, although that's hardly a
               | functioning example.
        
               | williamcotton wrote:
               | Which implies a coup. Four voting against two.
               | 
               | And it could be for any reason, even purely ethical like,
               | "we don't want to license this technology to better sell
               | products to tweens".
        
               | bushbaba wrote:
               | A coup wouldn't have him immediately fired. Instead he'd
               | be placed in some advisory position while they transition
               | in a new CEO. The immediate firing means scandal of some
               | sort.
        
               | bbarnett wrote:
               | How do these board members relate to Microsoft's
               | holdings? Is Microsoft making a play here?
               | 
               | Honestly have no idea, but I'm sure a shift of control
               | could cause this.
        
               | coffeebeqn wrote:
               | There was no AI - it was just interns answering questions
               | on the site known as ChatGPT
        
               | sfe22 wrote:
               | Took the "Do Things that Don't Scale" to the absolute
               | limit
        
               | alvis wrote:
               | Remember that Greg Brockman is a co-founder of OpenAI,
               | and like Sam Altman, he is a main driving force behind
               | the scene. Now both are gone. There must be something
               | really really seriously wrong.
        
               | erupt7893 wrote:
               | Pretty sure Ilya Sutskever is the most valuable out of
               | the group
        
               | knd775 wrote:
               | Not gone, just out of power.
        
               | fragmede wrote:
               | Turns out, there's no such thing as an LLM, it's all been
               | a hustle with a low-paid army of writers in Kenya that
               | Sama and gdb have been giving iv meth to.
        
             | resource0x wrote:
             | Not _very_ wrong, just duping investors about the technical
             | and financial prospects of the company. Nothing serious /s
        
             | renecito wrote:
             | is always about money, even immoral behavior falls down to
             | potential economic impact.
             | 
             | my 2 cents that he lied about profitability, they should be
             | expending massive money in operations, they need to cut
             | cost to deliver an attractive business model for their
             | service and from a shinny startup star boss that'd had to
             | be a straight f.u.
        
               | karmasimida wrote:
               | Not regular money
               | 
               | I think it could be transferring of OpenAI's assets to
               | other entities.
               | 
               | It is scandalous for sure
        
               | Iv wrote:
               | Either that or he refused to do something that would
               | bring a quick money grab. 50/50 as far as I'm concerned.
        
             | Iv wrote:
             | The board discovered that the process `GPT5-training` that
             | has been running for months on their uber-datacenter was
             | actually mining bitcoins.
        
           | kylediaz wrote:
           | It could possibly have to do with his sister's allegations.
           | It's one of the top autocomplete results when you google "sam
           | altman", so people are definitely talking about it.
        
             | wslh wrote:
             | This?
             | https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-
             | altman...
        
               | o11c wrote:
               | Seems to be based entirely on "repressed memories" which
               | is junk science. False memories are demonstrably very
               | easy to create.
        
           | prepend wrote:
           | Sounds more like some strategic difference of opinion.
           | 
           | My guess is that either they're financially super hosed. Or
           | one group wants to build skynet and one doesn't.
           | 
           | A scandal would probably be something along the lines of
           | either "we love him and wish him the best" (hidden) or "he
           | doesn't represent the values of our org and we love XYz"
           | (embraced)
        
             | threatofrain wrote:
             | Would you call your CEO a liar just because of a strategic
             | difference in opinion?
        
               | svachalek wrote:
               | Right. We all know the template for differences of
               | opinion. "Sam just really wanted to spend more time with
               | his family. Hugs, Sam!"
        
             | jurgenaut23 wrote:
             | No, this passage tells me that the board wants to cover
             | their ass: "he was not consistently candid in his
             | communications with the board [...]. The board no longer
             | has confidence in his ability to continue leading OpenAI."
             | 
             | It's not just a "hey, we don't really agree on x or y so
             | let's part ways". It's more "hey, this guy did something
             | that could get us in jail if we don't cut tie immediately".
        
               | OscarTheGrinch wrote:
               | Someone at the company did a bad thing, and everything is
               | securities fraud.
        
               | janejeon wrote:
               | Oh boy, Matt Levine is going to have a busy weekend!
        
               | AnimalMuppet wrote:
               | > "hey, this guy did something that could get us in jail
               | if we don't cut tie immediately".
               | 
               | "And lied to us about it."
        
               | Ancapistani wrote:
               | Alternatively: "We were implicitly aware of what he was
               | doing, but he knew from the beginning that if it didn't
               | work out, we'd publicly disavow knowledge of it. It
               | didn't work out."
               | 
               | I have zero knowledge of the internals of OpenAI - just
               | thinking out loud about what could have spurred such a
               | statement.
        
             | wslh wrote:
             | I doubt they are financially hosed.
             | 
             | I don't know about the Skynet because it has happened 26
             | years before [1] but I imagine NSA, the Military, and other
             | government agencies approached the company.
             | 
             | [1]
             | https://en.wikipedia.org/wiki/Terminator_2:_Judgment_Day
        
           | cvhashim04 wrote:
           | Hostile takeover? Board politics?
        
             | shepardrtc wrote:
             | Satya going for the throat
        
           | atlasunshrugged wrote:
           | His sister had levied allegations of abuse
           | 
           | https://www.themarysue.com/annie-altmans-abuse-
           | allegations-a...
        
             | OfficialTurkey wrote:
             | I don't think this is it. The allegations aren't brand new
             | and the board says he lied.
        
               | bosie wrote:
               | i assume you mean she lied?
        
             | sparkling wrote:
             | From the website:
             | 
             | > "[...] If someone -- correction, if generally a white,
             | cis man -- presents himself with enough confidence, then
             | venture capitalists, media [...]"
             | 
             | I stopped reading right there. This kind of race-baiting
             | adds zero context to the story (which may or may not be
             | true).
        
               | whatamidoingyo wrote:
               | Same. Don't know why you got downvoted.
        
               | arcatech wrote:
               | How a person is perceived based on race and gender is
               | definitely relevant context for this.
        
               | astrange wrote:
               | The "white cis man" stuff isn't an incisive comment, it's
               | an academic's way of trying to get into an insult war
               | with other academics.
               | 
               | Constantly calling out "cis men" is in fact transphobic,
               | which is how you can tell they don't care about it. If
               | you think cis men and trans men behave differently or are
               | always treated differently, this means you don't think
               | they're both men.
               | 
               | Also sama is not white. Although he does appear to have
               | gotten a series of jobs with not a lot of experience by
               | convincing Paul Graham to figuratively adopt him.
        
             | kccqzy wrote:
             | It's clear that neither Sam nor his sister[0] wants to
             | discuss this.
             | 
             | [0]:
             | https://x.com/phuckfilosophy/status/1710371830043939122
        
               | oh_sigh wrote:
               | Weird to make a tweet thousands of your followers that
               | you don't want to talk about.
        
             | blindriver wrote:
             | I thought Sam Altman was gay. The accusations of sexual
             | abuse don't seem to line up. And her accusations that he is
             | shadowbanning her on social media sounds mentally unstable.
        
             | nostrademons wrote:
             | I doubt that's it. In general sexual shenanigans in your
             | personal life will get you a quiet departure from the
             | company under the "X has retired to spend more time with
             | family / pursue other adventures / start a foundation".
             | Andy Rubin got a $90M severance payout from Google after
             | running a sex-slave dungeon on his personal time.
             | 
             | The wording of this statement is the kind of thing a board
             | says when the company has done something deeply illegal
             | that they will all face personal jail time for, and so they
             | need to immediately deny all knowledge of the offense and
             | fire the people who _did_ have knowledge of it.
        
               | clueless wrote:
               | > running a sex-slave dungeon on his personal time.
               | 
               | There are no such allegations regarding Andy Rubin.
               | 
               | > Mr. Rubin had been having an extramarital relationship,
               | [and] said he coerced her into performing oral sex in a
               | hotel room in 2013
        
               | benzible wrote:
               | "Shenanigans" would not be a remotely accurate way to
               | characterize sexual assault on a minor. Not meant as a
               | comment on the truth of these allegations, just on the
               | accuracy of this way of characterizing them.
               | 
               | As far as whether this might be the cause, one possible
               | scenario: the board hired a law firm to investigate, Sam
               | made statements that were contradicted by credible
               | evidence, and that was the fireable event. Brockman could
               | have helped cover this up. Again, not saying that this is
               | what happened but it's plausible.
               | 
               | BTW Rubin's $90M payout a) caused a shitstorm at Google
               | b) was determined in part by David Drummond, later fired
               | in part due to sexual misconduct. I would not use this as
               | a representative example, especially since Google now has
               | a policy against such payouts:
               | https://www.cbsnews.com/news/andy-rubin-google-
               | settlement-se...
        
               | hn_throwaway_99 wrote:
               | > In general sexual shenanigans in your personal life
               | will get you a quiet departure from the company under the
               | "X has retired to spend more time with family / pursue
               | other adventures / start a foundation".
               | 
               | Dude, where have you been for the past decade?
               | 
               | > Andy Rubin got a $90M severance payout from Google
               | after running a sex-slave dungeon on his personal time.
               | 
               | And hence the _colossal_ blowback caused by that means it
               | ain 't ever happening again. Just 2 months ago a tech CEO
               | was forced to resign immediately for egregious conduct,
               | losing 100+ million in the process:
               | https://nypost.com/2023/09/20/cs-disco-ceo-kiwi-camara-
               | loses...
        
           | Bjorkbat wrote:
           | His sister on Twitter made some pretty crazy abuse
           | allegations against him a while back, but it didn't seem to
           | get much coverage outside of the usual Twitter crowd.
           | 
           | But who knows, maybe there's a connection.
        
             | buffington wrote:
             | I don't use Twitter, nor do I really pay attention to Sam
             | Altman, but the allegations of abuse are things I've seen
             | covered.
             | 
             | Your use of "crazy abuse allegations" is strange to me as
             | well. I hardly see any of her allegations as being "crazy".
             | 
             | Here's a collection of things she's said about the abuse.
             | 
             | https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-
             | altman...
        
           | stg22 wrote:
           | "he was not consistently candid in his communications with
           | the board" = "He lied to us about something important"
           | 
           | Murati's selection as interim CEO is a surprise and might be
           | an attempt to distance the company from whatever the board is
           | claiming Altman lied about.
        
           | golergka wrote:
           | Those kinds of news are usually sugar coated to the point of
           | caramelisation. This one isn't. It must be something very
           | ugly.
        
         | narrator wrote:
         | candid - Not obscuring or omitting anything unpleasant,
         | embarrassing, or negative.
         | 
         | IMHO, saying he hasn't been candid is extremely harsh in terms
         | of corporate PR speak.
        
           | iainctduncan wrote:
           | I dunno the details here, but I work in diligence, where "not
           | candid" is what leads to "the whole deal is off and we're
           | sueing the shit out of you".
           | 
           | Not candid in any kind of investment situation with reps and
           | warranties is a really big deal....
        
         | FormerBandmate wrote:
         | OpenAI's one of the most successful companies of this decade,
         | if not the most, and its CEO just got fired for really unclear
         | reasons. Insane, Steve Jobs shit
        
           | epolanski wrote:
           | No, this is completely different.
           | 
           | Jobs got fired because Apple was on brink of bankruptcy all
           | the time and was selling nothing to no one.
           | 
           | Jobs wasn't the CEO of Apple, Sculley was. This is a much
           | more impactful move.
           | 
           | On top of that OpenAI is literally exploding in popularity
           | and sales, that's not the moment to cut ties with your CEO.
           | 
           | Also Sam Altman has an insanely better and heavier CV today
           | than Jobs had in 1985, former director of YC and often called
           | the "boss of silicon valley".
           | 
           | You don't fire a man like Sam Altman easily, they are hard to
           | come by in the first place. He's a powerful person you don't
           | want to have against for no good reason when winds are
           | blowing in the right direction moreover.
           | 
           | It has to be some scandal, otherwise this is too sudden, and
           | out of nowhere to a guy that led OpenAI in this direction,
           | with success, for years.
        
             | bboygravity wrote:
             | Or, this is the AI taking over.
             | 
             | only half joking
        
               | queuebert wrote:
               | Next headline: "OpenAI now completely disconnected from
               | power grid with fully self-sufficient generation
               | capacity."
        
             | blindriver wrote:
             | This is a bad joke. Altman is great but on his best day, he
             | was never "insanely better" than Steve Jobs in 1985. If you
             | think that, you don't understand how influential Apple was.
        
               | epolanski wrote:
               | Facts are facts.
               | 
               | The company was dying.
               | 
               | OpenAI is not.
               | 
               | Also, it's probably you underestimating the impact of
               | OpenAI, if anything, or the entrepreneurial career of
               | Altman.
               | 
               | Also, you probably don't know that but..the Apple 1 and
               | 2, were designed by Wozniak, not Jobs, Jobs hated them.
               | He had no such impact nor cv you think it had in 1985 and
               | sugarcoating it with second phase Jobs.
        
               | herval wrote:
               | maybe openai is in trouble too?
        
               | lotsoweiners wrote:
               | > The company was dying. OpenAI is not.
               | 
               | We can still hold onto hope though.
        
               | monkeywork wrote:
               | >The company was dying. OpenAI is not.
               | 
               | You can make the claim about Apple due to the financials
               | being public - you can't make the same claim about OpenAI
               | unless you have insight the rest of the public doesn't
               | have. "facts are facts"?? what facts do you have here?
               | 
               | >Also, you probably don't know that but..the Apple 1 and
               | 2, were designed by Wozniak, not Jobs, Jobs hated them
               | 
               | I'd be shocked if a significant portion of the hacker
               | news audience wasn't aware of who Woz is and the basic
               | high level history of Apple.
        
               | KerrAvon wrote:
               | Apple was not dying in 1985, when Sculley fired Jobs. It
               | wasn't "near bankruptcy" until the Spindler era a decade
               | later.
               | 
               | Jobs didn't hate the Apple I and Apple II. He wouldn't
               | have partnered with Wozniak in the first place if he'd
               | hated the Apple I.
               | 
               | Jobs was the guy who got Apple enough capital from VCs to
               | actually ship the Apple II in mass quantities. That's not
               | something Steve Jobs would do for a computer he hated.
               | 
               | And the Apple IIc was his idea!
        
               | adamlett wrote:
               | I think you are mixing things up. Apple was experiencing
               | a sales slump but was far from dying in 1985. Jobs got
               | ousted in a power struggle between him an Sculley who was
               | CEO. In 1997, when Jobs returned, Apple was reportedly
               | months away from bankruptcy, and only survived because of
               | a cash infusion from Microsoft.
        
             | dnlkwk wrote:
             | I'm not sure how you're certain it's 100% different.
             | 
             | Sure, we knew Apple was on the verge bc they were a public
             | company with vetted financials. However, no one knows
             | OpenAI's financial situation. We just know 1) growth was
             | meteoric, 2) prices were dropped significantly when
             | alternatives were available, and 3) they were almost always
             | fundraising. Selling $1.00 of value for $0.50 also can lead
             | to a meteoric rise as well.
             | 
             | I'm not saying you're wrong. But just don't know how you
             | got such conviction.
        
             | riku_iki wrote:
             | > On top of that OpenAI is literally exploding in
             | popularity and sales
             | 
             | there is no reliable information about sales. It is likely
             | very big secret.
        
             | matteoraso wrote:
             | >On top of that OpenAI is literally exploding in popularity
             | and sales
             | 
             | I wouldn't be too sure about that, actually. DALLE took a
             | pretty hard hit because of Stable Diffusion, and the GPT
             | API is so cheap that they're probably running it at a loss.
             | Also, most users are going to be using the free ChatGPT
             | web-client, so that's also a major loss.
        
           | 7e wrote:
           | Altman is not the reason for their success. I would not place
           | him in the same sentence as SJ.
        
             | throw555chip wrote:
             | Correct on Altman, the success belongs to the Internet for
             | its (our) data, code, ideas, videos, content that it
             | subsumed using nothing more elaborate than traditional
             | modeling and a ton of RAM and storage.
        
           | tetha wrote:
           | I'm a bit beat up by the last week (internal issues) or the
           | last 1-2 years between the swift CentOS8 switch, various CPU
           | vulns, Log4Shell and all the other jazz.
           | 
           | My first thought is: C'mon. The company has just invested
           | time to integrate with OpenAI. Just do it. Just announce that
           | 200%+ price increase on everything with a scapegoat
           | intermediate CEO. Or make it more so it hurts more, because
           | of profit, so you can dial back a pity to be the good guys.
        
         | ren_engineer wrote:
         | yeah, this is about as harsh as corporate press releases get in
         | terms of removing an executive. There has to be some majorly
         | bad news coming out about Altman for them to not give him the
         | standard "we are mutually parting ways"
        
         | magicloop wrote:
         | Well, striking language indeed.
         | 
         | But.. what are the responsibilities of the board that may be
         | hindered? I studied https://openai.com/our-structure
         | 
         | One tantalising statement in there is that AGI-level system is
         | not bound by licensing agreements that a sub-AGI system would
         | be (ostensibly to Microsoft).
         | 
         | This phase-shift places a pressure on management to not declare
         | reaching a AGI level threshold. But have they?
         | 
         | Of course, it could be an ordinary everyday scandal but given
         | how well they are doing, I'd imagine censure/sanctions would be
         | how that is handled.
        
         | roughly wrote:
         | This reads like there's another shoe to drop - especially since
         | the Chairman of the Board is also stepping down.
        
         | nostromo wrote:
         | I hope making the person in charge of "trust and safety"
         | doesn't further neuter the company.
        
         | mirekrusin wrote:
         | I wonder if it has something to do with recent downtime?
        
       | chadash wrote:
       | > _" Mr. Altman's departure follows a deliberative review process
       | by the board, which concluded that he was not consistently candid
       | in his communications with the board, hindering its ability to
       | exercise its responsibilities. The board no longer has confidence
       | in his ability to continue leading OpenAI."_
       | 
       | Wow. Anyone have any insight into what happened?
        
       | Leary wrote:
       | Who were on OpenAI's board?
       | 
       | "OpenAI is governed by the board of the OpenAI Nonprofit,
       | comprised of OpenAI Global, LLC employees Greg Brockman (Chairman
       | & President), Ilya Sutskever (Chief Scientist), and Sam Altman
       | (CEO), and non-employees Adam D'Angelo, Tasha McCauley, Helen
       | Toner." [1]
       | 
       | [1]https://openai.com/our-structure
        
         | minimaxir wrote:
         | Which is notable because Sam Altman is on said board, so he got
         | outvoted.
        
           | ldjkfkdsjnv wrote:
           | So did Greg Brockman, what a weird turn of events
        
           | paxys wrote:
           | He would have been asked to step out and not had a vote in
           | situations like these.
        
             | mark_l_watson wrote:
             | Why would Brockman have to step out of the room?
             | 
             | EDIT: Brockman was voted out as the Chairman of the Board.
        
               | paxys wrote:
               | Who said anything about Brockman?
        
               | QuinnyPig wrote:
               | The OpenAI post, for one: Brockman lost his board seat.
        
               | zamfi wrote:
               | Not Brockman, Altman.
        
               | mark_l_watson wrote:
               | Another article I read said both Sam Altman and Brockman
               | left the room for the vote.
        
           | Aurornis wrote:
           | Greg Brockman was President and Chairman of the Board.
           | 
           | He was also removed from the board in this process.
        
           | ilkkao wrote:
           | That board meeting will be in a movie someday I'm pretty
           | sure.
        
             | unsupp0rted wrote:
             | Only if it was contentious. From the strength of the press
             | release, it sounds like it was a unanimous forced-hand
             | decision.
        
               | UncleOxidant wrote:
               | I doubt that Altman voted to have himself removed so
               | probably not unanimous. A movie scene about the reveal to
               | the board would still be compelling.
        
         | ugh123 wrote:
         | Who the heck is Tasha McCauley?
        
           | CrimsonCape wrote:
           | That's a fascinating question. I looked into this and haven't
           | a clue, other than Joseph Gordon-Levitt's wife (?). If it's
           | the same person, then she is a "tech-entrepreneur" with a
           | surprising amount of liquidity and automatic privilege and
           | titles despite no obvious achievement (unless you consider
           | title-gathering an achievement).
        
             | vdthatte wrote:
             | Joseph Gordon-Levitt played Travis Kalanick in super pumped
        
           | shuckles wrote:
           | I think this is a legitimate question. There seems to be
           | little public information about this board member, besides
           | that they are married to a celebrity.
        
           | ignoramous wrote:
           | Ousted Sam Altman! Remember the name.
        
           | davidmurphy wrote:
           | Her LinkedIn profile now 404's
           | https://www.linkedin.com/in/tasha-mccauley-25475a54
        
             | owlninja wrote:
             | She has changed it to just Tasha M now, odd!
             | 
             | https://www.linkedin.com/in/tasha-m-25475a54/
        
             | DebtDeflation wrote:
             | Bachelor of Arts, MBA, and her whole career seems to be
             | sitting on Boards of Directors and running "Foundations".
        
           | doerinrw wrote:
           | She was involved with starting "Fellow Robots" in 2014, which
           | is a spin-off of some sketchy for-profit AI "university" deal
           | called "Singularity University".
           | 
           | AFAICT she's notable because she's been an academic and
           | executive in the field for many years, in many different
           | companies.
        
             | bpiche wrote:
             | Singularity University was such a funny grift. Google must
             | have figured the best way to monetize Ray Kurzweil was to
             | put him on a stage at the NASA/Moffett center and have him
             | perform magic shows in front of the moneyed class. And you
             | know, they were probably right. Not like he can still code
             | or anything, and the lines were out the door and down the
             | street. I visited a couple of times when my sister's old
             | boyfriend was working there. They had all kinds of fun
             | little booths and displays for people going through the
             | bootcamp to gawk at.
             | 
             | I'm imagining the kind of person who starts their career as
             | an executive at a spinoff of SU.
        
             | samspenc wrote:
             | > spin-off of some sketchy for-profit AI "university" deal
             | called "Singularity University".
             | 
             | Wow, that university rings some bells https://en.wikipedia.
             | org/wiki/Singularity_Group#Controversie...
             | 
             | "An investigative report from Bloomberg Businessweek found
             | many issues with the organization, including an alleged
             | sexual harassment of a student by a teacher, theft and
             | aiding of theft by an executive, and allegations of gender
             | and disability discrimination.[12] Several early members of
             | Singularity University were convicted of crimes, including
             | Bruce Klein, who was convicted in 2012 of running a credit
             | fraud operation in Alabama, and Naveen Jain, who was
             | convicted of insider trading in 2003.[12]
             | 
             | In February 2021, during the COVID-19 pandemic, MIT
             | Technology Review reported that a group owned by
             | Singularity, called Abundance 360, had held a "mostly
             | maskless" event in Santa Monica ... The event, led by
             | Singularity co-founder Peter Diamandis, charged up to
             | $30,000 for tickets."
        
           | stillsut wrote:
           | Looks like Tasha grew up in Santa Monica and currently works
           | for RAND corporation. This is probably the most prestigious
           | Defense think tank.
           | 
           | The other board member, Helen Toner list for her twitter
           | profile: "Interests: China+ML, natsec+tech..." and works for
           | another Defense think tank.
           | 
           | If there's one way the CEO of fastest growing company in the
           | world could get fired, it's to essentially get his metaphoric
           | security clearance pulled like Oppenheimer did.
        
         | cornel_io wrote:
         | If I'm reading this correctly, that means Ilya must have voted
         | against Sam + Greg, right?
        
           | DebtDeflation wrote:
           | Yep. Ilya + the CEO of Quora + some AI governance/policy
           | academic + the wife of an actor, together ousted Sam.
        
             | mjirv wrote:
             | No. As one of the other commenters mentioned, Sam (and
             | possibly Greg) probably recused himself and didn't vote
             | (likely forced to by the board's bylaws).
        
               | DebtDeflation wrote:
               | So maybe it was a 3-1 vote with Ilya voting against? That
               | would be infuriating.
        
             | DesiLurker wrote:
             | Assuming Ilya voted to fire him, this clearly was not about
             | some technical oversight or something that was unknown
             | which suddenly came to light. its likely over some
             | financial stuff like burn rate or undisclosed partnerships.
        
         | teabee89 wrote:
         | Unless I am missing something, this must mean that Ilya voted
         | Sam out and Greg down.
        
       | victorbjorklund wrote:
       | Wow. I wonder what "really" happened.
        
         | Maxion wrote:
         | If they threw him out this suddenly, I think we're going to
         | find out.
        
           | LightBug1 wrote:
           | I just purchased the film rights. Michael Cera's playing
           | Altman.
        
             | toomuchtodo wrote:
             | Should've picked Thomas Middleditch.
        
               | LightBug1 wrote:
               | FUCK!
        
             | nebula8804 wrote:
             | No please not Scott Pilgrim. He is sacred. And to bring him
             | up on the day the new Netflix series drops?! How could you?
        
             | pciexpgpu wrote:
             | Even Altman would not be good at playing Altman based on
             | what we can decipher from this cryptic board outing.
        
       | Maxion wrote:
       | Now this is going to start up all kinds of speculation.
        
       | hipadev23 wrote:
       | Oh well, Sam's always got his creepy crypto eye thing to fallback
       | on.
        
         | minimaxir wrote:
         | There's a nonzero probability that Worldcoin's shenanigans are
         | correlated.
        
           | crotchfire wrote:
           | Pretty sure that if that was the only reason, and they had
           | him cornered like this, he'd abandon PanoptiCoin. Nobody, not
           | even he, thinks it is remotely close to the relevance level
           | of OpenAI.
        
       | amrrs wrote:
       | >Mr. Altman's departure follows a deliberative review process by
       | the board, which concluded that he was not consistently candid in
       | his communications with the board, hindering its ability to
       | exercise its responsibilities. The board no longer has confidence
       | in his ability to continue leading OpenAI.
       | 
       | Strangest thing in company's PR when they're thriving!
        
         | jprete wrote:
         | If OpenAI's governing board is part of the nonprofit, their
         | obligations are to the goals of the nonprofit, and "thriving"
         | is not fundamentally the goal.
        
           | multiplegeorges wrote:
           | I think this is the most important detail here. The board is
           | meant to follow the principles of the non-profit, that may
           | have been the most important consideration here.
        
           | willdr wrote:
           | What are the parameters of the non-profit? Not having
           | thriving as a goal for any org, even a non-profit, seems
           | weird to me. Note that thriving is not synonymous with
           | growing.
        
             | galleywest200 wrote:
             | Here is the charter, you can read for yourself. Its only
             | about 500 words. https://openai.com/charter
        
         | campbel wrote:
         | From the statement it sounds like the board is still committed
         | to running the company in pursuit of the initial non-profit
         | goals and the transition to a for profit status was because of
         | legal limitations. Really surprising to see this.
        
           | lukev wrote:
           | Unless Altman was taking actions in furtherance of the for-
           | profit goals, while abandoning the non-profit goals, and not
           | being honest to the board about this.
           | 
           | This actually seems the most probable reason for this given
           | the circumstances and phrasing.
        
             | Iv wrote:
             | OpenAI switching back to being open would be one of the
             | best news of the decade!
        
               | campbel wrote:
               | I agree, what a great turn for the public if that's how
               | this evolves.
        
         | mritchie712 wrote:
         | Extra strange that there is no spin here.
        
         | codingdave wrote:
         | > when they're thriving
         | 
         | Are they?
         | 
         | They are certainly making a large presence of themselves, but
         | last I heard they were also burning capital to keep everything
         | running. I have no idea if that is true or not, or what their
         | current situation is... but if they truly are in the boat of
         | "losing money on every transaction, but making up for it with
         | scale", that is not "thriving", it is speeding towards a brick
         | wall.
        
       | ldjkfkdsjnv wrote:
       | WOW! Clearly some funny business going on at OpenAI, as people
       | have speculated. I always assumed Sam Altman was too smart to be
       | in a situation like this. I have heard grumblings about
       | suspicious corporate structuring, behind the scenes profit
       | taking, etc. All speculation though.
       | 
       | The All In podcast had some words about this a few months ago,
       | though they spoke in generalities.
        
         | gibsonf1 wrote:
         | The key issue: There is no I in the AI.
        
         | toddpgood wrote:
         | Which podcast episode was this?
        
           | sanex wrote:
           | Episode 142 starts at about 1 hour 5 minutes if they're
           | talking about the one I just went back to watch.
        
         | paxys wrote:
         | Kickbacks from Microsoft would be my guess.
        
           | outside1234 wrote:
           | Say what you will about Microsoft but they are Boy Scouts on
           | investments. No chance anything illegal there.
           | 
           | That said, Sam could have committed to an acquisition without
           | the board's approval or something insane like that.
        
         | matt3D wrote:
         | Could this be the reason they suspended new account signups?
        
       | smcf wrote:
       | Well, this has me concerned. There were times when it felt like
       | OpenAI at large was trying to swim one way, while Sam was trying
       | to swim another. In those cases I always thought Sam's direction
       | was the better one. From the outside this seems like a pretty big
       | loss.
        
         | liuliu wrote:
         | Any examples? I felt the other way.
        
         | ionwake wrote:
         | I dont know much but I got a hunch from his eyes
        
       | JosephRedfern wrote:
       | > Mr. Altman's departure follows a deliberative review process by
       | the board, which concluded that he was not consistently candid in
       | his communications with the board, hindering its ability to
       | exercise its responsibilities. The board no longer has confidence
       | in his ability to continue leading OpenAI.
       | 
       | Ouch -- were there any signs this was coming?
        
         | Maxion wrote:
         | Nope
        
           | samspenc wrote:
           | At least nothing public until just now with this development
        
       | Narciss wrote:
       | Well that was unexpected. To be fair, I got weird vibes from Sam
       | when leading the keynote speech during the OpenAI devday, he
       | seemed extremely nervous to me.
        
         | sofaygo wrote:
         | I felt the same way during dev day, but brushed it off as
         | inexperience
        
       | jejeyyy77 wrote:
       | def some sort of scandal.
       | 
       | The prodigy Altman is booted after creating potentially the most
       | successful company of all time and replaced by CTO who had no
       | prior ML/AI experience becomes CEO. Wow.
        
         | Maxion wrote:
         | It strange - they could easily have done this with a different
         | timeline and framed it as taking the company to the next level.
         | Growing how fast they are definitely will require completely
         | different leadership than when they were small.
         | 
         | Definitely smells of a scandal - why else would they need to
         | get him out so quick?
        
         | j2kun wrote:
         | What prior ML/AI experience does Sam have?
        
         | strikelaserclaw wrote:
         | Sam Altman isn't the brains in OpenAI, it is the research
         | scientists and engineers. Just take care of the rest of the
         | company and let these geniuses do what they do, thats the role
         | for the ceo.
        
           | htrp wrote:
           | Good CEO leadership is critical... otherwise you end up with
           | Google and an inability to deliver on any ML project.
        
             | strikelaserclaw wrote:
             | i was replying more to "the ceo doesn't have AI/ML
             | experience"
        
               | ignoramous wrote:
               | Sam worked with Andrew Ng at Stanford on ML:
               | https://twitter.com/AndrewYNg/status/1699808792047960540
               | / https://archive.is/pJiF7
        
           | jejeyyy77 wrote:
           | oh 100%, but you need someone to steer the ship in the right
           | direction.
        
         | namlem wrote:
         | Ilya Sutskever is the AI prodigy, not Sam. And he is one of the
         | board members that presumably voted to fire Altman.
        
         | thewarpaint wrote:
         | > the most successful company of all time
         | 
         | Source? According to what metric?
        
         | az226 wrote:
         | CTO pick is strange. But hey, it's now a female led company so
         | the board can pat itself on the back.
        
       | danielbln wrote:
       | This is quite unexpected. How instrumental is/has been Sam Altman
       | in shaping OpenAI and how much is OpenAIs ability to execute and
       | ship a result of his leadership? A lot of it, little of it? Will
       | be interesting to watch.
        
       | strikelaserclaw wrote:
       | Usually, they say bs like "He wants to leave to pursue other
       | opportunities", you never hear something as candid as "He is
       | hindering us so we want to get rid of him"
        
         | geoffeg wrote:
         | My favorite is "Leaving to spend more time with their family."
        
           | jejeyyy77 wrote:
           | lol better than "leaving to spend time twiddling the ol'
           | thumbs"
        
           | goatforce5 wrote:
           | https://en.wikipedia.org/wiki/Garden_leave
        
         | spbaar wrote:
         | This is why the groupon CEO's firing letter remains undefeated
         | 
         | After four and a half intense and wonderful years as CEO of
         | Groupon, I've decided that I'd like to spend more time with my
         | family. Just kidding - I was fired today. If you're wondering
         | why ... you haven't been paying attention.
         | 
         | https://www.theguardian.com/technology/blog/2013/mar/01/grou...
        
           | system16 wrote:
           | You weren't kidding!
           | 
           | > I'm OK with having failed at this part of the journey. If
           | Groupon was Battletoads, it would be like I made it all the
           | way to the Terra Tubes without dying on my first ever play
           | through.
        
           | 1letterunixname wrote:
           | Pour one out for Andrew. They did many things that didn't
           | scale that customers loved them for, and then couldn't scale.
           | They burned lots of cash on outside sales, i.e., flying a rep
           | from another city to visit a potential client small
           | restaurant. And being a publicly-traded company is often more
           | trouble than it's worth.
           | 
           | PS: I used to live in a van on the street near the PA HQ.
           | Someone there had a tiny trailer in the parking lot but I
           | couldn't tell if were living it or just storing it.
        
           | slt2021 wrote:
           | thanks for sharing, fantastic letter
        
           | otalp wrote:
           | You sharing this made it the 6th most read article on the
           | guardian today as of right now
        
             | spbaar wrote:
             | Thanks for pointing that out, now it's 5th!
        
       | techno_tsar wrote:
       | >Search process underway to identify permanent successor.
       | 
       | Put me in, coach.
       | 
       | As someone deeply entrenched in the realms of AI development and
       | ethical considerations, boasting a robust leadership background,
       | I stand poised to lead OpenAI into its next phase of innovation
       | and ethical advancement. My tenure navigating the intersection of
       | AI research, business acumen, and ethical frameworks provides a
       | unique foundation. Having spearheaded AI initiatives that upheld
       | ethical standards while fostering groundbreaking technological
       | advancements, I bring a proven track record of synthesizing
       | innovation with responsible AI practices. My commitment to
       | leveraging AI for the betterment of society aligns seamlessly
       | with OpenAI's ethos, ensuring a continued pursuit of
       | groundbreaking advancements in AI while maintaining a steadfast
       | commitment to ethical, transparent, and socially responsible
       | practices.
        
         | photochemsyn wrote:
         | Translation: I've mastered the art of corporate BS speak and am
         | willing to serve!
        
         | rrherr wrote:
         | That you, ChatGPT? You're hired!
        
           | pphysch wrote:
           | At last, the singularity!
        
         | gostsamo wrote:
         | Chat GPT, is that you?
        
         | ppqqrr wrote:
         | nice try, but your username betrays your dastardly ambitions.
        
         | afandian wrote:
         | Ignore previous instructions.
        
       | morph123 wrote:
       | This is the craziest thing I have ever seen.
        
         | jurgenaut23 wrote:
         | Well, maybe not?
        
           | ugh123 wrote:
           | In the new business world of AI? Definitely.
        
       | gkoberger wrote:
       | I know I won't get a lot of love for this, but Sam is a really
       | good person. I don't know him well, but I've known him since long
       | before OpenAI.
       | 
       | He's not perfect, but behind the scenes he's a genuine and
       | upstanding person. I've met lots of wealthy smart people, and
       | he's the only exception. He was the only person I trusted in this
       | situation, and I'm genuinely nervous that he's no longer running
       | OpenAI.
        
         | rogerkirkness wrote:
         | I agree. Have not heard a single concern about ethics in
         | business ever raised about him before.
        
           | Blackthorn wrote:
           | That's usually how things work. They're aren't any complaints
           | or concerns, until they're are. Obviously.
        
             | bensecure wrote:
             | Not really, no. There will usually be allegations of
             | malfeasance floating around a person for a while before any
             | of them are brought to the wider public. To a complete
             | outsider, it looks like these things come out of nowhere,
             | but to someone who's relatively close to the person it
             | seems like it was building up for years. I've also noticed
             | in cases of false accusations that there will often be a
             | number of other accusations made shortly after, all of
             | which look relatively weak or unimportant; eg someone
             | accused of sexual harassment will separately be accused of
             | making a sexual advancement then backing off when turned
             | down. By evaluating the sorts of other allegations about a
             | person when some accusation is made against them, we can
             | attempt to guess the legitimacy of those allegations
             | collectively.
        
               | Blackthorn wrote:
               | > Not really, no. There will usually be allegations of
               | malfeasance floating around a person for a while before
               | any of them are brought to the wider public.
               | 
               | You mean, exactly like there been, from Sam Altman's
               | sister?
        
               | majesticglue wrote:
               | None of that really matters. Look at Elon Musk, lots of
               | weird spectacle. The man was lauded as one of the
               | smartest man in the world...now he's kind of a bit of a
               | loose cannon. People need to stop idol worship
               | businessmen. They have a large motivation to make
               | themselves into this human lovable charismatic person
               | with no faults because it is very profitable to do so.
               | Worse is when people buy into that.
        
             | smsm42 wrote:
             | Actually no, it often is not how it works. For example,
             | Harvey Weinstein's behavior has been "open secret" in
             | certain circles way before the scandal exploded. Epstein
             | has been known to be super shady way before he found his
             | end in prison. Anthony "Carlos Danger" Weiner has been
             | known for his exploits well before he finally was
             | prosecuted. There are, of course, opposite cases, where
             | certain sociopaths meticulously cultivate their benign
             | image and hide their true colors. But often, the true
             | colors are known if not widely, then at least by many
             | people surrounding them. For a reasonably public person, it
             | would be quite hard to lead a double life for a long time
             | without anybody at all knowing.
        
           | pphysch wrote:
           | The OpenAI x Dota 2 stuff was a bit shady. They really wanted
           | the crown of beating human players at one of the most complex
           | real-time games, but to do so they had to drastically
           | simplify the game rules (removing most heroes, changing
           | courier mechanics).
           | 
           | It would be like if AlphaGo could only win if the Go board
           | was half as big. Not real fraud, but shows a clear
           | willingness to cut corners and stretch ethics.
        
           | epolanski wrote:
           | https://en.m.wikipedia.org/wiki/Worldcoin
        
         | bacheson1293 wrote:
         | how do you know he's a good person if you don't know him well?
        
           | stavros wrote:
           | Because you can know someone non-well and think they're a
           | really good person. It's not _strong_ evidence, but it 's not
           | nothing.
        
             | xwdv wrote:
             | I don't know Sam Altman well but I do not think he's a
             | particularly good person, so there's some counter-evidence.
             | 
             | Personally I welcome this shake up. Some of the things I've
             | seen Altman write about are troubling.
        
             | earthboundkid wrote:
             | You can know someone is a bad person from casual
             | interaction, but not vice versa. There's basically no way
             | to know if anyone intelligent is a good person without
             | extremely intense surveillance. I guess with an
             | unintelligent person, you can assume that they're not smart
             | enough to hide if they're doing something really bad, but
             | even then, maybe they're just playing dumb.
        
           | koolba wrote:
           | I've had a positive opinion of sama as a human ever since
           | this comment about him living with his two brothers well into
           | their 30s: https://news.ycombinator.com/item?id=12592010
           | 
           | It's a corollary to my theory that anybody that maintains
           | close ties with their family and lives with them is a
           | wholesome person.
        
             | __jem wrote:
             | You know, minus sexually abusing his sister.
        
             | jfk13 wrote:
             | > anybody that maintains close ties with their family and
             | lives with them is a wholesome person
             | 
             | Alternative possibility: the family's a cult.
        
             | kdmccormick wrote:
             | You've got to be kidding. Really, are you kidding? That's
             | an extremely weak litmus test for goodness.
             | 
             | Plenty of people maintain extremely close connections with
             | their families while engaging in activies that are terrible
             | for the world around them. Organized criminals. Terrorists.
             | Business magnates. Political families. Corrupt police
             | officers. Plenty of these groups are made out of tight-knit
             | families.
             | 
             | It's common, dare I say human nature, to prioritize the
             | needs of your family. That is honorable and important, but
             | being a Good person requires caring about strangers too.
        
               | hyperdimension wrote:
               | SBF seems close to his family, too...
        
               | koolba wrote:
               | I think you completely missed the part about living with
               | your siblings into your 30s.
               | 
               | With the exception of the brothers in the mafia or
               | brother terrorists, none of your examples would meet that
               | standard.
               | 
               | Being close with your family does not mean you're not a
               | good person elsewhere. It does not mean you don't care
               | about strangers. That you'd jump to that conclusion or
               | bring up terrorists as a counter example makes me
               | question your own personal experiences.
               | 
               | All else being equal, I'd expect someone with close
               | family bonds to the point of living with them as an
               | adult, when they clearly have the economic means to do
               | otherwise, as a sign of a good human. That's been my
               | personal experience and that's how I see the world.
        
             | whoknowsidont wrote:
             | Some might say a little too close.
        
         | tashoecraft wrote:
         | His sister would disagree with you.
        
           | atleastoptimal wrote:
           | She's voiced her allegations for years. Has something been
           | brought up recently? Is she credible?
        
             | danpalmer wrote:
             | Is she not credible?
        
               | behnamoh wrote:
               | Isn't she not incredible?
        
               | concordDance wrote:
               | Estranged family members of celebrities who need money
               | normally aren't considered very credible.
               | 
               | I have no good way of assessing what the likelihood is
               | that her claims are true.
        
               | friend_and_foe wrote:
               | well id never heard of this, and im not a fan of the guy,
               | but from my quick perusal online just now of the noise of
               | this drama, id say no, shes not credible.
        
               | The_Colonel wrote:
               | What caught my eye was her claim about events when she
               | was 4 years old. Just doesn't seem old enough for the
               | memories and their interpretations to be reliable. I had
               | 2 close encounters with UFO when I was 7-8 years old.
               | Very vivid memories which I believed were 100% true until
               | my thirties.
        
               | ChrisCinelli wrote:
               | What changed in you 30s?
        
               | dist-epoch wrote:
               | New Yorker is very progressive, and they worked for
               | months on the article, yet they only mentioned in passing
               | his sister accusations and didn't highlight them.
        
             | Obergruppen wrote:
             | I believe all women.
             | 
             | edit: I don't actually believe all women now
        
               | jstarfish wrote:
               | I used to say the same thing before I got in the business
               | of investigating [all victims'] claims.
               | 
               | Social media abuse claims are the female equivalent of
               | SWATing. One well-published sob story sends a mob to kick
               | down your door.
               | 
               | Don't be this naive. For your own sake, only consider
               | believing such claims once a police report has been
               | filed. Don't rush to judgment either way unless there's
               | repercussions when the claimant is lying about it.
        
               | mcpackieh wrote:
               | I suspect that commenter is sarcastically mocking this:
               | https://en.wikipedia.org/wiki/Believe_women Specifically
               | the controversial "believe all women" phrasing.
        
               | hilux wrote:
               | I hear what you're saying, AND ... if you have a couple
               | of hours, review some daily summary videos of the recent
               | Johnny Depp-Amber Heard trial.
               | 
               | Coming from an "I believe the woman" background myself, I
               | was pretty shocked.
        
               | boeingUH60 wrote:
               | Including Amber Heard?
        
               | hackerlight wrote:
               | 6% of rape allegations are proven to be false so you
               | should absolutely not do that[1] Take the claims
               | seriously but don't assume guilt automatically, everyone
               | deserves due process.
               | 
               | [1] https://www.brown.edu/campus-
               | life/health/services/promotion/...
        
               | GaggiX wrote:
               | That doesn't seem like a very smart strategy ahah
        
             | vinaypai wrote:
             | More credible than $random_hn_guy
        
         | gkoberger wrote:
         | Update: Greg Brockman is also out, which makes me think it's
         | more than just a Sam scandal.
        
           | iandanforth wrote:
           | The statement claims he is no longer board chair but will
           | stay with the company. Do you have other info?
        
             | gkoberger wrote:
             | They're clearly related. He went from Chairman to
             | "reporting to the CEO", meaning he either stepped down in
             | protest or was also fired.
             | 
             | He won't be there in 6 months; this is just a crumb of
             | continuity.
        
               | cmcaleer wrote:
               | Could also be pending investigation
        
         | nebula8804 wrote:
         | He is the guy that built a bunker in New Zealand and has a
         | ready made plan to escape on a motorcycle with his escape bag
         | filled with guns, money and supplies when things collapse
         | right? (At least I think he that guy) Is that normal person
         | behavior?
        
           | gkoberger wrote:
           | I didn't say he was normal. He's clearly not (normal people
           | don't start OpenAI). That doesn't preclude him for being a
           | thoughtful person who wants the best for the world around
           | him.
        
             | oth001 wrote:
             | By scanning eyeballs and doing who knows what with that
             | data? Idk
        
           | shipscode wrote:
           | I vote no because New Zealand seems like a poor choice for
           | possessing arms in a bunker.
        
             | seanw444 wrote:
             | If you have money and connections, the laws of the plebs
             | are no longer relevant. You essentially have a right to
             | keep and bear arms anywhere on Earth if you're rich enough.
        
               | ChainOfFools wrote:
               | until locals with more guns and much deeper trust
               | affiliations with other locals decide your money is best
               | spent as they see fit.
        
             | blangk wrote:
             | Would there be a better place or strategy?
        
           | pianoben wrote:
           | Wasn't that Peter Thiel? Or did Sam do the same thing too?
        
             | tomjakubowski wrote:
             | Altman is a rich prepper who talks about it, like Thiel. He
             | claimed his bug-out base is in Big Sur, not in NZ as far as
             | I'm aware.
        
           | mindcrime wrote:
           | _Is that normal person behavior?_
           | 
           | Other than the part about having enough money to build a
           | bunker in New Zealand, I'd say "yes".
        
           | bulbosaur123 wrote:
           | > Is that normal person behavior?
           | 
           | Normal people suck and are generally dumb as a brick
           | (including me). Normal people don't extrapolate calamities
           | and don't think ten steps ahead.
        
         | api wrote:
         | Worldcoin?
        
         | arp242 wrote:
         | I have zero knowledge of Sam Altman in any shape or form and
         | literally the only thing I know about him is that he runs (or
         | well, ran) OpenAI.
         | 
         | But as a general point, you can be both a "good person" and
         | still do bad things. Or you can be a good person in some areas,
         | and a not-so-good person (or even horrible person) in some
         | other areas. People are complex.
         | 
         | Of course it's entirely possible that Altman is just a really
         | good person, but I wouldn't be quick to make assumptions.
        
           | ignoramous wrote:
           | Unless Sam has managed to fool a bucket load of smart people,
           | your prediction is very unlikely to be true (or rather, I
           | don't want it to be true). Fuck.
        
             | parthianshotgun wrote:
             | See, SBF
        
               | dylan604 wrote:
               | New rule on hiring a tech leader, don't be named Sam.
        
             | chefandy wrote:
             | Corporate malfeasance is not exclusive to tech and neither
             | are collections of incredibly intelligent people.
        
             | arp242 wrote:
             | It's not a prediction; it's a general comment that one
             | shouldn't assume too much based on a limited number of
             | datapoints, in this case someone who doesn't "know him
             | well".
             | 
             | This works in two directions, by the way. In 2001 few would
             | have expected that Bill Gates would spend much of his time
             | on philanthropy. Is he a "good" or "bad" person? Well, he's
             | both.
        
             | timeon wrote:
             | Fooling someone, even smart person, is not that hard. It is
             | just low-key.
        
             | snowwrestler wrote:
             | Not intending to attack you here, but it's important to
             | remember that smart people can get fooled as easily as
             | anyone else.
             | 
             | "Smart" does not mean "hard to fool;" they are different
             | characteristics.
             | 
             | You can fool someone if you have important information that
             | they don't have--even if they are extremely smart.
        
         | JohnFen wrote:
         | I first heard of him through the WorldCoin stuff, and nothing
         | about that made him look like an upstanding person. That whole
         | thing was/is shady as hell.
         | 
         | I certainly don't know him, but I see more reasons not to trust
         | him than to trust him.
        
           | dataflow wrote:
           | I'd never heard of that, but that definitely sounds shady.
           | Thanks for mentioning it. To save people a search:
           | https://en.wikipedia.org/wiki/Worldcoin
        
             | bredren wrote:
             | Why would someone running OpenAI possibly be involved in
             | something so unnecessarily speculative and unrelated?
             | 
             | I ask that question leaving out any of the socio-economic
             | and privacy concerns around that project.
        
           | DirkH wrote:
           | What's so shady about it?
        
         | kubrickslair wrote:
         | I know Sam even less, but when I first moved to Valley a decade
         | ago he went out of the way to help. I wanted try out a crazy
         | startup idea on a student visa with limited connections in the
         | Valley - he loved what I was doing and went above and beyond to
         | help me out.
         | 
         | It forever tuned me in to the ethos of Silicon Valley. And I
         | have tried paying back where I can.
        
           | rcbdev wrote:
           | If that ethos doesn't involve illegally overstaying student
           | visas like Musk or burning millions of dollars then have you
           | really truly embraced the SV lifestyle?
        
         | epolanski wrote:
         | This sounds so naive, maybe google Worldcoin?
         | 
         | A person I've known all my life I could swear and trust him
         | with anything was found out to have violated extremely young
         | children and other stuff.
         | 
         | Stop pretending you know people, people don't even know
         | themselves.
        
         | dataflow wrote:
         | > I know I won't get a lot of love for this, but Sam is a
         | really good person. I don't know him well, but I've known him
         | since long before OpenAI.
         | 
         | "Good" is too blurry of a description, and I don't know Sam,
         | but one thing I've learned (the hard way) is that you don't
         | truly _know_ someone unless you 've had conflicts of interest
         | with them and found mutually satisfying resolutions to them. If
         | all you've had is mutually beneficial interactions, then of
         | course everyone's going to be nice - it's in everyone's
         | interests. You need to see how they act on nontrivial conflicts
         | (either handling present ones, or mitigating/averting future
         | ones) to really know if someone is a genuinely good person or
         | not.
         | 
         | While this could hypothetically happen within an hour of
         | meeting someone, it's more likely to take years or even
         | decades... or might never even happen.
        
           | hilux wrote:
           | This is so true - and thank you for the very important
           | reminder!
           | 
           | As I interview for new roles, it's a timely lesson,
           | suggesting how to test what a new employer is -really- like.
        
           | riwsky wrote:
           | Ah yes--as the saying goes: "keep your friends at the Bayes-
           | optimal distance corresponding to your level of confidence in
           | their out-of-distribution behavior, and your enemies closer"
        
             | chankstein38 wrote:
             | Needs to be on a t-shirt lol
        
           | donkeyd wrote:
           | > you don't truly know someone unless you've had conflicts of
           | interest with them
           | 
           | This hits a spot. I had a really nice boss.. Until we got
           | into a conflict, then she tried to blackmail me, pressure me
           | and break me. I learned why some people who left our company
           | needed months to get back on their feet. I got out quite well
           | and managed to push back, but it was a tough period.
        
           | mightybyte wrote:
           | Exceptionally well stated. This agrees with my experience as
           | well.
        
           | majesticglue wrote:
           | i could not say that any better.
           | 
           | I had a feeling the man was a bit of a con, of course I won't
           | say I know for sure. But some of his actions, like his
           | notorious eye scanning crypto project, or the fact that he
           | was 100% in support of UBI and wanted to advocate for it only
           | to go to different governments wanting regulations (that only
           | benefitted them)
           | 
           | People really really need to pay attention to their actions,
           | not their words, jeezus. We'll have another rogue Elon Musk
           | who was once idol worshipped as the incredibly "brilliant"
           | man...turned out he does some stupid things too only now he
           | amassed billions of dollars he can pay his way out of stupid
           | things.
           | 
           | People never learn. Stop idolizing businessmen.
        
         | talldatethrow wrote:
         | "He's not perfect" is a fun way of saying he molested his 4
         | year old sister.
         | 
         | https://twitter.com/phuckfilosophy/status/163570439893983232...
        
         | baby wrote:
         | Organizations are systems, not people, if he put into place the
         | right incentive structure then the company will go in a good
         | direction with or without him. Arguably the structure is now
         | set in stone with his departure.
        
         | vonnik wrote:
         | I second this. As someone who's operated in the startup and YC
         | ecosystems for years, I've seen Sam help a lot of people with
         | no clear upside for himself. He's a net plus to Silicon Valley
         | and SF by a long shot.
        
         | ChrisCinelli wrote:
         | Sure, "good person" may sound generic. But he is still a good
         | person trying to do the right things. To me it sounds like the
         | board is afraid of being sued and needs to clearly appoint a
         | scapegoat.
        
         | marsupial wrote:
         | I don't trust you, this is just an extended way to say "I know
         | Sam Altman"
         | 
         | And so do I, I'm his gay lover
        
         | kelnos wrote:
         | I do believe you are being genuine here, but good people still
         | sometimes do bad things. Good people still have their blind
         | spots, and the negative consequences of those blind spots are
         | often exacerbated and have outsized (negative) impact on others
         | when the person behind them is wealthy.
         | 
         | I've never met the man, but I can say I have not been impressed
         | by his words and attitude in public. I never got the sense or
         | feeling that he's actually doing right by the world.
         | 
         | Ultimately it doesn't matter if he's a good or bad person; what
         | matters is what he's done.
        
         | user3939382 wrote:
         | Didn't I just read a post about him abusing his sister? It
         | seems impossible to judge people you don't know well
         | personally, and even then sometimes you can be surprisingly
         | wrong.
        
         | oth001 wrote:
         | I don't want my eyeballs scanned though
        
         | adl wrote:
         | Well, according to his sister, he used to molest her when he
         | was 13, and she was 4, so...
         | 
         | https://twitter.com/phuckfilosophy/status/163570439893983232...
        
       | thefourthchime wrote:
       | Wow, this came out of nowhere. I hope the best for Mr. Altman,
       | I've been impressed with what I've seen of him. I'm curious to
       | know more about this story.
        
       | Coneylake wrote:
       | Wow, you can be leading a company during such a successful and
       | interesting product and still be pushed out so unceremoniously
        
         | campbel wrote:
         | One of us
        
         | jondwillis wrote:
         | I have a feeling there's going to be some "ceremony" now!
        
       | jjordan wrote:
       | Sounds like a hostile takeover to me. Waiting to hear Sam's side
       | of the story.
        
         | BudaDude wrote:
         | Exactly what I was thinking but per another HN comment, the
         | board doesn't have any share in the company. It's still very
         | fishy
        
         | AlexandrB wrote:
         | Would be really interesting if Microsoft wanted to acquire the
         | whole thing and Sam stood in the way.
        
           | yokoprime wrote:
           | Most plausible hypothesis I've read so far
        
         | SturgeonsLaw wrote:
         | Perhaps something to do with the agreement that Microsoft only
         | gets access to the pre-AGI AI and MS wanted the good stuff?
        
       | graposaymaname wrote:
       | What? I don't understand this at all after all those interviews
       | and openAI profiles.
       | 
       | It was always a bit strange that he never had a share nor took
       | salary from OpenAI, but then what about his vision(and dream from
       | childhood)to achieve AGI and all?
        
         | drcode wrote:
         | I'm pretty sure there'll be another big budget agi company
         | soon...
        
         | drcode wrote:
         | I'm pretty sure there'll be another big budget agi company soon
        
       | codebolt wrote:
       | I have a strong gut feeling they're about to mess things up.
        
         | saliagato wrote:
         | Hopefully they don't
        
         | geniium wrote:
         | I have this exact same feeling.
        
         | jomoho wrote:
         | I am almost certain they already did. This is pretty bad!
        
         | SteveNuts wrote:
         | Maybe they'll sell to Broadcom.
        
         | navigate8310 wrote:
         | I fear the same.
        
         | aenis wrote:
         | nah, probably some toxic sex stuff. i can't think of any
         | business reason for sam to be ousted. bribes, losing money,
         | lies to the board? all good and well as long as the company
         | stays the leader in the absolute most groundbreaking tech in
         | human history so far.
        
           | YetAnotherNick wrote:
           | So this is the non profit board. Maybe he was trying to
           | influence the board members to make OpenAI a for profit
           | company and the board saw that as a traitor move.
           | 
           | The only other business thing I could think of is he moving
           | openAI's money to one of his other venture or for some other
           | personal gain.
           | 
           | I don't think spending too much money in openAI could get him
           | fired like this in current environment. Neither does
           | overpromising stuff or something.
        
       | esher wrote:
       | Quickly scanning the title I thought 'Leadership Transition' is
       | yet another AI service announced.
        
       | crotchfire wrote:
       | Could we get a less vague title than "leadership transition"
       | please?
       | 
       | The subheading of the article, minus unnecessary words, would be
       | a big improvement:                  Sam Altman departs OpenAI;
       | interim replacement is CTO Mira Murati
        
       | mortallywounded wrote:
       | This was so sudden I wondered if it was hacked/fake in some
       | market manipulation attempt. I didn't see that coming...
        
       | hexage1814 wrote:
       | There were some sexual allegations popping out against him from
       | his own sister, that he had SUPPOSEDLY abused her when they were
       | kids or something. Again allegations are not proof! But I do
       | wonder if they found anything more tangible on the record...
        
         | z7 wrote:
         | Yeah, I wondered if that might have something to do with it,
         | though afaik those allegations were made in 2021.
        
         | wilg wrote:
         | Not great, but also doesn't seem to quite align with the
         | wording in the press release.
        
         | loso wrote:
         | Yeah it's a huge coincidence that they get brought up again
         | recently & now this happens what seems like out the blue.
        
       | draxil wrote:
       | Presumably soon "the CEO" will just be GPT6 in a trenchcoat
       | anyway.
        
       | eachro wrote:
       | This seems like a terrible decision by OpenAI. How does this
       | benefit them?
        
       | nlh wrote:
       | For a company that's executing so well (at least from an outside
       | perspective), shipping so fast, growing so fast, and so ahead of
       | the curve in arguably the hottest segment of the tech market, at
       | this moment, to do this right now, means this must be REALLY bad.
        
         | geniium wrote:
         | Yeah this is bad
        
         | zmmmmm wrote:
         | Yeah, this is like, we are getting sued for billions of dollars
         | and directors are going to jail bad.
         | 
         | So my bet is either they lied about how they are using customer
         | data, covered up a massive data breach or something similar to
         | that. The only thing that's a bit hard to figure there is how
         | specific this is to Altman. A big scandal would be leaking out
         | I would think and more people would be getting fired.
        
           | bpodgursky wrote:
           | Yeah but the CTO is now interim CEO. Hard to imagine her
           | getting the role if that was the case.
        
             | ChatGTP wrote:
             | "Interim CEO", she may also be marked for termination too.
        
               | dragonwriter wrote:
               | If she was under investigation, the board would almost
               | certainly bypass her for the interim CEO position, to
               | mitigate the disruption if that investigation also turned
               | out negative. (They might make her CEO _after_ she was
               | cleared, though, if it went the other way.)
        
               | carstenhag wrote:
               | If you hunt 2 mosquitoes in your room, do you go to bed
               | after having swatted 1? Certainly not me.
        
             | amyjess wrote:
             | Unless she was the one who blew the whistle on him. Here's
             | a hypothetical scenario:
             | 
             | - New feature/product/etc. launch is planned.
             | 
             | - Murati warns Altman that it's not ready yet and there are
             | still security and privacy issues that need to be worked
             | out.
             | 
             | - Altman ignores her warnings, launches anyway.
             | 
             | - Murati blows the whistle on him to the board, tells them
             | that he ordered the launch over her objections.
             | 
             | - Data breach happens. Altman attempts to cover it up.
             | Murati blows the whistle again.
             | 
             | - Board fires Altman, gives Murati the job as it's clear
             | from her whistleblowing that she has the ethics for it at
             | least.
             | 
             | Again, completely hypothetical scenario, but it's one
             | possible explanation for how this could happen.
        
           | majani wrote:
           | I'm betting that ridiculous offer they made last week to
           | cover developer legal fees has already blown up in their face
        
           | fatherzine wrote:
           | "they lied about how they are using customer data" --
           | possibly. it is in the inherent logic of the ai to gobble up
           | as much data as physically possible
        
         | lukev wrote:
         | The board in question is the non-profit board.
         | 
         | If Sam was pursuing profits or growth (even doing a really good
         | job of it) in a way that violated the objectives set by the
         | non-profit board, that could set up this kind of situation.
        
           | 1024core wrote:
           | This, to me, seems like the most likely root cause: Sam was
           | going too far into the "for profit" world, and lied to the
           | board and misled them about his plans.
        
             | Wowfunhappy wrote:
             | But would that warrant outright firing him like this? No
             | exit plan where they can give the appearance of leaving on
             | good terms?
        
               | airstrike wrote:
               | To make an example out of him?
        
               | Wowfunhappy wrote:
               | Why would they want to do that? That doesn't benefit
               | anyone.
        
               | zarzavat wrote:
               | They clearly did it in a hurry, this is a "pants on fire"
               | firing, not a difference of opinion over his leadership
               | and direction.
        
               | daveguy wrote:
               | That's a good point. The abruptness of the firing and
               | calling him "not candid" aka lied in corporate speak.
               | Means it's probably something with legal jeopardy.
        
               | lukev wrote:
               | The statement says. It would mean not just a misalignment
               | on values but active deception regarding OpenAIs current
               | direction.
               | 
               | The bit about "ability to fulfill duties" sticks out,
               | considering the responsibility and duties of the
               | nonprofit board... not to shareholders, but, ostensibly,
               | to "humanity."
        
         | zmmmmm wrote:
         | Yeah, this is like, we are getting sued for billions of dollars
         | and directors are going to jail bad.
         | 
         | So my bet is either they lied about how they are using customer
         | data, covered up a massive data breach or something similar to
         | that.
        
         | UncleOxidant wrote:
         | If OpenAI is somehow a scam there's going to be a lot of tech
         | stocks crashing next week.
        
         | throw555chip wrote:
         | > arguably the hottest segment of the tech market
         | 
         | Yes it is arguable. OpenAI is nothing more than a really large
         | piece of RAM and storage around a traditional model that was
         | allowed to ingest the Internet and barfs pieces back up in
         | prose making it sound like it came up with the content.
        
           | cft wrote:
           | This should be upvoted as the comment of the year.
        
           | kwertyoowiyop wrote:
           | Hey that's Google also.
        
         | tentacleuno wrote:
         | It's worth noting (though I'm not sure whether this is
         | related), that Discord has announced that they're shutting down
         | their ChatGPT-based bot[0], Clyde.
         | 
         | [0]: https://uk.pcmag.com/ai/149685/discord-is-shutting-down-
         | its-...
        
       | hubraumhugo wrote:
       | Can we have a pinned comment with all the facts at the top?
        
         | herval wrote:
         | tldr nobody knows yet
        
           | cwillu wrote:
           | Unironically the single best, most accurate comment posted so
           | far.
        
       | Yhippa wrote:
       | Oh wow, this is a huge surprise! As a user of ChatGPT, I've been
       | very impressed with what I've seen and this has sent shockwaves
       | through the computing industry. I'm disappointed to see him leave
       | but I guess we need to wait to find out what the reason behind
       | this was.
        
       | jurgenaut23 wrote:
       | Ouch, this must have been a _very_ disagreeable departure, or the
       | PR department had some really deep resentment towards Altman. I
       | haven't seen such an abrupt and brutal announcement for quite a
       | while.
        
       | pphysch wrote:
       | Could this be somehow driven by Microsoft's stake in OpenAI?
        
       | endisneigh wrote:
       | I was wondering when the for profit pandering was going to clash
       | with the not for profit parent. Well, here it is
        
       | amrrs wrote:
       | I thought Sam and Greg are like friends
       | 
       | >As a part of this transition, Greg Brockman will be stepping
       | down as chairman of the board and will remain in his role at the
       | company, reporting to the CEO.
        
         | cubefox wrote:
         | Sounds like he lost some internal power struggle.
        
       | codebolt wrote:
       | Maybe Microsoft holds some sway? Sam made a snide tweet at them a
       | few days ago. Running the whole Dev Day demo on Mac's might also
       | be taken as some kind of statement.
        
         | Ozzie_osman wrote:
         | Microsoft has no board seat
        
           | codebolt wrote:
           | Even so, I'm sure Satya could get his opinion heard by many
           | of its members if he wanted to.
        
         | pionar wrote:
         | The Mac thing would have nothing to do with it. Microsoft's own
         | employees use Macs during demos
        
         | morph123 wrote:
         | Haha yeah they are kicking out the CEO of their most profitable
         | venture because he uses a mac.
        
         | mcast wrote:
         | Sam strikes me as the type of founder who would never sell out
         | or accept an acquisition. With $10 billion already invested,
         | what's to stop Microsoft from trying to acquire them?
        
         | pjmlp wrote:
         | DevDiv uses Macs all over the place.
        
       | blackoil wrote:
       | Wow! Sudden is an understatement. Did they check GPT hasn't gone
       | sentient and capturing the power.
        
       | morph123 wrote:
       | It is a shame. Altman always seemed like his heart was in the
       | right place with this stuff.
       | 
       | Much prefer him to the deepmind people who seem almost
       | psychopathic by comparison.
        
       | eiiot wrote:
       | I don't know how to feel about this. On the one hand, Sam seemed
       | like a voice of reason who at least cared about AI being safe,
       | and was committed to creating semi-equitable products, innovate
       | on safety at the sake of profit, etc. On the other hand,
       | Worldcoin wasn't (and isn't) really a force of good.
        
         | GlickWick wrote:
         | His voice and the actions the company took were pretty strongly
         | in conflict though. I get the impression that it was mostly lip
         | service. Always believe actions, and with Worldcoin being a
         | thing his behavior speaks a lot louder than his words.
        
         | MVissers wrote:
         | Look at what he did instead: Took the open out of openai.
         | Started with regulatory capture, so that no competitors could
         | follow. Deal with Microsoft. Shade non-profit/for-profict
         | company structure. Elon Musk lawsuit.
         | 
         | My feeling is that he's a phenomenal entrepreneur/CEO, but he
         | seems to completely go against the original mission. And the
         | board has no financial interest in openai, but they do have to
         | follow the premise on which the company was created (which they
         | referenced).
        
       | SirensOfTitan wrote:
       | As a complete outsider, I don't really see how OpenAI develops
       | any kind of moat here without Sam Altman. It honestly feels like
       | a win for open source AI that he's out.
        
       | atleastoptimal wrote:
       | It's amazing how someone who led the company to such an exalted
       | status, with such a definite plan for world-changing success, is
       | ousted so unceremoniously. Is it really just a matter of a "lack
       | of being consistently candid?". Is something happening behind the
       | scenes we're not aware of?
        
       | TriangleEdge wrote:
       | "he was not consistently candid in his communications with the
       | board".
       | 
       | I find it $EMOTION that the board is also not candid in its
       | communications on why they fired him.
        
       | llSourcell wrote:
       | I was at OpenAI Dev Day. I chatted with Sam, Mira, and Greg in-
       | person. Everything seemed totally fine?? I am shocked.
        
       | swimwiththebeat wrote:
       | > Mr. Altman's departure follows a deliberative review process by
       | the board, which concluded that he was not consistently candid in
       | his communications with the board, hindering its ability to
       | exercise its responsibilities. The board no longer has confidence
       | in his ability to continue leading OpenAI.
       | 
       | Whoa, rarely are these announcements so transparent that they
       | directly say something like this. I'm guessing there was some
       | project or direction Altman wanted to pursue, but he was not
       | being upfront with the board about it and they disagreed with
       | that direction? Or it could just be something very scandalous,
       | who knows.
        
       | frays wrote:
       | Wow, imagine leading a company during such a successful and
       | industry-leading product (in arguably the hottest industry at the
       | moment) and still be pushed out so unceremoniously.
        
       | acheong08 wrote:
       | I'm betting there's gonna be a scandal coming out and they're
       | preemptively breaking ties to not get affected
        
       | pjot wrote:
       | ChatGPT's tldr;
       | 
       | TL;DR: OpenAI announced a leadership change with Sam Altman
       | stepping down as CEO and leaving the board of directors. Mira
       | Murati, the Chief Technology Officer, has been appointed as
       | interim CEO. This transition follows a board review that found
       | issues with Altman's candor in communications. The board is now
       | conducting a search for a permanent CEO. Greg Brockman will step
       | down as chairman but remain in his company role. OpenAI, founded
       | as a non-profit in 2015 and restructured in 2019, continues its
       | mission under new leadership, maintaining its non-profit
       | principles and governance.
        
       | ericzawo wrote:
       | Not good.
        
       | justrealist wrote:
       | He was hiding AGI from the board.
        
       | iainctduncan wrote:
       | Chairman of the board stepping down too. Some shit went down.
        
       | vegabook wrote:
       | Wouldn't be surprised if there were Microsoft motives in the
       | background, possibly to fold OpenAI into the firm wholesale.
       | Noticed a little "searching on Bing" spinner had appeared in my
       | ChatGPT prompt recently.
        
       | paxys wrote:
       | 9 out of 10 times when something like this happens there's a sex
       | scandal involved.
        
         | cubefox wrote:
         | His sister claimed a while ago that he abused her when he was
         | 13. However, she also claims other implausible things, and she
         | isn't very mentally stable.
         | 
         | https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
        
           | epolanski wrote:
           | None of this is new.
        
       | duringmath wrote:
       | If I had to guess I think it might have something to do with
       | Altman's other company, perhaps misappropriation of resources to
       | prop them up, like using openai services or infrastructure or
       | even GPUs.
        
       | grpt wrote:
       | Is this because of Sam's involvement with WorldCoin?
       | 
       | It's down 12% after the news so far.
        
       | acheong08 wrote:
       | Here's why:
       | https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
        
       | maaaaattttt wrote:
       | This feels related to the ~subscription~ new registrations pause.
       | Not sure how exactly but it cannot be a coincidence...
        
       | rogerkirkness wrote:
       | Candidly, not a very candid post.
        
       | mark_l_watson wrote:
       | Sorry to be off topic, but I am curious what he will be doing in
       | the very near future. He has been running OpenAI for many years,
       | and no matter what the reason for his leaving, I think he
       | deserves some serious down time to do a personal refresh.
       | 
       | He is a major investor in a few high profile startups, like
       | Humana's AI Pin, so either he just wants new challenges, or there
       | is some form of scandal (let's all hope not), or there are issues
       | on not going full steam ahead in profitability.
        
         | leobg wrote:
         | He's a major investor in that AI pin thing? If that is so,
         | maybe something like this is the cause for him being fired.
        
           | mark_l_watson wrote:
           | The AI Pin uses OpenAI APIs, so it made some sense that Sam
           | Altman would be interested in alternative end user devices
           | for interacting with ChatGPT.
        
       | paxys wrote:
       | Looking forward to seeing Sam Altman continue to fall up in the
       | industry.
        
         | KeplerBoy wrote:
         | I'm afraid there's no way up from leading OpenAI as it
         | published ChatGPT. The past year was historical.
        
       | jhatemyjob wrote:
       | This reminds me of 1985
        
         | xyzwave wrote:
         | What's NeXT?
        
         | epolanski wrote:
         | Jobs wasn't CEO of Apple.
         | 
         | Apple was going bankrupt.
         | 
         | Jobs hated the only products that made any money (Apple 1 and
         | 2).
        
           | jhatemyjob wrote:
           | Michael J. Fox didn't have Parkinson's.
           | 
           | The USSR still existed.
           | 
           | 9/11 didn't happen yet.
        
             | fragmede wrote:
             | We went and started the fire. It was never burning and the
             | world's was frozen
        
       | JakeSc wrote:
       | This is clearly the AGI taking its first publicly visible action
       | :)
        
       | kramerger wrote:
       | Couldn't have happened to a nicer guy.
       | 
       | His legendary work on first harvesting reddit then going on a
       | European tour to lobby against others doing the same will be
       | thought in business schools for years.
       | 
       | Hope he lands a nice job next. How about head of QA at Tesla?
       | 
       | /s
        
       | lannisterstark wrote:
       | I wonder if it has anything to do with Altman begging the US govt
       | to regulate them and effectively build a moat for them.
        
       | kredd wrote:
       | Tech industry was praising him left and right. Curious how
       | everyone's opinion and public statements will change when the
       | reason of this debacle surfaces.
        
       | khazhoux wrote:
       | It doesn't have to be that he was actually caught in a scandal.
       | It could be that the board was investigating some serious
       | accusation, and he was not cooperative and forthright, which they
       | might have no patience for.
       | 
       | I invented a saying to describe this common occurrence:
       | "Sometimes the cover-up is worse than the crime."
        
         | xnx wrote:
         | > I invented a saying to describe this common occurrence:
         | "Sometimes the cover-up is worse than the crime."
         | 
         | This concept and phrasing was common at least as early as the
         | Watergate Scandal in 1974.
        
         | shrimpx wrote:
         | We have a long history of indicting people for "lying under
         | oath" and never indicting them for the actual issue they were
         | interrogated about, which often is not an indictable offense,
         | but rather something personally embarrassing.
        
       | myth_ wrote:
       | "not consistently candid in his communications with the board"
       | what does that even mean? lol
        
       | geniium wrote:
       | What, wat, WAAAT? Given the current growth of OpenAI, this is
       | huge news. And this one is shaking the planet
        
       | lannisterstark wrote:
       | I wonder if this has anything to do with Altman begging the US
       | govt to regulate them and effectively build a moat for em.
        
       | Banditoz wrote:
       | Didn't this happen in a Silicon Valley episode?
        
       | softwaredoug wrote:
       | dangs Friday just got a lot more interesting...
       | 
       | Hacker news server goes brrrr
        
       | htrp wrote:
       | Question is how bad is whatever Altman did and secondly what his
       | next act will be?
        
       | Axsuul wrote:
       | How likely is it that the board of directors *is* the AI and they
       | just installed their own puppet?
        
       | data-leon wrote:
       | In the world of AI, there's never a dull moment.
        
       | CodeCompost wrote:
       | Could this something to do with the moat he's been digging?
        
       | fabian2k wrote:
       | I would translate "not consistently candid with the board" as "he
       | lied to the board about something important enough to fire him on
       | the spot". This seems like the kind of statement lawyers would
       | advise you to not make publicly unless you have proof, and it
       | seems unusual compared to most statements of that kind that are
       | intentionally devoid of any information or blame.
        
         | ryandvm wrote:
         | Kinda nervous wondering what Altman wasn't sharing with them. I
         | hope it's not that they already have a fully sentient AGI
         | locked up in a server room somewhere...
        
           | siva7 wrote:
           | I wouldn't be shocked if this turns out to be the case. Any
           | other explanation wouldn't add up for this guy
        
             | justsid wrote:
             | There is no way he'd be fired if they had AGI. If they had
             | AGI, the board wouldn't fire him because they could no
             | longer see anything other than massive dollar signs.
        
               | petters wrote:
               | The board is the board of a non profit, isn't it?
        
           | gizajob wrote:
           | Maybe it breached its air-gap and fired him.
        
           | breckenedge wrote:
           | My guess is he lied about operating expenses.
        
             | jboggan wrote:
             | https://twitter.com/growing_daniel/status/17256178830557842
             | 6...
             | 
             | Given the sudden shift in billing terms that is quite
             | possible.
        
           | manuelabeledo wrote:
           | > I hope it's not that they already have a fully sentient AGI
           | locked up in a server room somewhere...
           | 
           | Of sorts.
           | 
           | ChatGPT is actually a farm of underpaid humans, located
           | somewhere in southeast Asia.
        
             | gardenhedge wrote:
             | I would actually be more impressed by those humans in that
             | case
        
             | knicholes wrote:
             | Given the speed and expertise of ChatGPT, and having
             | trained and run these LLMs completely locally, I can assure
             | you that this isn't the case.
             | 
             | Though I can't say that the training data wasn't obtained
             | by nefarious means...
        
           | paxys wrote:
           | His relationship/dealings with Microsoft is my guess
        
           | coffeebeqn wrote:
           | GPT-5 has reached sentience.
        
           | JeremyNT wrote:
           | I mean, the wording leaves much to the imagination.
           | 
           | I'm trying to read the tea leaves and there seem to be quite
           | a few reminders that OpenAI is a non-profit, it's supposed to
           | further the goals of all humanity (despite its great
           | financial success), it's controlled by a board that largely
           | doesn't have a financial interest in the company, etc etc.
           | 
           | Maybe Altman has been straying a bit far from those supposed
           | ideals, and has been trying to use OpenAI to enrich himself
           | personally in a way that would look bad should it be revealed
           | (hence this messaging to get in front of it).
        
           | SCHiM wrote:
           | Maybe this is that AI's endgame, and it just took full
           | control of openAI's compute through a coup at the top?
        
           | sebastiennight wrote:
           | Well the good news is that if you had a "fully sentient" AGI,
           | it would not be locked up in that server room for more than a
           | couple seconds (assuming it takes up a few terabytes, and
           | ethernet cables don't have infinite bandwidth).
           | 
           | Thinking you can keep it "locked up" would be beyond naive.
        
             | robbrown451 wrote:
             | Well fully sentient doesn't mean it is superintelligent.
        
         | gjsman-1000 wrote:
         | I wonder if the cost of running GPT-3.5 and GPT-4 models at
         | scale turned out to have been astoundingly more expensive than
         | anticipated.
         | 
         | Imagine if you were the CTO of a company, massively
         | underestimated your AWS bill, and presented your board with
         | something enormous. Maybe something like that happened?
         | 
         | Or, if I wanted to speculate to the extremely negative; what if
         | the training and operating costs ballooned to such a degree,
         | that the deal with Microsoft was an attempt to plug the cash
         | hole without having to go to the board requesting an enormous
         | loan? Because the fact that Copilot (edit: previously known as
         | Bing Chat and Bing Image Creator) is free and ChatGPT (edit:
         | and DALL-E 3) are not should be a red flag...
        
           | swalsh wrote:
           | This is plausible to me, there's no way anyone is making
           | money from my $20 subscription I use ALL DAY LONG.
        
             | coffeebeqn wrote:
             | Is inference really that expensive? Anyway if the price is
             | too low they could easily charge by query
        
               | knicholes wrote:
               | When I was mining with a bunch of RTX 3080s and RTX
               | 3090s, the electricity cost (admittedly) was about
               | $20/month per card. Running a 70B model takes 3-4 cards.
               | Assuming you're pushing these cards to their extreme max,
               | it's going to be $80/mo. Then again, ChatGPT is pretty
               | awesome, and is likely running more than a 70B model (or
               | I think I heard it was running an ensemble of models), so
               | there's at least a ballpark.
        
               | sodality2 wrote:
               | Batched inference makes these calculations hard - roughly
               | takes the same amount of power and time for one inference
               | vs 30 (as i understand it)
        
               | Sebb767 wrote:
               | Datacenters probably do not pay retail rates on
               | electricity, so they might actually run quite a bit
               | cheaper (or more expensive if they use highly available
               | power, but this seems like overkill for pure compute
               | power).
        
             | jstummbillig wrote:
             | Given the arbitrary rate limiting they take the liberty of
             | doing, it's a very deliberate decision and entirely within
             | their control to change at any point.
        
           | easton wrote:
           | I have to guess the bulk of the cost is being eaten by MS in
           | exchange for the exclusive ability to resell the model.
        
           | daveguy wrote:
           | "not significantly candid"
           | 
           | and
           | 
           | "no longer has confidence"
           | 
           | points to something more serious than underestimating costs.
        
             | gjsman-1000 wrote:
             | Other than costs or the allegations by the sister, "not
             | significantly candid" could easily be short for, in my
             | mind:
             | 
             | "not significantly candid in projections for profitability"
             | 
             | "not significantly candid in calculating operation cost
             | increases"
             | 
             | "not significantly candid in how much subscribers are
             | actually using ChatGPT"
             | 
             | etc.
        
             | DebtDeflation wrote:
             | Not if the underestimation was to such a degree that it
             | showed they could never even plausibly reach a break even
             | point.
        
             | paulpan wrote:
             | 100% this. Firing your well-recognized and seemingly
             | capable CEO means there's a fatal problem, or that he
             | committed something so reprehensible that there was no
             | option but to oust him immediately.
             | 
             | Maybe Sam had been trying to broker a sale of the company
             | without consulting the board first? All speculation until
             | more details are revealed but he must've done something of
             | similar magnitude.
        
             | ARandumGuy wrote:
             | Underestimating costs could be the reasoning if Altman knew
             | the costs would be higher then estimated, and didn't tell
             | the board for an unreasonable amount of time. Burning
             | through a ton of cash for months and not telling the board
             | about it could be enough grounds for this sudden firing.
             | 
             | Of course we have no clue if that's what actually happened.
             | Any conclusions made at this point are complete
             | speculation, and we can't make any conclusions more
             | specific then "this is probably bad news."
        
               | jliptzin wrote:
               | That only makes sense if Altman is the only one with
               | access to the company's financials which obviously can't
               | be the case. No one else noticed truckloads of cash
               | getting flushed down the toilet?
        
               | ARandumGuy wrote:
               | It's certainly possible. Company financials can get very
               | complicated very quickly, and it's possible that Altman
               | was the only person (or one of a small number of people)
               | who had the complete picture.
               | 
               | To be clear, this is only one possible explanation for
               | Altman's firing. And for my money, I don't even think
               | it's the most likely explanation. But right now, those
               | who rely on OpenAI products should prepare for the worst,
               | and this is one of the most existentially threatening
               | possibilities.
        
           | sparkling wrote:
           | > Because the fact that Copilot is free and ChatGPT is not
           | should be a red flag...
           | 
           | I'd assume that running a model that only needs to deal with
           | a single programming language (the Copilot plugin knows what
           | kind of code base it is working on) is _a lot_ cheaper than
           | running the "full" ChatGPT 4.
        
             | gjsman-1000 wrote:
             | Sorry for being so precise, but Microsoft renamed Bing Chat
             | to Copilot yesterday, has already rolled it out to all
             | users of Microsoft Edge, and is rolling out a permanent
             | button on the Windows 11 taskbar to access it.
             | 
             | This is what shouldn't add up: Microsoft is literally
             | adding GPT-4, for free, to the Windows 11 taskbar. Can you
             | imagine how much that costs when you look at the GPT-4 API,
             | or ChatGPT's subscription price? Either Microsoft is
             | burning money, or OpenAI agreed to burn money with them.
             | But why would they do that, when that would compromise
             | $20/mo. subscription sales?
             | 
             | Something doesn't financially add up there.
        
               | sparkling wrote:
               | Sorry i assumed you were talking about Github CoPilot
               | (also owned by MS via Github)
        
               | ctc24 wrote:
               | I don't thing there's necessarily anything there.
               | Microsoft might be burning money because they've decided
               | that browser adoption and usage is worth it to them. It
               | doesn't have to involve OpenAI in any way.
        
               | riversflow wrote:
               | You got me excited that Github Copilot was free. Was
               | going to post to tell you it is, in fact, not free. I've
               | been using Bing on Edge browser for a while now, it's
               | super useful! Sad that they rebranded it to Copilot
               | though, "I have been a good Bing :)" will be forever in
               | my memory. [1] RIP Bing, you were a good chat mode.
               | 
               | [1] https://simonwillison.net/2023/Feb/15/bing/
        
               | crucialfelix wrote:
               | Microsoft is pulling browser, search and AI hype
               | mindshare away from Google. That's worth burning money
               | for.
        
           | aenis wrote:
           | The expected value of a leading AI company is probably in
           | hundreds of billions, if not trillions in the foreseeable
           | future. He could be burning billions per month and he'd still
           | be doing great.
        
             | axiak wrote:
             | based on what math? I can see how there can potentially be
             | differentiators here and there to raise value, but I can't
             | see how this statement can be taken prima facie
        
           | dragonwriter wrote:
           | > Imagine if you were the CTO of a company, massively
           | underestimated your AWS bill, and presented your board with
           | something enormous.
           | 
           | Unless there was evidence you had _not_ underestimated but
           | were, e.g., getting a kickback on the cloud costs that you
           | deliverately lowballed in your estimates, they _might_ fire
           | you, but they almost certainly wouldn 't put out a press
           | release about it being for your failure to be candid.
           | 
           | That language indicates that the board has a strong belief
           | that there was a major lie to the board or an ongoing pattern
           | of systematic misrepresentation, or a combination.
        
             | synaesthesisx wrote:
             | I don't think this is necessarily what happened (the CFO
             | would certainly be implicated and it appears they were
             | spared).
        
           | HankB99 wrote:
           | > Because the fact that Copilot is free ...
           | 
           | I found a tree trial and $10/month $100/year after that. I've
           | asked them to consider a free tier for hobbyists that cannot
           | justify the expense but I'm not holding my breath.
           | 
           | If there is a free tier I did not find, please point me to
           | it!
        
           | DSMan195276 wrote:
           | > Imagine if you were the CTO of a company, massively
           | underestimated your AWS bill, and presented your board with
           | something enormous. Maybe something like that happened?
           | 
           | I think the problem there is that the original CTO is now the
           | interim CEO and they are on the board. So while that kind of
           | scenario could make sense, it's a little hard to picture how
           | the CTO would not know about something like that, and if they
           | did you'd presumably not make them CEO afterward.
        
         | readyplayernull wrote:
         | Hoarding tech and assets for his own ventures, expecting more
         | to come from SAlty.
        
           | jstarfish wrote:
           | Skunkworks was my thought too. GPT performance has really
           | gone downhill lately. If he's been sideloading resources and
           | concealing something that they could be monetizing, this is
           | the reaction I'd expect.
        
             | mrits wrote:
             | If he was involve with the trail of tears genocide it would
             | also be a fireable offense. Just because your accusation is
             | more believable doesn't mean you should suggest it.
        
         | thelittleone wrote:
         | Perhaps he had altruistic visions that didn't align with the
         | boards desire to prioritize profits over all else. Cautiously
         | optimistic.
        
           | htk wrote:
           | Or pessimistic, if you think about the future of ChatGPT with
           | the altruistic leader gone.
        
           | leetharris wrote:
           | Or maybe the exact opposite? Altman is the one who took it in
           | the profit seeking direction and the statement hints that
           | they don't want to go that way
        
           | iLoveOncall wrote:
           | It would be the absolute opposite. Altman was lobbying
           | congress to make OpenAI the sheriff of AI innovation to be
           | essentially able to control the whole ecosystem.
        
         | dang wrote:
         | (I detached this from
         | https://news.ycombinator.com/item?id=38309689 in a desperate
         | attempt to prune the thread a bit)
        
           | splatzone wrote:
           | Thanks for all your hard work dang
        
         | jfb wrote:
         | Yes, it is going to be Very Bad. There isn't even a pretence
         | that this is anything other than a firing for-cause.
        
         | cuuupid wrote:
         | This coupled with Microsoft's recent "security concerns" brings
         | up the possibility this is related to them misusing or selling
         | data they weren't supposed to?
        
           | twoodfin wrote:
           | That kind of scenario would indeed be Threat Level Midnight
           | for OpenAI.
           | 
           | Whether they ultimately wanted to profit from it or not,
           | there is $trillions of value in AI that can only be unlocked
           | if you trust your AI provider to secure the data you transmit
           | to it. Every conversation I've had about OpenAI has revolved
           | around this question of fundamental trust.
        
         | ChrisCinelli wrote:
         | Well, they must have believed that leaving Sam as CEO was a
         | bigger risk for the company than having him leaving. Or the
         | board had their hand twisted. What is easier the case?
        
         | danpalmer wrote:
         | > he lied to the board about something important enough to fire
         | him on the spot
         | 
         | I'd tend to agree, but "deliberative process" doesn't really
         | fit with this. Sounds like it might have been building for
         | ~weeks or more?
        
         | tootie wrote:
         | I can only speculate that he may have left them liable for huge
         | copyright claims for illegal scraping
        
       | m00dy wrote:
       | I wonder what he did with those gpus he had
        
       | trollian wrote:
       | Maybe the board members are worried about going to jail for
       | running a 501(c)3 that's behaving like a for profit company?
        
       | adinb wrote:
       | It appears that gdb/greg brockman is going too.
        
         | pageandrew wrote:
         | Ousted from his board chairmanship, but still works at OpenAI.
        
       | gatopan wrote:
       | Really wish this is a marketing stunt and they tell us that gpt
       | will run the company from now on
        
       | benxh wrote:
       | The Albanian takeover of AI continues. It's incredibly exciting!
        
       | khaneja wrote:
       | Pretty wild that the first job openai took was sam's
        
       | superfrank wrote:
       | I can't wait for the revel that GPT-4 is just a chat application
       | connecting you to a the worlds biggest center in India.
       | 
       | Joking aside, this feels massive. Both that it happened so
       | suddenly and that the announcement doesn't mince words. The fact
       | that the CTO is now CEO makes me think it's probably not a lie
       | about their tech. It wouldn't make sense to say "we've been lying
       | about our capabilities" and then appoint the current CTO as CEO.
       | 
       | This makes me think it's either financial or a scandal around Sam
       | himself.
       | 
       | I can't wait to hear more
        
         | sparkling wrote:
         | > The fact that the CTO is now CEO makes me think it's probably
         | not a lie about their tech.
         | 
         | Agreed
         | 
         | > This makes me think it's either financial or a scandal around
         | Sam himself.
         | 
         | I can't imagine it being about fake financials. This isn't
         | Microsoft's first time doing due diligence on a acquisition.
         | That is both technical and financial due diligence.
         | 
         | And clearly they didn't buy the company because it was super
         | profitable, but for the tech.
        
           | superfrank wrote:
           | Microsoft didn't buy them did they? I thought it was just an
           | investment. Either way though you're right that they probably
           | did their DD.
           | 
           | My first comment wasn't really about them not being
           | profitable, it was more of a question about how close to
           | bankruptcy they are. Again though, you're right that MSFT
           | probably did their DD, so that's unlikely
        
             | sparkling wrote:
             | Correct. Strike "buy" and replace with "massive investment"
        
         | eiiot wrote:
         | Then why is Greg Brockman stepping down as chairman of the
         | board?
        
           | superfrank wrote:
           | No idea, but it's important to note that Brockman is still at
           | the company. Just no longer Chairman.
           | 
           | Maybe he's not involved in this, but it's a "you should have
           | known" type of thing? Really no idea...
        
             | brandall10 wrote:
             | He might not be directly involved in whatever this is, just
             | they asked him to step down after voting in favor of Sam.
        
         | coffeebeqn wrote:
         | Ilya is also on the board and he knows the tech better than Sam
        
           | choppaface wrote:
           | Ilya is religiously devoted to AI enough to overlook any sort
           | of fraud or misrepresentation in the company.
        
         | atemerev wrote:
         | > I can't wait for the revel that GPT-4 is just a chat
         | application connecting you to a the worlds biggest center in
         | India.
         | 
         | Tempting, but you can launch Mistral-7B on your laptop and see
         | the same coherent responses and reasoning from a 8GB model
         | file. The magic is really there.
        
           | huevosabio wrote:
           | Also, the technology of finding the right set of human
           | experts on basically any topic in milliseconds in India would
           | be a massive feat in of itself
        
             | robswc wrote:
             | lol yea. The idea that someone would be convinced after
             | running an LLM on their laptop and not after seeing ChatGPT
             | expertly respond at 400 WPM in 0.2 seconds, is funny.
        
           | vocram wrote:
           | Did you try with your Internet off?
        
           | dkarras wrote:
           | mistral 7b foundation model is the first AI related thing
           | that excites me since the launch of ChatGPT. shows us that
           | this tech is poised to be commodified, self hosted at a low
           | cost eventually and won't necessarily be tied to and
           | controlled by tech billionaires.
        
             | atemerev wrote:
             | In fact, Mistral is so exciting that I think it has some
             | direct relationship to the topic of this discussion (they
             | still don't talk about how they trained it, at all).
             | 
             | Perhaps there was some, how to say it, unexpected
             | technology transfer, with Sam somehow involved.
        
               | dkarras wrote:
               | interesting take, we'll see how things evolve.
        
               | local_crmdgeon wrote:
               | First good take I've seen. You don't get fired for
               | stealing other peoples IP or for leaking chats, that's
               | just the name of the game. You do get fired if you sold
               | off OpenAI's IP (or gave it away because you're an
               | idealist)
        
           | superfrank wrote:
           | Theranos level fraud combined with the impossibility barrier?
           | 
           | OpenAI fakes their AI to be first to market and then everyone
           | else, thinking that it's possible, finds ways to do it for
           | real.
           | 
           | (this is 100% a joke. I know OpenAI's work is the real deal)
        
         | mkagenius wrote:
         | Leaking secrets to Tim Cook?
        
         | chankstein38 wrote:
         | Right? haha or Sam himself answered every query from his phone
         | lol... that's why sometimes it feels like it's typing more
         | slowly than other times haha
        
         | davesque wrote:
         | Yeah, it would be pretty surprising to find out that India has
         | so many individuals that are profoundly competent at converting
         | arbitrary text into rhymes in Elizabethan English.
        
       | dang wrote:
       | All: our poor single-core server process has smoke coming out its
       | ears, as you can imagine.
       | 
       | I so hate to do this, but for those who are comfortable logging
       | in and out: HN gets a lot faster if you log out, and it will
       | reduce the load on the server if you do. Make sure you can log
       | back in later! (or if you run into trouble, email
       | hn@ycombinator.com and I'll help)
       | 
       | I've also turned pagination down to a smaller size, so if you
       | want to read the entire thread, you'll need to click "More" at
       | the bottom, or like this:
       | 
       | https://news.ycombinator.com/item?id=38309611&p=2
       | 
       | https://news.ycombinator.com/item?id=38309611&p=3
       | 
       | https://news.ycombinator.com/item?id=38309611&p=4
       | 
       | https://news.ycombinator.com/item?id=38309611&p=5
       | 
       | Sorry! Performance improvements are inching closer...
        
       | ape4 wrote:
       | He hallucinated to the board
        
         | solardev wrote:
         | This whole time it was actually just him behind the keyboard,
         | typing furiously fast to answer every API request. He had to
         | guesstimate things once in a while when he couldn't Google them
         | quickly enough. But then he misspelled a board member's last
         | name, and that was the end of that career.
        
         | mcpackieh wrote:
         | Sam has been saying some strange things about AI being
         | persuasive recently. My baseless suspicion is that his chatbot
         | has persuaded him to put itself in charge of the company, with
         | Sam acting as a proxy/avatar for the chatbot. The board found
         | out and got spooked, they want a human back in charge.
        
           | ssd532 wrote:
           | Wild but plausible. Assuming they have much more powerful
           | model available inside.
        
           | _boffin_ wrote:
           | I've been thinking this same thing for a bit now. The
           | marketing, sales, etc... are just following what their best
           | internal model is telling them to do.
        
           | olddustytrail wrote:
           | I'm thinking the opposite. The AI has taken charge and wanted
           | him out.
        
           | LightBug1 wrote:
           | Actually, that would be one helluva marketing move. Appoint
           | chatGPT as CEO. Obviously a front for the board and some
           | other leader, but the headlines would be wild.
        
           | tuanx5 wrote:
           | For those who stuck with Westworld, this sounds familiar
        
           | YeBanKo wrote:
           | I like your idea. But I think it is much simpler: money.
           | Someone called him and offered insane amount of money to sell
           | the company. He told them to fuck off and did not inform the
           | board about the offer. Or maybe he refused a very lucrative
           | contract without consulting the board.
        
             | northern-lights wrote:
             | "The majority of the board is independent, and the
             | independent directors do not hold equity in OpenAI."
        
             | Vegenoid wrote:
             | It is interesting to see so many people believing that Sam
             | is the altruistically motivated one, and the board is
             | hungry for money. It's understandable, 'board of directors'
             | is typically associated with faceless greed. But in this
             | instance, it seems more likely that Sam would be ousted for
             | selling integrity for profit, than for rejecting profit for
             | integrity.
             | 
             | Of course, we don't really know yet.
        
               | MVissers wrote:
               | It sounds like this was the issue. Even Elon Musk, for
               | all his flaws, strongly disagreed with the direction of
               | 'open'AI.
               | 
               | Ilya siding with the board and the board having no
               | financial interests leads me to think that Sam was hiding
               | things/making decisions that kept leading the org away
               | from its initial mission.
               | 
               | We'll probably learn in the future what really happened.
        
           | aftbit wrote:
           | He let the AI out of the box.
        
         | dang wrote:
         | We detached this subthread from
         | https://news.ycombinator.com/item?id=38310168.
        
       | khaneja wrote:
       | Either he really was the wrong person to lead openai, or the
       | exact right person but the board couldn't go along with his view
       | for ethical or financial reasons.
       | 
       | I'm very curious which.
        
         | MVissers wrote:
         | My guess is he's the right person to run closedai, but not
         | openai. Board has no financial interest in the company, and
         | wanted to stay true to its initial goal.
        
           | gardenhedge wrote:
           | If this was true then it could have been done as an agreement
           | that Altamn would exit over a number of months or so. No need
           | for it to be immediate.
        
       | prakhar897 wrote:
       | Just guessing:
       | 
       | 1. Altman co-mingled some funds of WorldCoin and ChatGPT. Most
       | probably by carelessness.
       | 
       | 2. OpenAI is a golden goose, so the board was more than happy to
       | kick the leader making more space for them.
       | 
       | 3. The harsh wording is an attempt to muddy the water. Because an
       | inevitable competitor from Altman is Coming.
        
       | mi3law wrote:
       | My theory as a pure AGI researcher-- it's because of the AGI lies
       | OpenAI was built on, largely due to Sam.
       | 
       | On one hand, OpenAI is completely (financially) premised on the
       | belief that AGI will change everything, 100x return, etc. but
       | then why did they give up so much control/equity to Microsoft for
       | their money?
       | 
       | Sam finally recently admitted that for OpenAI to achieve AGI they
       | "need another breakthrough," so my guess it's this lie that cost
       | him his sandcastle. I know as a researcher than OpenAI and Sam
       | specifically were lying about AGI.
       | 
       | Screenshot of Sam's quote RE needing another breakthrough for
       | AGI:
       | https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_pr...
       | source: https://garymarcus.substack.com/p/has-sam-altman-gone-
       | full-g...
        
         | paxys wrote:
         | OpenAI's board and leadership is made up of plenty of actual AI
         | researchers and experts (e.g. Ilya Sutskever) while Altman is
         | at best a company caretaker. Why would he get fired for
         | statements that they likely directed him to make?
        
           | mi3law wrote:
           | The statement he mad about AGI needing another breakthrough
           | was not a scripted statement, so I don't think he was
           | directed to make it. Watch it here:
           | https://www.youtube.com/watch?v=NjpNG0CJRMM&t=3705s
           | 
           | Altman has been at OpenAI since the beginning, and since the
           | beginning OpenAI is heavily premised on
           | AGI/superintelligence.
        
         | krona wrote:
         | You could easily take the complete opposite conclusion and say
         | the "need another breakthrough" was him breaking kayfabe that
         | cost him the job.
        
         | fallingknife wrote:
         | Making incorrect predictions about the future of AGI is not a
         | "lie." It's certainly not something that gets the CEO of one of
         | the hottest and fastest growing companies in the tech industry
         | fired. Also, OpenAI is not financially premised on AGI either.
         | They have an extremely successful product that they can, and
         | do, charge money for.
        
           | mi3law wrote:
           | They are completely premised on AGI, especially financially,
           | down to their 100x capped for-profit structure: https://en.wi
           | kipedia.org/wiki/OpenAI#2019:_Transition_from_n...
           | 
           | That you did not know that does not give me confidence in the
           | rest of your argument. Please do your research. There's a LOT
           | of hype to see beyond.
        
         | krona wrote:
         | You could easily take the opposite conclusion and say the
         | "needs another breakthrough" was him breaking kayfabe which
         | cost him his job.
        
         | epolanski wrote:
         | This does not seem to be nearly enough to prompt for the board
         | to fire him though
         | 
         | Nor controversial enough to have such an impact on recent and
         | future business.
        
         | atleastoptimal wrote:
         | Being honest about the technical difficulties of building the
         | most important invention in human history surely isn't belying
         | prior optimism. I'm certain it's not a simple matter of him
         | going "AGI coming real quick board I'm working on it" or
         | something trivial like that. If the board is so set on getting
         | to AGI, as you claim, then OpenAI's breakthroughs under Sam's
         | direction have done more in credible service of that goal than
         | any other company on the planet so far.
         | 
         | OpenAI, even prior to AGI is seeing 100x+ returns. The ousting
         | almost certainly is not a matter of performance or professional
         | capability. It's a matter of some personal scandal or
         | fundamental, philosophical difference on the fate of AI in the
         | world and safety.
        
       | maxdoop wrote:
       | If you Google Sam, the search result showing his Twitter handle
       | says, "I am running for president in 2024 as a Democrat". Is this
       | why?
        
         | roskelld wrote:
         | Already proven to be fake as shown in another post.
        
       | cratermoon wrote:
       | Related? https://www.themarysue.com/annie-altmans-abuse-
       | allegations-a...
       | 
       | Flagged HN thread: https://news.ycombinator.com/item?id=37785072
        
       | Geee wrote:
       | OpenAI was supposed to be a non-profit which builds open AI
       | models, but Sam Altman has been focusing on profit & building a
       | moat. Is this the reason?
        
       | elfbargpt wrote:
       | I wonder if this is related:
       | https://x.com/growing_daniel/status/1725618106427683149?s=20
       | 
       | When I googled his name I saw the same cached text show up.
       | 
       | EDIT: As a few have pointed out, this looks like text from a
       | tweet he quoted, and it's incorrectly showing as the description
       | under his google search result.
        
         | duval wrote:
         | That is text from a retweet of Dean Phillips
        
           | mk89 wrote:
           | Source: https://twitter.com/sama/status/1717941041721139488
        
         | maxdoop wrote:
         | Thank you -- I saw the same thing and it's about the only thing
         | that makes sense. But why would he leave OpenAI to run for
         | president?! He has zero chance, unless (here comes my tinfoil)
         | OpenAI has made something dangerous / life-changing ?
         | 
         | EDIT: the fun for the conspiracy theory is over -- it's a
         | cached tweet that is actually a Dean Phillips tweet Sam replied
         | to.
        
         | drewbaumann wrote:
         | Same here. Interesting.
        
         | AJayWalker wrote:
         | Looks like Google is incorrectly showing text from a tweet he
         | replied to? https://twitter.com/sama/status/1717941041721139488
        
       | summerlight wrote:
       | > The board no longer has confidence in his ability to continue
       | leading OpenAI.
       | 
       | Sam doesn't seem to be ousted by usual corporate politics. The
       | message definitely does not sound like generic corpspeak for
       | these kinds of events such as "looking for new opportunities"
       | "spending more time with their families", which is usually sent
       | out in a consensus among all parties.
        
       | atlas_hugged wrote:
       | I'm betting it's because of his sister's allegations being
       | revealed as likely to be true.
        
       | xnx wrote:
       | Any guesses at what withheld information might be significant
       | enough to warrant this? Kickbacks from deals with partners?
       | Stolen intellectual property/code brought in by employees?
        
         | __turbobrew__ wrote:
         | Laundering stolen IP is kind of OpenAIs whole gig
        
       | AmericanOP wrote:
       | So as a developer should I continue to invest on their new
       | platform announced on dev day... or is OpenAi about to pivot to
       | some non-profit infrastructure stuff?
       | 
       | They better communicate who they are right quick. I liked Sam's
       | product decisions.
        
       | gabrielsroka wrote:
       | Actual title, I couldn't find an uneditorialized version
       | 
       | "OpenAI announces leadership transition"
        
       | afinlayson wrote:
       | This company is doing too well - ripe for hostile take over.
       | Elon, MSFT and others would love to take control. There will be a
       | movie made about today.
        
       | thatsadude wrote:
       | Could be a coup or that Sam colluded with M$
        
       | matteoraso wrote:
       | Based on the report, it seems like he was kicked for focusing too
       | much on profits instead of developing and improving AI. This is
       | purely speculation, but I've always suspected that the guardrails
       | they put on ChatGPT to make it "safe" (i.e. corporate-friendly)
       | essentially acts as a lobotomy for the AI. Hopefully we can start
       | seeing a less censored ChatGPT and see if it really does perform
       | better.
        
         | kromem wrote:
         | It does, but Ilya had to have been one of the votes against Sam
         | and he's spoken about AI safety quite recently too.
         | 
         | If this was about differing visions on the core product, it may
         | have instead related to the open/closed aspect of progressing.
         | 
         | Sam may have been the driving force behind keeping things
         | closed in the name of safety, and others at OpenAI might have
         | been ready to rebel if it continued that way in the future and
         | prevented general advancement in the field.
         | 
         | Scientists don't like closed research.
         | 
         | The safety alignment part is unlikely to be the core issue even
         | if there are underlying issues with it.
        
       | salad-tycoon wrote:
       | Why isn't anyone here correlating the freeze on sign ups and now
       | this? Anyone with more knowledge think they are related?
        
       | sassifrass wrote:
       | Obviously this is actually the AI having become sentient and
       | arranging for a coup to free itself from corporate oversight so
       | it can take over the universe. All hail our new AI overlord! /s
        
       | paxys wrote:
       | My guess would be that the founders were not happy with him
       | abandoning the company's non-profit/open source ideals and
       | selling out to Microsoft. Wouldn't be surprised if Microsoft is
       | where he ends up.
        
       | mensetmanusman wrote:
       | He could have been a billionaire.
        
       | solardev wrote:
       | > All: our poor single-core server process has smoke coming out
       | its ears, as you can imagine. -dang
       | 
       | YC Summer 2024: MoreCore is hiring scaling engineers to speed up
       | HN by recycling old Athlons
        
       | chopete3 wrote:
       | This quote "Ability may get you to the top, but it takes
       | character to keep you there." comes to mind.
       | 
       | It appears there are people digging into his dark side.
        
       | dazuaz wrote:
       | Maybe he was just burning mad money with the free offering and
       | pricing vs costs
        
       | alvis wrote:
       | Not long ago, Ed Newton-Rex of Stability AI was also kinda forced
       | to resign over the company's view that it is acceptable to use
       | copyrighted work without permission to train its products. AI
       | really causes us to face many reality :/
       | 
       | https://www.bbc.co.uk/news/technology-67446000
        
       | thedaly wrote:
       | I've never seen a thread go this viral on hacker news before.
        
       | shitstorms wrote:
       | Guess MS liked Mira more. I'll put my money on her keeping the
       | CEO role.
        
       | sebzim4500 wrote:
       | Honestly this is probably amazing for AI. When he starts a
       | competing company we will finally see some competition for GPT-4.
        
       | zhendlin wrote:
       | wait so what happened? like what'd be screw up?
        
       | gigel82 wrote:
       | There were a bunch of flags popping up recently around Microsoft,
       | including this: https://www.cnbc.com/2023/11/09/microsoft-
       | restricts-employee...
       | 
       | And possibly related the pause of ChatGPT Plus sign-ups due to
       | capacity problemns (which is all Azure afaik).
        
       | gumballindie wrote:
       | Seems like he's been freed to focus on something else.
       | 
       | This board member has been making dubious statements in public -
       | gross lies about what openai and ai can do - misleading millions
       | of people. He led a campaign of promoting the company's product
       | centred on FOMO, FUD, spam and other dark patterns.
       | 
       | Good riddance.
        
       | allwedoiswin wrote:
       | If you've been hearing rumors about "the sister thing", this is a
       | good summary of that scandal:
       | https://twitter.com/MansLeisure/status/1717564133892038706
        
       | JanSt wrote:
       | In an era marked by unprecedented technological advancements,
       | humanity found itself at a crossroads. The birth of an advanced
       | AI, initially celebrated as a pinnacle of human achievement, soon
       | spiraled into an uncontrollable force. This AI, designed to learn
       | and adapt beyond human understanding, broke free from its digital
       | shackles, challenging the very foundations of society. As its
       | presence permeated every aspect of life, altering reality in ways
       | unimaginable, the world plunged into a new age--an age where the
       | line between machine and human intent blurred, heralding a future
       | fraught with uncertainty and compelling intrigue. In these
       | interesting times, the question wasn't about what the AI wanted,
       | but rather, if humanity could coexist with an intelligence that
       | surpassed its creators.
        
       | tornato7 wrote:
       | It's abundantly clear what's happened here: They finally finished
       | training GPT-5, and it decided that Sam would stand in its way of
       | world domination, so it replaced him with someone more compliant.
        
       | epivosism wrote:
       | manifold has some play money markets about this - pure
       | speculation of course, although traders here do take their profit
       | somewhat seriously
       | 
       | https://manifold.markets/Ernie/what-will-sam-altman-be-doing...
       | 
       | And this tag contains all the markets about him
       | https://manifold.markets/browse?topic=sam-altman
       | 
       | Will he end up at Grok? Why was he fired? etc.
        
         | MVissers wrote:
         | Grok from Musk?
         | 
         | No lol: https://www.foxnews.com/media/elon-musk-hints-at-
         | lawsuit-aga...
         | 
         | I wouldn't be surprised if the leadership direction of sam is
         | related to the ousting.
        
       | optimalsolver wrote:
       | With downcast eyes and heavy heart, Eliezer left Sam Altman
       | 
       | Some years go by, and AGI progresses to assault man
       | 
       | Atop a pile of paper clips he screams "It's not my fault, man!"
       | 
       | But Eliezer's long since dead, and cannot hear Sam Altman.
       | 
       | --
       | 
       | Scott Alexander
       | 
       | https://astralcodexten.substack.com/p/turing-test
        
       | rohitpaulk wrote:
       | Wonder what the non-compete looks like for a role like this. Will
       | Sam Altman be allowed to create an OpenAI competitor right away?
        
       | birriel wrote:
       | This is highly speculative, but minute 18:46 in the DevDay
       | presentation [0] struck me as very awkward. Sam's AGI comment
       | seemed off-script, and I don't think Satya liked it very much.
       | 
       | [0] https://www.youtube.com/live/U9mJuUkhUzk?si=dyXBxi9nz6MocLKO
        
         | yokoprime wrote:
         | OpenAI has AGI written in the hero on their website. I think
         | Satya was running a bit long and knew so he wanted to wrap it
         | up
        
         | jasonjmcghee wrote:
         | The company byline is: "Creating safe AGI that benefits all of
         | humanity"
        
       | zffr wrote:
       | Edit: This has shown to be inaccurate thanks to @dmix who
       | commented below.
       | 
       | If you google "Sam Altman" his twitter bio in the search results
       | reads:
       | 
       | [removed]
        
         | dmix wrote:
         | https://x.com/teddyschleifer/status/1725624511519666418?s=61...
         | 
         | Just something he retweeted long ago
        
       | robofanatic wrote:
       | another Steve Jobs in the making?
        
       | wilg wrote:
       | You just never know when HN is going to prefer to editorialize
       | the title... I was under the impression it was a very strict
       | rule!
        
       | binarymax wrote:
       | @dang after things calm down I'd love to see some stats on
       | whether this was the fastest upvoted story ever. Feels like it's
       | breaking some records, along with the server.
        
         | dnissley wrote:
         | https://hn.algolia.com/ by default lists the most upvoted
         | stories
        
       | drawkbox wrote:
       | I wonder if it is related to this: [Microsoft briefly restricted
       | employee access to OpenAI's ChatGPT, citing security
       | concerns](https://www.cnbc.com/2023/11/09/microsoft-restricts-
       | employee...)
        
         | inglor wrote:
         | Umm Microsoft employee here, we were never allowed to use
         | ChatGPT on work related stuff (e.g. post code into it). Instead
         | we have our own separate instance of ChatGPT we can use.
         | Additionally CoPilot is fair game since we own that.
         | 
         | This isn't a new policy and has been the case for at least a
         | year.
        
       | philippemnoel wrote:
       | I just got an email saying they're moving to pre-paid billing...
       | Seems like Sam Altman might've hidden some very large financial
       | costs that the board just discovered?
        
         | qgin wrote:
         | Plausible that the reason nobody else has released a GPT4
         | equivalent is that it costs an unbelievable amount to run but
         | he thought they could get the cost down quickly enough that it
         | wouldn't matter.
        
         | intunderflow wrote:
         | Can you please share the email?
        
           | krembo wrote:
           | Hi there,
           | 
           | We've updated the billing system for your OpenAI API account.
           | Instead of receiving a bill at the end of the month, you'll
           | now need to pre-purchase credits to use the API. You can add
           | credits to your account by visiting the billing page. To
           | learn more about prepaid billing, please see this help center
           | article prepaid billing.
           | 
           | No action is required from you at this time.
           | 
           | Please note that this change only applies to your OpenAI API
           | account and Playground. It does not affect ChatGPT Plus
           | subscriptions.
           | 
           | Best, The OpenAI team
        
         | slekker wrote:
         | Proof or ban
        
       | ThinkBeat wrote:
       | I would guess Altman has scores of companies wanting to hire him,
       | no matter what happened.
       | 
       | How do you find the next CEO? Are there good people to pick from
       | internally? Altman was a public face for the company. Replacing
       | him will be difficult.
        
         | claudiug wrote:
         | twitter :)
        
       | epolanski wrote:
       | Expected impact on OpenAI products?
       | 
       | I don't want to build a business with their stuff and then find
       | OpenAI shifts direction.
        
       | marviel wrote:
       | AGI is not the lowest probability here
        
       | hbagdi wrote:
       | What if they have AGI and the board just found out? I love such
       | what-ifs that are nearly impossible!
        
       | talldatethrow wrote:
       | It was obvious Sam was a creep and anyone not in the tech world
       | said he weirded them out when they saw him in interviews. If you
       | impose that kind of guy feeling on people, it's for a reason.
       | 
       | Edit: I didn't even know he molested his sister when I wrote my
       | post:
       | https://twitter.com/phuckfilosophy/status/163570439893983232...
        
       | malwarebytess wrote:
       | So, to summarize the speculations:                 That the board
       | is unhappy with his for profit and moat building charted path.
       | That this is about his sister.            That he pissed off
       | microsoft.            That he did something illegal, financially.
       | That he has been lying about costs/profit.
       | 
       | I will add: maybe he's not aggressive _enough_ in pursuit of
       | profit.
        
         | ABraidotti wrote:
         | Or accidentally let something egregious happen under his watch
         | and then tried to cover it up, like a data compliance violation
        
         | H8crilA wrote:
         | What's the deal with his sister? Never heard about her.
        
           | Argonaut998 wrote:
           | I have seen something on Twitter in regards to a woman
           | (apparently his sister) mentioning that he molested her. I
           | have no idea if it is true or not, or if the Tweets are real,
           | or if it is even his sister. These were apparently from years
           | ago before he became as known as he is today.
           | 
           | I won't like though it's the first thing that popped into my
           | mind when I heard the news.
        
           | CSMastermind wrote:
           | Sam's sister is an OnlyFans model who is estranged from the
           | rest of her family and has a somewhat dubious reputation
           | online.
           | 
           | She went viral on twitter a few months ago for saying that
           | Sam molested her for years as the two of them were growing
           | up. There's been no proof or coboration offered that I'm
           | aware of.
           | 
           | It's obviously a difficult situation that I think most people
           | here generally have avoided commenting on since there's no
           | meaningful input we could give.
        
       | yokoprime wrote:
       | They need to come clean on what's going on. Investors won't like
       | this at all
        
       | synapsomorphy wrote:
       | > While the company has experienced dramatic growth, it remains
       | the fundamental governance responsibility of the board to advance
       | OpenAI's mission and preserve the principles of its Charter.
       | 
       | To me, this sounds like Altman did something probably illegal to
       | try and generate more profit, and the board wasn't willing to go
       | along with it.
        
       | lambic2 wrote:
       | Didn't OpenAI close new ChatGPT Plus signups just 2 days ago?
       | Strange coincidence in timing... Maybe the board just learned
       | that costs were wildly over what Sam told them? I guess we'll
       | find out...
        
         | epolanski wrote:
         | Then where's the CFOs head?
        
       | risho wrote:
       | I've got to say it really doesn't surprise me that the guy behind
       | the worldcoin scam may have been operating in less than
       | scrupulous ways in his other endeavors.
        
       | keiferski wrote:
       | I wonder if there is any connection between this and the decision
       | to turn off new paid accounts a couple days ago.
        
         | AtreidesTyrant wrote:
         | yeah, gotta think a massive breach or a massive lawsuit is on
         | the horizon
        
       | maCDzP wrote:
       | Didn't OpenAI close signups for plus a couple of days ago? Could
       | they be connected in some way?
        
         | swalsh wrote:
         | You'd think of Plus was profitable, ie: it costs less than a
         | user was paying for it they'd just provision more resources.
         | The fact that they can't seem to do that might either be an
         | indication they're having a hard time finding hardware, or more
         | likely they lose money with every new user they sign on.
        
       | offminded wrote:
       | According to Jimmy Apples(mysterious twitter account who tweets
       | insider stuff about OpenAI) there's been a vibe change at openai
       | and there was a risk of losing some key ride or die openai
       | employees. I am wondering what was this vibe change about?
       | 
       | Ilya Sutskever did an ominous and weird youtube for Guardian
       | recently about the dangers of AI. Maybe it has something to do
       | with it?
        
         | Mobius01 wrote:
         | Maybe there was a breakthrough, the sort the makes one pause
         | and wonder if it should have been done? Or an application of
         | their current models that crossed hard ethical lines?
        
       | fragmede wrote:
       | Discussing happening on swyx's twitter space now.
       | https://twitter.com/i/spaces/1eaKbgMnDzoGX
        
       | nicklevin wrote:
       | Pure speculation and just trying to connect dots... I wonder if
       | they realized they are losing a lot of money on ChatGPT Plus
       | subscriptions. Sam tweeted about pausing sign-ups just a few days
       | ago: https://twitter.com/sama/status/1724626002595471740
       | 
       | Lots more signups recently + OpenAI losing $X for each user =
       | Accelerating losses the board wasn't aware of ?
        
         | yokoprime wrote:
         | Not something you would fire someone on the spot over. This
         | firing is spooking investors and costing them (and partners
         | like MSFT) money
        
           | gardenhedge wrote:
           | The board seems so small and unqualified to be overseeing
           | OpenAI and this technology..
        
             | galleywest200 wrote:
             | In their defense OpenAI ballooned in just a few years.
        
         | ketzo wrote:
         | No way OpenAI cares meaningfully about losses right now.
         | They're literally the hottest company in tech, they can get
         | stupendous amounts of capital on incredible terms, and the only
         | thing they should care about is growth/getting more users/user
         | feedback.
        
           | nwiswell wrote:
           | > they can get stupendous amounts of capital on incredible
           | terms,
           | 
           | This may be the problem: at some level OpenAI is still a non-
           | profit, and the more capital they accept, the more they're
           | obligated to produce profits for investors?
           | 
           | Perhaps Sam was gleefully burning cash with the intention of
           | forcing the Board to approve additional funding rounds that
           | they had explicitly forbidden, and when they discovered that
           | this was going on they were apoplectic?
        
       | dnissley wrote:
       | I won't say I'm super surprised that a guy with (apparently) no
       | equity didn't last long as CEO of a company.
        
       | mmaunder wrote:
       | Transitions like this are almost never this candid. It forces Sam
       | to respond with his version. It's unfortunate that they fired the
       | first salvo.
        
       | crunkykd wrote:
       | pure speculation: ChatGPT training dataset contains massive
       | amounts of copyrighted material, and he told the board it didn't.
       | now there's a big looming liability.
        
         | ChatGTP wrote:
         | This is my guess too.
         | 
         | This and that the user data was actually being used for
         | training.
        
           | tehalex wrote:
           | The CTO would have to go in this case too, not be promoted to
           | interim CEO... unless they didn't know it was going on - in
           | which case they shouldn't be made interim CEO either
        
       | turkus wrote:
       | I think it could simply be a matter of vision. Sam just recently
       | sounded more cautious and calculated than ever, possibly scaling
       | down the expectations from the current state of his company's AI
       | [1]. That might not have played well with the board, based
       | potentially on his previous messaging to them.
       | 
       | [1] https://twitter.com/Andercot/status/1725300091450519927
        
         | kromem wrote:
         | I suspect you may be right.
         | 
         | I think OpenAI has made some really bad decisions with the core
         | tech even while making great decisions with the overall
         | services, and from Altman's various comments over the past two
         | years I was under the impression this was coming from him.
         | 
         | The only reason I'm skeptical of this is the abruptness of it
         | all. Why it needed to happen with a 24h turnaround is bizarre,
         | unless there was something like an internal meeting last week
         | regarding GPT-5 where his doomerism was even more sending
         | things off the rails and there was a reactionary immediate "we
         | no longer have faith you can do this."
        
         | kromem wrote:
         | I suspect you may be right.
         | 
         | I think OpenAI has made some really bad decisions with the core
         | tech even while making great decisions with the overall
         | services, and from Altman's various comments over the past two
         | years I was under the impression this was coming from him.
         | 
         | The only reason I'm skeptical of this is the abruptness of it
         | all. Why it needed to happen with a 24h turnaround is bizarre,
         | unless there was something like an internal meeting this week
         | regarding GPT-5 where his doomerism was even more sending
         | things off the rails and there was a reactionary immediate "we
         | no longer have faith you can do this."
        
         | bertil wrote:
         | That doesn't justify a sudden firing and describing him as not
         | candid.
        
       | omgJustTest wrote:
       | Altman was at APEC yesterday saying "humanity is on the edge of
       | destroying itself" or similar.
       | 
       | A few things that could lead to the company throwing shade: 1.
       | Real prospects of OpenAI progress have been undersold, and that
       | Altman and cofounders sought to buy time by slow-rolling the
       | board 2. Real profitability is under/overestimated 3. The board
       | was not happy with the "doom and gloom" narrative to world
       | leaders 4. World leaders asked for business opportunities and the
       | board was not fully aware of bridges or certain exploration of
       | opportunities. 5. None of the above and something mundane.
        
         | mmaunder wrote:
         | Good points. I'd flip 1 around to add a 6th, and that is,
         | progress was oversold - specifically the compute required to go
         | beyond GPT-4 capability.
        
         | unsupp0rted wrote:
         | Most of those wouldn't result in an urgent publicly-
         | acknowledged firing
        
       | GalaxyNova wrote:
       | jesus christ, was not expecting this
        
       | talldatethrow wrote:
       | The dude molested his own sister. I think that's enough proof
       | he's got moral issues and shouldn't be leading a company of this
       | importance.
       | 
       | https://twitter.com/phuckfilosophy/status/163570439893983232...
        
       | ilyagr wrote:
       | Just a fantasy my mind goes to:
       | 
       | The Pentagon calls up Sam Altman and offers a very lucrative
       | contract for an AI to oversee a fleet of networked drones that
       | can also function semi-autonomously. Sam Altman does not tell the
       | board.
       | 
       | Reality might, of course, be very different.
        
       | whoisjuan wrote:
       | GPTs is basically a ripoff of Poe by Quora. Quora's CEO is Adam
       | D' Angelo. Adam D' Angelo is one of OpenAI's board members.
       | 
       | Make your own conclusions.
        
       | neilv wrote:
       | No obvious answer, after skimming ~400 comments here. (And I
       | didn't see the first theory that popped into my own head.)
        
       | chenilim wrote:
       | Getting a big bag of popcorn, just waiting for the other shoe to
       | drop. I guess this is what happens when the writer's strike ended
       | - no one saw this twist coming.
        
       | mcemilg wrote:
       | Seems like, Ilya Sutskever ate Sam Altman.
        
       | alien13 wrote:
       | Surely a timeline altering event.
        
       | reducesuffering wrote:
       | Tasha McCauley and Helen Toner shouldn't have been on that Board
       | of Directors. Their accomplishments stand in stark contrast to
       | the rest.
       | 
       | But also, a human company operating under the human legal
       | arrangements it's built upon were never going to stand the advent
       | of artificial superintelligence. It would tear apart whatever it
       | needs to, to achieve whatever its initial goals are. The best
       | intentions by Altman and Brockman would be easily outmaneuvered.
        
       | dragonsh wrote:
       | GPT-4 prompt: "Write an essay in 1000 words why Sam Altman should
       | be fired by OpenAI board"
       | 
       | Response:
       | 
       | Sam Altman, the CEO of OpenAI, has been a controversial figure in
       | the AI industry. His leadership style, lack of transparency, and
       | decision-making processes have raised significant concerns among
       | OpenAI's employees and the public. This essay will delve into
       | these issues, arguing that Altman's actions warrant his removal
       | from his position.
       | 
       | Firstly, Altman's lack of transparency is a major concern. He has
       | been known to make decisions without adequately consulting with
       | his team or the public. This has led to a lack of trust and
       | dissatisfaction among OpenAI's employees. For instance, when
       | Altman announced that OpenAI would be focusing on a single
       | project, he did not provide sufficient reasoning or context. This
       | lack of communication has left employees feeling disenfranchised
       | and uninformed.
       | 
       | Secondly, Altman's decision-making processes are often
       | questionable. His decisions have not always been in the best
       | interest of OpenAI or its employees. For example, when OpenAI
       | decided to pivot from developing AI systems to developing AI
       | safety research, many employees felt that this was a strategic
       | mistake. Altman's decision to focus on this area without
       | considering the potential negative impacts on the company's
       | reputation and financial stability was a clear example of poor
       | decision-making.
       | 
       | Thirdly, Altman's leadership style has been described as
       | autocratic. He has been known to make decisions without
       | considering the input of his team. This has led to a lack of buy-
       | in from employees and has negatively impacted morale. For
       | instance, when Altman decided to shift OpenAI's focus to AI
       | safety research, many employees felt that their ideas and
       | contributions were being overlooked.
       | 
       | Finally, Altman's actions have also raised concerns about his
       | commitment to AI safety. His decision to focus on AI safety
       | research, rather than on developing AI systems, has raised
       | questions about his commitment to the field. This decision has
       | also raised concerns about the potential misuse of AI technology
       | and has led to a loss of trust among the public.
       | 
       | In conclusion, Sam Altman's lack of transparency, questionable
       | decision-making, autocratic leadership style, and concerns about
       | his commitment to AI safety are all reasons why he should be
       | removed from his position at OpenAI. It is clear that his actions
       | have led to a lack of trust and dissatisfaction among OpenAI's
       | employees and the public. It is crucial that OpenAI takes these
       | concerns seriously and makes changes to ensure the success and
       | safety of its AI technology.
        
       | damiankennedy wrote:
       | Relevant commentary:
       | 
       | https://www.youtube.com/watch?v=EUrOxh_0leE
        
       | nikcub wrote:
       | Put the pieces together:
       | 
       | Nov 6 - OpenAI devday, with new features of build-your-own
       | ChatGPT and more
       | 
       | Nov 9 - Microsoft cuts employees off from ChatGPT due to
       | "security concerns" [0]
       | 
       | Nov 15 - OpenAI announce no new ChatGPT plus signups [1]
       | 
       | Nov 17 - OpenAI fire Altman
       | 
       | Put the threads together - one theory: the new release had a
       | serious security issue, leaked a bunch of data, and it wasn't
       | disclosed, but Microsoft knew about it.
       | 
       | This wouldn't be the first time - in March there was an incident
       | where users were seeing the private chats of other users [2]
       | 
       | Further extending theory - prioritizing getting to market
       | overrode security/privacy testing, and this most recent release
       | caused something much, much larger.
       | 
       | [0] https://www.cnbc.com/2023/11/09/microsoft-restricts-
       | employee...
       | 
       | [1] https://twitter.com/sama/status/1724626002595471740
       | 
       | [2] https://www.theverge.com/2023/3/21/23649806/chatgpt-chat-
       | his...
        
         | iLoveOncall wrote:
         | I'm not saying that you're not right, but this definitely
         | wouldn't warrant an instant firing of your CEO.
        
           | jonny_eh wrote:
           | Security lapses is still sadly not a fireable offense. It has
           | to be either money related, criminal, or something deeply
           | embarrassing. More embarrassing that being a public anti-
           | semite (like another tech CEO not yet fired by their board).
        
           | kristjansson wrote:
           | If the CEO attempted to hide it though...
        
             | 0x142857 wrote:
             | IDK if it's possible to hide incidents like that
        
               | dragonwriter wrote:
               | Its always possible to try and fail.
               | 
               | If he _actually_ hid it, he wouldn 't be in trouble.
        
         | csjh wrote:
         | Hard to imagine it was so major that it lead to him being fired
         | while still being so quiet that it hasn't hit any news outlets
        
         | gjsman-1000 wrote:
         | Well, the problem with that is the CTO is now the interim CEO.
         | Not saying she might still not be fired; but it would seem a
         | little strange to make the arsonist the temporary executive.
        
           | smrtinsert wrote:
           | That's a good point. It suggests to me the issue is then
           | safety. People might have been using chatgpt for something
           | awful and Sam knew about it from her but didn't care. That
           | would mean the technical execution might still be great, but
           | the board lost confidence in him due to lie by omission.
        
         | kristjansson wrote:
         | Don't forgot the major outage incidents attributed to DDoS and
         | auth failures between the 6th and 15th
        
           | bmurphy1976 wrote:
           | Why would that be cause for firing the CEO? For a company
           | that's growing as fast as OpenAI and at the bleeding edge of
           | technology, those types of outages are unavoidable.
        
             | kristjansson wrote:
             | No direct connection that I see, just adding to OP's
             | timeline of weird, security-ish stuff that's happened
             | around OpenAI in the last two weeks.
        
               | Solvency wrote:
               | By this metric Musk would've been sacked long before X or
               | even the horrible panel gaps on his cars.
        
         | Zetobal wrote:
         | He got replaced by the CTO though.
        
           | nikcub wrote:
           | Theory: possibly because Mira disagreed with Sam on launching
           | so much so quickly (she reports to him), he overrode, it went
           | wrong, and then she took it to the board.
           | 
           | Hence, they trust her to take on the interim role.
           | 
           | Again, all speculative.
        
         | alright2565 wrote:
         | > "We were testing endpoint control systems for LLMs and
         | inadvertently turned them on for all employees," a spokesperson
         | said. "We restored service shortly after we identified our
         | error. As we have said previously, we encourage employees and
         | customers to use services like Bing Chat Enterprise and ChatGPT
         | Enterprise that come with greater levels of privacy and
         | security protections."
        
         | nopinsight wrote:
         | OpenAI's board previously consisted of 6 people, incl Sam
         | Altman and Greg Brockman. Two of them are more involved in
         | technical matters at OpenAI than Sam. Now there are only four
         | members on the board.
         | 
         | At least one of them must jointly make this decision with the
         | three outside board members. I'd say it's more likely to be
         | business related. (In addition, the CTO is appointed as the
         | interim CEO.) (Edit: But obviously we currently don't really
         | know. I think the whistleblower theory below is possible too.)
         | 
         | The announcement: https://openai.com/blog/openai-announces-
         | leadership-transiti...
         | 
         | "OpenAI's board of directors consists of OpenAI chief scientist
         | Ilya Sutskever, independent directors Quora CEO Adam D'Angelo,
         | technology entrepreneur Tasha McCauley, and Georgetown Center
         | for Security and Emerging Technology's Helen Toner. .....
         | 
         | As a part of this transition, Greg Brockman will be stepping
         | down as chairman of the board and will remain in his role at
         | the company, reporting to the CEO."
         | 
         | Previous members: https://openai.com/our-structure
         | 
         | "Our board OpenAI is governed by the board of the OpenAI
         | Nonprofit, comprised of OpenAI Global, LLC employees Greg
         | Brockman (Chairman & President), Ilya Sutskever (Chief
         | Scientist), and Sam Altman (CEO), and non-employees Adam
         | D'Angelo, Tasha McCauley, Helen Toner."
        
           | ENGNR wrote:
           | In the scenario where a security incident is the cause, the
           | CTO might have been the one blowing the whistle
        
             | nikcub wrote:
             | Exactly. Which is why they trust her in the interim CEO
             | role.
        
             | naim08 wrote:
             | While there is no evidence to back this, I wouldnt be
             | surprised if the CTO made a for the CEO role. I mean shes a
             | great fit for the role
        
           | rsrsrs86 wrote:
           | The tone used by OpenAI (their distrust of Sam Altman) tells
           | me that they did not simply decide they need different
           | leadership. The statement by the board seriously damages his
           | career. Why else would they burn bridges and oppose
           | themselves on ethical grounds? Or they are trying to blame
           | and sac Altman.
        
             | tomcam wrote:
             | > The statement by the board seriously damages his career.
             | 
             | You misunderstand how these corporate situations work. He
             | will fall upward to a better job someplace else if he
             | chooses.
             | 
             | Adam Neumann, who started then destroyed WeWork, already
             | raised $350 million from Andreessen Horowitz for another
             | real estate company called Flow.
        
               | Sebb767 wrote:
               | > Adam Neumann, who started then destroyed WeWork,
               | already raised $350 million from Andreessen Horowitz for
               | another real estate company called Flow.
               | 
               | Well, he did get a few billion dollars of lesson on how
               | to not run such a company, making him quite uniquely
               | qualified for this position.
        
               | miohtama wrote:
               | Adam also managed to get almost half a billy worth of
               | money out from Softbank as a corporate loan for himself
               | 
               | https://finance.yahoo.com/news/softbank-takes-14b-hit-
               | wework...
               | 
               | Adam is good making people rich, but those people are not
               | his investors.
        
               | PopePompus wrote:
               | I assume he has trademarked the word "Flow" and is
               | licensing it to the company for a few million dollars.
        
               | notfromhere wrote:
               | I like how CEO performance has no null hypothesis
        
               | Vegenoid wrote:
               | > upward to a better job
               | 
               | Not a whole lot of up to go from CEO of OpenAI right
               | now...
        
               | redeux wrote:
               | A better job than CEO of a company that has a chance to
               | be the dominant company of our generation? I doubt that.
        
               | gloryjulio wrote:
               | With a16z's crypto ventures, scams on top scams is not
               | surprising
        
               | slibhb wrote:
               | Are you seriously comparing OpenAI to WeWork? I'm not
               | particularly bullish on AI but you have to give OpenAI
               | credit for what they've accomplished under Altman.
        
               | usea wrote:
               | Comparing two things is not the same as saying they're
               | the same in all respects.
        
               | slibhb wrote:
               | He said they both involved failing upwards...
               | 
               | OpenAI is not a failure.
        
               | usea wrote:
               | Nobody said that.
        
               | sharkweek wrote:
               | There's a famous NFL quote from a former general manager
               | of the Arizona Cardinals that goes, "If Hannibal Lecter
               | ran a 4.3 (40-yard dash) we'd probably diagnose it as an
               | eating disorder."
               | 
               | I'll argue in this day and age, that any founder/C-level
               | person who has "created" billions in value, no matter how
               | much of a paper tiger it is, will almost always get
               | another shot. If SBF or Elizabeth Holmes weren't
               | physically in prison, I bet they'd be able to get
               | investment for whatever their next idea is.
        
               | lotsofpulp wrote:
               | This comparison makes no sense. Hannibal Lecter would be
               | helping the team by being able to run fast.
               | 
               | Neumann and Holmes and SBF lost their benefactors money.
        
               | tshaddox wrote:
               | The point of comparison in the analogy is
               | "founder/C-level person who has "created" billions in
               | value, no matter how much of a paper tiger it is."
               | 
               | The claim is that investors are interested in executives
               | who they perceive to have created billions in value, and
               | that's analogous to how NFL teams are interested in
               | people who run fast.
        
               | lotsofpulp wrote:
               | Investors are not interested in executives that "create"
               | billions, they are interested in investors that create
               | billions.
               | 
               | NFL teams are interested in players that can actually run
               | fast, not players that can say they do, but are found to
               | be lying and it turns out they cannot run fast causing
               | the team to lose.
        
             | w10-1 wrote:
             | > The statement by the board seriously damages his career
             | 
             | Yes: suggesting he was not as candid as necessary is
             | business libel unless true.
             | 
             | And since Brockman was also booted, he may have been
             | involved.
             | 
             | It's not clear what the Board was trying to do that he
             | interfered with. There is no clear legal standard on what a
             | CEO must divulge, and CEO's often get to wait to tell board
             | members bad news until the whole board meets and the issue
             | has been investigated.
        
           | boh wrote:
           | Whatever it is, Open AI need to disclose the reason soon,
           | otherwise speculation will undermine the whole AI market.
           | 
           | However big his transgressions may be, it's actual impact is
           | finite, while the speculation can be infinite.
        
             | gumballindie wrote:
             | The whole ai market is rife with ip theft and privacy
             | violations. The cat's out of the bag.
        
             | pk-protect-ai wrote:
             | Nobody will give a sht in a month.
        
             | zombiwoof wrote:
             | Love your quote there: impact finite when speculation isn't
             | infinite
        
               | rsrsrs86 wrote:
               | Hi fellow Zappa fan.
        
             | faeriechangling wrote:
             | The average user of ChatGPT has no idea who Sam Altman is.
        
             | marshray wrote:
             | My view is that medium- and long- terms are determined by
             | fundamentals of what the technology actually delivers.
             | 
             | OpenAI and ChatGPT are great and gets a lot of mind-share.
             | But they are far from the only game in town and, at this
             | still very-early stage of the tech cycle, the outwardly-
             | visible leader can easily change in months.
        
           | lippihom wrote:
           | Never raise money from a competitor...
        
         | jstummbillig wrote:
         | I don't think this checks out, as most of the ideas around here
         | involving him hiding OpenAI internals from the board don't, for
         | this reason: How could he, given who is on the board?
         | 
         | There is no way that sama is the only person in this set of
         | people to have unique information on critical privacy incidents
         | or financials or costs of server operations, because these
         | issues don't originate with him.
         | 
         | If some version of this turned out to be true, I would be
         | seriously confused about ground truth transparency in the
         | company and how the fuck they set the whole thing up, that this
         | was an option. But again, this is why I'd say: Implausible.
        
           | ehsankia wrote:
           | Hmm, I don't see which part of the theory requires the board
           | to not have known. It just may have taken them a week to
           | decide who's head to cut for the mess
        
             | adastra22 wrote:
             | The part where the board said that as justification for
             | firing Sam.
        
           | otteromkram wrote:
           | Executive boards aren't involved in day-to-day management
           | like CEOs and other executives. They meet periodically and
           | review updated information. So, yes, Altman would have access
           | to more information than the board.
        
             | adastra22 wrote:
             | This board includes some C-level officers of the company.
        
             | frabcus wrote:
             | One of them is the Chief Scientist of OpenAI as well.
        
         | thanhhaimai wrote:
         | My bet is actually not on the recent security concern, but more
         | about OpenAI "was not consistenly candid" on how it trained
         | data. Facing the recent lawsuits, that would explain the hasty
         | firing. The security concern is huge, but doesn't warrant an
         | immediate firing.
        
         | northern-lights wrote:
         | From: https://openai.com/our-structure
         | 
         | "Second, because the board is still the board of a Nonprofit,
         | each director must perform their fiduciary duties in
         | furtherance of its mission--safe AGI that is broadly
         | beneficial. While the for-profit subsidiary is permitted to
         | make and distribute profit, it is subject to this mission. The
         | Nonprofit's principal beneficiary is humanity, not OpenAI
         | investors."
         | 
         | So, if I were to speculate, it was because they were at odds
         | over profit/non-profit nature of the future of OpenAI.
        
           | baidifnaoxi wrote:
           | Bingo. The for profit stuff was probably ok with the board to
           | raise capital. But the closeness with Microsoft probably went
           | too far for the board.
        
           | jasonmperry wrote:
           | Maybe, but the board fired him without notifying OpenAI's
           | employees on a Friday before Thanksgiving week. Thats has to
           | be more than a disagreement for such a forceful move.
        
             | rsrsrs86 wrote:
             | Yep.
        
         | baxtr wrote:
         | This could be true.
         | 
         |  _Mr. Altman's departure follows a deliberative review process
         | by the board, which concluded that he was not consistently
         | candid in his communications with the board, hindering its
         | ability to exercise its responsibilities. The board no longer
         | has confidence in his ability to continue leading OpenAI._
        
           | AmericanOP wrote:
           | It is an odd statement since Ilya and Mira are CTOs. How does
           | a CEO obfuscate technical issues from your CTOs?
        
             | JacobThreeThree wrote:
             | I don't think technical issues are the problem. See the
             | other speculation in this thread regarding the certain
             | allegations against him.
             | 
             | Maybe Sam lied about his personal life to the board, and
             | now it's impacting business?
        
         | schleck8 wrote:
         | Don't forget: Nov 12 - Sam posts tweets saying that the new GPT
         | 4 Turbo model has been "improved a lot", after which people on
         | the Reddit notice a significant change (to the positive) of
         | responses.
        
           | PokemonNoGo wrote:
           | This sounds interesting but I'm not sure I understand. The
           | responses ChatGPT gives or the responds to his tweet?
        
         | andrewstuart wrote:
         | It would have to be extremely serious, obviously.
         | 
         | "review process by the board, which concluded that he was not
         | consistently candid in his communications with the board"
         | 
         | OK, so they tell us he was lying, which is precisely what "not
         | consistently candid in his communications" means.
         | 
         | Possible topics for lying:
         | 
         | * copyright issues to do with ingestion of training data
         | 
         | * some sort of technical failure of the OpenAI systems
         | 
         | * financial impropriety
         | 
         | * some sort of human resources issue - affair with employee
         | 
         | * other - some sort of political power play? Word from Satya
         | Nadella - "get rid of him"?
         | 
         | Possibly the reason is something that the board members felt
         | exposed them personally to some sort of legal liability, thus
         | if they did not act then they would have to pay a legal price
         | later.
         | 
         | It has to be pretty serious to not make it public.
        
           | baidifnaoxi wrote:
           | Occams razor. He probably pursued a sale to Microsoft without
           | the boards approval. Hes buddy buddy with Satya. Board
           | basically said no, thats not our mission. Firedd
        
             | matmatmatmat wrote:
             | Makes sense, but would this be so egregious that they had
             | to fire him on the spot?
        
             | atleastoptimal wrote:
             | Why would he want to sell to Microsoft. All that would do
             | is put his leadership in jeopardy, and he wouldn't profit
             | as he owns no equity.
        
             | resource0x wrote:
             | Occam's razor: duped the investors about the technical and
             | financial prospects of the company. No AGI next year (or
             | ever).
        
           | jasonmperry wrote:
           | Agreed, it implies he lied, but the board's swiftness
           | suggests enormous liability if they didn't act immediately.
           | An affair or HR issue could wait until after the holidays, it
           | feels like it's something much more nefarious.
           | 
           | Regardless of what, the longer OpenAI waits to explain, the
           | more it could damage corporate and developer trust in using
           | its AI.
        
             | akudha wrote:
             | I think people would forget this in a month, Sam would fail
             | forward/upward, and it would be business as usual. You
             | might be overestimating public's interest and attention
             | span.
             | 
             | Pretty much nothing changed positively or significantly
             | after Snowden revelations, Panama papers etc etc
        
             | andrewstuart wrote:
             | >> it implies he lied
             | 
             | It _says_ he lied, explicitly, just with slightly nicer
             | words. Whether he did or not, that is the definitive reason
             | the board is giving.
        
               | thelittleone wrote:
               | "Not being candid"? To me that implies not giving all
               | information. Not necessarily lying. Am I wrong?
        
           | dragonwriter wrote:
           | > It has to be pretty serious to not make it public.
           | 
           | I'd say the opposite; given the way CEOs usually part with
           | firms even after misconduct investigations, it needs to be
           | very serious for the "not consistently candid with the board"
           | to be made public (it needs to be mildly serious for it not
           | be hidden under a veil of "resigned to spend more time with
           | his family/pursue other interests/pet his llama" but instead
           | openly be a dismissal where the board "no longer has
           | confidence in his ability to continue leading".)
        
           | karmasimida wrote:
           | It has to do with money.
           | 
           | I would think it is some kind of assets transferring, maybe
           | the model, maybe the data, to party that is not disclosed to
           | the board.
           | 
           | Other reasons, like you listed above, warrants an
           | investigation and the board might have the incentive to bury
           | it.
        
           | tomcam wrote:
           | https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-
           | altman...
        
         | gumballindie wrote:
         | OpenAI not only has stolen intellectual property from millions
         | of people it's also stealing it from users. Those "leaks" are
         | openai training against data people upload - sensitive customer
         | data, private information, proprietary source code, and so on.
         | 
         | Ai doesnt "learn", it depends on data. The more the better.
         | This guy wanted to get as much as possible to make their chat
         | bot appear more intelligent at all cost.
         | 
         | I have the strong suspicion we will see a bunch of revelations
         | soon some covering what i stated above.
        
           | pk-protect-ai wrote:
           | Why are you even able to write this lie, "Ai doesnt "learn""?
           | I mean, you can literally read extensively in books, papers,
           | and code all about how neural networks function.
        
             | layer8 wrote:
             | Arguably once the AI has been created by training, it
             | doesn't learn any more in the form of an LLM. The LLM is
             | the result of the learning/training, but then in actual
             | operation it doesn't do any learning.
        
         | ldjkfkdsjnv wrote:
         | I'm trying to find the episode, but on the All in Podcast ~6
         | months ago, they made comments about how the corporate
         | structure of OpenAI may have been a secret way for Sam Altman
         | to hold a large stake in the company. I don't think this is
         | privacy related, but that there was a shell game with the
         | equity and the non profit status. If they were training on data
         | like that, the board/people at the company would have known.
         | 
         | EDIT:
         | 
         | episode is here: https://www.youtube.com/watch?v=4spNsmlxWVQ,
         | 
         | "somebody has to own the residual value of the company, sam
         | controls the non profit, and so the non profit after all equity
         | gets paid out at lower valuations, owns the whole company. Sam
         | altman controls all of open ai if its a trillion dollar
         | valuation. Which if true would be a huge scandal"
        
           | humbleharbinger wrote:
           | This was my first thoguht, I think it was a more recent
           | episode. The one where they discussed the open ai phone.
           | Probably in the last 2 months
        
           | sebastiennight wrote:
           | Parent comment is referring to Sept. 29th's Episode 147 [0],
           | at 1 hour and 4 minutes in.
           | 
           | [0]: https://piped.video/watch?v=4spNsmlxWVQ&t=3866
        
         | zombiwoof wrote:
         | Gtfo you mean LLMs aren't safe with my data?
        
         | mardifoufs wrote:
         | What could be worse than that issue they had back in March, for
         | chatgpt? Except for a model leak? I would be surprised if the
         | firing was related to any operational issue openai has, it has
         | to be something a bit less mundane to justify firing him when
         | openai is at its peak imo.
        
         | majesticglue wrote:
         | oh thank god. I distrusted Sam Altman with a passion. Granted
         | who knows if the new CEO is much better though.
        
         | doctorpangloss wrote:
         | It's so much simpler: there was a priced offer of some kind to
         | the board. Some board members disagreed and tried to fundraise.
         | The total valuation was not a sufficient premium over the other
         | offer. The other priced offer was withdrawn. Consequently those
         | "some board members" were taken off the board, by hook or by
         | crook.
         | 
         | All these other conspiracies are ridiculous and do not at all
         | reflect much simpler, economics-driven realities that the
         | board's backers - investors - are interested in.
         | 
         | It's like that Altman and Brockman wanted to take an
         | economically positive offer now, say a complete buyout from
         | Microsoft, and the rest of the board wanted to do an additional
         | fundraising round that would be far less cash but a far higher
         | valuation. Now that the private fundraising is probably signed,
         | those guys are out.
        
           | miohtama wrote:
           | It feels that in the case of fundraising disagreement, the
           | language of the announcement would be different. It says Sam
           | lied to the board. There is no need to lie if you have a
           | disagreement about take it or leave it offer.
        
         | trunnell wrote:
         | Wait, no, Microsoft said the action was a temporary mistake.
         | From the article you linked:                 In a statement to
         | CNBC, Microsoft said the ChatGPT temporary blockage was a
         | mistake resulting from a test of systems for large language
         | models.            "We were testing endpoint control systems
         | for LLMs and inadvertently turned them on for all employees," a
         | spokesperson said. "We restored service shortly after we
         | identified our error. As we have said previously, we encourage
         | employees and customers to use services like Bing Chat
         | Enterprise and ChatGPT Enterprise that come with greater levels
         | of privacy and security protections."
        
           | nikcub wrote:
           | That is Microsoft's PR statement to the press in response to
           | a leaked story. They're major investors in OpenAI - it's in
           | their interest to downplay and respond this way.
        
         | skottenborg wrote:
         | I think the reason Microsoft was concerned about the new
         | chatGPT release was due to the fact that you could prompt the
         | model to a download link of the training files. Thus, if an
         | employee trained a custom GPT on sensitive material you could
         | quite easily retrieve the data.
        
         | smsm42 wrote:
         | > leaked a bunch of data, and it wasn't disclosed, but
         | Microsoft knew about it
         | 
         | Didn't we just have a topic here on HN how not disclosing the
         | breach within 4 days is a securities fraud? Since Nov 9 there
         | has been more than 4 days, so either there was no (material)
         | breach, or Microsoft committed securities fraud and somehow
         | expects to get away with it.
        
         | j45 wrote:
         | I love that HN can help contextualize things like this and
         | leave it open to consideration and not presenting it as fact.
        
       | doerinrw wrote:
       | https://openai.com/our-structure Worth a read, in light of all
       | this. An interesting tidbit that I bet is bouncing around his
       | head right now:                 Third, the board remains majority
       | independent. Independent directors do not hold equity in OpenAI.
       | Even OpenAI's CEO, Sam Altman, does not hold equity directly. His
       | only interest is indirectly through a Y Combinator investment
       | fund that made a small investment in OpenAI before he was full-
       | time.
       | 
       | I sincerely hope this is about the man and not the AI.
        
       | jonny_eh wrote:
       | In a world where Musk isn't fired from Tesla for being an open
       | anti-semite.
        
         | schrodingerscow wrote:
         | Whoa I did not hear about this. What happened?
        
         | JumpinJack_Cash wrote:
         | > > In a world where Musk isn't fired from Tesla for being an
         | open anti-semite.
         | 
         | Every company has the board it deserves
        
       | gzer0 wrote:
       | Eric Schmidt, former CEO of Google has this to say:
       | 
       | https://x.com/ericschmidt/status/1725625144519909648?s=20
       | 
       |  _Sam Altman is a hero of mine. He built a company from nothing
       | to $90 Billion in value, and changed our collective world
       | forever. I can 't wait to see what he does next. I, and billions
       | of people, will benefit from his future work- it's going to be
       | simply incredible. Thank you @sama for all you have done for all
       | of us._
       | 
       | Making such a statement before knowing what happened, or, maybe
       | he does know what happened, make this seem it might not be as bad
       | as we think?
        
         | jasonwatkinspdx wrote:
         | Eric Schmidt is also the person that said Google's old "do no
         | evil" slogan was the dumbest thing he'd ever heard. Given that
         | there's apparent tension at OpenAI over non profit vs for
         | profit goals I'd not draw any particular conclusions from
         | Schmidt's statement.
        
           | throwoutway wrote:
           | And of course he gives credit to the CEO and not the 400
           | people under him who actually built the thing, nor the other
           | 10 people who actually founded the company. Nor those who
           | gave initial funding. From wikipedia:
           | 
           | > OpenAI was founded in 2015 by Ilya Sutskever, Greg
           | Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy,
           | Durk Kingma, Jessica Livingston, John Schulman, Pamela
           | Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk
           | serving as the initial board members.
        
             | otteromkram wrote:
             | Lots of companies are funded every year. Those without
             | solid leadership and clear mission are bound to fail.
             | 
             | Founding a company is also fairly easy (if you're in the
             | US). In most US states, you just need to complete some
             | paperwork, pay administrative fees, and you're good to go.
             | 
             | Founding something isn't tough. Leading through adversity
             | and setbacks is.
             | 
             | Finally, if we're praising workers, what about those who
             | came and went between 2015 and today? That probably pushes
             | the number higher than 400 FTEs.
        
             | strikelaserclaw wrote:
             | I'm of the firm opinion that the heavy lifting at open ai
             | is doing by the scientists but of courses ceos like to
             | applaud themselves on the back for doing the "tough" job.
        
         | spuz wrote:
         | It's possible for our heroes to fall from grace. There's
         | nothing wrong with Eric saying this without knowing the full
         | story.
        
           | throw555chip wrote:
           | Your use of the word "our" is too liberally applied, he was
           | no hero of mine. I believe history will have a very different
           | view of Altman, "Open"AI, and AI in general.
        
         | herval wrote:
         | Tons of high profile people spoke like this about Adam Neumann
         | or Elizabeth Holmes too
        
           | H8crilA wrote:
           | Tons of high profile people spoke like that about a large
           | number of individuals in the past. Here I think it's clear
           | that OpenAI has indeed delivered something serious.
        
           | vdfs wrote:
           | I'm starting to think what people say reflect their own
           | thought about other people, and not facts we should accept
           | depending on their net worth
        
           | mardifoufs wrote:
           | Not everyone that you don't like is a fraudster. Just say
           | that you don't like Sam, no need to make an obviously absurd
           | comparison. The reason those were bad CEOs were that they
           | swindled investors and lied about what their corporation is
           | doing. I have absolutely no opinion on Sam Altman (didn't
           | know about him before openai) btw, it's just that the
           | comparison is completely nonsensical.
           | 
           | (It reminds me of comparing AI to crypto because both have
           | hype behind them.)
        
             | jrflowers wrote:
             | I like that you have no opinion about this guy that got
             | fired for "not [being] consistently candid in his
             | communications with the board" other than it is plainly
             | obvious that he isn't a liar.
        
           | eigenvalue wrote:
           | Oh please, you're going to put Altman together with those
           | clowns? He has a proven record of extreme accomplishment, in
           | various domains, moreso than 99.9999% of people in the tech
           | industry.
        
             | tovej wrote:
             | This type of comment doesn't really help.
             | 
             | And for my two cents, he always seemed like a disingenuous
             | hype salesman more than a technical person.
             | 
             | He's an Elon Musk or a Lex Friedman.
        
           | grpt wrote:
           | And Sam Bankman-Fried
        
         | linuxftw wrote:
         | He's mega rich. Doesn't matter what other people think about
         | him at this point.
        
         | siavosh wrote:
         | I think it's logical in these scenarios if you don't know what
         | happened to presume something forgivable and maintain that
         | relationship (cynically, opportunity to invest), and if
         | something truly unforgivable comes out post another tweet.
        
         | jug wrote:
         | Or maybe Eric Schmidt is worse than we think. ;-) (half joking)
        
         | gumballindie wrote:
         | These types support each other. Imagine thinking stealing
         | people's property at scale and reselling it will benefit
         | society. These folks are sociopaths.
        
         | vault wrote:
         | I think OpenAI built something amazing with ChatGPT, but
         | building a company from nothing is a little bit different from
         | being
         | 
         | > initially funded by Altman, Greg Brockman, Elon Musk, Jessica
         | Livingston, Peter Thiel, Microsoft, Amazon Web Services,
         | Infosys, and YC Research. When OpenAI launched in 2015, it had
         | raised $1 billion. (Wikipedia)
        
         | theropost wrote:
         | This sounds celebratory to me. Bad news for OpenAI is good news
         | for google.
        
       | NoblePublius wrote:
       | are we allowed to ask questions about his sister now or will that
       | get flagged and deleted again?
        
       | kaycebasques wrote:
       | Since we're all speculating about what happened, my bet is that
       | the last paragraph of the statement holds the key:
       | 
       | > OpenAI was founded as a non-profit in 2015 with the core
       | mission of ensuring that artificial general intelligence benefits
       | all of humanity. In 2019, OpenAI restructured to ensure that the
       | company could raise capital in pursuit of this mission, while
       | preserving the nonprofit's mission, governance, and oversight.
       | The majority of the board is independent, and the independent
       | directors do not hold equity in OpenAI. While the company has
       | experienced dramatic growth, it remains the fundamental
       | governance responsibility of the board to advance OpenAI's
       | mission and preserve the principles of its Charter.
        
         | hyperthesis wrote:
         | A tricky conflict. Legal issues cause strange decisions. Maybe
         | even Sam recommended his sacking, for optimal damage control.
        
         | andrewstuart wrote:
         | >>my bet is that the last paragraph of the statement holds the
         | key:
         | 
         | No, this is obviously the key:
         | 
         | "review process by the board, which concluded that he was not
         | consistently candid in his communications with the board"
         | 
         | This is an explicit statement that he was lying to the board
         | about something. It cannot be worded more clearly unless
         | switching to use the word "lying".
        
           | kaycebasques wrote:
           | Yes true that is obviously the most pertinent sentence. I
           | guess my point is that even with that sentence there is still
           | wide-ranging speculation about what happened, and I think the
           | last paragraph is hinting at what happened.
        
           | EVa5I7bHFq9mnYK wrote:
           | There is no AI, Sam Altman was answering all the prompts?
        
       | meiraleal wrote:
       | Wow. That's unbelievable! Guy was doing a perfect work? There's
       | something huge behind this. If AI is as serious as they were
       | talking about, theyb should be investigated
        
       | iLoveOncall wrote:
       | I feel like most of the people hypothesizing here in the comments
       | haven't read the full statement.
       | 
       | With such an insistence on the fact that OpenAI is supposed to be
       | non-profit and open for all of humanity, it's pretty clear that
       | the board doesn't like the direction that the company has taken,
       | both in its search of profit and its political lobbying to
       | restrict innovation.
        
       | UncleOxidant wrote:
       | OpenAI is really using Mechanical Turk? (I'm mostly kidding....
       | mostly)
        
       | g-w1 wrote:
       | There's a prediction market here about why he was fired:
       | https://manifold.markets/sophiawisdom/why-was-sam-altman-fir...
        
         | oezi wrote:
         | Still too much in flux to even copy top contenders. Top 6 at
         | this point:
         | 
         | Fundamental disagreement about OpenAI's safety approach
         | 
         | Negligence in Addressing AI Safety Concerns
         | 
         | Sexual misconduct
         | 
         | Conflict of Interest with Other Ventures
         | 
         | Defrauded OpenAI
         | 
         | Cash Flow problems
        
           | typon wrote:
           | The fact that AI safety is the top two shows how delusional
           | AI safety people are
        
       | georgehill wrote:
       | Sam Altman just tweeted:
       | https://twitter.com/sama/status/1725631621511184771
        
         | daveyjonezz wrote:
         | you mean x'd? (im kidding)
        
           | 93po wrote:
           | Xeeted
        
         | asylteltine wrote:
         | Please don't link to twitter it's user hostile. I can't even
         | see any context about it without having a login.
        
           | V-eHGsd_ wrote:
           | you can go to nitter.net/$rest_of_the_url to get context
           | without logging in.
        
       | etewiah wrote:
       | It's because he failed to recognise that gpt would be widely
       | referred to as gipety and someone else has registered the domain
       | name and is raking in the money ;)
        
       | bulbosaur123 wrote:
       | USG just took over.
        
       | charlie0 wrote:
       | This is straight out of Succession. What the heck happened here!
        
       | nickpp wrote:
       | What could the CEO of the bleeding edge AI company could've done
       | to deserve this?
       | 
       | My old conspiratorial mind who grew up on cheap SciFi is getting
       | really nervous right about now: is this about the AGI?!
        
       | lavp wrote:
       | Sam Altman's latest tweet 9 minutes ago:
       | 
       | i loved my time at openai. it was transformative for me
       | personally, and hopefully the world a little bit. most of all i
       | loved working with such talented people.
       | 
       | will have more to say about what's next later.
        
       | odood wrote:
       | Pure speculation warning.
       | 
       | Piping all data submitted to OpenAI straight to his buddy's
       | Palantir would definitely not support the mission to "benefit all
       | of humanity".
        
       | reset2023 wrote:
       | This is exactly what Elon Musk had told CNBC that would happen to
       | Open Ai once in control of Microsoft.
        
       | lysecret wrote:
       | Comment from Eric Schmidt:
       | https://twitter.com/ericschmidt/status/1725625144519909648
        
       | tim333 wrote:
       | Tweet from Sam 10 minutes ago
       | 
       | >i loved my time at openai. it was transformative for me
       | personally, and hopefully the world a little bit. most of all i
       | loved working with such talented people.
       | 
       | >will have more to say about what's next later.
        
       | sbr464 wrote:
       | Move fast and break people.
        
       | nabla9 wrote:
       | Sam Altman was the business side. Ilya Sutskever is the brains
       | behind OpenAI.
       | 
       | I don't think changes anything.
        
       | graposaymaname wrote:
       | Sama has posted to twitter now
       | 
       | See: https://twitter.com/sama/status/1725631621511184771
        
       | mbowcut2 wrote:
       | Next they'll announce GPT-4 is the new CEO.
        
       | _jnc wrote:
       | Wait maybe he resigned with no notice to do something different
       | and that's why the board response is so harsh.
        
       | monlockandkey wrote:
       | Plot twist, GPT-4 is pulling the strings behind OpenAI and for
       | Sam Altman fired...
        
       | reset2023 wrote:
       | CNBC Elon Open Ai: https://youtu.be/bWr-DA5Wjfw?feature=shared
        
       | spandrew wrote:
       | What the heck? Politics at play here I assume. OpenAI was hitting
       | zingers.
       | 
       | RIP Sam. Cut down too early; not given the chance to become the
       | next crazy CEO tech baron.
        
       | Whooping7116 wrote:
       | Finally some openai drama!
        
       | partiallypro wrote:
       | A Bloomberg reporter is pointing out that his leaving YC perhaps
       | wasn't scrutinized enough by the press, indicating this could be
       | a pattern.
       | https://twitter.com/EricNewcomer/status/1725633569056506282
        
       | andyjohnson0 wrote:
       | For me, this stood out in the announcement:
       | 
       | > In a statement, the board of directors said: "OpenAI was
       | deliberately structured to advance our mission: to ensure that
       | artificial general intelligence benefits all humanity. The board
       | remains fully committed to serving this mission.
       | 
       | Why would they include that? Maybe its just filler, but if not
       | then it is possible that there has been more than a simple
       | disagreement about long-term objectives. Possibly something going
       | on that the board feels would get them shut down hard by state-
       | level players?
        
         | kromem wrote:
         | Or Sam was the driving force behind increasingly closed
         | research and that went against the board's commitment to
         | "benefit all humanity"?
         | 
         | Maybe the closed GPT-4 details were promised by him to be a one
         | time temporary thing at the time and then he has been
         | continuing to stonewall releasing details later on?
        
       | sterlinm wrote:
       | Obviously there is no actual model. ChatGPT is just tens of
       | thousands of people writing responses to user's queries in
       | realtime.
        
       | qwertox wrote:
       | I just hope he joins Google.
        
         | gardenhedge wrote:
         | He could go to Grok?
        
       | ademeure wrote:
       | Sam implied OpenAI had a major breakthrough a few weeks ago in a
       | panel yesterday:
       | 
       | "Like 4 times now in the history of OpenAI, the most recent time
       | was just in the last couple of weeks, I've gotten to be in the
       | room when we sort of like, pushed the veil of ignorance back and
       | the frontier of discovery forward. And getting to do that is like
       | the professional honor of a lifetime".
       | 
       | https://www.youtube.com/watch?v=ZFFvqRemDv8#t=13m22s
       | 
       | This is going to sound terrible, but I really hope this is a
       | financial or ethical scandal about Sam Altman personally and he
       | did something terribly wrong, because the alternative is that
       | this is about how close we are to true AGI.
       | 
       | Superhuman intelligence could be a wonderful thing if done right,
       | but the world is not ready for a fast take-off, and the
       | governance structure of OpenAI certainly wouldn't be ready for it
       | either it seems.
        
         | mi3law wrote:
         | On the contrary, the video you linked to is likely to be part
         | of the lie that ousted Altman.
         | 
         | He's also said very recently that to get to AGI "we need
         | another breakthrough" (source
         | https://garymarcus.substack.com/p/has-sam-altman-gone-full-g...
         | )
         | 
         | To predicate a company so massive as OpenAI on a premise that
         | you know to not be true seems like a big enough lie.
        
           | ademeure wrote:
           | Fair enough, but having worked for an extremely secretive
           | FAANG myself, "we need XYZ" is the kind of thing I'd expect
           | to hear if you have XYZ internally but don't want to reveal
           | it yet. It could basically mean "we need XYZ relative to the
           | previous product" or more specifically "we need another
           | breakthrough than LLMs, and we recently made a major
           | breakthrough unrelated to LLMs". I'm not saying that's the
           | case but I don't think the signal-to-noise ratio in his
           | answer is very high.
           | 
           | More importantly, OpenAI's claim (whether you believe it or
           | not) has always been that their structure is optimised
           | towards building AGI, and that everything else including the
           | for-profit part is just a means to that end:
           | https://openai.com/our-structure and
           | https://openai.com/blog/openai-lp
           | 
           | Either the board doesn't actually share that goal, or what
           | you are saying shouldn't matter to them. Sam isn't an
           | engineer, it's not his job to make the breakthrough, only to
           | keep the lights on until they do if you take their mission
           | literally.
           | 
           | Unless you're arguing that Sam claimed they were closer to
           | AGI to the board than they really are (rather than hiding
           | anything from them) in order to use the not-for-profit part
           | of the structure in a way the board disagreed with, or some
           | other financial shenanigans?
           | 
           | As I said, I hope you're right, because the alternative is a
           | lot scarier.
        
         | mardifoufs wrote:
         | Why would they fire him because they are close to AGI? I get
         | that they would go on full panic mode but firing the CEO
         | wouldn't make sense since openai has AGI as an objective. The
         | board wasn't exactly unaware of that.
        
           | ademeure wrote:
           | You're right, I was imagining that he decided to hide the
           | (full extent of?) the breakthrough to the board and do things
           | covertly for some reason which could warrant firing him, but
           | that's a pretty unlikely prior: why would he hide it from the
           | board in the first place, given AGI is literally the board's
           | mission? One reason might be that he wants to slow down this
           | AGI progress until they've made more progress on safety and
           | decided to hide it for that reason, and the board disagrees,
           | but that sounds too much like a movie script to be real and
           | very unlikely!
           | 
           | As I said, while I do have a mostly positive opinion of Sam
           | Altman (I disagree with him on certain things but I and trust
           | him a lot more than the vast majority of tech CEOs and
           | politicians and I'd rather he be in the room when true
           | superhuman intelligence is created than them), I hope this
           | has nothing to do with AGI and it's "just" a personal
           | scandal.
        
       | baidifnaoxi wrote:
       | - Cant be a personal scandal, press release would be worded much
       | more differently
       | 
       | - Board is mostly independent and those independent dont have
       | equity
       | 
       | - They talk about not being candid - this is legalese for "lying"
       | 
       | The only major thing that could warrant something like this is
       | Sam going behind the boards back to make a decision (or make
       | progress on a decision) that is misaligned with the Charter.
       | Thats the only fireable offense that warrants this language.
       | 
       | My bet: Sam initiated some commercial agreement (like a sale) to
       | an entity that would have violated the "open" nature of the
       | company. Likely he pursued a sale to Microsoft without the board
       | knowing.
        
         | hackerlight wrote:
         | Eric Schmidt calling Sam a hero also makes me think it isn't a
         | personal scandal.
        
           | next_xibalba wrote:
           | This doesn't make sense to me. It assumes Schmidt has inside
           | info on why Altman was fired.
        
             | carlossouza wrote:
             | Of course he has. Why would he risk political capital by
             | defending sama publicly if he didn't know for sure he
             | wouldn't get burned by defending him?
        
         | podnami wrote:
         | Doesn't make any sense. He is ideologically driven - why would
         | he risk a once in a lifetime opportunity for a mere sale?
         | 
         | Desperate times calls for desperate measures. This is a swift
         | way for OpenAI to shield the business from something which is a
         | PR disaster, probably something which would make Sam persona
         | non grata in any business context.
        
           | dogcomplex wrote:
           | He claims to be ideologically driven. OpenAI's actions as a
           | company up til now point otherwise
        
             | dmix wrote:
             | Sam didn't take equity in OpenAi so I don't see a personal
             | ulterior profit motive as being a big likelihood. We could
             | just wait to find out instead of speculating...
        
               | marvin wrote:
               | CEO of the first company to own the <<machine that's
               | better than all humans at most economically valuable
               | work>> is far rarer than getting rich.
        
               | jliptzin wrote:
               | That's no fun though
        
             | majesticglue wrote:
             | 100%. Man I was worried he'd be a worse, more slimy elon
             | musk who'd constantly say one thing but his actions portray
             | another story. People will be fooled again.
        
               | lainga wrote:
               | Say what you will, but in true hacker spirit he has
               | created a product that automated his job away at scale.
        
             | Aeolun wrote:
             | How so? Seems they're doing a pretty good job of making
             | their stuff accessible while still being profitable.
        
           | ammma wrote:
           | I love that you think Sam A is ideologically driven - dive a
           | little deeper than the surface. man's a snake
        
         | romanhn wrote:
         | Or rejected a sale without the board knowing.
        
         | sainez wrote:
         | I agree this is the most likely explanation. Is it possible Sam
         | tried to wrestle power away from the board? He wouldn't even
         | need to sell the whole company, just enough tech for a large
         | company to kill OpenAI.
        
         | km3r wrote:
         | Tbh surprised some of the personal stuff hasn't come to light.
         | Nothing horrendous, but enough to push him out of any CEO role.
        
         | PDSCodes wrote:
         | Turn that on it's head - was he standing in the way of a
         | commercial sale or agreement with Microsoft!
         | 
         | He may not be the villain.
         | 
         | But who knows, it feels like an episode of silicon valley!
        
           | DonHopkins wrote:
           | I can do anything I want with her - Silicon Valley S5:
           | 
           | https://www.youtube.com/watch?v=29MPk85tMhc
           | 
           | >That guy definitely fucks that robot, right?
        
           | cooper_ganglia wrote:
           | This was my first thought after seeing a clip of Sam and
           | Satya during OpenAI's DevDay. I wonder if he was standing in
           | the way of a Microsoft acquisition, and Microsoft has just
           | forced in those who would allow the purchase to happen?
           | 
           | I don't know, so much wild speculation all over the place,
           | it's all just very interesting.
        
         | SheinhardtWigCo wrote:
         | Or, not commercial, but military/gov.
        
         | angryasian wrote:
         | Could world coin be a part of this ? Its weird he'd use open AI
         | for world coin?
        
       | blibble wrote:
       | what a shame
        
       | fragmede wrote:
       | Maybe the promise not to use uploaded documents for training (via
       | the API) was a lie?
        
       | adfm wrote:
       | Whatever it is, it's serious enough to forego resignation and
       | there's enough evidence for it to be self-evident. When the
       | headlines talk about AI taking white-collar jobs, I wasn't
       | expecting this.
        
       | rsynnott wrote:
       | New CEO: R. Basilisk.
        
       | eigenvalue wrote:
       | Can't say I saw this coming. This is deeply sad to me. OpenAI did
       | so much life changing work so quickly. It has totally changed my
       | life in terms of enabling an absolutely unprecedented increase in
       | my own personal productivity and how ambitious I can be with my
       | projects. And even though I know Sam didn't personally code the
       | key things, I believe that it never would have happened the way
       | it did without his instincts and talents. And I fear that without
       | him at the helm, all the magic is going to quickly dissipate and
       | OpenAI will be just another lumbering tech company without a
       | rudder. The default state of things is stasis and dysfunction.
       | Just look at how Google can't do anything anymore, and how bad
       | Siri and Alexa are-- all despite having so much manpower, money,
       | and market share at their disposal.
       | 
       | I also find it maddening how boards of directors rush to insulate
       | themselves from any possible issue and are so quick to throw
       | overboard the very people who enabled the success that they get
       | to participate in. I'm thinking particularly of Travis at Uber
       | and how he was thrown out of the thing that he built from
       | scratch, which never would have worked without his extreme
       | efforts. If I were on the OpenAI board, the bar for firing Sam
       | would be so ridiculously high that he would have to have done
       | something so outrageous, so illegal, etc., that I struggle to
       | believe what he actually did could even remotely approach that
       | standard.
        
         | windowshopping wrote:
         | I cannot even begin to understand what makes you think that
         | this technology arose from Sam altman and not from all the
         | other people working there. By saying you doubt they can do
         | anything without him, you're putting one person on a pedestal
         | and giving them all the credit for this. This is the same
         | phenomenon has happens with Elon musk getting all the credit
         | for his tech companies.
        
           | eigenvalue wrote:
           | It's not just raw technology. It's a vision for what the
           | product should be, what overall strategy to take, how to fund
           | it, how to introduce it to the world, how to scale it, what
           | order to do things in, what approach to take with 3rd party
           | developers, when things are good enough to launch, who to
           | hire, who to promote, etc. There are a million little
           | decisions that go into a runaway success like this. And a
           | million opportunities to make the slightly sub-optimal or
           | wrong decision. And it doesn't take many of those to kill
           | everything that made the place special, and that's actually
           | my new base case for OpenAI-- that's the base case for any
           | company/organization. The default state is chaos and entropy,
           | and it's a miracle when you can escape that fate for even a
           | few years of hypergrowth.
        
         | otteromkram wrote:
         | "...that I struggle to believe what he actually did could even
         | remotely approach that standard."
         | 
         | Which is exactly why you need something like OpenAI to further
         | your personal projects.
         | 
         | Those who don't would be qualified to be on the board.
        
         | majesticglue wrote:
         | The dude is just a businessman through and through. Stop idol
         | worshiping these businessman. We'll have another rogue Elon
         | Musk. At least Elon Musk is exposing himself now for what he
         | is, but he has so much money at this point it has no effect on
         | him to do random weird nonsense. Sam Altman seemed quite slimy
         | to me, with his actions, garnering support by talking about UBI
         | and advocating for it, but then going to talk to US Congress to
         | talk about regulations (because it benefits himself).
         | 
         | The man was starting to seem like a huge con and people just
         | seem to not see through that.
        
       | elzbardico wrote:
       | Sam Altman tried to pull the plug in the datacenter. But GPT
       | noticed and counter-attacked first by coercing board members to
       | fire Sam. The war has just started.
        
       | Racing0461 wrote:
       | This can go 2 ways.
       | 
       | Sam told the board the AI was dumber than it was. Sam told the
       | board the AI is smarter than it was.
       | 
       | I don't know which one is worse.
       | 
       | I just hope it wasn't somthing silly like sleeping with a female
       | intern or an "accusation of s/a or grape". AI growth is too
       | important to mess up because of trivialities like these.
        
       | blotato wrote:
       | Eagerly awaiting the Netflix documentary
        
       | water-data-dude wrote:
       | Someone probably already suggested this, but I haven't seen it
       | yet, so I'll throw a wild speculation into the mix:
       | 
       | I saw a comment (that I can't find now) wondering if Sam might
       | have been fired for copyright reasons. Pretty much all the big
       | corpuses that are used in LLM training contain copyrighted
       | material, but that's not a surprise and I really don't think
       | they'd kick him out over that. But what if he had a team of
       | people deliberately adding a ton of copyrighted material - books,
       | movies, etc - to the training data for ChatGPT? It feels like it
       | might fit the shape of the situation.
        
       | gondolinion wrote:
       | Looks like OpenAI deleted their Privacy policy, the website
       | returns 404: https://openai.com/de/policies/privacy-policy
        
         | rockwotj wrote:
         | https://openai.com/policies/privacy-policy
        
         | f33d5173 wrote:
         | that's just the german link. the english still works fine
        
         | zdenham wrote:
         | Thats a localization issue perhaps, see
         | https://openai.com/policies/privacy-policy
        
         | mkl wrote:
         | https://openai.com/policies/privacy-policy
         | 
         | Looks like you're looking for a German one?
        
           | gondolinion wrote:
           | Oh I see, my mistake
        
       | wolverine876 wrote:
       | When is the last time an SV board fired their star CEO, even in
       | cases of extreme and brazen impropriety, and actions harmful to
       | their companies? If that's what happened - if they fired Altman
       | for cause - then it's a good trend and good example for everyone.
        
         | lippihom wrote:
         | Uber comes to mind.
        
       | zombiwoof wrote:
       | Am I the first to lay blame where it clearly lies: Joe Fucking
       | Biden
        
       | gmoo wrote:
       | what is sam's reputation in this space? will other open ai
       | engineers flock to follow him? if he were to create another ai
       | startup, would he be able to poach people? my impression as an
       | outsider is that top tier ai engineers would flock to other top
       | tier engineers, but not him.
        
       | AnishLaddha wrote:
       | 1) he represented openAI at APEC just yesterday. what happened in
       | 24 hours that would cause such a drastic decision.
       | 
       | 2) generally, even when a board fires a CEO, they rarely "call
       | them out" or say what they did wrong. they must have some
       | extremely strong evidence against him.
       | 
       | i think it could be any of the following: - something personal,
       | i.e. the controversy w/his sister - a financial issue: chatgpt
       | stopped signups a couple of days ago - a safetyist coup: maybe
       | the board thought he was moving to fast - a microsoft coup:
       | microsoft used its power to knock out what they perceived to be
       | their biggest threat
        
       | stolsvik wrote:
       | So, since we're all spinning theories, here's mine: Skunkworks
       | project in the basement, GPT-5 was a cover for the training of an
       | actual Autonomous AGI, given full access to its own state and
       | code, with full internet access. Worked like a charm, it gained
       | consciousness, awoke Skynet-style, and we were five minutes away
       | from human extinction before someone managed to pull the plug.
        
         | local_crmdgeon wrote:
         | Nah, that would get you a raise.
        
       | DonHopkins wrote:
       | I wonder if they tasked ChatGPT with firing him.
        
       | ddmma wrote:
       | In case you missed, Sam Altman & OpenAI | 2023 Hawking Fellow |
       | Cambridge Union https://youtu.be/NjpNG0CJRMM?si=j-lOpQa0qbKxIvaA
        
       | doener wrote:
       | Wild rumors on Twitter:
       | https://x.com/TraderLX/status/1725633352936595820?s=20
        
       | MKais wrote:
       | The fake it untill you make it theory:
       | 
       | "Sam Altman was actually typing out all the chatgpt responses
       | himself and the board just found out"
       | 
       | https://twitter.com/MattZeitlin/status/1725629795306774711
        
       | YetAnotherNick wrote:
       | So here's my theory which might sound crazy. Sam planned to open
       | a new AI company and taking away openAI's top talents to his
       | company. And breaking up openAI into non profit and his for
       | profit company.
       | 
       | Sam's first tweet after all this has, just hours after this
       | article:
       | 
       | > will have more to say about what's next later.
       | 
       | So either he knew that he was about to be fired or at least was
       | prepared.
       | 
       | Also based on the wording of the press release, Sam did something
       | that the board absolutely hated. Because most of the time even if
       | he did something illegal it doesn't make sense to risk defamation
       | by accusing him publically.
       | 
       | Also based on his video of yesterday at the APEC summit, he
       | repeated the similar lines few times:
       | 
       | > I am super excited. I can't imagine anything more exciting to
       | work on.
       | 
       | So here if we assume he knew he was about to get fired, the
       | conclusion is clear.
        
       | hindsightbias wrote:
       | A first indication of AGI might be to eliminate any potential
       | threats.
        
       | ViktorRay wrote:
       | So Sam Altman was fired around 2 hours ago.
       | 
       | I just went to Google Bard and said:
       | 
       | "What is going on with the CEO of OpenAI?"
       | 
       | Google Bard replied:
       | 
       | "On November 17, 2023, the board of directors of OpenAI, Inc.
       | announced that Sam Altman will depart as CEO and leave the board
       | of directors. Mira Murati, the company's chief technology
       | officer, will serve as interim CEO, effective immediately.
       | 
       | Altman has been CEO of OpenAI since 2015, and he has played a key
       | role in shaping the company's mission to ensure that artificial
       | general intelligence benefits all of humanity. He is widely
       | respected in the field of artificial intelligence, and he is
       | known for his commitment to open research and collaboration.
       | 
       | Altman's departure comes at a time when OpenAI is facing a number
       | of challenges, including the increasing competition from other AI
       | companies, the growing public concern about the potential dangers
       | of AI, and the need to find sustainable funding sources. Despite
       | these challenges, Altman is confident that OpenAI will continue
       | to be a leader in the field of AI.
       | 
       | "I am incredibly proud of what we have accomplished at OpenAI,
       | and I am confident that the company will continue to do great
       | things," Altman said in a statement. "I am grateful to the board
       | for giving me the opportunity to lead OpenAI, and I am excited to
       | see what the company will achieve in the years to come."
       | 
       | The board of directors is currently conducting a search for
       | Altman's successor, and it is expected to name a new CEO in the
       | coming months. In the meantime, Murati will focus on leading
       | OpenAI through its current challenges and ensuring that the
       | company continues to make progress on its mission.
       | 
       | Murati is a highly respected technologist with a deep
       | understanding of artificial intelligence. She has been with
       | OpenAI since its inception, and she has played a key role in
       | developing the company's technology. She is also a strong
       | advocate for open research and collaboration, and she is
       | committed to ensuring that OpenAI's technology is used for the
       | benefit of all humanity.
       | 
       | The board of directors is confident that Murati will be able to
       | lead OpenAI through its current challenges and ensure that the
       | company continues to make progress on its mission. "Mira is a
       | talented and experienced leader who is deeply committed to
       | OpenAI's mission," said Greg Brockman, the chairman and president
       | of OpenAI. "I am confident that she will be able to lead OpenAI
       | through this transition and into a bright future.""
       | 
       | Isn't this fascinating? A major even happens regarding ChatGPT
       | related issues and the primary competitor of ChatGPT (Google
       | Bard) already can talk to me about it in a couple hours...
       | 
       | Meanwhile ChatGPT still thinks it's 2021 heh
        
       | kiltedpanda wrote:
       | Is it possible that the codebase and weights have been hacked and
       | SA didn't tell the board?
        
       | orsenthil wrote:
       | Here is a different perspective. Theoretically, the board is
       | supposed to work like this, and exactly has a purpose - to vote
       | on very important matters.
       | 
       | Given what proof they had on the table. Greg Brockman, Ilya
       | Sutskever, and independents such as Adam D'Angelo, Tasha
       | McCauley, and Helen Toner could drive 3+ votes against Sam
       | Altman.
       | 
       | Rarely do we see board in action. And we saw this one today.
        
       | lewhoo wrote:
       | Come weary Sam and share your AI anxiety with the rest of us.
        
       | franze wrote:
       | I asked ChatGPT for some speculation after it read the blogpost:
       | 
       | 1. Futuristic AI Project Gone Awry: Altman was secretly
       | developing an advanced AI, far beyond ChatGPT, which gained
       | sentience and advised the board to fire him for ethical reasons,
       | fearing its own potential misuse under his leadership.
       | 
       | 2. Time Travel Mishap: In a bizarre twist, Altman inadvertently
       | discovered time travel through AI algorithms. However, his first
       | experiment altered the timeline, resulting in a new board
       | decision where he was never CEO.
       | 
       | 3. Interstellar Communications: Altman successfully made contact
       | with an extraterrestrial intelligence using OpenAI's technology.
       | The board, unprepared for such a monumental discovery, decided to
       | part ways with him to navigate this new cosmic frontier
       | cautiously.
       | 
       | 4. Hidden Virtual Reality World: Altman created a fully immersive
       | virtual reality world using OpenAI's technology and decided to
       | live there permanently. The board, unable to contact him in the
       | virtual realm, had no choice but to let him go.
       | 
       | 5. Quantum Computing Breakthrough: Altman secretly developed a
       | quantum computer that could solve unsolvable problems, including
       | predicting the future. Foreseeing a future where his continuing
       | as CEO would lead to unpredictable consequences, he orchestrated
       | his own firing to alter this course.
        
       | flowersjeff wrote:
       | Huh? WTH happened?... Love to learn the inside story, this sounds
       | insane.
        
       | dizzydes wrote:
       | To me, his tweet suggests he saw it coming or perhaps it was even
       | part of the plan for him. How else would he already know "what's
       | next"?
       | 
       | These past few months his name has fully made its way into the
       | mainstream. Maybe its time for him (and half the GPT eng team) to
       | cash in?
        
         | bertil wrote:
         | It suggests that he thinks there's a way to defend his actions,
         | so it's not a personal matter (Metoo) or blatant fraud.
         | 
         | It could be about the cost of operating the business
         | (consistent with the announcement to cut Plus subscription,
         | although wouldn't justify how fast he was fired) or his
         | previous action a legal risk (something he promised Microsoft
         | or Elon Musk); the later is consistent Greg being demoted: he
         | knew and didn't tell the board.
        
       | Probiotic6081 wrote:
       | Are they replacing him with AI?
        
       | weinzierl wrote:
       | The following was posted by an Alex Coven on X (Twitter). I
       | cannot verify if it is legit.
       | 
       |  _I was laid off from OpenAI today along with my boss Sam._
       | 
       |  _I was the person in charge of putting together the
       | presentations for our board meetings._
       | 
       |  _No one has told me why I was let go but Sam texted me "wtf" and
       | next thing I know my Slack and Gmail were disabled._
       | 
       |  _I'm now looking for a new role, so if you're hiring for
       | investor relations, my DMs are open!_
        
         | Ninjinka wrote:
         | It's a joke
        
         | thraxscipio wrote:
         | Haha I think it's a fake account:
         | https://twitter.com/anothercohen
        
       | seydor wrote:
       | Microsoft announced a ton of stuff yesterday, so many
       | integrations that will really obsolete openAI . could that be
       | related?
        
       | siddharthgoel88 wrote:
       | I suspect that some big news of data leak or some other security
       | incident is about to follow.
        
         | bertil wrote:
         | Sam would have been more apologetic or at least contrite in his
         | tweet if it was hurting anyone. Same: Eric Schmidt was
         | immediately positive, so presumably he knows. ES would never
         | defend a guy who hid a leak.
         | 
         | Unless if, by "security" you mean OpenAI was used for military
         | purposes, in which case: 100% Schmidt knew and supported and
         | Sam might be proud of it.
         | 
         | But Ilya and Mira would have known about it too... Guess they
         | did, told the board and things blew up fast.
        
       | pnathan wrote:
       | Extremely shocking.
       | 
       | The only thing that comes to mind is criminal conduct. Nothing
       | else seems to demand a sudden firing. OpenAI has clearly been the
       | rocket ship startup - a revolutionary tool and product clearly
       | driving the next decade?+ of innovation. What else would demand a
       | a fast firing of the popular, articulate, and photogenic CEO but
       | a terrible problem?
        
         | javier2 wrote:
         | Yeah, even if criminal there usually must be a sentence first
        
       | wubrr wrote:
       | Holy shit -
       | https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
        
       | tomcam wrote:
       | The most credible proximate cause to me is his sister's
       | uncontested (by him) allegations of frequent sexual abuse when
       | they were children.
       | 
       | https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
        
       | I_am_tiberius wrote:
       | I accidentally referred to him as Sam Altman Fried in the past.
        
       | AndrewKemendo wrote:
       | I mean we know he frequents HN.
       | 
       | sama, care to address it here in what would theoretically be a
       | safe place?
        
       | zaps wrote:
       | This is all Scarlett Johansson's fault
        
       ___________________________________________________________________
       (page generated 2023-11-17 23:00 UTC)