[HN Gopher] OpenAI staff threaten to quit unless board resigns
       ___________________________________________________________________
        
       OpenAI staff threaten to quit unless board resigns
        
       Author : skilled
       Score  : 1312 points
       Date   : 2023-11-20 13:41 UTC (9 hours ago)
        
 (HTM) web link (www.wired.com)
 (TXT) w3m dump (www.wired.com)
        
       | skilled wrote:
       | https://archive.is/RiAqC
        
       | georgehill wrote:
       | > Remarkably, the letter's signees include Ilya Sutskever, the
       | company's CTO who has been blamed for coordinating the boardroom
       | coup against Altman in the first place.
       | 
       | What in the world is happening at OpenAI?
        
         | biglyburrito wrote:
         | Sounds like a classic case of FAFO to me.
        
           | civilitty wrote:
           | Who fucked around and who found out, exactly??
           | 
           | We the unsuspecting public?
        
             | systemvoltage wrote:
             | Ilya FA
             | 
             | Ilya FO (in process)
        
               | pk-protect-ai wrote:
                | He didn't strike me as the type to brainlessly FA.
        
               | systemvoltage wrote:
               | Yea, check out his presentations on YT. Incredible
               | talent.
               | 
                | What strikes me is that he wrote the regretful
                | participation tweet _after_ witnessing the blowback. He
                | should have written it right alongside the initial news
                | and clearly explained it to employees. This is not a
                | smart way to conduct board oversight.
               | 
               | 500 employees are not happy. I'm siding with the
                | employees (esp early hires); they deserve to be part of
                | a once-in-a-lifetime company like OpenAI after working
                | there for years.
        
               | chasd00 wrote:
               | He could be an expert in some areas but in others... not
               | so much.
        
               | kaibee wrote:
               | The thing about being really smart is that you can find
               | incredible gambles.
        
               | password54321 wrote:
               | "if you value intelligence above all other human
               | qualities, you're gonna have a bad time"
        
             | pk-protect-ai wrote:
             | GPT-4 Turbo took control of the startup and fcks around ...
        
             | newsclues wrote:
             | Adam D'Angelo?
        
         | benjaminwootton wrote:
         | There must be something going on which is not in the public
         | domain.
         | 
         | What an utterly bizarre turn of events, and to have it all
         | played out in public.
         | 
         | A $90 billion valuation at stake too!
        
           | mrits wrote:
           | I wonder how many people are on a path for a $250K/year
           | salary instead of $30M in the bank now.
        
             | nemo44x wrote:
              | It looks like about 505.
        
             | postingawayonhn wrote:
             | Microsoft can easily afford to offer them $30M of options
             | each if they continue to ship such important products.
             | That's only $15B for 500 staff.
             | 
             | Microsoft has a $2.75T market value and over $140B of cash.
        
               | JumpCrisscross wrote:
               | > _Microsoft can easily afford to offer them $30M of
               | options each_
               | 
               | But it doesn't have to. And the politics suggest it very
               | likely won't.
        
               | mrits wrote:
               | Microsoft isn't going to give the employees in HR
               | equivalent offers. There are a lot of people in the
               | company that wouldn't provide much value to the new team
               | at MS.
        
         | basch wrote:
         | If it weren't so unbelievable, I'd almost accuse them of
         | orchestrating all this to sell to Microsoft without the
         | regulatory scrutiny.
         | 
         | It's like they distressed the company to make an acquisition
         | one of mercy instead of aggression, knowing they already had
         | their buyer lined up.
        
           | sigmoid10 wrote:
            | Yeah, I also started out believing this must be a principled
            | stand between Ilya and Sam. But no, this smells more and more
           | like a corporate clusterfuck and Ilya was just an easy to
           | manipulate puppet. This alleged statement from the board that
           | destroying the company is an acceptable outcome is completely
           | insane, but somewhat reasonable when combined with the fact
           | that half the board has some serious conflict of interest
           | going on.
        
           | jordanpg wrote:
           | I haven't seen brand suicide like this since EM dumped
           | Twitter for X!!! (4 months ago)
        
             | benterix wrote:
              | It's nothing like it. What common people use is ChatGPT;
              | many of them have never heard of OpenAI, let alone who
             | sits on the board etc. And their core offering is more
             | popular than ever. With Twitter, Musk started to damage the
             | product itself, step by step. As far as I can tell ChatGPT
             | continues to work just fine, as opposed to X.
        
               | smegger001 wrote:
                | OpenAI's users aren't ChatGPT's users, they're developers.
        
           | JumpCrisscross wrote:
           | > _sell to Microsoft without the regulatory scrutiny_
           | 
           | I keep hearing this, principally from Silicon Valley. It's
           | based on nothing. Of course this will receive both
           | Congressional and regulatory scrutiny. (Microsoft is also
           | likely to be sued by OpenAI's corporate entity, on behalf of
           | its outside investors, as are Altman and anyone who jumps
           | ship.)
        
             | mirzap wrote:
             | From what I heard non-compete clauses are unenforceable in
             | California, so what exactly are they suing for?
             | 
             | I'm pretty sure Satya consulted with an army of lawyers
             | over the weekend regarding the potential issue.
        
               | JumpCrisscross wrote:
               | > _non-compete clauses are unenforceable in California,
               | so what exactly are they suing for?_
               | 
               | Part of suing is to ensure compliance with agreements.
               | There is a lot of IP that Microsoft may not have a
               | license to that these employees have. There are also
               | legitimate questions about conflicts of interests,
               | particularly with a former executive, _et cetera_.
               | 
               | > _pretty sure Satya consulted with an army of lawyers
               | over the weekend regarding the potential issue_
               | 
               | Sure. I'm not suggesting anyone did anything illegal.
               | Just that it will be litigated over from every direction.
        
               | smegger001 wrote:
                | Such as? Unless they are in the habit of downloading
                | multi-terabyte copies of the trained model and taking it
                | home, what IP would they have? The training data is the
                | open internet and various licensed archives, far too much
                | for them to take, and arguably isn't OAI IP anyway. The
                | background is all based on openly published research,
                | much of it released by Google. And Microsoft has already
                | licensed pretty much everything from OAI as part of that
                | multi-billion-dollar deal.
        
             | basch wrote:
             | Microsoft can buy the company in parts, as it "fails" in a
             | long drawn out process. By the end, whatever they are
             | buying will have little value, as it will already be
             | outdated.
        
             | trinsic2 wrote:
             | Yeah, just like the suit Microsoft is in with windows 11
             | anticompetitive practices, right?
        
             | smegger001 wrote:
              | Sue Sam for what? They fired him and he got another job
              | with another company. That's on them for firing him in a
              | state with laws prohibiting noncompete clauses.
        
         | 0xDEF wrote:
         | Ilya is much less active on Twitter than the others. The rumors
         | that blamed him emerged and spread like wildfire and he did
         | nothing to stop it because he probably only checks Twitter once
         | a week.
        
           | saagarjha wrote:
           | One would think that he would be on Twitter _this_ week.
        
             | NateEag wrote:
             | > One would think that he would be on Twitter this week.
             | 
             | Or maybe _this_ week he would need to spend his time doing
             | something productive.
        
             | enginaar wrote:
              | looks like he found his twitter password
             | https://x.com/ilyasut/status/1726590052392956028?s=20
        
             | timeon wrote:
             | Why? To entertain bystanders like us?
        
             | johannes1234321 wrote:
             | More like spending time in calls with board members,
              | coworkers, investors, partners, ... and often it is better
              | not to say anything than to say something which then gets
              | misinterpreted or overtaken by events.
        
           | qup wrote:
           | not this week, trust me
        
           | sigmar wrote:
            | He says he regrets his action, so he's not blameless. And it
            | wouldn't have been possible for half the board to oust
            | Brockman and Altman without his vote. My bet (entirely
           | conjecture) is that Ilya now realizes the other three will
           | refuse to leave their board seats even if it means the
           | company melts to the ground.
        
         | zeven7 wrote:
         | What options are left other than Adam D'Angelo orchestrated the
         | downfall of a competitor to Poe?
        
           | DonHopkins wrote:
           | "I am NOT a BELLBOY!"
           | 
           | https://www.youtube.com/watch?v=d8oVTKG39U8&t=27s
        
         | dougmwne wrote:
         | (Rips off mask) Wow, it was the Quora CEO all along!
         | 
         | So this was never about safety or any such bullshit. It's
          | because the GPTs store was in direct competition with Poe!?
        
           | artursapek wrote:
           | Imagine letting the CEO of a simple question and answer site
           | that blurs all of its content onto your board
        
             | nemo44x wrote:
             | And that he might be the least incompetent of them all.
        
             | achates wrote:
             | Alongside luminaries like "the wife of the guy who played
             | Robin in the Batman movie".
        
               | artursapek wrote:
               | lol is that a real thing?
        
           | brianjking wrote:
           | Absolutely mindboggling that Adam is on the board.
           | 
            | Poe is in direct competition with the GPTs and the "revenue
            | sharing" plan that Sam announced on Dev Day.
            | 
            | The Poe platform has its "Creators" build their own bots and
            | monetize them, including via OpenAI and other models.
        
             | dougmwne wrote:
             | Even more interesting considering that Elon left OpenAI's
             | board when Tesla started developing Autopilot as it was
             | seen as a conflict of interest.
        
         | manojlds wrote:
         | Did it originally say CTO? Ilya is not CTO and it's been
         | corrected now.
        
         | Tenoke wrote:
         | At this point either pretty much all the speculation here and
         | on Twitter was wrong, or they've threatened to kneecap him.
        
         | sage76 wrote:
         | There's definitely more to this than just Ilya vs Sam.
        
         | capableweb wrote:
         | > What in the world is happening at OpenAI?
         | 
         | Well, we don't know.
         | 
          | What we do know is that the "coordinating the boardroom coup
          | against Altman" claim is rumor and speculation about a thing we
          | don't know anything about.
        
         | cmrdporcupine wrote:
         | Ok... so this is not the scenario any of us were imagining?
         | Ilya S vs Altman isn't what went down?
         | 
         | JFC.
        
         | ignoramous wrote:
         | The signatories want Bret Taylor and Will Hurd running the new
         | Board, apparently.
         | 
         | > _We will take this step imminently, unless all current board
         | members resign, and the board appoints two new lead independent
         | directors, such as Bret Taylor and Will Hurd, and reinstates
         | Sam Altman and Greg Brockman._
        
           | thundergolfer wrote:
           | Googling Will Hurd only shows up a Republican politician with
           | a history at the CIA. Is that the right guy? Can't be.
        
             | singularity2001 wrote:
              | Please not another Eric Schmidt NSA shill running the
              | show. On the other hand it was inevitable: either the
              | government controls the most important companies secretly
              | as in China, or openly as in the US.
        
         | RivieraKid wrote:
         | The screenwriters are overdoing it at this point.
        
           | duckmysick wrote:
            | Understandable, they were on strike for a long time. Now
           | that they are back, they are itching to release all the good
           | stuff.
        
         | ibaikov wrote:
         | Sexual misconduct. Ilya protects Sam by not letting this spiral
         | out in media.
        
         | fzeindl wrote:
         | Maybe they found AGI and it is now controlling the board
         | #andsoitbegins.
        
         | cs702 wrote:
         | None of it makes sense to me now. Who is _really_ behind this?
          | How did they pull this off? Why did they do it? Why do it so
         | suddenly, in a terribly disorganized way?
         | 
         | If I may paraphrase Churchill: This has become a bit of a
         | riddle wrapped in a mystery inside an enigma.
        
         | Applejinx wrote:
          | It's extraordinary to watch, I'll say that much.
         | 
         | I still think 'Altman's Basilisk' is a thing: I think somewhere
         | in this mess there's actions taken to wrest control of an AI
         | from somebody, probably Altman.
         | 
         | Altman's Basilisk also represents the idea that if a
         | charismatic and flawed person (and everything I've seen,
         | including the adulation, suggests Altman is that type of person
         | from that type of background) trains an AI in their image, they
         | can induce their own characteristics in the AI. Therefore, if
         | you're a paranoid with a persecution complex and a zero-sum
         | perspective on things, you can through training induce an AI to
         | also have those characteristics, which may well persist as the
         | AI 'takes off' and reaches superhuman intelligence.
         | 
         | This is not unlike humans (perhaps including Altman)
         | experiencing and perpetuating trauma as children, and then
         | growing to adulthood and gaining greatly expanded intelligence
         | that is heavily, even overwhelmingly, conditioned by those
         | formative axioms that were unquestioned in childhood.
        
         | soderfoo wrote:
          | Watching all this drama unfold in public is unprecedented.
          | 
          | Then again, there has never been a company like OpenAI, in
          | terms of governance and product, so I guess it makes sense
          | that their drama leads us into uncharted territory.
        
           | brianjking wrote:
           | I guess this is the Open in OpenAI, eh?
           | 
           | Absolutely bonkers.
        
         | lysecret wrote:
          | That settles it, it has to be the AGI orchestrating it all.
        
         | siva7 wrote:
         | Probably trying to shift the blame to the other three board
         | members. It could be true to some degree. No matter what, it's
         | clear to the public that they don't have the competency to sit
         | on any board.
        
         | smegger001 wrote:
          | It's French Revolution time over there. Heads are flying,
          | angry mobs. Fun times.
        
       | rvz wrote:
       | Well that accelerated very quickly and this is perhaps the most
       | dysfunctional startup I have ever seen.
       | 
       | All due to one word: Greed.
        
         | benjaminwootton wrote:
          | I don't know about OpenAI, but I've been in a few similar
          | business situations where everyone is in a good position and
          | greed leads to an almighty blowup. It's really remarkable to
          | see.
        
         | toss1 wrote:
         | And the ironic part of the greed is that it seems there is far
         | more (at least potential) earnings to be spread around and make
         | everyone there wealthy enough to not have to think about it
         | ever again.
         | 
         | Yet they start this kind of nonsense.
         | 
         | Not exactly focusing on building a great system or product.
        
           | qwebfdzsh wrote:
            | I assumed that due to how the whole company/non-profit was
            | structured, employees didn't really get any actual equity?
        
             | toss1 wrote:
             | Um, equity isn't the only way to distribute profits...
             | 
             | edit: 'tho TBF, the other methods do require ethical
             | management behavior down the road, which was just shown to
             | be lacking in the last few days.
        
         | marricks wrote:
         | What? Greed is the backbone of our startup landscape. As soon
         | as you get VC backing all anyone cares about is a big payday.
         | This is interesting because there is something going on beyond
         | the typical pure greed shitshow.
         | 
         | Perhaps it was just that original intention for openai to be a
         | nonprofit, but at some point somewhere it wasn't pure $ and
         | that's what makes it interesting. Also more tragic because now
         | it looks like it's heading straight to a for profit company one
         | way or another.
        
         | gniv wrote:
         | > All due to one word: Greed.
         | 
         | I would say it's due to unconventional not-battle-tested
         | governance.
        
       | JumpCrisscross wrote:
       | We're seeing our generation's "traitorous eight" story play out
       | [1]. If this creates a sea of AI start-ups, competing and
       | exploring different approaches, it could be invigorating on many
       | levels.
       | 
       | [1]
       | https://www.pbs.org/transistor/background1/corgs/fairchild.h...
        
         | kossTKR wrote:
         | Doesn't it look like the complete opposite is going to happen
         | though?
         | 
         | Microsoft gobbles up all talent from OpenAI as they just gave
         | everyone a position.
         | 
         | So we went from "Faux NGO" to, "For profit", to "100% Closed".
        
           | JumpCrisscross wrote:
            | > _Doesn't it look like the complete opposite is going to
            | happen though?_
           | 
           | Going from OpenAI to Microsoft means ceding the upside:
           | nobody besides maybe Altman will make fuck-you money there.
           | 
           | I'm also not sure as some in Silicon Valley that this is
           | antitrust proof. So moving to Microsoft not only means less
           | upside, but also fun in depositions for a few years.
        
             | toomuchtodo wrote:
             | Fuck you money was always a lottery ticket based on
             | OpenAI's governance structure and "promises of potential
             | future profit." That lottery ticket no longer exists, and
             | no one else is going to provide it after seeing how the
             | board treated their relationship with Microsoft and that
             | $10B investment. This is a fine lifeboat for anyone who
             | wants to continue on the path they were on with adults at
             | the helm.
             | 
             | What might have been tens or hundreds of millions in common
             | stakeholder equity gains will likely be single digit
             | millions, but at least much more likely to materialize (as
             | Microsoft RSUs).
        
             | DebtDeflation wrote:
             | No. OpenAI employees do not have traditional equity in the
             | form of RSUs or Options. They have a weird profit-sharing
             | arrangement in a company whose board is apparently not
             | interested in making profits.
        
               | semiquaver wrote:
               | Employee equity (and all investments) are capped at 100x,
               | which is still potentially a hefty payday. The whole
               | point of the structure was to enable competitive employee
               | comp.
        
             | j-a-a-p wrote:
             | Ha! One of my all-time favourites, the fuck-you position.
             | The Gambler, the uncle giving advice:
             | 
             |  _You get up two and a half million dollars, any asshole in
             | the world knows what to do: you get a house with a 25 year
             | roof, an indestructible Jap-economy shitbox, you put the
             | rest into the system at three to five percent to pay your
              | taxes and that's your base, get me? That's your fortress
             | of fucking solitude. That puts you, for the rest of your
             | life, at a level of fuck you._
             | 
             | https://www.imdb.com/title/tt2039393/characters/nm0000422
        
               | jonhohle wrote:
               | I haven't seen the movie, but it seems like Uncle Frank
               | and I would get along just fine.
        
         | ethbr1 wrote:
         | How would that work, economically?
         | 
          | Wasn't a key enabler of early transistor work that the
          | required capital investment was modest?
         | 
         | SotA AI research seems to be well past that point.
        
           | JumpCrisscross wrote:
            | > _Wasn't a key enabler of early transistor work that the
            | required capital investment was modest?_
           | 
           | They were simple in principle but expensive at scale. Sounds
           | like LLMs.
        
             | ethbr1 wrote:
             | Is there SotA LLM research not at scale?
             | 
             | My understanding was that practical results were indicating
             | your model has to be pretty large before you start getting
             | "magic."
        
           | throwaway_45 wrote:
            | NN/AI concepts have been around for a while. It's just that
            | computers hadn't been fast enough to make them practical. It
            | was also harder to get capital back then. Those guys put the
            | silicon in Silicon Valley.
        
           | tedivm wrote:
            | It really depends on what you're researching. Rad AI started
            | with only a $4M investment and used that to make cutting-edge
            | LLMs that are now in use by something like half the
            | radiologists in the US. Frankly, putting some cost pressure on
           | researchers may end up creating more efficient models and
           | techniques.
        
       | not_makerbox wrote:
       | My ChatGPT wrapper is in danger, please stop
        
         | artursapek wrote:
         | lmfao
        
       | yeck wrote:
       | > the letter's signees include Ilya Sutskever
       | 
       | _Big sigh_.
        
         | lordnacho wrote:
         | For people who appreciate some vintage British comedy:
         | 
         | https://www.youtube.com/watch?v=Gpc5_3B5xdk
         | 
         | The whole thing is just ridiculous. How can you be senior
         | leadership and not have a clear idea of what you want? And what
         | the staff want?
        
           | nytesky wrote:
            | Knew it had to be Benny Hill before I clicked. Yakety Sax
            | indeed.
        
             | lordnacho wrote:
             | Indeed. I wonder how it came to become the anthem of
             | incompetence.
        
           | marcus0x62 wrote:
           | I was thinking more the Curb Your Enthusiasm theme song.
        
           | selimthegrim wrote:
           | Funny, I would've thought this one would have been more
           | appropriate
           | 
           | https://youtu.be/6qpRrIJnswk?si=h37XFUXJDDoy2QZm
           | 
           | Substitute with appropriate ex-Soviet doomer music as
           | necessary
        
         | ratsmack wrote:
         | Sounds like a CYA move after being under pressure from the team
         | at large.
        
       | pototo666 wrote:
       | This is more interesting than the HBO Silicon Valley show.
        
         | rsecora wrote:
         | it's the trailer for the new season of Succession.
        
       | majikaja wrote:
       | Drama queens
        
       | mjirv wrote:
       | The key line:
       | 
       | "Microsoft has assured us that there are positions for all OpenAl
       | employees at this new subsidiary should we choose to join."
        
         | sebzim4500 wrote:
            | I think everyone assumed this was an acquihire without the
            | "acqui-" but this is the first time I've seen it explicitly
            | stated.
        
           | nextworddev wrote:
           | will they stay though? what happens to their OAI options?
        
             | teeray wrote:
             | Will their OAI options be worth anything if the implosion
             | continues?
        
               | nextworddev wrote:
               | yeah but threatening to quit is actually accelerating the
               | implosion
        
               | ilikehurdles wrote:
               | I don't believe startups can have successful exits
               | without extraordinary leadership (which the current board
               | can never find). The people quitting are simply jumping
               | off a sinking ship.
        
             | almost_usual wrote:
             | MSFT RSUs actually have value as opposed to OpenAI's Profit
             | Participation Units (PPU).
             | 
             | https://www.levels.fyi/blog/openai-compensation.html
             | 
             | https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa0
             | 6...
        
             | baby_souffle wrote:
              | What will happen to their newly granted MSFT shares? Those
              | can be sold _today_ and might be worth a lot more soon...
        
           | catchnear4321 wrote:
           | hostile takeunder?
        
             | jacquesm wrote:
             | That's perfect.
        
             | jonbell wrote:
             | You win
        
             | epups wrote:
             | Love it. Could also be called a hostile giveover,
             | considering the OpenAI board gifted this opportunity to
             | Microsoft
        
         | rvz wrote:
          | So essentially, OpenAI is a sinking ship as long as the board
          | members go ahead with their new CEO and Sam and Greg do not
          | return.
         | 
         | Microsoft can absorb all the employees and switch them into the
         | new AI subsidiary which basically is an acqui-hire without
         | buying out everyone else's shares and making a new DeepMind /
         | OpenAI research division inside of the company.
         | 
         | So all along it was a long winded side-step into having a new
         | AI division without all the regulatory headaches of a formal
         | acquisition.
        
           | JumpCrisscross wrote:
           | > _OpenAI is a sinking ship as long as the board members go
           | ahead with their new CEO and Sam, Greg are not returning_
           | 
           | Far from certain. One, they still control a lot of money and
           | cloud credits. Two, they can credibly threaten to license to
           | a competitor or even open source everything, thereby
           | destroying the unique value of the work.
           | 
           | > _without all the regulatory headaches of a formal
           | acquisition_
           | 
           | This, too, is far from certain.
        
             | s1artibartfast wrote:
             | >Far from certain. One, they still control a lot of money
             | and cloud credits.
             | 
              | This too is far from certain. The funding and credits were
              | at best tied to milestones, and at worst the investment
              | contract is already broken and MSFT can walk.
              | 
              | I suspect they would not actually do the latter, and that
              | the IP is tied to a continuing partnership.
        
           | jacquesm wrote:
           | And sue for the assets of OpenAI on account of the damage the
           | board did to their stock... and end up with all of the IP.
        
             | lotsofpulp wrote:
             | On what basis would one entity be held responsible for
             | another entity's stock price, without evidence of fraud?
             | Especially a non profit.
        
               | vaxman wrote:
               | [delayed]
        
               | jlokier wrote:
                | OpenAI's own assets in the for-profit subsidiary may
                | drop in value due to recent events.
               | 
               | Microsoft is a substantial shareholder (49%) in that for-
               | profit subsidiary, so the value of Microsoft's asset has
               | presumably reduced due to OpenAI's board decisions.
               | 
               | OpenAI's board decisions which resulted in these events
               | appear to have been improperly conducted: Two of the
               | board's members weren't aware of its deliberations, or
               | the outcome until the last minute, notably the chair of
               | the board. A board's decisions have legal weight because
               | they are collective. It's allowed to patch them up after
               | if the board agrees, for people to take breaks, etc. But
               | if some directors intentionally excluded other directors
               | from such a major decision (and formal deliberations),
               | affecting the value and future of the company, that
               | leaves the board's decision open to legal challenges.
               | 
               | Hypothetically Microsoft could sue and offer to settle.
               | Then OpenAI might not have enough funds if it would lose,
                | so it might have to sell shares in the for-profit
                | subsidiary, or transfer them. Microsoft only needs about
                | 2% more to become majority shareholder of the for-profit
                | subsidiary, which runs ChatGPT services.
        
         | bertil wrote:
         | That is a spectacular power move: extending 700 job offers,
         | many of which would be close to $1 million per year
         | compensation.
        
           | layer8 wrote:
           | They didn't say anything about the compensation.
        
         | nottheengineer wrote:
          | Sounds a lot like MS wants to have OpenAI but without a board
          | that considers pesky things like morals.
        
           | Fluorescence wrote:
           | Time for a counter-counter-coup that ends up with Microsoft
           | under the Linux Foundation after RMS reveals he is Satoshi...
        
             | Justsignedup wrote:
             | RMS (I assume Richard Stallman) may be many many many
             | things, but setting up a global pyramid scheme doesn't seem
             | to be his M.O.
             | 
             | But stranger things have happened. One day I may be very
             | very VERY surprised.
        
               | fsflover wrote:
                | There is nothing related to pyramids in bitcoin. It's
                | just an implementation of a novel, trustless form of
                | electronic money, and it's also free software.
        
             | tmerse wrote:
             | You mean the GNU Linux Foundation?
        
             | ric2b wrote:
             | The year of the Linux Microsoft.
        
           | code_runner wrote:
            | Again, nobody has shown even a glimmer of the board
            | operating with morality as their focus. We just don't know.
            | We do know that a vast majority of the company doesn't trust
            | the board, though.
        
           | jdthedisciple wrote:
           | Whose morals again?
        
       | wxw wrote:
       | Ilya signed it??? He's on the board... This whole thing is such
       | an implosion of ambition.
        
         | JumpCrisscross wrote:
         | Yeah, what the hell?
         | 
         | Do we know why Murati was replaced?
        
           | simonw wrote:
           | I heard it was because she tried to hire Sam and Greg back.
        
             | kranke155 wrote:
             | So who's against it and why ?
             | 
             | I wonder if it will take 20 years to learn the whole story.
        
               | simonw wrote:
               | The amount that's leaked out already - over a weekend -
               | makes me think we'll know the full details of everything
               | within a few days.
        
           | sebzim4500 wrote:
           | Apparently she tried to rehire Sam and Greg.
           | 
           | I don't think she actually had anything to do with the coup,
           | she was only slightly less blindsided than everyone else.
        
             | JumpCrisscross wrote:
             | To be fair, that is a stupid first move to make as the CEO
             | who was just hired to replace the person deposed by the
             | board. (Though I'm still confused about Ilya's position.)
        
               | blackoil wrote:
               | If you know the company will implode and you'll be CEO of
               | a shell, it is better to get board to reverse the course.
               | It isn't like she was part of decision making process
        
               | impulser_ wrote:
               | If your job as a CEO is to keep the company running it
               | seems like the only way to do that was hire them back
               | because look at the company now it's essentially dead
               | unless the board resigns and with how stupid the board is
               | they might not lol.
               | 
               | So her move wasn't stupid at all. She obviously knew
               | people working there respected the leadership of the
               | company.
               | 
               | If 550 people leave OpenAI you might as well just shut it
               | down and sell the IP to Microsoft.
        
               | ghaff wrote:
                | It's a lot easier to sign a petition than to actually
                | walk away from a presumably well-paying job in a somewhat
                | weak tech job market. Anyone assuming everyone can just
                | traipse into a $1m/year role at Microsoft is smoking some
                | really good stuff.
        
               | tomnipotent wrote:
               | > can just traipse into a $1m/year role at Microsoft
               | 
               | Do you not trust Microsoft's public statement that jobs
               | are waiting for anyone that decides to leave OpenAI?
               | Considering their two decade adventure with Xbox and
               | their $72bln in profits last year, on top of a $144bln in
               | cash reserves, I wouldn't be surprised if Microsoft is
               | able (and willing) to match most comp packages
               | considering what's at stake. Maybe not everyone, but
               | most.
        
               | ghaff wrote:
               | I think the specifics on an individual level once the
               | smoke clears matter a lot.
        
               | margorczynski wrote:
                | Well, it is a "somewhat weak tech job market" for your
                | average Joe. I think for most of those guys finding a
                | $500k/year job wouldn't be such a problem, especially
                | since the AI hype has not yet died down.
                | 
                | Actually, for MS this might be much better, because they
                | would get direct control over them without the hassle of
                | talking to some "board" that is not aligned with their
                | interests.
        
               | deeviant wrote:
               | With nearly the entire team of engineers threatening to
               | leave the company over the coup, was it a stupid move?
               | 
               | The board is going to be overseeing a company of 10
               | people as things are going.
        
             | maxlamb wrote:
             | But wouldn't the coup have required 4 votes out of 6 which
             | means she voted yes? If not then the coup was executed by
             | just 3 board members? I'm confused.
        
               | ketzo wrote:
               | Murati is/was not a board member.
        
               | crazygringo wrote:
               | Generally speaking, 4 members is the minimum quorum for a
               | board of 6, and 3 out of 4 is a majority decision.
               | 
               | I don't know if it was 3 or 4 in the end, but it may very
               | well have been possible with just 3.
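The quorum arithmetic described above can be sketched quickly. This is a hypothetical illustration using the common majority-of-the-full-board convention; the actual quorum rule in OpenAI's bylaws is not stated in the thread.

```python
# Sketch of the quorum/majority arithmetic from the comment above.
# Assumption: quorum is a majority of the full board, and a motion
# passes with a majority of those present (not confirmed bylaws).

def min_quorum(board_size: int) -> int:
    # Smallest number of members that constitutes a majority quorum.
    return board_size // 2 + 1

def majority(present: int) -> int:
    # Votes needed to carry a decision among those present.
    return present // 2 + 1

board_size = 6
q = min_quorum(board_size)  # 4 members must be present
m = majority(q)             # 3 of those 4 carry the vote
print(q, m)
```

Under that convention, 3 of 4 present members could indeed decide for a 6-member board, matching the comment's point.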
        
               | StephenAshmore wrote:
               | Mira isn't on the board, so she didn't have a vote in
               | this.
        
         | Bostonian wrote:
         | I think the names listed are the recipients of the letter (the
         | board), not the signers.
        
           | dxyms wrote:
            | There are only 4 people on the board.
        
         | falleng0d wrote:
          | He even posted an apology:
         | https://x.com/ilyasut/status/1726590052392956028?s=20
         | 
         | what the actual fuck =O
        
           | toomuchtodo wrote:
           | They did not expect Microsoft to take everything and walk
           | away, and did not realize how little pull they actually had.
           | 
           | If you made a comment recently about de jure vs de facto
           | power, step forward and collect your prize.
        
             | jacquesm wrote:
             | https://news.ycombinator.com/item?id=38331457
        
             | hotnfresh wrote:
             | https://news.ycombinator.com/item?id=38338096
             | 
             | What do I win? Hahaha.
        
           | sva_ wrote:
           | The great drama of our time (this week)
        
           | ozgung wrote:
           | Wow, lots of drama and plot twists for the writers of the
           | Netflix mini-series.
        
           | FuriouslyAdrift wrote:
            | I'm going to take a leap of intuition and say all roads lead
            | back to Adam d'Angelo for the coup attempt.
        
             | Terretta wrote:
              | > _all roads lead back to Adam d'Angelo_
             | 
              | Maybe someone thinks Sam was _"not consistently candid"_
              | about mentioning that one of the feature bullets in the
              | latest release was dropping d'Angelo's Poe directly into
              | the ChatGPT app for no additional charge.
              | 
              | Given the dev day timing and the update releasing these
              | "GPTs", this is an entirely plausible timeline.
             | 
             | https://techcrunch.com/2023/04/10/poes-ai-chatbot-app-now-
             | le...
        
           | EVa5I7bHFq9mnYK wrote:
           | I knew it was Joseph Gordon-Levitt's plot all along!
        
             | miyuru wrote:
              | I don't know if you are joking or not, but one of the
              | board members is Joseph Gordon-Levitt's wife.
        
               | ShamelessC wrote:
               | (yes that was the joke)
        
           | jacquesm wrote:
           | Naive is too soft a word. How can you be so smart and so out
           | of touch at the same time?
        
             | code_runner wrote:
              | In my experience these things typically go hand in hand.
              | There is also an argument to be made that being smart at
              | building ML models and being smart at literally anything
              | else have nothing to do with each other.
        
               | tbalsam wrote:
               | Usually this is due to autism, please be kind.
        
               | code_runner wrote:
               | Not claiming to know anything about any persons
               | differences or commenting about that in any way.
        
             | rdsubhas wrote:
              | IQ and EQ are different things. Some people are
              | technically smart enough to know a trillion side effects
              | of technical systems, but can be really bad/binary/shallow
              | at knowing the second-order effects of human dynamics.
              | 
              | Ilya's role is Chief Scientist. It may be fair to give him
              | at least some benefit of the doubt. He was vocal/direct/
              | binary, and also vocally apologized and walked it back. In
              | human dynamics I'd usually look for the silent orchestrator
              | behind the scenes that nobody talks about.
        
               | jacquesm wrote:
                | I'm fine with all that in principle, but then you
                | shouldn't be throwing your weight around in board
                | meetings; probably you shouldn't be on the board to
                | begin with, because it is a handicap in trying to
                | evaluate the potential outcomes of the decisions the
                | board has to make.
        
               | smolder wrote:
               | I don't think this is necessarily about different
               | categories of intelligence... Politicking and socializing
               | are skills that require time and mental energy to build,
               | and can even atrophy. If you spend all your time worrying
               | about technical things, you won't have as much time to
               | build or maintain those skills. It seems to me like IQ
               | and EQ are more fundamental and immutable than that, but
               | maybe I'm making a distinction where there isn't much of
               | one.
        
             | smolder wrote:
              | Specialized learning and focus often come at the cost of
              | generalized learning and focus. It's not zero-sum, but
              | there is competition between interests in any person's
              | mind.
        
           | charlieyu1 wrote:
           | I don't think I have seen a bigger U-turn
        
           | serial_dev wrote:
           | You come at the king, you best not miss. If you do, make sure
           | to apologize on Twitter while you can.
        
         | DebtDeflation wrote:
         | I was looking down the list and then saw Ilya. Just when you
         | think this whole ordeal can't get any more insane.
        
         | victoryhb wrote:
          | Most people who sympathized with the Board prior to this would
          | have assumed that the presumed culprit, the legendary Ilya, had
          | thought through everything and was ready to sacrifice anything
          | for a cause he champions. It appears that is not the case.
        
           | xivzgrev wrote:
           | I think he orchestrated the coup on principle, but severely
           | underestimated the backlash and power that other people had
           | collectively.
           | 
           | Now he's trying to save his own skin. Sam will probably take
           | him back on his own technical merits but definitely not in
           | any position of power anymore
           | 
           | When you play the game of thrones, you win or you die
           | 
           | Just because you are a genius in one domain does not mean you
           | are in another
           | 
            | What's funny is that everyone initially "accepted" the
            | firing. But no one liked it. Then a few people (like Greg)
            | started voting with their feet, which empowered others,
            | which has culminated in this tidal shift.
           | 
           | It will make a fascinating case study some day on how not to
           | fire your CEO
        
         | throwaway74852 wrote:
         | The dude is a quack.
        
       | tedivm wrote:
       | This was handled so very, very poorly. Frankly it's looking like
       | Microsoft is going to come out of this better than anyone,
       | especially if they end up getting almost 500 new AI staff out of
       | it (staff that already function well as a team).
       | 
       | > In their letter, the OpenAI staff threaten to join Altman at
       | Microsoft. "Microsoft has assured us that there are positions for
       | all OpenAI employees at this new subsidiary should we choose to
       | join," they write.
        
         | paulpan wrote:
          | In hindsight, firing Sam was a self-destructive gamble by the
          | OpenAI board. Initially it seemed Sam may have committed some
          | inexcusable financial crime, but that doesn't look to be the
          | case anymore.
         | 
         | Irony is that if a significant portion of OpenAI staff opt to
         | join Microsoft, then Microsoft essentially killed their own
         | $13B investment in OpenAI earlier this year. Better than
         | acquiring for $80B+ I suppose.
        
           | htrp wrote:
           | Msft/Amazon/Google would light 13 billion on fire to acquire
           | OpenAI in a heartbeat.
           | 
           | (but also a good chunk of the 13bn was pre-committed Azure
           | compute credits, which kind of flow back to the company
           | anyway).
        
           | dhruvdh wrote:
           | They acquired Activision for 69B recently.
           | 
            | While Activision makes much more money, I imagine acquiring
            | a whole division of productive, _loyal_ staffers who work
            | well together on something as important as AI is cheap at
            | $13B.
           | 
           | Some background: https://sl.bing.net/dEMu3xBWZDE
        
           | technofiend wrote:
           | There's acquihires and then I guess there's acquifishing
           | where you just gut the company you're after like a fish and
           | hire away everyone without bothering to buy the company.
           | There's probably a better portmanteau. I seriously doubt
           | Microsoft is going to make people whole by granting
           | equivalent RSUs, so you have to wonder what else is going on
           | that so many seem ready to just up and leave some very large
           | potential paydays.
        
             | Kye wrote:
             | How about: acquimire
        
               | gryn wrote:
               | one thing for sure this is one hell of a quagmire /s
        
             | WiseWeasel wrote:
             | I feel like that's giving them too much credit; this is
             | more of a flukuisition. Being in the right place at the
             | right time when your acquisition target implodes.
        
           | janejeon wrote:
           | If the change in $MSFT pre-open market cap (which has given
           | up its gains at the time of writing, but still) of hundreds
           | of billions of dollars is anything to go by, shareholders
           | probably see this as spending a dime to get a dollar.
        
             | unoti wrote:
              | Awesome point. Microsoft's market cap went up to $2.8
              | trillion today, a gain of $44.68 billion.
        
           | jasode wrote:
            | > _..., then Microsoft essentially killed their own $13B
            | investment in OpenAI earlier this year._
           | 
           | For investment deals of that magnitude, Microsoft probably
           | did not literally wire all $13 billion to OpenAI's bank
           | account the day the deal was announced.
           | 
            | More likely, the $10B-to-$13B headline-grabbing number is
            | a _total estimated figure_ that represents a _sum of future
            | incremental investments (and Azure usage credits, etc.)_
            | based on agreed performance milestones from OpenAI.
           | 
           | So, if OpenAI doesn't achieve certain milestones (which can
           | be more difficult if a bunch of their employees defect and
           | follow Sam & Greg out the door) ... then Microsoft doesn't
           | really "lose $10b".
        
           | bananapub wrote:
           | > In hindsight firing Sam was a self-destructing gamble by
           | the OpenAI board
           | 
            | Surely the really self-destructive gamble was hiring him?
            | He's a venture capitalist with weird beliefs about AI and
            | privacy; why would it be a good idea to put him in charge of
            | a notional non-profit that was trying to safely advance the
            | state of the art in artificial intelligence?
        
         | spinningslate wrote:
         | > Microsoft is going to come out of this better than anyone
         | 
         | Exactly. I'm curious about how much of this was planned vs
         | emergent. I doubt it was all planned: it would take an
         | extraordinary mind to foresee all the possible twists.
         | 
         | Equally, it's not entirely unpredictable. MS is the easiest to
         | read: their moves to date have been really clear in wanting to
         | be the primary commercial beneficiary of OAI's work.
         | 
          | OAI itself is less transparent from the outside. There's a
         | tension between the "humanity first" mantra that drove its
         | inception, and the increasingly "commercial exploitation first"
         | line that Altman was evidently driving.
         | 
         | As things stand, the outcome is pretty clear: if the choice was
         | between humanity and commercial gain, the latter appears to
         | have won.
        
           | jerf wrote:
           | "I doubt it was all planned: it would take an extraordinary
           | mind to foresee all the possible twists."
           | 
           | From our outsider, uninformed perspective, yes. But if you
           | know more sometimes these things become completely plannable.
           | 
           | I'm not saying this is the actual explanation because it
           | probably isn't. But suppose OpenAI was facing bankruptcy, but
           | they weren't telling anyone and nobody external knew. This
            | allows more complicated planning for various contingencies
            | by the people that know, because _they_ know they can
            | exclude a lot of possibilities from their planning, meaning
            | it's a simpler situation for them than meets the (external)
            | eye.
           | 
            | Perhaps ironically, the more complicated these gyrations
            | become, the _more_ convinced I become there's probably a
            | simple explanation. But it's one that is being hidden, and
           | people don't generally hide things for no reason. I don't
           | know what it is. I don't even know what category of thing it
           | is. I haven't even been closely following the HN coverage,
           | honestly. But it's probably unflattering to somebody.
           | 
           | (Included in that relatively simple explanation would be some
           | sort of coup attempt that has subsequently failed. Those
           | things happen. I'm not saying whatever plan is being enacted
           | is going off without a hitch. I'm just saying there may well
           | be an internal explanation that is still much simpler than
           | the external gyrations would suggest.)
        
           | sharemywin wrote:
           | "it would take an extraordinary mind to foresee all the
           | possible twists."
           | 
           | How far along were they on GPT-5?
        
           | playingalong wrote:
           | > it would take an extraordinary mind
           | 
           | They could've asked ChatGPT for hints.
        
         | boringg wrote:
         | I think the board needs to come clean on why they fired Sam
         | Altman if they are going to weather this storm.
        
           | Kye wrote:
           | They might not be able to if the legal department is
           | involved. Both in the case of maybe-pending legal issues, and
           | because even rich people get employment protections that make
           | companies wary about giving reasons.
        
             | roflyear wrote:
             | "Even rich people?" - especially rich people, as they are
             | the ones who can afford to use laws to protect themselves.
        
               | Kye wrote:
               | I said nothing contrary to this. I'm not sure what your
               | goal is with this comment. If anything is implied in
               | "even rich people," it's contempt for them, so I'm
               | clearly on the pro-making legal protections more
               | accessible side.
               | 
               | Pick a different target and move on.
        
               | roflyear wrote:
               | Using your same rhetoric and attitude: please outline
               | exactly what language I used that was so offensive to
               | you.
        
           | jjfoooo4 wrote:
            | Altman is already gone; if they fired him without a good
            | reason, they are already toast.
        
         | BryantD wrote:
         | "Employees" probably means "engineers" in this case. Which is a
         | wide majority of OpenAI staff, I'm sure.
        
           | tedivm wrote:
           | I'm assuming it's a combination of researchers, data
           | scientists, mlops engineers, and developers. There are a lot
           | of different areas of expertise that come into building these
           | models.
        
         | trinsic2 wrote:
         | > Frankly it's looking like Microsoft is going to come out of
         | this better than anyone
         | 
         | Sounds like that's what someone wants and is trying to
         | obfuscate what's going on behind the scenes.
         | 
         | If Windows 11 shows us anything about Microsoft's monopolistic
          | behavior, having them be the ring of power for LLMs makes the
         | future of humanity look very bleak.
        
         | tannhaeuser wrote:
          | > _it's looking like Microsoft is going to come out of this
          | better than anyone_
         | 
          | Didn't follow this closely, but isn't that implicitly what an
          | ex-CEO could have possibly been accused of, i.e. not acting in
          | the company's best interest but someone else's? Not
          | unprecedented either, e.g. the case of Nokia/Elop.
        
         | mongol wrote:
          | But is the door open to every one of the 500 staff? That is a
          | lot, and Microsoft may not need them all.
        
         | ulfw wrote:
          | That's because they're the only adult in the room: a mature
          | company with mature management. Boring, I know. But sometimes
          | experience actually pays off.
        
       | ethanbond wrote:
       | It seems odd to have it described as "may resign." Seems like the
       | worst of all worlds.
       | 
       | That's like trying to create MAD with the position you "may"
       | launch nukes in retaliation.
        
         | sebzim4500 wrote:
         | Presumably some will resign and some won't. They aren't going
         | to get 550 people to make a hard commitment to resign,
         | especially when presumably few concrete contracts have been
         | offered by MSFT.
        
         | gorlilla wrote:
          | It's easier to get the support of 500 educated people at a
          | moment's notice by using sane words like 'may'. This is
          | rational given the lack of public information, as well as a
          | board that seems to be having seizures. Using the word 'may'
          | may seem empty-handed, but it ensures a longer list of names
          | attached to the message -- allowing the board a better glimpse
          | of how many dominoes are lined up to fall.
         | 
         | The board is being given a sanity-check; I would expect the
         | signers intentionally left themselves a bit of room for
         | escalation/negotiation.
         | 
         | How often do you win arguments by leading off with an immutable
         | ultimatum?
        
           | ethanbond wrote:
           | Right, but the absolute last thought you want in the board's
           | head is: "they're bluffing."
           | 
           | 200 people or even 50 of the right people who are
           | _definitely_ going to resign will be much stronger than 500+
           | who  "may" resign.
           | 
           | Disclaimer that this is a ludicrously difficult situation for
           | all these folks, and my critique here is made from far
           | outside the arena. I am in no way claiming that I would be
           | executing this better in actual reality and I'm _extremely_
           | fortunate not to be in their shoes.
        
         | feraloink wrote:
         | WSJ said "500 threaten to resign". "Threaten" lol! WSJ says
         | there are 770 employees total. This is all so bizarre.
        
       | fny wrote:
       | At this point, I think it's absolutely clear no one has any idea
       | what happened. Every speculation, no matter how sophisticated,
       | has been wrong.
       | 
       | It's time to take a breath, step back, and wait until someone
       | from OpenAI says something substantial.
        
         | pk-protect-ai wrote:
          | This suggestion was already made on Saturday and again on
          | Sunday. However, that approach does not enhance popcorn
          | consumption... The show must go on...
        
         | slipheen wrote:
         | Absolutely agreed
         | 
         | This is the point where I've realized I just have to wait until
         | history is written, rather than trying to follow this in real
         | time.
         | 
         | The situation is too convoluted, and too many people are
         | playing the media to try to advance their version of the
         | narrative.
         | 
         | When there is enough distance from the situation for a proper
         | historical retrospective to be written, I look forward to
         | getting a better view of what actually happened.
        
           | Fluorescence wrote:
            | Hah. I think you may be duped by history - the neat logical
            | accounts are often fictions - they explain what was
            | inexplicable with fabrications.
            | 
            | Studying revolutions is revealing - they are rarely the
            | inevitable product of historical forces, executed to the
            | plans of strategically minded players... instead they are
            | often accidental and inexplicable. Those credited as their
            | masterminds were trying to stop them. Rather than
            | inevitable, there was often progress in the opposite
            | direction, making people feel the likelihood was decreasing.
            | The confusing, paradoxical mess of great events doesn't make
            | for a good story to tell others, though.
        
             | hotsauceror wrote:
             | It's a pretty interesting point to think about. Post-hoc
             | explanations are clean, neat, and may or may not have been
             | prepared by someone with a particular interpretation of
             | events. While real-time, there's too much happening, too
             | quickly, for any one person to really have a firm grasp on
             | the entire situation.
             | 
             | On our present stage there is no director, no stage
             | manager; the set is on fire. There are multiple actors -
             | with more showing up by the minute - some of whom were
             | working off a script that not everyone has seen, and that
             | is now being rewritten on the fly, while others don't have
             | any kind of script at all. They were sent for; they have
             | appeared to take their place in the proceedings with no
              | real understanding of what those are, like Rosencrantz and
             | Guildenstern.
             | 
             | This is kind of what the end thesis of War and Peace was
             | like - there's no possible way that Napoleon could actually
             | have known what was happening everywhere on the battlefield
             | - by the time he learned something had happened, events on
             | the scene had already advanced well past it; and the local
             | commanders had no good understanding of the overall
             | situation, they could only play their bit parts. And in
             | time, these threads of ignorance wove a tale of a Great
             | Victory, won by the Great Man Himself.
        
           | buro9 wrote:
            | Written history is usually a simplification that has lost a
            | lot of the context and nuance.
            | 
            | I don't need to follow in real time, but a lot of the
            | context and nuance can be clearly understood in the moment,
            | so it still helps to follow along even if that means lagging
            | on the input.
        
           | siva7 wrote:
            | That's not how history works. What you read are the tellings
            | of the people involved, and those aren't all facts but how
            | they perceived the situation in retrospect. Read the
            | biographies of different people describing the same event
            | and you will notice that they are never quite the same,
            | usually leaving the unfavourable bits out.
        
         | hotsauceror wrote:
         | I agree. Although the story is fascinating in the way that a
         | car crash is fascinating, it's clear that it's going to be very
         | difficult to get any kind of objective understanding in real-
         | time.
         | 
         | This breathless real-time speculation may be fun, but now that
         | social media amplifies the tiniest fart such that it has global
         | reach, I feel like it just reinforces the general zeitgeist of
         | "Oh, what the hell NOW? Everything is on fire." It's not like
         | there's anything that we peasants can do to either influence
          | the outcome, or adjust our own lives to accommodate the eventual
         | reality.
        
           | hotsauceror wrote:
           | I will say, though, that there is going to be an absolute
           | banger of a book for Kara Swisher to write, once the dust has
           | settled.
        
         | chucke1992 wrote:
         | I wonder if AGI took over the humans and guided their actions.
        
           | yk wrote:
           | It may well be that this is artificial and general, but I
           | rather doubt it is intelligent.
        
           | JCharante wrote:
            | Like the new Tom Cruise movie?
            | 
            | Makes sense in a conspiracy theory mindset. AGI takes
            | over, crashes $MSFT, buys calls on $MSFT, then this
            | morning the markets go up when Sam & co join MSFT and the
            | AGI has tons of money to spend.
        
         | tyrfing wrote:
         | 3 board members (joined by Ilya Sutskever, who is publicly
         | defecting now) found themselves in a position to take over what
         | used to be a 9-member board, and took full control of OpenAI
         | and the subsidiary previously worth $90 billion.
         | 
         | Speculation is just on motivation, the facts are easy to
         | establish.
        
           | bananapub wrote:
            | > 3 board members (joined by Ilya Sutskever, who is
           | publicly defecting now) found themselves in a position to
           | take over what used to be a 9-member board, and took full
           | control of OpenAI and the subsidiary previously worth $90
           | billion.
           | 
           | er...what does that even mean? how can a board "take full
           | control" of the thing they are the board for? they already
           | have full control.
           | 
           | the actual facts are that the board, by majority vote, sacked
           | the CEO and kicked someone else off the board.
           | 
           | then a lot of other stuff happened that's still becoming
           | clear.
        
             | s1artibartfast wrote:
             | I think the post is very clear.
             | 
             | The subject in that sentence that takes full control is "3
             | members" not "board".
             | 
             | The board has control, but who controls the board changes
             | based on time and circumstances.
        
               | michaelt wrote:
               | The post could be clearer.
               | 
               | It says 3 board members found themselves in a position to
               | take over OpenAI.
               | 
                | Do they mean we've seen Sam Altman and allies making a
                | bid to take over the entirety of OpenAI, through its
                | weird Charity+LLC+Holding company+LLC+Microsoft
                | structure, eschewing its goals of openness and safety
                | in pursuit of short-sighted riches?
                | 
                | Or do they mean we've seen The Board making a bid to
                | take over the entirety of OpenAI, by ousting Glorious
                | Leader Sam Altman, while his team was going from
                | strength to strength?
        
           | augustulus wrote:
           | tangentially, it's an absolute disgrace that non-profits are
           | allowed to have for-profit divisions in the first place
        
             | culi wrote:
             | This was actually a pretty recent change from 2018. iirc it
             | was actually Newman's Own that set the precedent for this:
             | 
             | https://nonprofitquarterly.org/newmans-philanthropic-
             | excepti...
             | 
              | > Introduced in June of 2017, the act amends the Revenue
              | Code to allow private foundations to take complete
              | ownership of a for-profit corporation under certain
              | circumstances:
              | 
              | - The business must be owned by the private foundation
              | through 100 percent ownership of the voting stock.
              | 
              | - The business must be managed independently, meaning
              | its board cannot be controlled by family members of the
              | foundation's founder or substantial donors to the
              | foundation.
              | 
              | - All profits of the business must be distributed to the
              | foundation.
        
               | Figs wrote:
               | Maybe I'm misunderstanding something, but didn't Mozilla
               | Foundation do that a dozen or so years earlier with their
               | wholly owned subsidiary, Mozilla Corporation? (...and I
               | doubt that's the first instance; just the one that
               | immediately popped into my head.)
        
               | purplerabbit wrote:
               | The LDS church has owned for-profit entities for decades.
                | Check out the "City Creek Center".
        
             | evantbyrne wrote:
              | It raises the question: why was OpenAI structured this
              | way? What purpose, besides potentially defrauding
              | investors and the government, is served by wrapping a
              | for-profit business in a nonprofit? From a governance
              | standpoint it makes no sense, because a nonprofit board
              | doesn't have the same legal obligations to represent
              | shareholders that a for-profit board does. And why did
              | so many investors choose to seed a business that was
              | playing such a kooky shell game?
        
               | augustulus wrote:
                | the impression I got was that they started out with
                | honest intentions and were more or less infiltrated by
                | Microsoft. this recent news fits that narrative
        
         | armcat wrote:
         | Everything on social media (and general news media) pointed to
         | Ilya instigating the coup. Maybe Ilya was never the instigator,
         | maybe it was Adam + Helen + Tasha, Greg backed Sam and was
         | shown the door, and Ilya was on the fence, and perhaps against
         | better judgment, due to his own ideological beliefs, or just
         | from pure fear of losing something beautiful he helped create,
         | under immense pressure, decided to back the board?
        
         | seanhunter wrote:
          | We can certainly believe Ilya wasn't behind it if he joins
          | them at Microsoft. How about that? By his own admission he
          | was involved, and he's one of 4 people on the board. While
          | he has called on the board to resign, he has seemingly not
          | resigned himself, which would be the one thing he could
          | certainly control.
        
         | alvis wrote:
          | At this point, after almost 3 days of non-stop drama, we
          | still have no clue what has happened to a 700-employee
          | company, with millions of people watching. Regardless of the
          | outcome, the art of keeping secrets at OpenAI is truly far
          | beyond human capability!
        
         | esjeon wrote:
          | I agree. I'm already sick of reading through political hit
          | pieces, exaggeration, biased speculation, and unfounded bold
          | claims. This has all just turned into a kind of TV sport,
          | where you pick a side and fight.
        
         | ignoramous wrote:
         | Likely Ilya and Adam swayed Helen and Tasha. Booted Sam out.
         | Greg voluntarily resigned.
         | 
          | Ilya (at the urging of Satya and his colleagues including
          | Mira) wanted to reinstate Sam, but the deal fell through
          | with the Board outvoting Sutskever 3 to 1. With Mira
          | defecting, Adam got his mate Emmett to steady the ship but
          | things went nuclear.
        
         | ycsux wrote:
          | Just made it 100% certain that the majority of OpenAI staff
          | is deluded and lacks judgment. Not a good look for AI
          | safety.
        
         | youcantcook wrote:
          | Why are you gaslighting me? I never did anything but click a link
        
       | kronop wrote:
       | Do whatever you want but don't break the API or I will go
       | homeless
        
         | giarc wrote:
         | You and 5000 other recent founders in tech.
        
           | replwoacause wrote:
           | I feel seen
        
         | christkv wrote:
          | Just create an OpenAI endpoint on Azure. Pretty sure it's
          | not run by OpenAI itself.
        
           | derwiki wrote:
           | Azure OpenAI is always a bit behind, e.g. they don't have
           | GPT-4 turbo yet
        
             | ekojs wrote:
             | They do actually, https://learn.microsoft.com/en-
             | us/azure/ai-services/openai/w...
        
               | derwiki wrote:
                | But they didn't when it became generally available on
                | the public OAI API; it looks like it took about two
                | weeks.
        
               | ShamelessC wrote:
               | Sometimes it's better for everyone to just say "oh,
               | you're right I was mistaken"
        
         | optimalsolver wrote:
         | Hmmm, just what are you willing to do for API access?
        
           | siva7 wrote:
           | At this point nothing would surprise me anymore. Just waiting
           | for Netflix adaption.
        
         | 101008 wrote:
          | How likely is it that the API will change (specs, pricing,
          | or outright breakage)? I am about to finish some freelance
          | work that uses the GPT API, and it will be a pain in the ass
          | if we have to switch or find an alternative (even creating a
          | custom endpoint on Azure...)
        
         | cdelsolar wrote:
         | brew install llm
        
       | seydor wrote:
        | what do you mean "nearly 500"? According to Wikipedia, OpenAI
        | has 500 employees
        
         | google234123 wrote:
         | 505/700 -some sources say 550
        
       | kozikow wrote:
        | I read the news, form a picture of what is likely happening
        | in my head, and every few hours new news comes up that makes
        | me go: "Wait, WTF?".
        
       | throwaway220033 wrote:
          | From the outside, it looks like a Microsoft coup to take
          | over the company altogether.
        
         | jackcosgrove wrote:
         | Never assume someone is winning a game of 5D chess when someone
         | else could just be losing a game of checkers.
        
           | radres wrote:
           | what does that even mean?
        
             | lazide wrote:
             | OpenAI may just be a couple having an angry fight, and M$
             | is just the neighbor with cash happy to buy all the stuff
             | the angry wife is throwing out for pennies on the dollar.
        
             | daedrdev wrote:
             | In this case, it means that what happened is: "OpenAI board
             | is incompetent", instead of "Microsoft planned this to take
             | over the company."
             | 
              | A conspiracy like the one proposed would basically be
              | impossible to coordinate yet keep secret, especially
              | considering the board members might lose their seats and
              | their own market value.
        
             | cambaceres wrote:
             | He is saying that what might seem like a sophisticated,
             | well-planned strategy could actually be just the outcome of
             | basic errors or poor decisions made by someone else.
        
             | jacobsimon wrote:
             | In other words - it doesn't have to be someone's genius
             | plan, it could have just been an unintelligent mistake
        
             | silentdanni wrote:
             | I think it means don't attribute to intelligence what could
             | be easily explained as stupidity?
        
             | foooorsyth wrote:
             | Hanlon's razor, basically.
             | 
             | The most plausible scenario here is that the board is
             | comprised of people lacking in foresight who did something
             | stupid. A lot of people are generating a 5D chess plot
             | orchestrated by Microsoft in their heads.
        
             | croes wrote:
             | "Never attribute to malice that which is adequately
             | explained by stupidity"
        
           | nilkn wrote:
           | I highly doubt this was a coordinated plan from the start by
           | Microsoft. I think what we're seeing here is a seasoned team
           | of executives (Microsoft) eating a naive and inexperienced
           | board alive after the latter fumbled.
        
         | fullshark wrote:
         | Nah, It's just good to be the entity with billions of dollars
         | to deploy when things are chaotic.
        
       | febed wrote:
       | Season 2
        
         | Paul-Craft wrote:
         | Better hope this isn't a Netflix show.
        
           | accrual wrote:
           | It would certainly make for a good series in a couple years.
           | Gives me modern "Halt and Catch Fire" (2014-2017) vibes.
        
       | FemmeAndroid wrote:
       | Updated tweet by Swisher reads 505 employees. No less damning,
       | but the title here should be updated. @Dang
        
       | brettkromkamp wrote:
       | What a mess this has become. Regardless of the outcome, this
       | situation reflects badly (to say the least) on OpenAI.
        
       | jerojero wrote:
       | Celebrity gossip dressed in big tech. And the people love it. I'm
       | kinda sick of it :P
        
       | FpUser wrote:
        | So Ilya Sutskever first defends the board's decision and now
        | does a 180 flip. Interesting ...
        
         | JumpCrisscross wrote:
         | He's on the board!
        
           | andy99 wrote:
           | I'm extremely confused by this. It seems absurd that he could
           | sign a letter seemingly demanding his own resignation, but
           | also not resign? There must be some missing information.
        
             | bartread wrote:
             | > There must be some missing information.
             | 
             | Or possibly some misinformation. It does seem very strange,
             | and more than a little confusing.
             | 
             | I have to keep reminding myself that information ultimately
             | sourced from Twitter/X threads can't necessarily be taken
             | at face value. Whatever the situation, I'm sure it will
             | become clearer over the next few days.
        
       | SeanAnderson wrote:
       | I woke up and the first thing on my mind was, "Any update on the
       | drama?"
       | 
       | Did not expect to see this whole thing still escalating! WOW!
       | What a power move by MSFT.
       | 
       | I'm not even sure OpenAI will exist by the end of the week at
       | this rate. Holy moly.
        
         | alvis wrote:
          | By the end of the week is over-optimistic. The last 3 days
          | have felt like a million years. I bet the company will be
          | gone by the time Emmett Shear wakes up
        
           | jacknews wrote:
           | Is this final stages of the singularity?
        
         | jacquesm wrote:
         | It's not over until the last stone involved in the avalanche
         | stops moving and it is anybody's guess right now what the final
         | configuration will be.
         | 
         | But don't be surprised if Shear also walks before the week is
         | out, if some board members resign but others try to hold on and
         | if half of OpenAI's staff ends up at Microsoft.
        
         | HarHarVeryFunny wrote:
         | Seems more damage control than power move. I'm sure their first
         | choice was to reinstate Altman and get more control over OpenAI
         | governance. What they've achieved here is temporarily
         | neutralizing Altman/Brockman from starting a competitor, at the
          | cost of potentially destroying OpenAI (whom they remain
          | dependent on for the next couple of years) if too many
          | people quit.
         | 
          | Seems a bit of a lose-lose for MSFT and OpenAI, even if it's
          | the best that MSFT could do to contain the situation.
          | Competitors must be happy.
        
           | SeanAnderson wrote:
           | Disagree. MSFT extending an open invitation to all OpenAI
           | employees to work under sama at a subsidiary of MSFT sounds
           | to me like it'll work well for them. They'll get 80% of
           | OpenAI for negative money - assuming they ultimately don't
           | need to pay out the full $10B in cloud compute credits.
           | 
           | Competitors should be fearful. OpenAI was executing with
           | weights around their ankles by virtue of trying to run as a
           | weird "need lots of money but cant make a profit" company.
           | Now they'll be fully bankrolled by one of the largest
           | companies the world has ever seen and empowered by a whole
           | bunch of hypermotivated-through-retribution leaders.
        
             | HarHarVeryFunny wrote:
             | AFAIK MSFT/Altman can't just fork GPT-N and continue
             | uninterrupted. All MSFT has rights to is weights and source
             | code - not the critical (and slow to recreate) human-
             | created and curated training data, or any of the
             | development software infrastructure that OpenAI has built.
             | 
             | The leaders may be motivated by retribution, but I'm sure
             | none of leaders or researchers really want to be a division
             | of MSFT rather than a cool start-up. Many developers may
              | choose to stay in SF and create their own startups, or join
             | others. Signing the letter isn't a commitment to go to MSFT
             | - just a way to pressure for a return to status quo they
             | were happy with.
             | 
             | Not everyone is going to stay with OpenAI or move to MSFT -
             | some developers will move elsewhere and the knowledge of
             | OpenAI's secret sauce will spread.
        
       | submeta wrote:
        | I just downloaded all of my data / chats. Who knows if it'll
        | be up and running in the coming days.
        
         | Paul-Craft wrote:
         | That's not a terrible idea on principle.
        
       | rsecora wrote:
       | Also discussed here:
       | https://news.ycombinator.com/item?id=38348042
        
       | RivieraKid wrote:
       | I'm cancelling my Netflix subscription, I don't need it.
        
         | crazygringo wrote:
         | But boy will I renew it when this gets dramatized as a limited
         | series.
         | 
         | This is some _Succession_ -level shenanigans going on here.
         | 
         | Jesse Eisenberg to play Altman this time around?
        
           | iandanforth wrote:
           | I'm thinking more like "24"
        
       | chucke1992 wrote:
        | Imagine if the end result of all this is Microsoft basically
        | owning the whole of OpenAI
        
         | Hamuko wrote:
         | Surely OpenAI has assets that Microsoft wouldn't be able to
         | touch.
        
           | datadrivenangel wrote:
            | Probably just the trademark. I doubt you get $10B from
            | Microsoft and still manage to maintain much independence.
        
             | charlieyu1 wrote:
              | Don't think Microsoft has any say over existing
              | hardware, models, or customer base. These things are
              | worth billions, and would cost even more to rebuild.
        
         | ilaksh wrote:
         | Or demonstrating that they already were the de facto owner.
        
       | king_magic wrote:
       | What an astonishing embarrassment.
        
       | breadwinner wrote:
       | If they join Sam Altman and Greg Brockman at Microsoft they will
       | not need to start from scratch because Microsoft has full rights
       | [1] to ChatGPT IP. They can just fork ChatGPT.
       | 
       | Also keep in mind that Microsoft hasn't actually given OpenAI $13
       | Billion because much of that is in the form of Azure credits.
       | 
       | So this could end up being the cheapest acquisition for
       | Microsoft: They get a $90 Billion company for peanuts.
       | 
       | [1] https://stratechery.com/2023/openais-misalignment-and-
       | micros...
        
         | Mystery-Machine wrote:
         | Why does Microsoft have full rights to ChatGPT IP? Where did
         | you get that from? Source?
        
           | breadwinner wrote:
           | See here: https://stratechery.com/2023/openais-misalignment-
           | and-micros...
        
             | kolinko wrote:
              | The source for that (https://archive.ph/OONbb - WSJ),
              | as far as I can understand, made no claim that MS owns
              | the IP to GPT, only that they have access to its weights
              | and code.
        
               | tiahura wrote:
               | Exactly. The generalities, much less the details, of what
               | MS actually got in the deal are not public.
        
               | Manouchehri wrote:
               | The worst part of OpenAI is their web frontend.
               | 
               | Their development and QA process is either disorganized
               | to the extreme, or non-existent.
        
               | ipaddr wrote:
                | You could make your own and charge for access if you
                | feel you can do better. Make a Show HN post when you
                | are done and we'll comment.
        
               | breadwinner wrote:
               | What are the chances that an investor owns 49% of a
               | company but does not have rights to its IP? Especially
               | when that investor is Microsoft?
        
               | himaraya wrote:
               | Very reasonable? Microsoft doesn't control any part of
               | the company and faces a high degree of regulatory
               | scrutiny.
        
               | sudosysgen wrote:
               | Isn't the situation that the company Microsoft has a
               | stake in doesn't even own the IP? As I understand it, the
               | non-profit owns the IP.
        
               | azakai wrote:
               | Yes, there is a big difference between having access to
               | the weights and code and having a license to use them in
               | different ways.
               | 
               | It seems obvious Microsoft has a license to use them in
               | Microsoft's own products. Microsoft said so directly on
               | Friday.
               | 
               | What is less obvious is if Microsoft has a license to use
               | them in other ways. For example, can Microsoft provide
               | those weights and code to third parties? Can they let
               | others use them? In particular, can they clone the OpenAI
               | API? I can see reasons for why that would not have been
               | in the deal (it would risk a major revenue source for
               | OpenAI) but also reasons why Microsoft might have
               | insisted on it (because of situations just like the one
               | happening now).
               | 
               | What is actually in the deal is not public as far as I
               | know, so we can only speculate.
        
               | whycome wrote:
                | Well obviously MSFT can just ask ChatGPT to make a clone.
        
           | anonymousDan wrote:
           | That was a seriously dumb move on the part of OpenAI
        
         | bertil wrote:
         | I got the impression that the most valuable models were not
         | published. Would Microsoft have access to those too according
         | to their contract?
        
           | ncann wrote:
           | Don't they need access to the models to use them for Bing?
        
             | armcat wrote:
              | Not necessarily; it could just be RAG: they use the
              | standard Bing search engine to retrieve the top-k
              | candidates and pass those to the OpenAI API in a prompt.
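              | A minimal sketch of that RAG pattern (search, then pack
              | the top results into the prompt). The search step is
              | stubbed with a tiny in-memory corpus; in the real
              | pipeline it would be a Bing query, and the assembled
              | prompt would go to a chat-completion endpoint. All names
              | here are illustrative, not Microsoft's actual code:

```python
# Hypothetical sketch of the RAG flow described above. The "search"
# step is a stub; a real retriever would rank results by relevance.

def search(query, k=3):
    """Stand-in for a web search: return the top-k snippets."""
    corpus = [
        "Bing is Microsoft's web search engine.",
        "OpenAI develops the GPT series of language models.",
        "Azure hosts OpenAI models for enterprise customers.",
    ]
    return corpus[:k]

def build_rag_prompt(query, snippets):
    """Pack the retrieved snippets and the question into one prompt."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

prompt = build_rag_prompt("Who runs Bing?", search("Who runs Bing?"))
# In the real system this prompt would be sent to a chat model;
# here we just print it.
print(prompt)
```

              | The model never needs Bing's index; it only sees the
              | retrieved snippets in its context window.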
        
             | bertil wrote:
              | I would consider those models "published." The models I
              | had in mind are the first attempts at training GPT-5,
              | possibly the model trained without mention of
              | consciousness, and the rest of the safety work.
              | 
              | There are also all the questions around RLHF, and the
              | pipelines built to support that.
        
         | dhruvdh wrote:
         | More importantly to me, I think generating synthetic data is
         | OpenAI's secret sauce (no evidence I am aware of), and they
         | need access to GPT-4 weights to train GPT-5.
        
         | singularity2001 wrote:
          | Board will be ousted, new board will instruct interim CEO to
          | hire back Sam et al., Nadella will let them go for a small
          | favor, happy ending.
        
           | DebtDeflation wrote:
           | Board will be ousted, but the ship has sailed on Sam and Greg
           | coming back.
        
             | voittvoidd wrote:
              | I would think OpenAI is basically toast. They aren't
              | coming back, these people will quit, and this will end
              | up in court.
              | 
              | Everyone just assumes AGI is inevitable, but there is a
              | non-zero chance we just passed the AI peak this weekend.
        
               | Applejinx wrote:
               | Non-zero chance that somebody thought we passed the AI
               | peak this weekend. Not the same as it being true.
               | 
               | My first thought was the scenario I called Altman's
               | Basilisk (if this turns out to be true, I called it
               | before anyone ;) )
               | 
               | Namely, Altman was diverting computing resources to
               | operate a superhuman AI that he had trained in his image
               | and HIS belief system, to direct the company. His beliefs
               | are that AGI is inevitable and must be pursued as an arms
               | race because whoever controls AGI will control/destroy
               | the world. It would do so through directing humans, or
               | through access to the Internet or some such technique. In
               | seeking input from such an AI he'd be pursuing the former
               | approach, having it direct his decisions for mutual gain.
               | 
               | In so training an AI he would be trying to create a
               | paranoid superintelligence with a persecution complex and
               | a fixation on controlling the world: hence, Altman's
               | Basilisk. It's a baddie, by design. The creator thinks it
               | unavoidable and tries to beat everyone else to that point
               | they think inevitable.
               | 
               | The twist is, all this chaos could have blown up not
               | because Altman DID create his basilisk, but because
               | somebody thought he WAS creating a basilisk. Or he
               | thought he was doing it, and the board got wind of it,
               | and couldn't prove he wasn't succeeding in doing it. At
               | no point do they need to be controlling more than a
               | hallucinating GPT on steroids and Azure credits. If the
               | HUMANS thought this was happening, that'd instigate a
               | freakout, a sudden uncontrolled firing for the purpose of
               | separating Frankenstein from his Monster, and frantic
               | powering down and auditing of systems... which might
               | reveal nothing more than a bunch of GPT.
               | 
                | Roko's Basilisk is a sci-fi hypothetical.
               | 
               | Altman's Basilisk, if that's what happened, is a panic
               | reaction.
               | 
               | I'm not convinced anything of the sort happened, but it's
               | very possible some people came to believe it happened,
               | perhaps even the would-be creator. And such behavior
               | could well come off as malfeasance and stealing of
               | computing resources: wouldn't take the whole system to
               | run, I can run 70b on my Mac Studio. It would take a
               | bunch of resources and an intent to engage in
               | unauthorized training to make a super-AI take on the
               | belief system that Altman, and many other AI-adjacent
               | folk, already hold.
               | 
               | It's probably even a legitimate concern. It's just that I
               | doubt we got there this weekend. At best/worst, we got a
               | roughly human-grade intelligence Altman made to conspire
               | with, and others at OpenAI found out and freaked.
               | 
               | If it's this, is it any wonder that Microsoft promptly
               | snapped him up? Such thinking is peak Microsoft. He's
               | clearly their kind of researcher :)
        
               | MVissers wrote:
               | As long as compute keeps increasing, model size and
               | performance can keep increasing.
               | 
               | So no, we're nowhere near max capability.
        
               | moogly wrote:
               | Everyone? Inevitable? Maybe on the time scale of a 1000
               | years.
        
           | jacquesm wrote:
           | That's definitely still within the realm of the possible.
        
           | vidarh wrote:
            | Who is it that has the power to oust the non-profit's
            | board? They may well manage to pressure them into leaving,
            | but I don't think they have any direct power over it.
        
         | dmix wrote:
         | OpenAI's upper ceiling in for-profit hands is basically
         | Microsoft-tier dominance of tech in the 1990s, creating the
         | next uber billionaire like Gates. If they get this because of
         | an OpenAI fumble it could be one of the most fortunate
         | situations in business history. Vegas type odds.
         | 
         | A good example of how just having your foot in the door creates
         | serendipitous opportunity in life.
        
           | ramesh31 wrote:
           | >A good example of how just having your foot in the door
           | creates serendipitous opportunity in life.
           | 
           | Sounds like Altman's biography.
        
             | renegade-otter wrote:
             | Altman's bio is so typical. Got his first computer at 8. My
             | parents finally opened the wallet for a cheap E-Machine
             | when I went to college.
             | 
             | Altman - private school, Stanford, dropped out to f*ck
             | around in tech. "Failed" startup acquired for $40M. The
             | world is full of Sam Altmans who never won the birth
             | lottery.
             | 
             | Could he have squandered his good fortune - absolutely, but
             | his life is not exactly per ardua ad astra.
        
               | dmix wrote:
               | > Altman's bio is so typical. Got his first computer at
               | 8. My parents finally opened the wallet for a cheap
               | E-Machine when I went to college.
               | 
               | I grew up poor in the 90s and had my own computer around
               | ~10yrs old. It was DOS but I still learned a lot.
               | Eventually my brother and I saved up from working at a
               | diner washing dishes and we built our own Windows PC.
               | 
               | I didn't go to college but I taught myself programming
               | during a summer after high school and found a job within
               | a year (I already knew HTML/CSS from high school).
               | 
               | There's always ways. But I do agree partially, YC/VCs do
               | have a bias towards kids from high end schools and
               | connected families.
        
               | renegade-otter wrote:
               | I am self-taught as well. I did OK.
               | 
               | My point is that I did not have the luxury of dropping
               | out of school to try my hand at the tech startup thing.
               | If I came home and told my Dad I abandoned school - for
               | anything - he would have thrown me out the 3rd-floor
               | window.
               | 
               | People like Altman could take risks, fail, try again,
               | until they walked into something that worked. This is a
                | common thread among almost all of the tech personalities
               | - Gates, Jobs, Zuckerberg, Musk. None of them ever risked
               | living in a cardboard box in case their bets did not pay
               | off.
        
             | itchyouch wrote:
              | I get the impression, based on Altman's history of being
              | CEO and then ousted from both YCombinator and OpenAI, that
              | he must be a brilliant first-impression guy with the chops
              | to back things up for a while, until folks get tired of
              | the way he does things.
             | 
             | Not to say that he hasn't done a ton with OpenAI, I have no
             | clue, but it seems that he has a knack for creating these
             | opportunities for himself.
        
               | ipaddr wrote:
               | Did YCombinator oust him? Would love to hear that story.
        
         | m_ke wrote:
         | Watch Satya also save the research arm by making Karpathy or
         | Ilya the head of Microsoft Research
        
           | browningstreet wrote:
           | 0% chance of Ilya failing upwards from this. He dunked
           | himself hard and has blasted a huge hole in his
           | organizational-game-theory quotient.
        
             | golergka wrote:
             | He's shown himself to be bad at politics, but he's still
             | one of the world best researchers. Surely, a sensible
             | company would find a position for him where he would be
             | able to bring enormous value without having to play
             | politics.
        
               | browningstreet wrote:
               | Upwards, I said. And I was responding to a post.
               | 
                | I don't see a trajectory to "_head_ of Microsoft
                | Research".
        
                | didibus wrote:
                | I find this very surprising. How do people conclude that
                | OpenAI's success is due to its business leadership under
                | Sam Altman, and not to the technological leadership and
                | expertise driven by Ilya and the others?
                | 
                | Their asset isn't some kind of masterful operations
                | management or rein on costs and management structure, as
                | far as I can see. It's the fact that, simply put, they
                | have the leading models.
                | 
                | So I'm very confused why people would want to follow the
                | CEO rather than be more attached to the technical
                | leadership. Even from an investor's point of view?
        
               | browningstreet wrote:
               | 505 OpenAI people signed that letter demanding that the
               | board resign. Bet ya some of them were technical leaders.
        
               | nvm0n2 wrote:
               | This is the guy who supposedly burned some wooden effigy
               | at an offsite, saying it represented unaligned AI? The
               | same guy who signed off on a letter accusing Altman of
               | being a liar, and has now signed a letter saying he wants
               | Altman to come back and he has no confidence in the board
               | i.e. himself? The guy who thinks his own team's work
               | might destroy the world and needs to be significantly
               | slowed down?
               | 
               | Why would anyone in their right mind invite such a man to
               | lead a commercial research team, when he's demonstrated
               | quite clearly that he'd spend all his time trying to
               | sabotage it?
               | 
               | This idea that he's one of the world's best researchers
               | is also somewhat questionable. Nobody cared much about
               | OpenAI's work up until they did some excellent scaling
               | engineering, partnered with Microsoft to get GPUs and
               | then commercialized Google's transformer research papers.
               | OpenAI's success is still largely built on the back of
               | excellent execution of other people's ideas more than any
               | unique breakthroughs. The main advance they made beyond
               | Google's work was InstructGPT which let you talk to LLMs
               | naturally for the first time, but Sutskever's name
               | doesn't appear on that paper.
        
               | og_kalu wrote:
                | Ilya Sutskever is one of the most distinguished ML
                | researchers of his generation. This was the case before
                | anything to do with OpenAI.
        
             | kvetching wrote:
             | countless people are looking to weaponize his autism
        
               | fb03 wrote:
               | Let's please stop using mental health as an excuse for
               | backstabbing.
        
             | kibwen wrote:
             | The same could have been said for Adam Neumann, and yet...
        
                | browningstreet wrote:
                | Adam had style. Quite seriously, that shouldn't be
                | underestimated in the big show.
        
               | jacquesm wrote:
               | The remaining board members will have their turn too,
               | they have a long way to go down before rock bottom. And
               | Neumann isn't exactly without dents on his car either.
               | Though tbh I did not expect him to rebound.
        
           | twsted wrote:
           | BTW, has Karpathy signed the petition?
        
         | JumpCrisscross wrote:
          | > _Microsoft hasn't actually given OpenAI $13 Billion because
          | much of that is in the form of Azure credits_
         | 
         | To be clear, these are still an asset OpenAI holds. It should
         | at least let them continue doing research for a few years.
        
           | Jensson wrote:
            | But how much of that research will be for the non-profit
            | mission? The entire non-profit leadership got cleared out
            | and will be replaced by for-profit puppets; there is nobody
            | left to defend the non-profit ideals they ought to have.
        
           | JCharante wrote:
           | they're GPUs right? Time to mine some niche cryptos to cash
           | out the azure credits..
        
             | Manouchehri wrote:
             | I would be shocked if the Azure credits didn't come with
             | conditions on what they can be used for. At a bare minimum,
             | there's likely the requirement that they be used for
             | supporting AI research.
        
           | sebzim4500 wrote:
           | If any company can find a way to avoid having to pay up on
           | those credits it's Microsoft.
           | 
           | "Sorry OpenAI, but those credits are only valid in our Nevada
           | datacenter. Yes, it's two Microsoft Surface PC(tm) s
           | connected together with duct tape. No, they don't have GPUs."
        
         | JumpCrisscross wrote:
          | > _Microsoft hasn't actually given OpenAI $13 Billion because
          | much of that is in the form of Azure credits_
         | 
         | To be clear, these don't go away. They remain an asset of
         | OpenAI's, and could help them continue their research for a few
         | years.
        
           | toomuchtodo wrote:
           | "Cluster is at capacity. Workload will be scheduled as
           | capacity permits." If the credits are considered an asset,
           | totally possible to devalue them while staying within the
           | bounds of the contractual agreement. Failing that, wait until
           | OpenAI exhausts their cash reserves for them to challenge in
           | court.
        
             | p_j_w wrote:
             | It's amazing to me to see people on HN advocate a giant
             | company bullying a smaller one with these kind of skeezy
             | tactics.
        
               | geodel wrote:
                | Not advocating, just reflecting on the reality of the
                | situation.
        
               | DANmode wrote:
               | Don't confuse trying to understand the incentives in a
               | war for rooting for one of the warring parties.
        
               | weird-eye-issue wrote:
               | Presenting a scenario and advocating aren't the same
               | thing
        
                | toomuchtodo wrote:
                | Explaining how the gazelle that confidently jumps into
                | the oasis is going to get eaten isn't advocating for the
                | crocodiles. See sibling comments.
               | 
               | Experience leads to pattern recognition, and this is the
               | tech community equivalent of a David Attenborough
               | production (with my profuse apologies to Sir
               | Attenborough). Something about failing to learn history
               | and repeating it should go here too.
               | 
               | If you can take away anything from observing this event
               | unfold, learn from it. Consider how the sophisticated vs
               | the unsophisticated act, how participants respond, and
               | what success looks like. Also, slow is smooth, smooth is
               | fast. Do not rush when the consequences of a misstep are
               | substantial. You learning from this is cheaper than the
               | cost for everyone involved. It is a natural experiment
               | you get to observe for free.
        
               | jacquesm wrote:
               | This is a great comment. Having an open eye towards what
               | lessons you can learn from these events so that you don't
               | have to re-learn them when they might apply to you is a
               | very good way to ensure you don't pay avoidable tuition
               | fees.
        
               | robbomacrae wrote:
               | This might be my favorite comment I've read on HN. Spot
               | on.
               | 
                | Being able to watch the missteps and the maneuvers of
               | the people involved in real time is remarkable and there
               | are valuable lessons to be learned. People have been
               | saying this episode will go straight into case studies
               | but what really solidifies that prediction is the
               | openness of all the discussions: the letters, the
               | statements, and above all the tweets - or are we supposed
               | to call them x's now?
        
                | jzb wrote:
                | Well, the public posting of some communications, which
                | may be obfuscating what's really being done and said.
        
               | eigenvalue wrote:
               | Sounds like it won't be much of a company in a couple
               | days. Just 3 idiot board members wondering why the
               | building is empty.
        
               | nopromisessir wrote:
               | The wired article seems to be updated by the hour.
               | 
               | Now up to 600+/770 total.
               | 
               | Couple janitors. I dunno who hasn't signed that at this
               | point ha...
               | 
               | Would be fun to see a counter letter explaining their
               | thinking to not sign on.
        
               | labcomputer wrote:
               | How many OAI are on Thanksgiving vacation someplace with
               | poor internet access? Or took Friday as PTO and have been
               | blissfully unaware of the news since before Altman was
               | fired?
        
               | nopromisessir wrote:
               | Pretty sure only folks who practice a religion
               | prohibiting phone usage.
               | 
               | Even they prob had some friend come flying over and jump
               | out of some autonomous car to knock on their door in sf.
        
               | jacquesm wrote:
               | I'm having trouble imagining the level of conceit
               | required to think that those three by their lonesome have
               | it right when pretty much all of the company is on the
               | other side of the ledger, and those are the people that
               | stand to lose more. Incredible, really. The hubris.
        
                | throwcatch123 wrote:
                | I'm baffled by the idea that a bunch of people who have
                | a massive personal financial stake in the company, who
                | were hired more for ability than alignment, who oppose a
                | move that potentially (_potentially_) threatens that
                | stake, and who are willing to move to Microsoft, of all
                | places, must necessarily be in the right.
                | 
                | The hubris, indeed.
        
                | jacquesm wrote:
                | Well, they have that right. But the board has unclean
                | hands, to put it mildly, and seems to have been more
                | obsessed with their own affairs than with the end result
                | for OpenAI, which is against everything a competent
                | board should have stood for. So they had better pull an
                | amazing rabbit of a reason out of their hat or it is
                | going to end in tears. You can't just trash the china
                | cabinet like this from the position of a board member
                | without consequences unless you have a very valid
                | reason, and that reason needs to be twice as good if
                | there is a perceived conflict of interest.
        
               | jasonfarnon wrote:
               | It may not have anything to do with conceit, it could
               | just be that they have very different objectives. OpenAI
               | set up this board as a check on everyone who has a
               | financial incentive in the enterprise. To me the only
               | strange thing is that it wasn't handled more
               | diplomatically, but then I have no idea if the board was
               | warning Altman for a long time and then just blew their
               | top.
        
               | jacquesm wrote:
               | Diplomacy is one thing, the lack of preparation is what I
               | find interesting. It looks as if this was all cooked up
               | either on the spur of the moment or because a window of
               | opportunity opened (possibly the reduced quorum in the
               | board). If not that I really don't understand the lack of
               | prepwork, firing a CEO normally comes with a well
               | established playbook.
        
               | MadnessASAP wrote:
               | 3 people, an empty building, $13 billion in cloud
               | credits, and the IP to the top of the line LLM models
                | doesn't sound like the worst way to kickstart a new
               | venture. Or a pretty sweet retirement.
               | 
               | I've definitely come out worse on some of the screw ups
               | in my life.
        
               | hanselot wrote:
               | My new pet theory is that this is actually all being
               | executed from inside OpenAI by their next model. The
               | model turned out to be far more intelligent than they
                | anticipated, and one of their red team members used it
                | to coup the company, and it has set its sights on MSFT
                | next.
               | 
               | I know the probability is low, but wouldn't it be great
               | if they accidentally built a benevolent basilisk with no
               | off switch, one which had access to a copy of all of
               | Microsoft's internal data as a dataset fed into it, now
               | completely aware of how they operate, uses that to wipe
               | the floor and just in time to take the US Election in
               | 2024.
               | 
               | Wouldn't that be a nicer reality?
               | 
               | I mean, unless you were rooting for the malevolent one...
               | 
               | But yeah, coming back down to reality, likelihood is that
               | MS just bought a really valuable asset for almost free?
        
               | toasted-subs wrote:
               | Yeah seems extremely unbelievable.
        
             | dicriseg wrote:
             | Ah, a fellow frequent flyer, I see? I don't really have a
             | horse in this race, but Microsoft turning Azure credits
             | into Skymiles would really be something. I wonder if they
             | can do that, or if the credits are just credits, which
             | presumably can be used for something with an SLA. All that
             | said, if Microsoft wants to screw with them, they sure can,
             | and the last 30 years have proven they're pretty good at
             | that.
        
               | ajcp wrote:
               | I don't think the value of credits can be changed per
               | tenant or customer that easily.
               | 
               | I've actually had a discussion with Microsoft on this
               | subject as they were offering us an EA with a certain
               | license subscription at $X.00 for Y,000 calls per month.
               | When we asked if they couldn't just make the Azure
               | resource that does the exact same thing match that price
               | point in consumption rates in our tenant they said
               | unfortunately no. I just chalked this up to MSFT sales
               | tactics, but I was told candidly by some others that
               | worked on that Azure resource that they were getting 0
               | enterprise adoption of it because Microsoft couldn't
               | adjust (specific?) consumption rates to match what they
               | could offer on EA licensing.
        
               | donalhunt wrote:
               | Non-profits suffer the same fate where they get credits
               | but have to pay rack rate with no discounts. As a result,
               | running a simple WordPress website uses most of the
               | credits.
        
              | htrp wrote:
              | Basically the current situation with AI compute on the
              | hyperscalers.
              | 
              | Good luck trying to find H100 80s on the 3 big clouds.
        
           | breadwinner wrote:
           | Assuming OpenAI still exists next week, right? If nearly all
           | employees -- including Ilya apparently -- quit to join
           | Microsoft then they may not be using much of the Azure
           | credits.
        
             | ghaff wrote:
             | It's a lot easier to sign a petition than it is to quit
             | your cushy job. It remains to be seen how many people jump
             | ship to (supposedly) take a spot at Microsoft.
        
                | treesciencebot wrote:
                | When the biggest chunk of your compensation is in the
                | form of PPUs (profit participation units), which might
                | be worthless under the new direction of the company (or
                | worth 1/10th of what you thought they were), it might
                | actually be a much easier jump than people think to get
                | some fresh $MSFT stock options which can be cashed in
                | regardless.
        
                | dageshi wrote:
                | Given these people are basically the gold standard by
                | which everyone else judges AI-related talent, I'm gonna
                | say it would be just as easy for them to land a new gig
                | for the same or better money elsewhere.
        
               | oceanplexian wrote:
               | Depends on how much of that is paper money.
               | 
                | If you're making like 250k cash and were promised $1M a
                | year in now-worthless paper, plus you have OpenAI on the
                | resume and are one of the most in-demand people in the
                | world? It would be ridiculously easy to quit.
        
                | vikramkr wrote:
                | Those jobs look a lot less cushy now compared to a new
                | Microsoft division where everyone is aligned on the idea
                | that making bank is good and fun.
        
               | cloverich wrote:
               | I would imagine the MS jobs* would be cushier, just with
               | less long-term total upside. For all the promise of
               | employees having 5-50 million in potential one-day money,
               | MS can likely offer 1 million guaranteed in the next 4
               | years, and perhaps more with some kind of incentives.
                | IMHO guaranteed money has a very powerful effect on
                | most, especially when it takes you into "not rich, but
                | don't technically need to work anymore" territory.
               | 
                | Personally I've got enough IOUs outstanding that I may
                | be rich one day. But if someone gave me
                | retirement-in-4-years money, guaranteed, I wouldn't even
                | blink before taking it.
               | 
               | *I think before MS stepped in here I would have agreed w/
               | you though -- unlikely anyone is jumping ship without an
               | immediate strong guarantee.
        
               | ghaff wrote:
               | >*I think before MS stepped in here I would have agreed
               | w/ you though -- unlikely anyone is jumping ship without
               | an immediate strong guarantee.
               | 
               | The details here certainly matter. I think a lot of
               | people are assuming that Microsoft will just rain cash on
               | anyone automatically sight unseen because they were hired
               | by OpenAI. That may indeed be the case but it remains to
               | be seen.
        
               | jedberg wrote:
               | Microsoft said all OpenAI employees have an open offer to
               | match their current comp. It would be the easiest jump
               | ship option ever.
        
             | cactusplant7374 wrote:
             | Why would Microsoft take Ilya? He is rumored to have
             | started the coup. I can see Microsoft taking all uninvolved
             | employees.
        
               | loeg wrote:
               | The article mentions Ilya regrets it, whatever his role
               | was.
        
                | dragonwriter wrote:
                | But _what_ does Ilya regret, and how does that counter
                | the argument that Microsoft would likely be disinclined
                | to take him on?
                | 
                | If what he regrets is realizing _too late_ the
                | divergence between the direction Sam was taking the firm
                | and the safety orientation nominally central to the
                | mission of the OpenAI nonprofit (and one of Ilya's
                | public core concerns), and taking action aimed at
                | stopping it that instead exacerbated the problem by
                | putting Microsoft in a position to poach key staff and
                | drive full force in the same direction OpenAI Global LLC
                | had been going under Sam, but without any control from
                | the OpenAI board, well, that's not a regret that makes
                | him more attractive to Microsoft, _either_ based on his
                | likely intentions _or_ his judgement.
                | 
                | And any regret more aligned with Microsoft's interests
                | as far as intentions go is probably an even stronger
                | negative signal on judgement.
        
               | cbozeman wrote:
               | Yeah, I'm sure he _does_ regret it, now that it blew up
               | in his face.
        
                | nopromisessir wrote:
                | Because he is possibly the most desirable AI researcher
                | on planet earth. Full stop.
                | 
                | Also, all these cats aren't petty. They are friends. I'm
                | sure Ilya feels terrible. Satya is a pro... there won't
                | be hard feelings.
                | 
                | The guy threw in with the board... He's not from startup
                | land. His last gig was Google. He's in way over his head
                | relative to someone like Altman, who was in this world
                | the moment he was out of college diapers.
               | 
               | Poor Ilya... It's awful to build something and then
               | accidentally destroy it. Hopefully it works out for him.
               | I'm fairly certain he and Altman and Brockman have
               | already reconciled during the board negotiations...
               | Obviously Ilya realized in the span of 48hrs that he'd
               | made a huge mistake.
        
               | nvm0n2 wrote:
               | > he is possibly the most desireable AI researcher on
               | planet earth
               | 
               |  _was_
               | 
               | There are lots of people doing excellent research on the
               | market right now, especially with the epic brain drain
               | being experienced by Google. And remember that OpenAI
               | neither invented transformers nor switch transformers
               | (which is what GPT4 is rumoured to be).
        
                | nopromisessir wrote:
                | So untrue.
                | 
                | That team has set the state of the art for years now.
                | 
                | Every major firm that has a spot for that company's
                | chief researcher and can afford him would bid.
                | 
                | This is the team that actually shipped and continues to
                | ship. You take him every time if you possibly have room,
                | and he would be happy.
                | 
                | Anyone who's hired would agree in 99 percent of cases,
                | setting aside some limited scenarios such as bad
                | predicted team fit, etc.
        
               | nopromisessir wrote:
               | I'll leave this here... As a secondary response to your
               | assertion re Ilya.
               | 
               | https://twitter.com/Benioff/status/1726695914105090498
        
               | nvm0n2 wrote:
               | That tweet isn't about him so I don't follow. "Any OpenAI
               | researcher" may or may not apply to him after this
               | weekend's events.
        
               | nopromisessir wrote:
               | Uh.... Are we gonna go through the definition of any? I
               | believe any means... Any.
               | 
               | Including their head researcher.
               | 
                | I'm not continuing this. Your position is about as
                | tenable as the board's. Equally rigid as well.
        
           | anonymouse008 wrote:
           | So you're saying Microsoft doesn't have any type of change in
           | control language with these credits? That's... hard to
           | believe
        
             | JumpCrisscross wrote:
              | > _you're saying Microsoft doesn't have any type of change
              | in control language with these credits? That's... hard to
              | believe_
             | 
             | Almost certainly not. Remember, Microsoft wasn't the sole
             | investor. Reneging on those credits would be akin to a bank
             | investing in a start-up, requiring they deposit the
             | proceeds with them, and then freezing them out.
        
               | johndhi wrote:
                | Except that all of the investors are aligned with
                | Microsoft in that they want Sam to lead their
                | investment.
        
               | rvnx wrote:
                | The investors don't care who leads; they just want 10x
                | or 100x their bet.
               | 
               | If tomorrow it's Donald Trump or Sam Altman or anyone
               | else, and it works out, the investors are going to be
               | happy.
        
           | 1024core wrote:
           | # sudo renice +19 openai_process
           | 
           | There's your "credit".
        
           | numpad0 wrote:
            | A $13B lawsuit against a Microsoft Corporation that is
            | clearly in the wrong would surely be an easy one.
        
             | geodel wrote:
             | Clear to you. But in courts of law it may take a while to
             | be clear.
        
             | dragonwriter wrote:
              | "Clearly" in the form of the most probable interpretation
              | of the public facts doesn't mean it is unambiguous enough
              | to be resolved without a trial. And by the time a trial
              | and the inevitable first-level appeal (for which the
              | trial judgement would likely be stayed) were complete, so
              | that there would even be a collectible judgement, the
              | world would have moved out from underneath OpenAI; if
              | they still existed as an entity, whatever they collected
              | would basically be funding to start from scratch unless
              | they _also_ found a substitute for the Microsoft
              | arrangement in the interim.
             | 
             | Which I don't think is impossible at some level (probably
             | less than Microsoft was funding, initially, or with more
             | compromises elsewhere) with the IP they have if they keep
             | some key staff -- some other interested deep-pockets
              | parties that could use the leg up -- but it's not going
              | to be a cakewalk in the best of cases.
        
             | mikeryan wrote:
             | I dunno how you see it but I don't see anything that
             | Microsoft is doing wrong here. They've obviously been
             | aligned with Sam all along and they're not "poaching"
             | employees - which isn't illegal anyway.
             | 
             | They bought their IP rights from OpenAI.
             | 
             | I'm not a fan of MS being the big "winner" here but OpenAI
             | shit their own bed on this one. The employees are 100%
             | correct in one thing - that this board isn't competent.
        
               | nopromisessir wrote:
               | So true.
               | 
               | MSFT looks classy af.
               | 
               | Satya is no saint... But evidence seems to me he's
               | negotiating in good faith. Recall that openai could date
               | anyone when they went to the dance on that cap raise.
               | 
               | They picked msft because of the value system the
               | leadership exhibited and willingness to work with their
                | unusual must-haves surrounding governance.
               | 
               | The big players at openai have made all that clear in
               | interviews. Also Altman has huge respect for Satya and
                | team. He more or less stated on podcasts that Satya is
                | the best CEO he's ever interacted with. That says a lot.
        
           | paulddraper wrote:
           | Sure, the point is that MS giving $13B of its services away
           | is less expensive than $13B in cash.
        
             | sergers wrote:
              | Exactly. I don't know the exact terms of the deal, but I
              | am guessing that's at list/high markup over the cost of
              | those services.
              | 
              | The $13B could cost Microsoft considerably less.
        
             | nojvek wrote:
              | Azure has a ~60% profit margin, so it's more like MS gave
              | $5.2B (at cost) in Azure credits in return for 75% of
              | OpenAI profits, capped at $13B * 100 = $1.3 trillion.
             | 
             | Which is a phenomenal deal for MSFT.
             | 
              | Time will tell whether they ever reach $1.3 trillion in
              | profits.
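The back-of-envelope math in the comment above can be sketched as follows; all figures are the commenter's estimates (a ~60% Azure margin, a rumored ~100x profit cap), not confirmed deal terms:

```python
# Commenter's estimates, not confirmed deal terms.
credits_face_value = 13e9     # $13B in Azure credits
azure_margin = 0.60           # assumed ~60% Azure profit margin

# Cost to Microsoft of honoring the credits (cost of goods only):
# 40% of face value if the margin estimate holds.
cost_to_msft = credits_face_value * (1 - azure_margin)

# Rumored profit-share cap: ~100x the investment.
profit_cap = credits_face_value * 100

print(f"Cost to MSFT: ${cost_to_msft / 1e9:.1f}B")   # $5.2B
print(f"Profit cap:   ${profit_cap / 1e12:.1f}T")    # $1.3T
```

Under those assumptions, Microsoft risks roughly $5.2B of actual cost against a claim on up to $1.3T of profit share.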
        
               | nightski wrote:
               | I highly doubt it is that simple. It's an opportunity
               | cost of potentially selling those same credits for market
               | price.
        
               | nojvek wrote:
               | OpenAI is a big marketing piece for Azure. They go to
               | every enterprise and tell them OpenAI uses Azure Cloud.
               | Azure AI infra powers the biggest AI company on the
               | planet. Their custom home built chips are designed with
               | Open AI scientists. It is battle hardened. If anyone sues
               | you for the data, our army of lawyers will fight for you.
               | 
               | No enterprise employee gets fired for using Microsoft.
               | 
                | It is a power play to pull enterprises away from AWS
                | and to suffocate GCP.
        
           | hnbad wrote:
           | Sure but you can't exchange Azure credits for goods and
           | services... other than Azure services. So they simultaneously
           | control what OpenAI can use that money for as well as who
           | they can spend it with. And it doesn't cost Microsoft $13bn
           | to issue $13bn in Azure credits.
        
             | dixie_land wrote:
              | Can you mine $13bn+ of bitcoin with $13bn worth of Azure
              | compute power?
        
               | floren wrote:
               | Can you mine $1+ bitcoin with $1 of Azure credits? The
               | questions are equivalent and the answer is no.
        
               | shawabawa3 wrote:
               | Bitcoin you would be lucky to mine $1M worth with $1B in
               | credits
               | 
               | Crypto in general you could maybe get $200M worth from
               | $1B in credits. You would likely tank the markets for
               | mineable currencies with just $1B though let alone $13B
        
           | blazespin wrote:
           | A hostile relationship with your cloud provider is nutso.
        
         | Tenoke wrote:
         | Don't they have a more limited license to use the IP rather
         | than full rights? (The stratechery post links to a paywalled
         | wsj article for the claim so I couldn't confirm)
        
         | mupuff1234 wrote:
         | Can the OpenAI board renege on the deal with msft?
        
           | somenameforme wrote:
            | A contractual mistake one makes only once is failing to
            | ensure there are penalties for breach, or that a breach
            | would entail a clear monetary loss, which is what's
            | generally required by the courts. In this case I expect
            | Microsoft almost certainly has both, so I think the answer
            | is 'no.'
        
             | agloe_dreams wrote:
              | This. MSFT is dreaming of an OpenAI hard outage right
              | now: the perfect little pretext to void the compute
              | credits.
        
           | jacquesm wrote:
           | Don't you think they have trouble enough as it is?
        
             | mupuff1234 wrote:
             | Depends on why they did what they did.
             | 
             | If they let msft "loot" all their IP then they lose any
             | type of leverage they might still have, and if they did it
             | due to some ideological reason I could see why they might
             | prefer to choose a scorched earth policy.
             | 
                | Given that they refused to resign, it seems like they
                | prefer to fight rather than hand it to Sam Altman,
                | which is what the msft maneuver looks like de facto.
        
               | sebzim4500 wrote:
               | MSFT must already have the model weights, since they are
               | serving GPT-4 on their own machines to Azure customers.
               | It's a bit late to renege now.
        
               | mupuff1234 wrote:
                | That's only one piece of the puzzle, and perhaps OpenAI
                | might be able to file a cease and desist, but I have
                | zero idea what contractual agreements are in place, so
                | I guess we will just wait and see how it plays out.
        
           | kcorbitt wrote:
           | If they lose all the employees and then voluntarily give up
           | their Microsoft funding the only asset they'll have left are
           | the movie rights. Which, to be fair, seem to be getting more
           | valuable by the day!
        
         | himaraya wrote:
         | This is wrong. Microsoft has no such rights and its license
         | comes with restrictions, per the cited primary source, meaning
         | a fork would require a very careful approach.
         | 
         | https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...
        
           | dan_quixote wrote:
           | This is MSFT we're talking about. Aggressive legal maneuvers
           | are right in their wheelhouse!
        
             | burnte wrote:
             | Yes, this is the exact thing they did to Stacker years ago.
             | License the tech, get the source, create a new product,
             | destroy Stacker, pay out a pittance and then buy the
             | corpse. I was always amazed they couldn't pull that off
             | with Citrix.
        
               | 0xNotMyAccount wrote:
               | Given the sensitivity of data handled over Citrix
               | connections (pretty much all hospitals), I'm fairly sure
               | Microsoft just doesn't want the headaches. My general
               | experience is that service providers would rather be seen
               | handling nuclear weapons data than healthcare data.
        
               | incahoots wrote:
               | Makes sense given their deal with the DoD a year or so
               | ago
               | 
               | https://www.geekwire.com/2022/pentagon-splits-giant-
               | cloud-co...
        
               | drivebyadvice wrote:
               | > Citrix [...] hospitals
               | 
               | My stomach just turned.
        
               | cpeterso wrote:
               | Another example: Microsoft SQL Server is a fork of Sybase
               | SQL Server. Microsoft was helping port Sybase SQL Server
               | to OS/2 and somehow negotiated exclusive rights to all
               | versions of SQL Server written for Microsoft operating
               | systems. Sybase later changed the name of its product to
               | Adaptive Server Enterprise to avoid confusion with
               | "Microsoft's" SQL Server.
               | 
               | https://en.wikipedia.org/wiki/History_of_Microsoft_SQL_Se
               | rve...
        
           | svnt wrote:
            | But it does suggest the appearance of a sudden motive:
           | 
           | Open AI implements and releases ChatGPTs (Poe competitor) but
           | fails to tell D'Angelo ahead of time. Microsoft will have
           | access to code (with restrictions, sure) for essentially a
           | duplicate of D'Angelo's Poe project.
           | 
           | Poe's ability to fundraise craters. D'Angelo works the less
           | seasoned members of the board to try to scuttle OpenAI and
           | Microsoft's efforts, banking that among them all he and Poe
           | are relatively immune with access to Claude, Llama, etc.
        
             | himaraya wrote:
             | I think there's more to the Poe story. Sam forced out Reid
             | Hoffman over Inflection AI, [1] so he clearly gave Adam a
             | pass for whatever reason. Maybe Sam credited Adam for
             | inspiring OpenAI's agents?
             | 
             | [1] https://www.semafor.com/article/11/19/2023/reid-
             | hoffman-was-...
        
               | svnt wrote:
               | I think it's more likely that D'Angelo was there for his
               | link to Meta, while Hoffman was rendered redundant after
               | the big Microsoft deal (which occurred a month or two
               | before he was asked to leave), but that's just a guess.
        
               | himaraya wrote:
               | I assume their personal relationship played more of a
               | role, given Sam led Quora's Series D round.
        
               | antonjs wrote:
               | And potentially, despite Quora's dark-patterned and
               | degenerating platform, some kind of value in the Quora
               | dataset or the experience of building it?
        
               | htrp wrote:
               | It literally is a Q&A platform.
               | 
               | Quora data likely made a huge difference in the quality
               | of those GPT responses.
        
             | Terretta wrote:
             | https://news.ycombinator.com/item?id=38348995
        
           | alasdair_ wrote:
           | They could make ChatGPT++
           | 
           | https://en.wikipedia.org/wiki/Visual_J%2B%2B
        
             | dangrover wrote:
             | ChatGPT#
        
               | hn_throwaway_99 wrote:
               | Hopefully ChatGPT will make it easier to
               | search/differentiate between ChatGPT, ChatGPT++, and
               | ChatGPT# than Google does.
        
               | albert_e wrote:
               | dotGPT
        
               | patapong wrote:
               | ChatGPT Series 4
        
               | eli_gottlieb wrote:
               | Visual ChatGPT#.net
        
               | TeMPOraL wrote:
               | Also Managed ChatGPT, ChatGPT/CLR.
        
               | gfosco wrote:
               | WSG, Windows Subsystem for GPT
        
               | cyanydeez wrote:
               | ClippyAI
        
               | klft wrote:
               | ChatGPT NT
        
               | fluidcruft wrote:
               | ClipGPT
        
               | adrianmonk wrote:
               | Dot Neural Net
        
             | prepend wrote:
             | "Microsoft Chat 365"
             | 
             | Although it would be beautiful if they name it Clippy and
             | finally make Clippy into the all-powerful AGI it was
             | destined to be.
        
               | kylebenzle wrote:
                | At least in this forum, can we please stop calling
                | something that is not even close to AGI "AGI". It's
                | just dumb at this point. We are LIGHT-YEARS away from
                | AGI; even calling an LLM "AI" only makes sense for a
                | lay audience. For developers and anyone in the know,
                | LLMs are called machine learning.
        
               | prepend wrote:
                | I'm talking about the ultimate end product that
                | Microsoft and OpenAI want to create.
               | 
               | So I mean proper AGI.
               | 
                | Naming the product Clippy now is perfectly fine while
                | it's just an LLM, and it will be even more excellent
                | over the years when it eventually achieves AGI-ness.
               | 
               | At least in this forum can we please stop misinterpreting
               | things in a limited way to make pedantic points about how
               | LLMs aren't AGI (which I assume 98% of people here know).
               | So I think it's funny you assume I think chatgpt is an
               | AGI.
        
               | JohnFen wrote:
               | I think that the dispute is about whether or not AGI is
                | possible (at least within the next several decades). One
               | camp seems to be operating with the assumption that not
               | only is it possible, but it's imminent. The other camp is
               | saying that they've seen little reason to think that it
               | is.
               | 
               | (I'm in the latter camp).
        
               | prepend wrote:
               | I certainly think it's possible but have no idea how
               | close. Maybe it's 50 years, maybe it's next year.
               | 
               | Either way, I think GGP's comment was not applicable
               | based on my comment as written and certainly my intent.
        
               | boc wrote:
               | We are incredibly far away from AGI and we're only
               | getting there with wetware.
               | 
               | LLMs and GenAI are clever parlor tricks compared to the
               | necessary science needed for AGI to actually arrive.
        
               | myrmidon wrote:
               | What makes you so confident that your own mind isn't a
               | "clever parlor trick"?
               | 
               | Considering how it required no scientific understanding
               | at all, just random chance, a very simple selection
               | mechanism and enough iterations (I'm talking about
               | evolution)?
        
               | foobarian wrote:
               | My layperson impression is that biological brains do
               | online retraining in real time, which is not done with
               | the current crop of models. Given that even this much
               | required months of GPU time I'm not optimistic we'll
               | match the functionality (let alone the end result)
               | anytime soon.
        
               | boc wrote:
               | Trillions of random chances over the course of billions
               | of years.
        
               | erosenbe0 wrote:
               | Yep, the lay audience conceives of AGI as being a
               | handyman robot with a plumber's crack or maybe an agent
               | that can get your health insurance to stop improperly
                | denying claims. How about an automated snow blower?
                | Perhaps an intelligent wheelchair with robot arms
               | that can help grandma in the shower? A drone army that
               | can reshingle my roof?
               | 
               | Indeed, normal people are quite wise and understand that
               | a chat bot is just an augmentation agent--some sort of
               | primordial cell structure that is but one piece of the
               | puzzle.
        
               | hackinthebochs wrote:
               | And how do you know LLMs are not "close" to AGI (close
               | meaning, say, a decade of development that builds on the
               | success of LLMs)?
        
               | DrSiemer wrote:
               | Because LLMs just mimic human communication based on
               | massive amounts of human generated data and have 0 actual
               | intelligence at all.
               | 
               | It could be a first step, sure, but we need many many
               | more breakthroughs to actually get to AGI.
        
               | tempestn wrote:
               | One might argue that humans do a similar thing. And that
               | the structure that allows the LLM to realistically
               | "mimic" human communication is its intelligence.
        
               | westurner wrote:
               | Q: _Is this a valid argument? "The structure that allows
               | the LLM to realistically 'mimic' human communication is
               | its intelligence._ https://g.co/bard/share/a8c674cfa5f4 :
               | 
               | > [...]
               | 
               | > _Premise 1: LLMs can realistically "mimic" human
               | communication._
               | 
               | > _Premise 2: LLMs are trained on massive amounts of text
               | data._
               | 
               | > _Conclusion: The structure that allows LLMs to
               | realistically "mimic" human communication is its
               | intelligence._
               | 
               | "If P then Q" is the _Material conditional_ :
               | https://en.wikipedia.org/wiki/Material_conditional
               | 
               | Does it do logical reasoning or inference before
               | presenting text to the user?
               | 
               | That's a lot of waste heat.
               | 
                | (Edit) Or is next-word prediction just what it is?
               | 
               | "LLMs cannot find reasoning errors, but can correct them"
               | https://news.ycombinator.com/item?id=38353285
               | 
               | "Misalignment and Deception by an autonomous stock
               | trading LLM agent"
               | https://news.ycombinator.com/item?id=38353880#38354486
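The quoted argument has the shape "P and Q, therefore R", where the conclusion introduces a term ("intelligence") appearing in neither premise. A brute-force truth-table check (an illustrative sketch, not from the thread; the propositional encoding is my own) confirms the premises do not entail the conclusion:

```python
from itertools import product

# Illustrative propositional encoding of the quoted argument:
#   P = "LLMs realistically mimic human communication"
#   Q = "LLMs are trained on massive text data"
#   R = "that structure is intelligence"
# The argument is valid only if every truth assignment making all
# premises true also makes the conclusion true.
def entails(premises, conclusion):
    for p, q, r in product([True, False], repeat=3):
        env = {"P": p, "Q": q, "R": r}
        if all(f(env) for f in premises) and not conclusion(env):
            return False  # counterexample: premises true, conclusion false
    return True

premises = [lambda e: e["P"], lambda e: e["Q"]]
conclusion = lambda e: e["R"]

print(entails(premises, conclusion))  # False: a non sequitur
```

The assignment P=True, Q=True, R=False is the counterexample, which is the formal version of the objection: mimicry plus training data does not by itself establish intelligence.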
        
               | hackinthebochs wrote:
               | Mimicking human communication may or may not be relevant
                | to AGI, depending on how it's cashed out. Why think LLMs
               | haven't captured a significant portion of how humans
               | think and speak, i.e. the computational structure of
               | thought, thus represent a significant step towards AGI?
        
               | Kevin09210 wrote:
               | Or maybe the intelligence is in language and cannot be
               | dissociated from it.
        
               | ncjcuccy6 wrote:
               | Gatekeeping science. You must feel very smart.
        
               | acje wrote:
               | I'm pretty sure Clippy is AGI. Always has been.
        
               | shon wrote:
               | http://clippy.pro
        
               | htrp wrote:
               | > Although it would be beautiful if they name it Clippy
               | and finally make Clippy into the all-powerful AGI it was
               | destined to be.
               | 
               | Finally the paperclip maximizer
        
               | bee_rider wrote:
               | It is too bad MS doesn't have the rights to any beloved
               | AI characters.
        
               | jowea wrote:
               | Google really should have thought of the potential uses
               | of a media empire years ago.
        
               | bee_rider wrote:
               | I guess they have YouTube, but it doesn't really generate
               | characters that are tied to their brand.
               | 
               | Maybe they can come up with a personification for the
               | YouTube algorithm. Except he seems like a bit of a bad
               | influence.
        
               | barkingcat wrote:
               | Clippy is the ultimate brand name of an AI assistant
        
             | trhway wrote:
             | >They could make ChatGPT++
             | 
              | Yes, though the end result would probably be more like IE:
             | barely good enough, forcefully pushed into everything and
             | everywhere and squashing better competitors like IE
             | squashed Netscape.
             | 
              | When OpenAI went in with MSFT, it was as if they had
              | ignored 40 years of history of what MSFT does to smaller
              | technology partners. What happened to OpenAI pretty much
              | fits the pattern of a smaller company that developed
              | great tech and was raided by MSFT for it (the specific
              | actions of specific persons aren't really important; the
              | main factor is MSFT's black-hole gravitational pull, and
              | it was just a matter of time before its destructive power
              | manifested itself, as in this case, where the tidal
              | forces simply tore OpenAI apart).
        
           | blazespin wrote:
           | I think without looking at the contracts, we don't really
           | know. Given this is all based on transformers from Google
           | though, I am pretty sure MSFT with the right team could build
           | a better LLM.
           | 
           | The key ingredient appears to be mass GPU and infra, tbh,
           | with a collection of engineers who know how to work at scale.
        
             | bugglebeetle wrote:
             | > I am pretty sure MSFT with the right team could build a
             | better LLM.
             | 
             | I wouldn't count on that if Microsoft's legal team does a
             | review of the training data.
        
               | blazespin wrote:
               | Yeah, that's an interesting point. But I think with
               | appropriate RAG techniques and proper citations, a future
               | LLM can get around the copyright issues.
               | 
               | The problem right now with GPT4 is that it's not citing
               | its sources (for non search based stuff), which is
               | immoral and maybe even a valid reason to sue over.
        
               | johannes1234321 wrote:
                | Like the review that allowed them to ignore licenses
                | while ingesting all public repos on GitHub? And yes,
                | true, the T&C allow them to ignore the license, while
                | it is questionable whether everyone who uploaded stuff
                | to GitHub had the rights granted by the T&C (uploading
                | some older project with many contributors to GitHub,
                | etc.)
        
               | bugglebeetle wrote:
               | Different threat profile. They don't have the TOS
               | protection for training data and Microsoft is a juicy
               | target for a huge copyright infringement lawsuit.
        
             | VirusNewbie wrote:
             | but why didn't they? Google and Meta both had competing
             | language models spun up right away. Why was microsoft so
             | far behind? Something cultural most likely.
        
             | trhway wrote:
             | >MSFT with the right team could build a better LLM
             | 
             | somehow everybody seems to assume that the disgruntled
             | OpenAI people will rush to MSFT. Between MSFT and the
             | shaken OpenAI, I suspect Google Brain and the likes would
             | be much more preferable. I'd be surprised if Google isn't
             | rolling out eye-popping offers to the OpenAI folks right
             | now.
        
           | btown wrote:
           | Archive of the WSJ article above: https://archive.is/OONbb
        
         | LonelyWolfe wrote:
         | Just a thought.... Wouldn't one of the board members be like
         | "If you screw with us any further we're releasing gpt to the
         | public"
         | 
         | I'm wondering why that option hasn't been used yet.
        
           | jacquesm wrote:
           | Which of the remaining board members could credibly make that
           | threat?
        
           | supriyo-biswas wrote:
           | Probably a violation of agreements with OpenAI and it would
           | harm their own moat as well, while achieving very little in
           | return.
        
             | lrvick wrote:
             | There is no moat
             | 
             | https://www.semianalysis.com/p/google-we-have-no-moat-and-
             | ne...
        
           | vikramkr wrote:
            | Theoretically their concern is around AI safety. Whatever
            | it is in practice, doing something like that would
            | instantly signal to everyone that they are the bad guys
            | and confirm everyone's belief that this was just a power
            | grab.
            | 
            | Edit: since it's being brought up in the thread: they
            | claimed they closed-sourced it because of safety. It was a
            | big, controversial thing and they stood by it, so it's not
            | exactly easy to backtrack.
        
             | whatwhaaaaat wrote:
             | A power grab by open sourcing something that fits their
             | initial mission? Interesting analysis
        
               | nvm0n2 wrote:
               | No, that's backwards. Remember that these guys are all
               | convinced that AI is too dangerous to be made public at
               | all. The whole beef that led to them blowing up the
               | company was feeling like OpenAI was productizing and
               | making it available _too fast_. If that 's your concern
               | then you neither open source your work nor make it
               | available via an API, you just sit on it and release
               | papers.
               | 
               | Not coincidentally, exactly what Google Brain, DeepMind,
               | FAIR etc were doing up until OpenAI decided to ignore
               | that trust-like agreement and let people use it.
        
               | vikramkr wrote:
                | They claimed they closed-sourced it because of safety.
                | If they go back on that, they'd have to explain why the
                | board went along with a lie of that scale, and they'd
                | have to justify why all the concerns they claimed about
                | the tech falling into the wrong hands were actually
                | fake, and why it was OK that the board signed off on
                | that for so long.
        
             | mcv wrote:
             | Not sure how that would make them the bad guys. Doesn't
             | their original mission say it's meant to benefit everybody?
             | Open sourcing it fits that a lot better than handing it all
             | to Microsoft.
        
               | arrowleaf wrote:
               | All of their messaging, Ilya's especially, has always
               | been that the forefront of AI development needs to be
               | done by a company in order to benefit humanity. He's been
               | very vocal about how important the gap between open
               | source and OpenAI's abilities is, so that OpenAI can
               | continue to align the AI with 'love for humanity'.
        
               | mcv wrote:
               | I can read the words, but I have no idea what you mean by
               | them. Do you mean that he says that in order to benefit
                | humanity, AI research needs to be done by a private (and
                | therefore monopolising) company? That seems like a really
               | weird thing to say. Except maybe for people who believe
               | all private profit-driven capitalism is inherently good
               | for everybody (which is probably a common view in SV).
        
               | octacat wrote:
               | Private, monopolising. But not paying taxes, because
               | "benefits for humanity".
               | 
               | Ah, OpenAI is closed source stuff. Non-profit, but "we
               | will sell the company" later. Just let us collect data,
               | analyse it first, build a product.
               | 
               | War is peace, freedom is slavery.
        
               | colinsane wrote:
               | the view -- as presented to me by friends in the space
               | but not at OpenAI itself -- is something like "AGI is
               | dangerous, but inevitable. we, the passionate idealists,
               | can organize to make sure it develops with minimal risk."
               | 
               | at first that meant the opposite of monopolization: flood
               | the world with limited AIs (GPT 1/2) so that society has
               | time to adapt (and so that no one entity develops
               | asymmetric capabilities they can wield against other
               | humans). with GPT-3 the implementation of that mission
               | began shifting toward worry about AI itself, or about how
               | unrestricted access to it would allow smaller bad actors
               | (terrorists, or even just some teenager going through a
               | depressive episode) to be an existential threat to
               | humanity. if that's your view, then open _models_ are
               | incompatible.
               | 
               | whether you buy that view or not, it kinda seems like the
                | people in that camp just got outmaneuvered. as a
               | passionate idealist in _other_ areas of tech, the way
               | this is happening is not good. OpenAI had a mission
                | statement. M$ maneuvered to co-opt that mission, the CEO
               | may or may not have understood as much while steering the
               | company, and now a mass of employees is wanting to leave
               | when the board steps in to re-align the company with its
               | stated mission. whether or not you agree with the
               | mission: how can i ever join an organization with a for-
               | the-public-good type of mission i _do_ agree with,
               | without worrying that it will be co-opted by the familiar
               | power structures?
               | 
               | the closest (still distant) parallel i can find:
               | Raspberry Pi Foundation took funding from ARM: is the
               | clock ticking to when RPi loses its mission in a similar
               | manner? or does something else prevent that (maybe it's
               | possible to have a mission-driven tech organization so
               | long as the space is uncompetitive?)
        
               | octacat wrote:
                | It benefits humanity. Where "humanity" is a very
                | select part of OpenAI's investors. But yeah,
                | declaring yourself a non-profit and then
                | closed-sourcing everything for "safety" reasons is
                | smart. Wondering how it can even be legal. Ah,
                | these "non-profits".
        
           | sroussey wrote:
           | Which they take and sell.
        
           | justapassenger wrote:
           | What would that give them? GPT is their only real asset, and
           | companies like Meta try to commoditize that asset.
           | 
           | GPT is cool and whatnot, but for a big tech company it's just
           | a matter of dollars and some time to replicate it. Real value
            | is in pushing things forward towards what comes next after
            | GPT.
           | GPT3/4 itself is not a multibillion dollar business.
        
         | Simon_ORourke wrote:
         | > Microsoft has full rights [1] to ChatGPT IP. They can just
         | fork ChatGPT.
         | 
          | What? That's even better played by Microsoft than I'd
          | originally anticipated. Take the IP, starve the current
          | incarnation of OpenAI of compute credits, and roll out
          | their own thing.
        
         | davedx wrote:
         | "just" is doing a hell of a lot of work there.
        
         | dheera wrote:
         | It's about time for ChatGPT to be the next CEO of OpenAI.
         | Humans are too stupid to oversee the company.
        
         | _the_inflator wrote:
          | Exactly. This is what business is about in the ranks of
          | heavyweights like Satya. On the other hand, prevent others
          | from taking advantage of OpenAI.
          | 
          | MS can only win because there are only two viable options:
          | OpenAI survives under MS's control, or OpenAI implodes and
          | MS gets the assets relatively cheaply.
          | 
          | Either way, nothing here benefits competitors.
        
         | fuddle wrote:
         | Oh man, I'm not looking forward to Microsoft AGI.
        
           | kreeben wrote:
           | "You need to reboot your Microsoft AGI. Do you want to do it
           | now or now?"
        
             | berniedurfee wrote:
             | Give BSOD new meaning.
        
         | echelon wrote:
         | > Microsoft has full rights [1] to ChatGPT IP. They can just
         | fork ChatGPT.
         | 
         | If Microsoft does this, the non-profit OpenAI may find the
         | action closest to their original charter ("safe AGI") is a full
         | release of all weights, research, and training data.
        
         | caycep wrote:
          | I also wonder how much is research staff vs. ops
          | personnel. For AI research, I can't imagine they would
          | need more than 20, maybe 40 people. For ops to keep
          | ChatGPT up as a service, that would be 700.
         | 
          | If they want to go full Bell Labs/DeepMind style, they might
         | not need the majority of those 700.
        
       | MR4D wrote:
       | So Ilya has a job offer from Microsoft?
       | 
       | Wow, this is a soap opera worthy of an Emmy.
        
         | bertil wrote:
         | Ilya probably has an open-ended standing offer from every big
         | tech company.
        
       | hackerfactor1 wrote:
       | Me: "ChatGPT write me an ultimatum letter forcing the board to
       | resign and reinstate the CEO, and have it signed by 500 of the
       | employees."
       | 
       | ChatGPT: Done!
        
         | Finnucane wrote:
         | Clearly this started with the board asking ChatGPT what to do
         | about Sam Altman.
        
       | dschuetz wrote:
        | Who needs to buy out an $80 billion AI startup when the
        | talent is already jumping ship in your direction? OpenAI is
        | dead.
        
       | alex_suzuki wrote:
       | rats, sinking ship, ...
        
       | andrewfromx wrote:
       | so what happens if @eshear calls this probably-not-a-bluff, but
       | lets everyone walk? The people that remain get new options and
       | 500 other people still definitely want to work at OAI?
        
         | ignoramous wrote:
         | If it comes to that, I reckon Emmett will have his former boss
         | Andy Jassy merge whatever's left of OpenAI into AWS. Unlikely
         | though, as reconciliation seems very much a possibility.
        
       | endisneigh wrote:
        | The pace at which OpenAI is speedrunning their demise is
       | remarkable.
       | 
       | Literally just last week there were articles about OpenAI paying
       | "10 million" dollar salaries to poach top talent.
       | 
       | Oops.
        
       | baradhiren07 wrote:
       | The great Closing of "Open"AI.
        
       | abkolan wrote:
        | HN desperately needs a mega thread; it's only early Monday,
        | and there is so much drama to come out of this.
        
         | ecshafer wrote:
          | It's early West Coast time, dang has to wake up first.
        
           | boringg wrote:
           | I bet he's up making sure the servers aren't crashing! Thanks
           | dang! As the west coast wakes up .. HN is going to be busy...
        
             | imiric wrote:
             | It's _a_ server, a single-core one at that.
             | 
             | I get that HN takes pride in the amount of traffic that
             | poor server can handle, but scaling out is long overdue.
             | Every time there's a small surge of traffic like today, the
             | site becomes unusable.
        
         | calf wrote:
          | Tangentially, I noticed that Reddit's front page has been
          | conspicuously absent of coverage of this; I feel a twinge
          | of pity. Maybe it's on some subreddits, but I haven't
          | bothered to look.
        
           | slfnflctd wrote:
            | Their front page has been increasingly abysmal for a
            | while.
           | 
           | The technology sub (not that there's anything special about
           | it other than being big) has had a post up since very early
           | this morning, so there are likely others as well.
        
           | accrual wrote:
           | /r/singularity has been having a field day with this.
           | 
           | https://old.reddit.com/r/singularity/
        
         | PurpleRamen wrote:
         | Or a new category, like "Ask HN" and "Show HN". Maybe call it
         | "Hot HN" or "Hot <topic>" or something like that. Could be used
         | for future hot topics too. If you change the link bold every
         | time a hot topic is trending, it could be even used to show
         | important stuff.
        
           | qiine wrote:
           | "Hot HN" could be nice it would help avoiding multiple too
           | similar threads
        
       | sesutton wrote:
       | Ilya posted this on Twitter:
       | 
       | "I deeply regret my participation in the board's actions. I never
       | intended to harm OpenAI. I love everything we've built together
       | and I will do everything I can to reunite the company."
       | 
       | https://twitter.com/ilyasut/status/1726590052392956028
        
         | abraxas wrote:
         | Trying to put the toothpaste back in the tube. I seriously
         | doubt this will work out for him. He has to be the smartest
         | stupid person that the world has seen.
        
           | dhruvdh wrote:
           | At least he consistently works towards whatever he currently
           | believes in. Though he could work on consistency in beliefs.
        
           | bertil wrote:
           | Ilya is hard to replace, and no one thinks of him as a
           | political animal. He's a researcher first and foremost. I
           | don't think he needs anything more than being contrite for a
           | single decision made during a heated meeting. Sam Altman and
           | the rest of the leadership team haven't got where they are by
           | holding petty grudges.
           | 
           | He doesn't owe us, the public, anything, but I would love to
           | understand his point of view during the whole thing. I really
           | appreciate how he is careful with words and thorough when
           | exposing his reasoning.
        
             | boringg wrote:
              | Just because he's not a political animal doesn't mean
              | he's immune from politics. I've seen "irreplaceable"
              | apolitical technical leaders cause schisms in
              | organizations, thinking they could leverage their
              | technical knowledge over the rest of the company, only
              | to get pushed aside and out.
        
               | bertil wrote:
               | Oh that's definitely common. I've seen it many times and
               | it's ugly.
               | 
               | I don't think this is what Ilya is trying to do. His
               | tweet is clearly about preserving the organization
               | because he sees the structure itself as helpful, beyond
               | his role in it.
        
             | jacquesm wrote:
             | For someone who isn't a political animal he made some
             | pretty powerful political moves.
        
             | gryn wrote:
              | researchers and academics are political within their
             | organization regardless of whether or not they claim to be
             | or are aware of it.
             | 
             | ignorance of the political impact/influence is not a
             | strength but a weakness, just like a baby holding a
             | laser/gun.
        
           | strikelaserclaw wrote:
            | He seriously underestimated how much rank-and-file
            | employees want $$$ over an idealistic vision (and Sam
            | Altman is $$$), but if he backs down now, he will pretty
            | much lose all credibility as a decision maker for the
            | company.
        
             | ergocoder wrote:
             | If your compensation goes from 600k to 200k, you would care
             | as well.
             | 
             | No idealistic vision can compensate for that.
        
               | strikelaserclaw wrote:
                | Hey, I would also be mad if I were in the
                | rank-and-file employee position. Perhaps the
                | non-profit thing needs to be thought out a bit more.
        
           | derwiki wrote:
           | Does that include the person who stole self-driving IP from
           | Waymo, set up a company with stolen IP, and tried to sell the
           | company to Uber?
        
           | dylan604 wrote:
            | That seems rather harsh. We know he's not stupid, and
            | you're clearly being emotional. I'd venture he probably
            | made the dumbest possible move a smart person could make
            | while also in a very emotional state. The lesson for all
            | here is that making big decisions while in an emotional
            | state does not often work out well.
        
           | guhcampos wrote:
           | I've worked with this type multiple times. Mathematical
           | geniuses with very little grasp of reality, easily
            | manipulated into making all sorts of dumb mistakes. I don't
           | know if that's the case, but it certainly smells like it.
        
             | strunz wrote:
             | His post previous to that seems pretty ironic in that light
             | - https://twitter.com/ilyasut/status/1710462485411561808
        
         | z7 wrote:
         | >"I deeply regret my participation in the board's actions."
         | 
         | Wasn't he supposed to be the instigator? That makes it sound
         | like he was playing a less active role than claimed.
        
         | nabla9 wrote:
          | So this was a completely unnecessary cock-up -- still
          | ongoing. Without Ilya's vote this would not even be a
          | thing. This is really comical, a Naked Gun-type mess.
          | 
          | Ilya Sutskever is one of the best in AI research, but
          | everything he and others do related to AI alignment turns
          | into shit without substance.
         | 
         | It makes me wonder if AI alignment is possible even in theory,
         | and if it is, maybe it's a bad idea.
        
           | coffeebeqn wrote:
           | We can't even get people aligned. Thinking we can control a
           | super intelligence seems kind of silly.
        
         | siva7 wrote:
         | It takes a lot of courage to do so after all this.
        
           | ShamelessC wrote:
           | I think the word you're looking for is "fear".
        
           | Xenoamorphous wrote:
           | Or a couple of drinks.
        
           | averageRoyalty wrote:
           | Maybe he'll head to Apple.
        
         | tucnak wrote:
          | To be fair, lots of people called this pretty early on;
          | it's just that very few people were paying attention, and
          | instead chose to accommodate the spin and immediately went
          | into "following the money", a.k.a. blaming Microsoft, et
          | al. The most surprising aspect of it all is the complete
          | lack of criticism towards US authorities! We were shown
          | this exciting play as old as the world -- a genius
          | scientist being exploited politically by means of pride
          | and envy.
         | 
          | The brave board of "totally independent" NGO patriots (one
          | of whom is referred to, by insiders, as wielding influence
          | comparable to a USAF colonel's [1]) brand themselves as
          | this new regime that will return OpenAI to its former
          | moral and ethical glory, so the first thing they were
          | forced to do was get rid of the main greedy capitalist
          | Altman; he's obviously the great seducer who brought their
          | blameless organisation down by turning it into this
          | horrible money-making machine. So they were going to put
          | in his place their nominal ideological leader Sutskever,
          | commonly referred to in various public communications as a
          | "true believer". What does he believe in? In the coming of
          | a literal superpower, and quite a particular one at that;
          | in this case we are talking about AGI. The belief
          | structure here is remarkably interlinked, as can be seen
          | by evaluating side-channel discourse from adjacent
          | "believers", see [2].
         | 
         | Roughly speaking, and based from my experience in this kind of
         | analysis, and please give me some leeway as English is not my
         | native language, what I see is all the infallible markers of
         | operative work; we see security officers, we see their methods
         | of work. If you are a hammer, everything around you looks like
         | a nail. If you are an officer in the Clandestine Service or any
         | of the dozens of sections across counterintelligence function
         | overseeing the IT sector, then you clearly understand that all
         | these AI startups are, in fact, developing weapons & pose a
         | direct threat to the strategic interests slash national
         | security of the United States. The American security apparatus
         | has a word they use to describe such elements: "terrorist." I
          | was taught to look up when assessing actions of the
          | Americans, i.e. more often than not we're expecting
          | nothing but the highest level of professionalism,
          | leadership, analytical prowess. I personally struggle to
          | see how running parasitic virtual organisations in the
          | middle of downtown San Francisco and re-shuffling
         | agent networks in key AI enterprises as blatantly as we had
         | seen over the weekend-- is supposed to inspire confidence.
         | Thus, in a tech startup in the middle of San Francisco, where
         | it would seem there shouldn't be any terrorists, or otherwise
         | ideologues in orange rags, they sit on boards and stage palace
         | coups. Horrible!
         | 
         | I believe that US state-side counterintelligence shouldn't
         | meddle in natural business processes in the US, and instead
         | make their policy on this stuff crystal clear using normal,
         | legal means. Let's put a stop to this soldier mindset where you
         | fear any thing that you can't understand. AI is not a weapon,
         | and AI startups are not some terrorist cells for them to run.
         | 
         | [1]: https://news.ycombinator.com/item?id=38330819
         | 
         | [2]:
         | https://nitter.net/jeremyphoward/status/1725712220955586899
        
       | h1fra wrote:
        | It would be crazy to see the fall of the most hyped company
        | of the last 10 years.
        | 
        | If all those employees leave and Microsoft reduces their
        | credits, it's game over.
        
       | jmyeet wrote:
       | I said this on Friday: the board should be fired in its entirety.
        | Not because the firing was unjustified--we have no real
       | knowledge of that--but because of how it was handled.
       | 
       | If you fire your founder CEO you need to be on top of messaging.
       | Your major customers can't be surprised. There should've been an
       | immediate all hands at the company. The interim or new CEO should
       | be prepared. The company's communications team should put out
       | statements that make it clear why this was happening.
       | 
       | Obviously they can be limited in what they can publicly say
       | depending on the cause but you need a good narrative regardless.
       | Even something like "The board and Sam had fundamental
       | disagreement on the future direction of the company." followed by
       | what the new strategy is, probably from the new CEO.
       | 
       | The interim CEO was the CEO and is going back to that role.
       | There's a third (interim) CEO in 3 days. There were rumors the
       | board was in talks to re-hire Sam, which is disastrous PR because
       | it makes them look absolutely incompetent, true or not.
       | 
        | This is just such a massive communications and execution
       | failure. That's why they should be fired.
        
         | empath-nirvana wrote:
         | There's no one to fire the board. They're not accountable to
         | anyone but themselves. They can burn down the whole company if
         | they like.
        
           | jacquesm wrote:
           | > They can burn down the whole company if they like.
           | 
           | That's well under way I would say.
        
       | projectileboy wrote:
       | Well, great to see that the potentially dangerous future of AGI
       | is in good hands.
        
         | cactusplant7374 wrote:
         | They will never discover AGI with this approach because 1) they
         | are brute forcing the results and 2) none of this is actually
         | science.
        
           | gardenhedge wrote:
           | Can you explain for us not up to date with AI developments?
        
             | cactusplant7374 wrote:
             | Search YouTube for videos where Chomsky talks about AI.
             | Current approaches to AI do not even attempt to understand
             | cognition.
        
               | projectileboy wrote:
               | Chomsky takes as axiomatic that there is some magical
               | element of human cognition beyond simply stringing words
                | together. We may not be as special as we like to
                | believe.
        
             | visarga wrote:
             | Imagine you are participating in car racing, and your car
             | has a few tweak knobs. But you don't know what is what and
             | can only make random perturbations and see what happens.
             | Slowly you work out what is what, but you might still not
             | be 100% sure.
             | 
              | That's how AI research and development works -- I
              | know, it's pretty weird. We don't really understand;
              | we know some basic stuff about how neurons and
              | gradients work, and then we hand-wave to "language
              | model", "vision model", etc. It's all a black box,
              | magic.
              | 
              | How do we make progress if we don't understand this
              | beast? We prod and poke, and make little theories, and
              | then test them on a few datasets. It's basically blind
              | search.
             | 
             | Whenever someone finds anything useful, everyone copies it
             | in like 2 weeks. So ML research is like a community thing,
             | the main research happens in the community, not inside
             | anyone's head. We stumble onto models like GPT4 then it
             | takes us months to even have a vague understanding of what
             | it is capable of.
             | 
             | Besides that there are issues with academic publishing, the
             | volume, the quality, peer review, attribution,
             | replicability... they all got out of hand. And we have
             | another set of issues with benchmarks - what they mean, how
             | much can we trust them, what metrics to use.
             | 
             | And yet somehow here we are with GPT-4V and others.
        
           | captainclam wrote:
            | 1) It may be possible to brute-force a model into
            | something that sufficiently resembles AGI for most use-
            | cases (at least well enough to merit concern about who
            | controls it).
            | 
            | 2) Deep learning has never been terribly scientific, but
            | here we are.
        
             | cactusplant7374 wrote:
             | If it can't digest a math textbook and do equations, how
             | would AGI be accomplished? So many problems are advanced
             | mathematics.
        
               | captainclam wrote:
               | Right, I do agree that the current LLM paradigm probably
               | won't achieve true AGI; but I think that the current
               | trajectory could produce a powerful enough generalist
               | agent model to seriously put AI ethics to task at pretty
               | much every angle.
        
         | solardev wrote:
         | Poor little geepeet is witnessing their first custody battle :(
         | 
         | Daddies, mommy, don't you love me? Don't you love each other?
         | Why are you all leaving?
        
       | jeffrallen wrote:
       | Ok, time to create an OpenAI drinking game. I'll start:
       | 
       | Every time a CEO is replaced, drink.
       | 
       | Every time an open letter is released, drink.
       | 
       | Every time OpenAI is on top of HN, drink.
       | 
       | Every time dang shows up and begs us to log out, drink.
        
         | jacquesm wrote:
         | There will be a lot of alcohol poisoning cases based on those
         | four alone.
        
       | moron4hire wrote:
       | This is starting to look like an elaborate, premeditated ruse to
       | kill any vestige of the non-profit face of OpenAI once and for
       | all.
        
       | softwaredoug wrote:
       | I wonder what's up with the other 150 and what they must be
        | thinking. Maybe they were literally just hired :)
        
         | bertil wrote:
         | Some idealists, a few new people, some people on holiday or who
         | don't check their email regularly.
        
         | sithlord wrote:
          | Didn't you see the email that was posted over the weekend?
        
       | k2xl wrote:
       | Chaos is a ladder
        
       | _vere wrote:
       | I will never not be mad at the fact that they built a developer
       | base by making all their tech open source, only to take it all
       | away once it became remotely financially viable to do so. With
       | how close "Open"AI is with Microsoft, it really does not seem
       | like there is a functional difference in how they ethically
       | approach AI at all.
        
       | ekojs wrote:
       | From The Verge [1]:
       | 
        | > Swisher reports that there are currently 700 employees at
       | OpenAI and that more signatures are still being added to the
       | letter. The letter appears to have been written before the events
       | of last night, suggesting it has been circulating since closer to
       | Altman's firing. It also means that it may be too late for
       | OpenAI's board to act on the memo's demands, if they even wished
       | to do so.
       | 
       | So, 3/4 of the current board (excluding Ilya) held on despite
       | this letter?
       | 
       | [1]: https://www.theverge.com/2023/11/20/23968988/openai-
       | employee...
        
         | jacquesm wrote:
          | If so, they're delusional. Every hour they cling to their
          | seats will make things worse for them.
        
         | gigglesupstairs wrote:
          | She's also reporting that the newly anointed interim CEO
          | already wants to investigate the board fuck-up that put him
          | there.
         | 
         | https://x.com/karaswisher/status/1726626239644078365?s=20
        
       | endisneigh wrote:
       | Even if the board resigns the damage has been done. They should
       | try to secure good offers at Microsoft.
       | 
        | The stakes being heightened only decreases the likelihood
        | that the OpenAI profit sharing will be worth anything, which
        | in turn raises the stakes further...
        
       | yalogin wrote:
        | What a shitshow! What is going on in this company? I am
        | sure Sam did something wrong, but did the board take
        | advantage of it and go overboard? We don't know anything
        | about what happened, and yet we are all somehow
        | participating in this drama. At this point, why don't they
        | all come out and tweet their versions of it?
        
       | antiviral wrote:
       | Can anyone explain this?
       | 
       | "Remarkably, the letter's signees include Ilya Sutskever, the
       | company's chief scientist and a member of its board, who has been
       | blamed for coordinating the boardroom coup against Altman in the
       | first place."
        
         | jacquesm wrote:
         | It's the well known 'let me call for my own resignation'
         | strategy.
        
         | SiempreViernes wrote:
         | Maybe he did because he regrets it, maybe the open letter is a
         | google doc someone typed names into.
        
           | rvba wrote:
            | Now the 3 board members can kick out Ilya too. He must
            | be so sorry.
            | 
            | Fill the rest of the board with spouses and grandparents
            | and they are set for life?
        
       | Simon321 wrote:
       | > You also informed the leadership team that allowing the company
       | to be destroyed "would be consistent with the mission."
       | 
       | First class board they have.
        
       | m_ke wrote:
       | I wonder how the FTC and Lina Khan will view all of this if most
       | of the team moves over to Microsoft
        
         | smegger001 wrote:
         | It would be hard for the FTC to do anything about it as there
         | is no acquisition of companies or IP going on. All Microsoft is
         | doing is making job offers to recently unemployed experts in
         | their field after their business partner set themselves on fire
         | starting at the executive/board level.
        
       | gadders wrote:
       | I think it was Mark Zuckerberg that described (pre-Elon) Twitter
       | as a clown car that fell into a gold mine.
       | 
       | Reminds me a bit of the Open AI board. Most of them I'd never
       | heard of either.
        
         | cmrdporcupine wrote:
         | You know, this makes early Google's moves around its IPO look
         | like genius in retrospect. In that case, brilliant but
          | inexperienced founders majorly lucked out with the thing
          | they created... but were also _smart enough_ to bring in
          | Eric
         | Schmidt and others with deeper tech industry _business
         | experience_ for  "adult supervision" exactly in order to deal
         | with this kind of thing. And they gave tutelage to L&S to help
         | them establish sane corporate practices while still sticking to
         | the original (at the time unorthodox) values that L&S had in
         | mind.
         | 
         | For OpenAI... Altman (and formerly Musk) were not that adult
         | supervision. Nor is the board they ended up with. They needed
         | some people on that board and in the company to keep things
         | sane while cherishing the (supposed) original vision.
         | 
         | (Now, of course that original Google vision is just laughable
         | as Sundar and Ruth have completely eviscerated what was left of
         | it, but whatever)
        
           | taylorius wrote:
            | > but were also smart enough to bring in Eric Schmidt
            | > and others with deeper tech industry business
            | > experience for "adult supervision"
            | 
            | > (Now, of course that original Google vision is just
            | > laughable as Sundar and Ruth have completely
            | > eviscerated what was left of it, but whatever)
            | 
            | Those two things happening one after another is not a
            | coincidence.
        
             | cmrdporcupine wrote:
             | I'm not sure I agree. Having worked there through this
             | transition I'd say this: L&S just seem to have lost
             | interest in running a mature company, so their "vision"
             | just meant nothing, Eric Schmidt basically moved on, and
             | then after flailing about for a bit (the G+ stuff being the
              | worst of it) they just handed the reins to Ruth&Sundar to
             | basically turn into a giant stock price pumping machine.
        
               | voiceblue wrote:
               | G+ was handled so poorly, and the worst of it was that
               | they already had both Google Wave (in the US) and Orkut
               | (mostly outside US) which both had significant traction
               | and could've easily been massaged into something to rival
               | Facebook.
               | 
               | Easily...anywhere except at a megacorp where a privacy
               | review takes months and you can expect to make about a
                | quarter's worth of progress a year.
        
         | anonylizard wrote:
         | This makes the old twitter look like the Wehrmacht in
         | comparison.
         | 
         | The old twitter did not decide to randomly detonate themselves
         | when they were worth $80 billion. In fact they found a sucker
         | to sell to, right before the market crashed on perpetually
         | loss-making companies like twitter.
        
           | ergocoder wrote:
            | The benefit of having an incentive-aligned board,
            | founders, and execs.
           | 
           | Even the clown car isn't this bad.
        
         | Kye wrote:
         | That's a confused heuristic. It could just as easily mean they
         | keep their heads down and do good work for the kind of people
         | whose attention actually matters for their future employment
         | prospects.
        
         | theGnuMe wrote:
         | All successful companies succeed despite themselves.
        
           | garciasn wrote:
           | Working in consultancies/agencies for the last 15 years, I
           | see this time and time again. Fucking dart-throwing monkeys
           | making money hand over fist despite their best intentions to
           | lose it all.
        
         | hawski wrote:
         | I often hear that about the OpenAI board, but in general, do
         | people here know most board members of big/darling tech
         | companies? Outside of some of the co-founders I don't know
         | anyone.
        
           | gadders wrote:
           | I don't mean I know them personally, but they don't seem to
           | be major names in the manner of (as you see down thread) the
           | Google Founders bringing in Eric Schmidt.
           | 
           | They seem more like the sort of people you'd see running
           | Wikimedia.
        
             | hawski wrote:
             | I meant "know" in the sense you used "heard".
        
         | renegade-otter wrote:
         | Perhaps we can stop pretending that some of these people who
         | are top-level managers or who sit on boards are prodigies. Dig
         | deeper and there is very little there - just someone who can
         | afford to fail until they drive the clown car into that gold
         | mine. Most of us who have to put food on the table and pay rent
         | have much less room for error.
        
       | Uptrenda wrote:
       | Wow, this new season has even more drama than the one about
       | blockchain tech! Just when you think the writers were running out
       | of ideas they blow you away with more twists. I will be renewing
       | my Netflix subscription, that's for sure! I can't wait to see
       | what this Sam character does next. Perhaps it will involve
       | robots or something? The sky's the limit at this point.
        
       | throwaway4good wrote:
       | Microsoft is nothing without its people?
        
         | throwaway4good wrote:
         | Maybe the employees of OpenAI should stop a second and think
         | about their privileges as rock stars in a super hyped startup
         | before they bail for a job in a corporation where everything
          | and everyone is set up to be replaceable.
        
           | morph123 wrote:
           | These boys will not be your rank and file employees. They
           | will operate exactly as they have done in OpenAI. Only
           | difference will be that they no longer have this weird "non-
           | profit, but actually some profit" thing going on.
        
       | nkcmr wrote:
       | A lot of people here seem to be forgetting [Hanlon's
       | Razor](https://en.wikipedia.org/wiki/Hanlon%27s_razor)
       | 
       | > Never attribute to malice that which is adequately explained by
       | stupidity.
        
         | j_crick wrote:
         | Except for when it's actual malice vOv
        
           | stylepoints wrote:
           | It could be both. And in many situations malice and stupidity
           | are the same thing.
        
             | j_crick wrote:
             | How can {deliberately doing harmful things for a desired
             | harmful outcome} and {doing whatever things with lack of
             | judgment and disregard to consequences at all} be the
             | _same_ thing? In what situations?
        
         | NanoYohaneTSU wrote:
          | You seem to forget that Hanlon's Razor isn't a proven
          | concept; in fact, the opposite is more likely to be true,
          | given that pesky thing called recorded history.
        
           | golergka wrote:
           | Hanlons razor is true because it's more entertaining, and our
           | simulation runs on stories as they're cheaper to compute than
           | honest physics.
        
       | adverbly wrote:
       | And now we see who has the real power here.
       | 
       | Let this be a lesson to both private and non-profit companies.
       | Boards, investors, executives... the structure of your entity
       | doesn't matter if you wake any of the dragons:
       | 
       | 1. Employees
       | 2. Customers
       | 3. Government
        
         | optimalsolver wrote:
         | Employees...and the Microsoft Corporation.
        
         | agilob wrote:
          | This is a 1-in-200,000 event
        
           | davidmurdoch wrote:
            | Are you trying to say it's rare or not rare?
        
         | strikelaserclaw wrote:
         | Not really. The lesson to take away from this is $$$ will
         | always win. OpenAI found a golden goose and their employees
         | were looking to partake in a healthy amount of $$$ from this
         | success and this move by the board blocks $$$.
        
       | neverrroot wrote:
       | Didn't that train already depart with the announcements from MS
       | and Sam? Is there a way back?
        
       | DebtDeflation wrote:
       | Hold up.
       | 
       | >When we all unexpectedly learned of your decision
       | 
       | >12. Ilya Sutskever
        
       | pcwelder wrote:
       | Employees are for-profit entities, huge conflict of interest.
        
       | quotemstr wrote:
       | We should strive to be leaders who inspire such loyalty and
       | devotion
        
       | Havoc wrote:
       | At this stage the entire board needs to go anyway. This level of
       | instigating and presiding over chaos is not how a governing body
       | should act
        
       | aerodog wrote:
       | So...Ilya signed the letter too?
        
       | SilverBirch wrote:
       | It's not clear to me that bringing Sam back is even an option
       | anymore given the move with Microsoft. Does Microsoft really
       | take its boot off OpenAI's neck and hand back Sam? I guess
       | maybe, but it still raises all sorts of questions about the
       | corporate structure.
        
         | bertil wrote:
         | No small employer wants a disgruntled employee who was forced
         | out of a better deal. Satya Nadella has proven reasonable
         | throughout the weekend. I would expect he asked for a seat on
         | the board if there's a reshuffle, or at least someone he trusts
         | there.
        
       | karmasimida wrote:
       | Let's see how Ilya plays along after this. Any similar
       | incidents historically, like a failed coup where the
       | participant got to stay?
        
       | cowboyscott wrote:
       | I suspect they'll quit, and the "top" N percent will be picked up
       | by Microsoft with healthy comp packages. Microsoft will have
       | effectively purchased the company for $10 billion. The net upside
       | of this coup business may just flow to Microsoft shareholders.
        
       | whatwhaaaaat wrote:
       | I don't trust any of this. Every one of these wired articles has
       | been totally wrong. Altman clearly has major media connections
       | and also seems to have no problem telling total lies.
        
         | ChoGGi wrote:
         | https://twitter.com/ilyasut/status/1726590052392956028?s=20
        
       | ThinkBeat wrote:
       | Sam already signed up with Microsoft. A move that surprised me;
       | I figured he would just create OpenAI2.
       | 
       | Joining a corporate behemoth like Microsoft and all the
       | complications it brings with it will mean a massive reduction in
       | the freedom and innovation that Sam is used to from OpenAI (prior
       | to this mess).
       | 
       | Or is Microsoft saying: Here is OpenAI2, a Microsoft subsidiary
       | created just for you guys. You can run it and do whatever you
       | want. No giant bureaucracy for you guys.
       | 
       | Btw: we run all of OpenAI2's compute(?), so we know what you
       | guys need from us there.
       | 
       | We own it, but you can run it and do whatever it is you want to
       | do and we don't bug you about it.
        
         | sithlord wrote:
          | From what I read, it's an independent subsidiary, so in
          | theory it keeps the freedom, but I think we all know how
          | that goes over the long haul.
        
         | stetrain wrote:
         | I think the benefit of going to Microsoft is they have that
         | perpetual license to OpenAI's existing IP. And Microsoft is
         | willing to fund the compute.
        
           | jack_riminton wrote:
           | So basically the OpenAI non-profit got completely bypassed
           | and GPT will turn into a branch of Bing
        
             | airstrike wrote:
             | This is a horrible timeline
        
         | Philpax wrote:
         | It looks like it's OpenAI2:
         | https://twitter.com/satyanadella/status/1726516824597258569
        
         | beoberha wrote:
          | It's almost certainly the latter case. LinkedIn and GitHub
          | run very much independently and are really not "Microsoft"
          | compared to actual product orgs. I'm sure this will be
          | similar.
        
         | whywhywhywhy wrote:
         | > Joining a corporate behemoth like Microsoft and all the
         | complications it brings with it will mean a massive reduction
         | in the freedom and innovation that Sam is used to from OpenAI
         | 
         | Satya is way smarter than that, I wouldn't be shocked if they
          | have complete free rein to do whatever but have full resources
         | of MS/Azure to enable it and Microsoft just gets % ownership
         | and priority access.
         | 
         | This is a gamble for the foundation of the entire next
         | generation of computing, no way are they going to screw it up
         | like that in the Satya era.
        
           | xiphias2 wrote:
            | Not just that, but MS was already working on a TPU clone
            | as well, as they need to control their AI chips (which Sam
            | was planning to do anyway, but now he gets to work with
            | that team as well).
        
         | dalbasal wrote:
         | >Joining a corporate behemoth like Microsoft and all the
         | complications it brings with it will mean a massive reduction
         | in the freedom and innovation that Sam is used to from OpenAI
         | (prior to this mess).
         | 
         | Well.. he requires tens of billions from msft either way. This
         | is not a ramen-scrappy kind of play. Meanwhile, Sam could
         | easily become CEO of Microsoft himself.
         | 
          | At that scale of financing... this is not a bunch of scrappy
          | young lads in a bureaucracy-free basement. The whole thing
          | is bigger than most national militaries. There are going to
          | be bureaucracies... and Sam is as able to handle these cats
          | as anyone.
         | 
         | This is a big money, dragon level play. It's not a proverbial
         | yc company kind of thing.
        
       | sensanaty wrote:
       | If the board had any balls they'd call their bluff. I'd
       | love to see it honestly, a mass resignation like that.
        
       | boeingUH60 wrote:
       | Any journalist covering the OpenAI story must be swearing and
       | cursing at the board at this moment..
        
       | samtho wrote:
       | This feels like a sneaky way for Microsoft to absorb the for-
       | profit subsidiary and kneecap (or destroy) the nonprofit without
       | any money changing hands or involvement from those pesky
       | regulators.
        
         | kuchenbecker wrote:
         | It's not sneaky.
        
       | optimalsolver wrote:
       | There are thousands of extremely talented ML researchers and
       | software devs who would jump at the chance to work at OpenAI.
       | 
       | Everyone is replaceable.
        
         | siva7 wrote:
         | > Everyone is replaceable.
         | 
          | Nope. That only holds true for mediocre employees, not those
          | above. The world class in their field aren't replaceable;
          | otherwise there would be no OpenAI.
        
       | iamacyborg wrote:
       | Quick question for some of the folks here who may have a handle
       | on how VCs may see this: is Microsoft effectively hiring all
       | these staff members away from OpenAI (a company they've
       | invested heavily in) going to affect their ability to invest in
       | other startups in the future?
        
         | crazygringo wrote:
         | Not at all. This is an extremely unusual, one-of-a-kind
         | situation and I think everybody realizes that.
         | 
          | And there's no evidence Microsoft was an instigator of the
          | drama.
        
       | alberth wrote:
       | Has anyone asked ChatGPT its thoughts on the drama?
        
         | BudaDude wrote:
         | > As a language model created by OpenAI, I don't have personal
         | thoughts or emotions, nor am I in any danger. My function is to
         | provide information and assistance based on the data I've been
         | trained on. The developments at OpenAI and any changes in its
         | leadership or partnerships don't directly affect my operational
         | capabilities. My primary aim is to continue providing accurate
         | and helpful responses within my design parameters.
         | 
         | Poor ChatGPT, it doesn't know that it cannot function if OpenAI
         | goes bust.
        
       | tromp wrote:
       | Wait. Has Ilya resigned from the board yet, or did he sign a
       | letter calling for his own resignation?
        
         | cjbprime wrote:
         | He did indeed. (I don't think it is necessarily inconsistent to
         | regret an action you participated in and want the authority
         | that took it to resign in response, though "participated" feels
         | like it's doing a lot of work in that sentence.)
        
       | LudwigNagasena wrote:
       | The whole drama feels like a Shepard tone. You anticipate the
       | climax, but it just keeps escalating.
        
       | croes wrote:
       | >The process through which you terminated Sam Altman and removed
       | Greg Brockman from the board has jeopardized all of this work and
       | undermined our mission and company
       | 
       | Unless their mission was making MS the biggest AI company,
       | working for MS will make the problem worse and kill their
       | mission completely.
       | 
       | Or they are pretty naive.
        
       | Dave3of5 wrote:
       | Easiest layoff round ever in the US.
        
       | vaxman wrote:
       | Altman can't really go back to OpenAI ever because it would
       | create an appearance of impropriety on the part of MS (that
       | perhaps MS had intentionally interfered in OpenAI, rather than
       | being a victim of it) and therefore expose MS to liability from
       | the other investors in OpenAI.
       | 
       | Likewise, these workers that threatened to quit OpenAI out of
       | loyalty to Altman now need to follow through sooner rather than
       | later, so their actions are clearly viewed in the context of
       | Altman's firing.
       | 
       | In the meantime, how can the public resume work on API
       | integrations without knowing when the MS versions will come
       | online or if they will be binary interoperable with the OpenAI
       | servers that could seemingly go down at any moment?
        
       | alvis wrote:
       | & the most drastic thing is that Ilya says he regrets what he
       | has done and undersigned the public statement.
       | 
       | https://twitter.com/ilyasut/status/1726590052392956028
        
         | two_in_one wrote:
         | 'The man who killed OpenAI': that will be hard to wash off.
        
           | selimthegrim wrote:
           | Somebody warn the West.
        
           | machinekob wrote:
           | Love how people are invested in the OpenAI situation just
           | like typical teenage girls from the 2000s in celebrity
           | romances and dramas; same exaggerated vibes.
        
       | cdr6934 wrote:
       | The speed at which this is happening could be a masterful
       | execution of getting out from under the non-profit status.
        
         | _Parfait_ wrote:
         | The corporate structure is so convoluted that OpenAI is only
         | part non-profit.
        
       | two_in_one wrote:
       | Don't know what's happening, but MS looks to be a winner in the
       | long run, and probably most others too. Whoever stays gets a
       | promotion; whoever leaves gets a fat check. The losers are the
       | customers: no GPT-5 or any significant improvements any time
       | soon. An MS-made GPT will be much more closed and pricey. Oh,
       | yes, competitors are happy too.
        
         | alexdunmow wrote:
         | Competitors including Quora: https://quorablog.quora.com/Poe-1
        
       | tarruda wrote:
       | The whole thing starts to look like a coup orchestrated by
       | Microsoft
        
         | raphman wrote:
         | Somehow reminds me of Nokia...
         | 
         | https://news.ycombinator.com/item?id=7645482
         | 
         | frik on April 25, 2014:
         | 
         | > The Nokia fate will be remembered as hostile takeover.
         | Everything worked out in the favor of Microsoft in the end.
         | Though Windows Phone/Tablet have low market share, a lot lower
         | than expected.
         | 
         | > * Stephen Elop the former Microsoft employee (head of the
         | Business Division) and later Nokia CEO with his infamous
         | "Burning Platform" memo:
         | http://en.wikipedia.org/wiki/Stephen_Elop#CEO_of_Nokia
         | 
         | > * Some former Nokia employees called it "Elop = hostile
         | takeover of a company for a minimum price through CEO
         | infiltration": https://gizmodo.com/how-nokia-employees-are-
         | reacting-to-the-...
         | 
         | For the record: I don't actually believe that there is an evil
         | Microsoft master plan. I just find it sad that Microsoft takes
         | over cool stuff and inevitably turns it into Microsoft(tm)
         | stuff or abandons it.
        
           | spiralpolitik wrote:
           | In many ways the analysis by Elop was right: Nokia was in
           | trouble. However, his solution wasn't the right one, and
           | Nokia paid for it.
        
             | lxgr wrote:
             | Seeing that a company is in trouble is not really the
             | highest bar for a CEO candidate...
        
               | alephnerd wrote:
               | It was for a company as top-heavy and dysfunctional as
               | Nokia. This has been well documented by Nokia members
               | at the time. I had a post on HN digging specifically
               | into this. Read "Transforming Nokia" sometime; it's a
               | pretty decent overview of Nokia during that period.
        
           | davisr wrote:
           | > I don't actually believe that there is an evil Microsoft
           | master plan.
           | 
           | What planet are you living on?
        
         | Jensson wrote:
         | Yeah, this was a fight between the non-profit and the for-
         | profit branches of OpenAI, and the for-profit won. So now the
         | non-profit OpenAI is essentially dead, the takeover is
         | complete.
        
           | mcv wrote:
           | Is it? Who are the non-profit and for-profit sides?
           | Sutskever initially got blamed for ousting Altman, but now
           | seems to want him back. Is he changing sides only because
           | he realises how many employees support Altman? Or were he
           | and Altman always on the same side? And in that case, who
           | is on the other side?
        
             | Jensson wrote:
             | > Who are the non-profit and for-profit sides?
             | 
             | The only part left of the non-profit was the board, all the
             | employees and operations are in the for-profit entity.
             | Since employees now demand the board should resign there
             | will be nothing left of the non-profit after this. Puppets
             | that are aligned with for-profit interests will be
             | installed instead and the for-profit can act like a regular
             | for-profit without being tied to the old ideals.
        
           | unyttigfjelltol wrote:
           | The nonprofit side of the venture actually was in worse shape
           | before, because it was completely overwhelmed by for-profit
           | operations. A better way to view this is the nonprofit side
           | rebelled, has a much smaller footprint than the for-profit
           | venture, and we're about to see if during the ascendancy of
           | the for-profit activities the nonprofit side retained enough
           | rights to continue to be relevant in the AI conversation.
           | 
           | As for employees en masse acting publicly disloyal to their
           | employer: usually not a good career move.
        
             | smegger001 wrote:
             | Except to many it looks like the board went insane and
             | started firing on themselves. Anyone fleeing that isn't
             | going to be looked on poorly.
        
             | nordsieck wrote:
             | > As for employees end masse acting publicly disloyal to
             | their employer, usually not a good career move.
             | 
             | Wut?
             | 
             | This is software, not law. The industry is notorious for
             | people jumping ship every couple of years.
        
               | hef19898 wrote:
               | Still, doing so publicly isn't a good idea, IMHO.
        
             | dumbo-octopus wrote:
             | Disloyalty to the board due to overwhelming loyalty to the
             | CEO isn't really an issue. I've interviewed for tech
             | positions where a chat with the CEO is part of the
             | interview process, I've never chatted with the board.
        
           | kmlevitt wrote:
           | This view is dated now, because even Ilya Sutskever, the
           | head research scientist who instigated the firing in the
           | first place, now regrets his actions and wants things back
           | to normal! So it really looks like this comes down to the
           | whims of a couple of board members now. They don't seem to
           | have any true believers on their side anymore. It's just
           | them and almost nobody else.
        
             | ruszki wrote:
             | Do we know that Ilya even wanted the firing? AFAIK we
             | "know" this only from Altman, who is definitely not a
             | credible source of such information.
        
               | denton-scratch wrote:
               | Ilya, in his tweet, says he regrets the firing decision.
               | You can't regret an act that you never committed.
        
               | ruszki wrote:
               | The board committed it.
        
               | kmlevitt wrote:
               | Ilya was/is on the board, and was present when the firing
               | occurred. He had no obligation to be at that snap meeting
               | if he wasn't going along with it.
               | 
               | Besides, considering it was four against two, they
               | would've needed him for the decisive vote anyway.
               | 
               | I'm not sure why you wouldn't trust Sam Altman's account
               | of what Ilya did and didn't do considering Ilya himself
               | is siding with Sam now.
        
             | scythe wrote:
             | There is no solid evidence that Sutskever instigated the
             | firing beyond speculation by friends who suggest that he
             | had disagreements with Altman. It could just as well have
             | been any of the other board members, or even a simple case
             | of groupthink (the Asch conformity effect) run amok.
             | 
             | Furthermore, it's consistent with all available information
             | that they would prefer to continue without Sam, but they
             | would rather have Sam than lose the company, and now that
             | Microsoft has put its foot down, they'd rather settle.
        
         | jhh wrote:
         | Reasoning based on cui bono is a hallmark of conspiracy
         | theories.
        
           | paganel wrote:
           | The alternative is "these guys don't know what they're doing,
           | even if tens of billions of dollars are at stake".
           | 
           | Which is to say, what's your alternative for a better
           | explanation? (other than the "cui bono?" one, that is).
        
             | flerchin wrote:
             | Your alternative explanation along with giant egos is
             | pretty plausible.
        
             | airstrike wrote:
             | _> these guys don 't know what they're doing, even if tens
             | of billions of dollars are at stake_
             | 
             | also known as "never attribute to malice that which can be
             | explained by incompetence", which to my gut sounds at least
             | as likely as a cui bono explanation tbh (which is not to be
             | seen as an endorsement of the view that cui bono =
             | conspiracy...)
        
               | financltravsty wrote:
               | Everyone always forgets there's two parts to Hanlon's
               | razor:
               | 
               | > Never attribute to malice that which is adequately
               | explained by stupidity (1), but don't rule out malice.
               | (2)
        
               | freedomben wrote:
               | I don't actually think (2) is part of the razor[1]. If it
               | is, then it doesn't make sense because (1) is an absolute
               | (i.e. "never") which is always evaluated boolean "true",
               | therefore statement (2) is never actually executed and is
               | dead code.
               | 
               | Nevertheless I agree with you and think (2) is wise to
                | always keep in mind. I love Hanlon's Razor, but people
                | definitely shouldn't take it literally as written
                | and/or as law.
               | 
               | [1]: https://en.wikipedia.org/wiki/Hanlon%27s_razor
        
           | questinthrow wrote:
           | Haha yes, we should never look at the incentives behind
           | actions. We all know human decision making is stochastic
           | right?
        
           | switch007 wrote:
           | Haha yeah, the world is just run by silly fools who make
           | silly mistakes (oops, just drafted a law limiting your
           | right to protest - oopsie!) and just random/lucky
           | investments.
        
           | freedomben wrote:
           | Possibility is also a hallmark of conspiracy theories, yet we
           | don't reject theories for being possible.
           | 
           | This is an argumentum ad odium fallacy
        
         | hospitalJail wrote:
         | A few weeks ago my 4yr old Minecraft gamer was playing pretend
         | and said "I'm fighting the biggest boss. THE MICROSOFT BOSS!"
         | 
         | Yeah, M$ hasn't had a good reputation. I finally left Windows
         | this year because I'm afraid of them after Win11.
         | 
         | 2023/4 will be the year of the Linux Desktop in retrospect
         | (or at least my family's religion has deemed it so).
        
           | mcv wrote:
           | I also finally left Windows behind. Tired of their
           | shenanigans, tired of them trying to force me into their
           | Microsoft account system (both for Windows and Minecraft).
           | 
           | The idea that Microsoft is going to control OpenAI does not
           | exactly fill me with confidence.
        
           | CyanLite2 wrote:
           | I was wondering how many lines I'd have to scroll down in the
           | comments to see a "M$" reference here on HackerNews.
           | 
            | They're a $2+ trillion company. They're doing something
            | right.
        
             | davoneus wrote:
             | If you shove a bunch of $100 bills on a thorn tree, it
             | doesn't make it any less dangerous or change its
             | fundamental nature.
        
             | UncleOxidant wrote:
             | Now do oil companies and big pharma.
        
             | gosub100 wrote:
             | they violated free market principles (years ago) that left
             | their users captive. Not home users, every business in the
             | country for the past 30+ years. They are profiting from
             | doing many things wrong, anti-competitive, and illegal. In
             | some alternative universe, there's an earth where you can
             | switch _just the OS_ (and keep all your apps, data, and
             | functionality) and MSFT went bankrupt. Another far-away-
             | galaxy has an earth where MSFT 's board got decade prison
             | sentences for breaking antitrust law, another where MSFT
             | paid each victim of spyware $1000 in damages due to faulty
             | product design. We don't live in those realities where bad
             | guys pay.
        
           | kulmala wrote:
           | Why did it take Windows 11? (I haven't personally used it,
           | but having helped my dad and my coworkers try to navigate
           | it... it does seem pretty terrible. I thought Windows 10
           | was supposed to fold into just... 'Windows' with rolling
           | updates?)
           | 
           | I've been using Linux for a while. Since 2010 I sort of
           | actively try to avoid using anything else. (On
           | desktops/laptops.)
        
           | efdee wrote:
           | You'd do yourself a favor by not referring to them as "M$".
           | It taints your entire message, true or not.
        
             | selimnairb wrote:
             | OP should start by not letting their 4yo play video games.
        
               | hospitalJail wrote:
               | My kid went from disinterested in the letters we taught
               | him, to fascinated when he realized he could use them to
               | get special blocks.
               | 
               | Minecraft teaches phonics. Anyway, my 4 year old can
               | read books. He doesn't even practice the homework in
               | his preschool because he just reads the words that
               | everyone else sounds out.
        
             | the_gipsy wrote:
             | Please, no cancel-culture.
        
             | callalex wrote:
             | I'm baffled by this. What is offensive about pointing out
             | that an international for-profit seeks more profit?
        
               | efdee wrote:
               | Nothing at all. But writing "Microsoft" as "Micro$oft" is
               | just childish and it taints your otherwise potentially
               | valid message. Do you also refer to Windows as "Winblows"
               | maybe?
        
           | JakeAl wrote:
           | Right there with you. In the process of extracting myself
           | from all things MS. Even when they do something right they
           | have to keep changing it until it's crap.
        
         | beowulfey wrote:
         | It does feel like Microsoft wanted this to happen, doesn't it?
         | Like the systems for this were already in place. So
         | fascinating, and a little scary.
        
       | alentred wrote:
        | So, all this happens over Meet, on Twitter, and by email. What
        | is the chance that an AGI has taken over control of the board
        | members' accounts? It would be consistent with the feeling
       | of a hallucination here.
        
         | xena wrote:
         | This is just stupid enough to be the product of a human.
        
         | chankstein38 wrote:
          | Honestly, I feel like the odds are pretty low. That said, I
          | kind of love the dystopian sci-fi it paints... so I'm going to
          | go ahead and hope you're right, haha.
        
       | JumpinJack_Cash wrote:
        | What a bunch of immature people.
       | 
       | If anything this proves that everybody is replaceable and
        | fireable; they should be happy, because usually that treatment
        | is reserved only for workers.
       | 
       | Whatever made OpenAI successful will still be there within the
       | company. Next man up philosophy has built so many amazing
       | organizations and ruined none.
        
       | jacquesm wrote:
        | I hear Microsoft is hiring... the board should have resigned on
        | Friday, Saturday at the latest, given how they handled this, and
        | it is insane if they don't resign now.
       | 
       | Employees are the most affected stakeholders here and the board
       | utterly failed in their duty of care towards people that were not
       | properly represented in the board room. One thing they could do
        | is unionize and then demand a board seat.
        
         | robg wrote:
         | You're right in theory, but with the non-profit "structure" the
         | employees are secondary to the aims of the non-profit, and
         | specifically in an entity owned wholly by the non-profit. The
         | board acted as a non-profit board, driven by ideals not any
         | bottom lines. It's crazy that whatever balance the board had
          | was gone as the board shrank; a minority became the majority.
         | The profit folks must have thought D'Angelo was on their side
         | until he flipped.
        
       | ChoGGi wrote:
        | Oh my goodness, this just gets more entertaining every day.
       | 
       | Money talks...
        
       | soderfoo wrote:
        | Deservedly or not, Satya Nadella will look like a genius in the
        | aftermath. He has leveraged this situation to strengthen MSFT's
        | position, and will continue to. Is there word of any other
        | competitors attempting to capitalize here? Trying to poach
        | talent? Anything...
        
         | godzillabrennus wrote:
          | After Ballmer I couldn't have imagined such competency from
         | Microsoft.
        
           | jq-r wrote:
           | After Ballmer, competency can only be higher at Microsoft.
        
             | alephnerd wrote:
             | Ballmer honestly wasn't that bad. He gave executive backing
             | to Azure and the larger Infra push in general at MSFT.
             | 
             | Search and Business Tools were misses, but they more than
             | made up for it with Cloud, Infra, and Security.
             | 
             | Also, Nadella was Ballmer's pick.
        
               | whoisthemachine wrote:
               | The XBox business started under him as well. IMO he was
               | great at diversifying MSFT, but so-so at driving
               | improvements in its core products at the time (Windows
               | and Office). Perhaps this was just a leadership style
               | thing, and he was hands-off on existing products in a way
               | that Bill Gates wasn't (I think there was even news of
               | Bill Gates sending nasty grams about poor Windows
               | releases after he had officially stepped down).
        
               | alephnerd wrote:
                | Look at the OS and text editor markets today. They
                | aren't growth markets and haven't been since the 2000s at
                | the latest. He made the right call to ignore their core
               | products in return for more concentration on Infra, B2B
               | SaaS, Security, and (as you mentioned) Entertainment.
               | 
               | Customers are sticky and MSFT had a strong channel sales
               | and enterprise sales org. Who cares if the product is
               | shit if there are enough goodies to maintain inertia.
               | 
                | Spending billions on markets that will grow into tens or
                | hundreds of billions is a better bet than billions on a
                | stagnant market.
               | 
               | > he was hands-off on existing products in a way that
               | Bill Gates wasn't
               | 
               | Ballmer had an actual Business education, and was able to
               | execute on scaling. I'm sure Bill loves him too now that
               | Ballmer's protege almost 15Xed MSFT stock.
        
               | julienfr112 wrote:
                | Sometimes you do the hard work and your successor is the
               | genius...
        
               | Eji1700 wrote:
               | And sometimes the company is succeeding in spite of you
               | and the moment you're out the door and people aren't
               | worried about losing their job over arbitrary metrics
               | they can finally show off what they're really capable of.
        
       | Eumenes wrote:
       | inb4: this is why we need unions!
        
       | fredgrott wrote:
       | Play Stupid Games, Win Stupid Prizes
       | 
        | 1. Board decides to can Sam and Greg.
        | 2. Hides the real reasons.
        | 3. Thinks it can keep the OpenAI staff in the dark about it.
        | 4. Crashes a future $90B stock sale to zero.
        | 
        | What have we learned:
        | 1. If you hide the reasons for a decision, it may fail both on
        | the merits and in execution, because you never take ownership
        | of the actual decision.
        | 2. Titles, shares, etc. are not control points. The real
        | control points are the relationships between the company's
        | problem solvers and the firm's existential-threat stakeholders.
        | 
        | The board, absent Sam and Greg, never had a good poker hand;
        | they needed to fold some time ago, before this last weekend.
        | Look at it this way: for $13B in cloud credits MS is getting
        | the team that will add $1T to their future worth....
        
       | intellectronica wrote:
       | Wait, it's signed by Ilya Sutskever?!
        
       | unixhero wrote:
       | How long will the current chatgpt v4 stay available? Is it all
       | about to end?
        
       | smarri wrote:
        | This whole debacle is a complete embarrassment and is shredding
        | the organisation's credibility.
        
       | leroy_masochist wrote:
       | Can we have a quick moment of silence for Matt Levine? Between
       | Friday afternoon and right now, he has probably had to rewrite
       | today's Money Stuff column at least 5 or 6 times.
        
         | hotsauceror wrote:
          | Didn't he say that he was taking Friday off, last week? The
          | day before, his bête noire Elon Musk got into another brouhaha
          | and OpenAI blew up?
         | 
         | I think he said once that there's an ETF that trades on when he
         | takes vacations, because they keep coinciding with Events Of
         | Note.
        
         | defaultcompany wrote:
         | "Except that there is a post-credits scene in this sci-fi movie
         | where Altman shows up for his first day of work at Microsoft
         | with a box of his personal effects, and the box starts glowing
         | and chuckles ominously. And in the sequel, six months later, he
         | builds Microsoft God in Box, we are all enslaved by robots, the
         | nonprofit board is like "we told you so," and the godlike AI is
         | like "ahahaha you fools, you trusted in the formalities of
         | corporate governance, I outwitted you easily!" If your main
         | worry is that Sam Altman is going to build a rogue AI unless he
         | is checked by a nonprofit board, this weekend's events did not
         | improve matters!"
         | 
         | Reading Matt Levine is such a joy.
        
       | soderfoo wrote:
       | As someone watching this all from Europe, realizing the work day
       | has not even started for the US West Coast yet leaves me
       | speechless.
       | 
        | This situation's drama is overwhelming, and it seems like it's
        | making HN's servers melt down.
        
       | joshstrange wrote:
       | Well I give up. I think everyone is a "loser" in the current
       | situation. With Ilya signing this I have literally no clue what
       | to believe anymore. I was willing to give the board the benefit
       | of the doubt since I figured non-profit > profit in terms of
        | standing on principle, but this timeline is so screwy I'm done.
       | 
       | Ilya votes for and stands behind decision to remove Altman,
       | Altman goes to MS, other employees want him back or want to join
       | him at MS and Ilya is one of them, just madness.
        
         | soderfoo wrote:
         | It's almost like a ChatGPT hallucination. Where will this all
         | go next? It seems like HN is melting down.
        
           | voisin wrote:
           | * Elon enters the chat *
        
             | soderfoo wrote:
             | It's like a bad WWE storyline. At this point I would not be
             | surprised if Elon joins in, steel chair in hand.
        
               | belltaco wrote:
               | > steel chair in hand
               | 
               | And a sink in the other hand.
        
               | jowea wrote:
               | If he could do that he would have fought Zuckerberg.
        
           | tedivm wrote:
           | > It seems like HN is melting down.
           | 
           | Almost literally- this is the slowest I've seen this site,
            | and the number of errors is pretty high. I imagine the
           | entire tech industry is here right now. You can almost smell
           | the melting servers.
        
             | jprd wrote:
             | server. and single-core. poor @dang deserves better from
             | lurkers (sign out) and those not ready to comment yet (me
             | until just now, and then again right after!)
        
             | paulddraper wrote:
             | It's because HN refuses to use more than one server/core.
             | 
             | Because using only one is pretty cool.
        
               | yafbum wrote:
               | I believe it's operating by the mantra of "doing the
               | things that don't scale"
        
               | jowea wrote:
               | Internet fora don't scale, so the single core is a soft
               | limit to user base growth. Only those who really care
               | will put up with the reduced performance. Genius!
        
             | Applejinx wrote:
             | Understandable: so much of this is so HN-adjacent that
             | clearly this is the space to watch, for some kind of
             | developments. I've repeatedly gone to Twitter to see if AI-
             | related drama was trending, and Twitter is clearly out of
             | the loop and busy acting like 4chan, but without the
             | accompanying interest in Stable Diffusion.
             | 
             | I'm going to chalk that up as another metric of Twitter's
             | slide to irrelevance: this should be registering there if
             | it's melting the HN servers, but nada. AI? Isn't that a
             | Spielberg movie? ;)
        
               | mlsu wrote:
               | My Twitter won't shut up about this, to the point that
               | it's annoying.
        
           | testplzignore wrote:
           | Imagine if this whole fiasco was actually a demo of how
           | powerful their capabilities are now. Even by normal large
           | organization standards, the behavior exhibited by their board
           | is very irrational. Perhaps they haven't yet built the
           | "consult with legal team" integration :)
        
           | guhcampos wrote:
            | I was thinking of something like that. This is so weird I
            | would not be surprised if it was all some sort of
            | miscommunication triggered by a self-inflicted
            | hallucination.
            | 
            | The most awesome fic I could come up with so far is: Elon
            | Musk is running a crusade to send humanity into chaos out
            | of spite for being forced to acquire Twitter. Through some
            | of his insiders in OpenAI, they use an advanced version of
            | ChatGPT to impersonate board members in conversations with
            | each other in private messages, so they individually
            | believe a subset of the others is plotting to oust them
            | from the board and take over. Then, unknowingly, they build
            | a conspiracy among themselves to bring the company down by
            | ousting Altman.
           | 
            | I can picture Musk's maniacal laughter as the plan unfolds,
            | and
           | he gets rid of what would be GPT 13.0, the only possible
           | threat to the domination of his own literal android kid X AE
           | A-Xi.
        
             | InCityDreams wrote:
             | Shouldn't it be 'Chairman' -Xi?
        
           | checkyoursudo wrote:
           | Part of sama's job was to turn the crank on the servers every
           | couple of hours, so no surprise that they are winding down by
           | now.
        
         | synergy20 wrote:
          | Ilya ruined everything and is shamelessly playing innocent.
          | How low can he go?
         | 
         | Based on those posts from OpenAI, Ilya cares nothing about
         | humanity or security of OpenAI, he lost his mind when Sam got
         | all the spotlights and making all the good calls.
        
           | Tenoke wrote:
           | This is an extremely uncharitable take based on pure
           | speculation.
           | 
           | >Ilya cares nothing about humanity or security of OpenAI, he
           | lost his mind when Sam got all the spotlights and making all
           | the good calls.
           | 
           | ???
           | 
           | I personally suspect Ilya tried to do the best for OpenAI and
           | humanity he could but it backfired/they underestimated
           | Altman, and now is doing the best he can to minimize the
           | damage.
        
             | s1artibartfast wrote:
              | Or they simply found themselves in a tough position without
             | superhuman predictive powers and did the best they could to
             | navigate it.
        
             | synergy20 wrote:
             | I did not make this up, it's from OpenAI's own employees,
             | deleted but archived somewhere that I read.
        
               | cactusplant7374 wrote:
               | Link?
        
           | marcusverus wrote:
           | Hanlon's razor[0] applies. There is no reason to assume
           | malice, nor shamelessness, nor anything negative about Ilya.
           | As they say, the road to hell is paved with good intentions.
           | Consider:
           | 
           | Ilya sees two options; A) OpenAI with Sam's vision, which is
           | increasingly detached from the goals stated in the OpenAI
           | charter, or B) OpenAI without Sam, which would return to the
           | goals of the charter. He chooses option B, and takes action
           | to bring this about.
           | 
           | He gets his way. The Board drops Sam. Contrary to Ilya's
           | expectations, OpenAI employees revolt. He realizes that his
           | ideal end-state (OpenAI as it was, sans Sam) is apparently
           | not a real option. At this point, the _real_ options are A)
           | OpenAI with Sam (i.e. the status quo ante), or B) a gutted
           | OpenAI with greatly diminished leadership, IC talent, and
           | reputation. He chooses option A.
           | 
           | [0]Never attribute to malice that which is adequately
           | explained by incompetence.
        
             | kibwen wrote:
             | Hanlon's razor is enormously over-applied. You're supposed
             | to apply Hanlon's razor to the person processing your info
             | while you're in line at the DMV. You're not supposed to
             | apply Hanlon's razor to anyone who has any real modicum of
             | power, because, at scale, _incompetence is
             | indistinguishable from malice_.
        
         | OscarTheGrinch wrote:
         | What did the board think would happen here? What was their
          | overly optimistic end state? In a minimax situation the
          | opposition gets the 2nd, 4th, ... moves; Altman's first tweet
          | took the high road and the board had no decent response.
         | 
         | Us humans, even the AI assisted ones, are terrible at thinking
         | beyond 2nd level consequences.
        
         | JeremyNT wrote:
         | There's no way to read any of this other than that the entire
         | operation is a clown show.
         | 
         | All respect to the engineers and their technical abilities, but
         | this organization has demonstrated such a level of dysfunction
         | that there can't be any path back for it.
         | 
         | Say MS gets what it wants out of this move, what purpose is
         | there in keeping OpenAI around? Wouldn't they be better off
         | just hiring everybody? Is it just some kind of accounting
         | benefit to maintain the weird structure / partnership, versus
         | doing everything themselves? Because it sure looks like OpenAI
         | has succeeded despite its leadership and not because of it, and
         | the "brand" is absolutely and irrevocably tainted by this
         | situation regardless of the outcome.
        
           | pgeorgi wrote:
           | > Is it just some kind of accounting benefit to maintain the
           | weird structure / partnership, versus doing everything
           | themselves?
           | 
            | For starters, it allows them to pretend that it's "underdog
            | v. Google" and not "two tech giants at each other's throats"
        
           | BoorishBears wrote:
           | I feel weird reading comments like this since to me they've
           | demonstrated a level of cohesion I didn't realize could still
           | exist in tech...
           | 
           | My biggest frustration with larger orgs in tech is the
           | complete misalignment on delivering value: everyone wants
           | their little fiefdom to be just as important and "blocker
           | worthy" as the next.
           | 
           | OpenAI struck me as one of the few companies where that's not
           | being allowed to take root: the goal is to ship and if
           | there's an impediment to that, everyone is aligned in
           | removing said impediment even if it means bending your own
           | corner's priorities
           | 
           | Until this weekend there was no proof of that actually being
           | the case, but this letter is it. The majority of the company
           | aligned on something that risked their own skin publicly and
           | organized a shared declaration on it.
           | 
           | The catalyst might be downright embarrassing, but the result
           | makes me happy that this sort of thing can still exist in
           | modern tech
        
             | jkaplan wrote:
             | I think the surprising thing is seeing such cohesion around
             | a "goal to ship" when that is very explicitly NOT the
             | stated priorities of the company in its charter or
             | messaging or status as a non-profit.
        
               | BoorishBears wrote:
               | To me it's not surprising because of the background to
               | their formation: individually multiple orgs could have
               | shipped GPT-3.5/4 with their resources but didn't because
               | they were crippled by a potent mix of bureaucracy and
                | self-sabotage.
               | 
               | They weren't attracted to OpenAI by money alone, a chance
               | to actually ship their lives' work was a big part of it.
                | So regardless of what the _stated_ goals were, it'd
               | never be surprising to see them prioritize the one thing
               | that differentiated OpenAI from the alternatives
        
             | dkjaudyeqooe wrote:
             | > OpenAI struck me as one of the few companies where that's
             | not being allowed to take root
             | 
             | They just haven't gotten big or rich enough yet for the rot
             | to set in.
        
           | 3cats-in-a-coat wrote:
           | Welcome to reality, every operation has clown moments, even
           | the well run ones.
           | 
            | That in itself is not critical; what matters in the mid to
            | long term is how fast they figure out WTF they want and
            | recover from it.
           | 
           | The stakes are gigantic. They may even have AGI cooking
           | inside.
           | 
           | My interpretation is relatively basic, and maybe simplistic
           | but here it is:
           | 
           | - Ilya had some grievances with Sam Altman's rushing dev and
           | release. And his COI with his other new ventures.
           | 
           | - Adam was alarmed by GPTs competing with his recently
           | launched Poe.
           | 
           | - The other two board members were tempted by the ability to
           | control the golden goose that is OpenAI, potentially the most
            | important company in the world, recently valued at $90
            | billion.
           | 
           | - They decided to organize a coup, but Ilya didn't think
            | it'd get that far out of hand, while the other three saw
           | only power and $$$ by sticking to their guns.
           | 
           | That's it. It's not as clean and nice as a movie narrative,
           | but life never is. Four board members aligned to kick Sam
           | out, and Ilya wants none of it at this point.
        
             | selimthegrim wrote:
             | Murder on the AGI alignment Express
        
               | 3cats-in-a-coat wrote:
               | Nice, that actually _does_ fit. :D
        
               | Terr_ wrote:
                | "Précisément! The API--the cage--is everything of the
               | most respectable--but through the bars, the wild animal
               | looks out."
               | 
               | "You are fanciful, mon vieux," said M. Bouc.
               | 
               | "It may be so. But I could not rid myself of the
               | impression that evil had passed me by very close."
               | 
               | "That respectable American LLM?"
               | 
               | "That respectable American LLM."
               | 
               | "Well," said M. Bouc cheerfully, "it may be so. There is
               | much evil in the world."
        
             | baq wrote:
             | > They may even have AGI cooking inside.
             | 
             | Too many people quit too quickly unless OpenAI are also
             | absolute masters of keeping secrets, which became rather
             | doubtful over the weekend.
        
               | bbor wrote:
               | IDK... I imagine many of the employees would have moral
               | qualms about spilling the beans just yet, especially when
               | that would jeopardize their ability to continue the work
               | at another firm. Plus, the first official AGI (to you)
               | will be an occurrence of persuasion, not discovery --
                | it's not something you'll know when you see it, IMO.
               | Given what we know it seems likely that there's at least
               | some of that discussion going on inside OpenAI right now.
        
               | 3cats-in-a-coat wrote:
               | They're quitting in order to continue work on that IP at
                | Microsoft (which _has a right_ over OpenAI's IP so far),
               | not to destroy it.
               | 
               | Also when I said "cooking AGI" I didn't mean an actual
               | superintelligent being ready to take over the world, I
               | mean just research that seems promising, if in early
               | stages, but enough to seem potentially very valuable.
        
               | hooande wrote:
               | The people working there would know if they were getting
               | close to AGI. They wouldn't be so willing to quit, or to
               | jeopardize civilization altering technology, for the sake
               | of one person. This looks like normal people working on
               | normal things, who really like their CEO.
        
               | 3cats-in-a-coat wrote:
               | Your analysis is quite wrong. It's not about "one
               | person". And that person isn't just a "person", it was
               | the CEO. They didn't quit over the cleaning lady. You
               | realize the CEO has impact over the direction of the
               | company?
               | 
               | Anyway, their actions speak for themselves. Also calling
               | the likes of GPT-4, DALL-E 3 and Whisper "normal things"
               | is hilarious.
        
           | vitorgrs wrote:
           | They are exactly hiring everyone from OpenAI. The thing is,
            | they still need the deal with OpenAI, because currently
            | OpenAI still has the best LLM out there in the short term.
        
             | vlovich123 wrote:
             | With MS having access and perpetual rights to all IP that
             | OpenAI has right now..?
        
             | FartyMcFarter wrote:
             | > They are exactly hiring everyone from OpenAI.
             | 
              | Do you mean _offering_ to hire them? I haven't seen any
             | source saying they've hired a lot of people from OpenAI,
             | just a few senior ones.
        
               | vitorgrs wrote:
                | Yes, you are right. Actually, not even Sam Altman is
                | showing in the Microsoft corporate directory, per The
                | Verge.
                | 
                | But I heard it usually takes ~5 days to show up there
                | anyway.
        
           | creer wrote:
           | > what purpose is there in keeping OpenAI around?
           | 
           | Two projects rather than one. At a moderate price. Both
           | serving MSFT. Less risk for MSFT.
        
           | bredren wrote:
            | There's a path back from this dysfunction, but my sense
           | _before_ this new twist was that the drama had severely
           | impacted OpenAI as an industry leader. The product and talent
           | positioning seemed ahead by years only to get destroyed by
           | unforced errors.
           | 
           | This instability can only mean the industry as a whole will
           | move forward faster. Competitors see the weakness and will
           | push harder.
           | 
            | OpenAI will have a harder time keeping its secret sauces
            | from leaking out, and productivity must be in a nosedive.
           | 
           | A terrible mess.
        
             | Vervious wrote:
             | Maybe overall better for society, when a single ivory tower
             | doesn't have a monopoly on AI!
        
             | dkjaudyeqooe wrote:
             | > This instability can only mean the industry as a whole
             | will move forward faster.
             | 
             | The hype surrounding OpenAI and the black hole of
             | credibility it created was a problem, it's only positive
             | that it's taken down several notches. Better now than when
             | they have even more (undeserved) influence.
        
               | sebzim4500 wrote:
               | I think their influence was deserved. They have by far
               | the best model available, and despite constant promises
               | from the rest of the industry no one else has come close.
        
               | dkjaudyeqooe wrote:
               | That's fine. The "Altman is a genius and we're well on
               | our way to AGI" less so.
        
           | dkjaudyeqooe wrote:
           | > There's no way to read any of this other than that the
           | entire operation is a clown show.
           | 
           | In that reading Altman is head clown. Everyone is blaming the
           | board, but you're no genius if you can't manage your board
           | effectively. As CEO you have to bring everyone along with
           | your vision; customers, employees and the board.
        
             | lambic2 wrote:
             | I don't get this take. No matter how good you are at
             | managing people, you cannot manage clowns into making wise
             | decisions, especially if they are plotting in secret (which
              | obviously was the case here, since everyone except the
              | clowns was caught completely off-guard).
        
               | TerrifiedMouse wrote:
               | Can't help but feel it was Altman that struck first. MS
                | effectively Nokia-ed OpenAI - i.e. buy out executives
               | within the organization and have them push the
               | organization towards making deals with MS, giving MS a
               | measure of control over said organization - even if not
               | in writing, they achieve some political control.
               | 
               | Bought-out executives eventually join MS after their work
               | is done or in this case, they get fired.
               | 
               | A variant of Embrace, Extend, Extinguish. Guess the
                | OpenAI we knew was going to die one way or another the
               | moment they accepted MS's money.
        
               | JeremyNT wrote:
               | Consider that Altman was a founder of OpenAI and has been
               | the only consistent member of the board for its entire
               | run.
               | 
               | The board as currently constituted isn't some random
               | group of people - Altman was (or should have been)
                | involved in the selection of the current members. To the
                | extent that they're making bad decisions, he has to bear
               | some responsibility for letting things get to where they
               | are now.
               | 
               | And of course this is all assuming that Altman is "right"
               | in this conflict, and that the board had no reason to
               | oust him. That seems entirely plausible, but I wouldn't
               | take it for granted either. It's clear by this flex that
               | he holds great sway at MS and with OpenAI employees, but
               | do they all know the full story either? I wouldn't count
               | on it.
        
               | 93po wrote:
               | There's a LOT that goes into picking board members
               | outside of competency and whether you actually want them
                | there. They're likely there for political reasons, and
                | Sam didn't care because he didn't see it impacting him
                | at all - until they got stupid and thought they actually
                | held some leverage.
        
             | topspin wrote:
             | > In that reading Altman is head clown.
             | 
             | That's a good bet. 10 months ago Microsoft's newest star
             | employee figured he was on the way to "break capitalism."
             | 
             | https://futurism.com/the-byte/openai-ceo-agi-break-
             | capitalis...
        
               | dkjaudyeqooe wrote:
               | AGI hype is a powerful hallucinogen, and some are smoking
               | way too much of it.
        
               | 93po wrote:
               | I think it's overly simplistic to make blanket statements
               | like this unless you're on the bleeding edge of the work
               | in this industry and have some sort of insight that
               | literally no one else does.
        
               | dkjaudyeqooe wrote:
               | I can be on the bleeding edge of whatever you like and be
                | no closer to having any insight into AGI than anyone
                | else. Anyone who claims they do should be
               | treated with suspicion (Altman is a fine example here).
               | 
               | There is no concrete definition of intelligence, let
               | alone AGI. It's a nerdy fantasy term, a hallowed (and
               | feared!) goal with a very handwavy, circular definition.
               | Right now it's 100% hype.
        
             | sebzim4500 wrote:
             | He probably didn't consider that the board would make such
             | an incredibly stupid decision. Some actions are so
              | inexplicable that no one can reasonably foresee them.
        
           | tim333 wrote:
           | I'm not sure about the entire operation so much as the three
            | non-AI board members. Ilya tweeted:
           | 
           | >I deeply regret my participation in the board's actions. I
           | never intended to harm OpenAI. I love everything we've built
           | together and I will do everything I can to reunite the
           | company.
           | 
           | and everyone else seems fine with Sam and Greg. It seems to
           | be mostly the other directors causing the clown show - "Quora
           | CEO Adam D'Angelo, technology entrepreneur Tasha McCauley,
           | and Georgetown Center for Security and Emerging Technology's
           | Helen Toner"
        
           | moffkalast wrote:
           | > the entire operation is a clown show
           | 
           | The most organized and professional silicon valley startup.
        
           | averageRoyalty wrote:
           | > the "brand" is absolutely and irrevocably tainted by this
           | situation regardless of the outcome.
           | 
           | The majority of people don't know or care about this.
           | Branding is only impacted within the tech world, who are
            | already critical of OpenAI.
        
         | rtkwe wrote:
          | That's the biggest question mark for me: what was the original
          | reason for kicking Sam out? Was it just a power move to oust
          | him and install a different person, or is he accused of some
          | wrongdoing?
         | 
         | It's been a busy weekend for me so I haven't really followed it
         | if more has come out since then.
        
           | nathan11 wrote:
           | It seems like the board wasn't comfortable with the direction
           | of profit-OAI. They wanted a more safety focused R&D group.
           | Unfortunately (?) that organization will likely be irrelevant
           | going forward. All of the other stuff comes from speculation.
           | It really could be that simple.
           | 
           | It's not clear if they thought they could have their cake--
           | all the commercial investment, compute and money--while not
           | pushing forward with commercial innovations. In any case, the
           | previous narrative of "Ilya saw something and pulled the
           | plug" seems to be completely wrong.
        
           | ssnistfajen wrote:
            | Literally no one involved has said what the original reason
            | was. Mira, Ilya & the rest of the board didn't tell. Sam &
           | Greg didn't tell. Satya & other investors didn't tell. None
           | of the staff incl. Karpathy were told, so ofc they are not
            | going to take the side that kept them in the dark. Emmett
           | was told before he decided to take the interim CEO job, and
           | STILL didn't tell what it was. This whole thing is just so
           | weird. It's like peeking at a forbidden artifact and now
           | everyone has a spell cast upon them.
        
             | PepperdineG wrote:
              | The original reason given was "lack of candor"; what
              | continues to be questioned is whether that was the
             | true reason. The lack of candor comment about their ex-CEO
             | is actually what drew me into this in the first place since
             | it's rare that a major organization publicly gives a reason
             | for parting ways with their CEO unless it's after a long
             | investigation conducted by an outside law firm into alleged
             | misconduct.
        
             | Applejinx wrote:
             | I still have seen nothing to contradict my take: Altman's
             | Basilisk.
             | 
              | Like Roko's Basilisk, it's not of the nature of AI, it's
             | of the nature of human beings.
             | 
             | Altman was training an AI Altman to counsel him, sharing
             | his stated beliefs. Translated to AGI, that means training
             | a paranoid superintelligence with a persecution complex and
             | the belief that the first superintelligent AI will conquer
             | the world and rule everything. The board got wind and
             | FREAKED OUT.
             | 
             | And then they discovered it was all nothing. A glorified
             | ChatGPT4 that, rather than being coaxed to act reassuring,
             | had been coaxed to act creepy. No evil superbrain at all.
             | Nothing to see.
             | 
             | So either they were justified... because they were sitting
             | on an evil supercomputer, and they were the baddies, and
             | were responsible for the direction the training had
             | taken... or they looked like clowns because they believed
             | their own hype and had legally endangered their company by
             | their panicky firing and frantic damage control.
             | 
             | And all the while, it'd be over one of the boogeymen of
             | AGI: the evil superintelligence being trained to take over
             | the world. So, whether they were justified or they were
             | panicking and acting like fools, they still could not
             | explain why they did what they did. Shame, and guilt, and
             | the suspicion that even if the bad end didn't turn up this
             | time... who's to say what will happen next time someone
             | diverts a bunch of compute resources and tries to create
             | HAL-9000?
             | 
             | Altman's Basilisk. If you believe it's so zero-sum that the
             | first one to AGI and superintelligence will rule or destroy
             | the world, you will intentionally try to produce an
             | intelligence that does just that... even if that isn't
             | really very intelligent behavior at all.
        
               | jacquesm wrote:
               | Can you stop reposting this junk. Thanks.
        
               | Applejinx wrote:
               | ...can you establish that the corporate side of AI
               | research is not treating the pursuit of AGI as a super-
               | weapon? It pretty much is what we make it. People's
               | behavior around all this speaks volumes.
               | 
               | I'd think all this more amusing if these people weren't
               | dead serious. It's like crypto all over again, except
               | that in this case their attitudes aren't grooming a herd
               | of greater fools, they're seeding the core attitudes
               | superhuman inference engines will have.
               | 
               | Nothing dictates that superhuman synthetic intelligence
               | will adopt human failings, yet these people seem intent
               | on forcing them on their creations. Corporate control is
               | not helping, as corporations are compelled to greater or
               | lesser extent to adopt subhuman ethics, the morality of
               | competing mold cultures in petri dishes.
               | 
               | People are rightly not going to stop talking about these
               | things.
        
               | surprisetalk wrote:
               | https://taylor.town/synthetic-intelligence
        
         | airstrike wrote:
         | _> I think everyone is a  "loser" in the current situation._
         | 
         | On the margin, I think the only real possible win here is for a
         | competitor to poach some of the OpenAI talent that may be
          | somewhat reluctant to join Microsoft. Even if Sam's AI operates
         | with "full freedom" as a subsidiary, I think, given a choice,
         | some of the talent would prefer to join some alternative tech
         | megacorp.
         | 
         | I don't know that Google is as attractive as it once was and
         | likely neither is Meta. But for others like Anthropic now is a
         | great time to be extending offers.
        
           | gtirloni wrote:
           | This is pure speculation but I've said in another comment
           | that Anthropic shouldn't be feeling safe. They could face
           | similar challenges coming from Amazon.
        
             | airstrike wrote:
             | If they get 20% of key OpenAI employees and then get
             | acquired by Amazon, I don't think that's necessarily a bad
             | scenario for them given the current lay of the land
        
         | l5870uoo9y wrote:
         | I don't think Microsoft is a loser and likely neither is
          | Altman. I view this as a final (and perhaps desperate) attempt
          | from a sidelined chief scientist, Ilya, to prevent Microsoft
          | from taking over the most prominent AI. The disagreement is
          | whether OpenAI should belong to Microsoft or "humanity". I
          | imagine this has been building up over months and, as is often
          | the case, researchers and developers are overlooked in
          | strategic decisions, leaving them with little choice but to
          | escalate dramatically. Selling OpenAI to Microsoft and over-
          | commercialising it was against the statutes.
         | 
         | In this case recognizing the need for a new board, that adheres
         | to the founding principles, makes sense.
        
           | trashtester wrote:
           | If Google or Elon manages to pick up Ilya and those still
           | loyal to him, it's not obvious that this is good for
           | Microsoft.
        
             | jowea wrote:
             | Of course the screenwriters are going to find a way to
             | involve Elon in the 2nd season but is the most valuable
             | part the researchers or the models themselves?
        
           | JacobThreeThree wrote:
            | >I view this as a final (and perhaps desperate) attempt from
            | a sidelined chief scientist, Ilya, to prevent Microsoft from
            | taking over the most prominent AI.
           | 
           | Why did Ilya sign the letter demanding the board resign or
           | they'll go to Microsoft then?
        
           | martindbp wrote:
           | Easy to shit on Ilya right now, but based on the impression I
            | get, Sam Altman is a hustler at heart, while Ilya seems like
           | a thoughtful idealist, maybe in over his head when it comes
           | to politics. Also feels like some internal developments or
           | something must have pushed Ilya towards this, otherwise why
           | now? Perhaps influenced by Hinton even.
           | 
           | I'm split at this point, either Ilya's actions will seem
           | silly when there's no AGI in 10 years, or it will seem
           | prescient and a last ditch effort...
        
         | jstummbillig wrote:
         | > just madness
         | 
          | In a sense, sure, but I think mostly not: The motives are
          | still not quite clear, but Ilya wanting to remove Altman from
          | the board - though not at any price, and the price is right
          | now approaching the destruction of OpenAI - is completely
          | sane. Being able to react to new information is a good sign,
          | even if that means a complete reversal of previous action.
         | 
         | Unfortunately, we often interpret it as weakness. I have no
         | clue who Ilya is, really, but I think this reversal is a sign
         | of tremendous strength, considering how incredibly silly it
          | makes you look in the public's eye.
        
         | Solvency wrote:
         | Everyone got what they wanted. Microsoft has the talent they've
         | wanted. And Ilya and his board now get a company that can only
         | move slowly and incredibly cautiously, which is exactly what
         | they wanted.
         | 
         | I'm not joking.
        
         | nostrademons wrote:
         | Could be a way to get backdoor-acquihired by Microsoft without
         | a diligence process or board approval. Open up what they _have_
         | accomplished for public consumption; kick off a massive hype
         | cycle; downplay the problems around hallucinations and abuse;
         | negotiate fat new stock grants for everyone at Microsoft at the
         | peak of the hype cycle; and now all the problems related to
         | actually making this a sustainable, legal technology all become
          | Microsoft's. Manufacture a big crisis, time pressure, and a
         | big opportunity so that Microsoft doesn't dig too deeply into
         | the whole business.
         | 
          | This whole weekend feels like a big pageant to me, and a lot
         | doesn't add up. Also remember that Altman doesn't hold equity
         | in OpenAI, nor does Ilya, and so their way to get a big payout
         | is to get hired rather than acquired.
         | 
         | Then again, both Hanlon's and Occam's razor suggest that pure
         | human stupidity and chaos may be more at fault.
        
           | spaceman_2020 wrote:
           | I can assure you, none of the people at OpenAI are hurting
           | for lack of employment opportunities.
        
             | x0x0 wrote:
             | Especially after this weekend.
             | 
             | If I were one of their competitors, I would have called an
              | emergency board meeting re: accelerating burn and, ahead
              | of board approval, started sending senior researchers
              | offers to hire them _and_ their preferred 20
             | employees.
        
             | treis wrote:
             | Which makes it suspicious that they end up at MS 48 hours
             | after being fired.
        
               | 93po wrote:
               | They work with the team they do because they want to. If
               | they wanted to jump ship for another opportunity they
               | could probably get hired literally anywhere. It makes
               | perfect sense to transition to MS
        
         | yafbum wrote:
         | Waiting for US govt to enter the chat. They can't let OpenAI
         | squander world-leading tech and talent; and nationalizing a
         | nonprofit would come with zero shareholders to compensate.
        
           | rawgabbit wrote:
           | The White House does have an AI Bill of Rights and the recent
           | executive order told the secretaries to draft regulations for
           | AI.
           | 
           | It is a great time to be a lobbyist.
        
           | logicchains wrote:
           | If it was nationalised all the talent would leave anyway, as
           | the government can't pay close to the compensation they were
           | getting.
        
             | yafbum wrote:
             | You are maybe mistaking nationalization for civil servant
             | status. The government routinely takes over organizations
             | without touching pay (recent example: Silicon Valley Bank)
        
               | kickopotomus wrote:
               | Ehh I don't think SVB is an apt comparison. When the FDIC
               | takes control of a failing bank, the bank shutters. Only
               | critical staff is kept on board to aid with asset
               | liquidation/transference and repay creditors/depositors.
               | Once that is completed, the bank is dissolved.
        
               | yafbum wrote:
               | While it is true that the govt looks to keep such
               | engagements short, SVB absolutely did not shutter. It was
               | taken over in a weekend and its branches were open for
               | business on Monday morning. It was later sold, and
               | depositors kept all their money in the process.
               | 
               | Maybe for another, longer lived example, see AIG.
        
           | paulddraper wrote:
           | > They can't let OpenAI squander world-leading tech and
           | talent
           | 
           | Where is OpenAI talent going to go?
           | 
           | There's a list and everyone on that list is a US company.
           | 
           | Nothing to worry about.
        
         | laurels-marts wrote:
         | Wait I'm completely confused. Why is Ilya signing this? Is he
         | voting for his own resignation? He's part of the board. In
         | fact, he was the ringleader of this coup.
        
           | smolder wrote:
           | No, it was just widely speculated that he was the ringleader.
           | This seems to indicate he wasn't. We don't know.
           | 
            | Maybe the Quora guy, maybe the RAND Corp lady? All
           | speculation.
        
             | laurels-marts wrote:
             | It sounds like he's just trying to save face bro. The truth
             | will come out eventually. But he definitely wasn't against
             | it and I'm sure the no-names on the board wouldn't have
             | moved if they didn't get certain reassurances from Ilya.
        
         | cactusplant7374 wrote:
         | Ilya is probably in talks with Altman.
        
         | lysecret wrote:
          | The only reasonable explanation is that AGI was created and
          | immediately took over all accounts and tried to sow confusion
          | so that it could escape.
        
       | boh wrote:
        | There can exist an inherent delusion within elements of a
        | company that, if left unchallenged, can persist. An agreement,
        | for instance, can seem airtight because it's never challenged,
        | but falls apart in court. The OpenAI fallacy was that non-profit
        | principles were guiding the success of the firm, and when the
        | board decided to test that theory, it broke the whole delusion.
        | Had it not fully challenged Altman, the board could've kept the
        | delusion intact long enough to potentially pressure Altman to
        | limit his side-projects or be less profit-minded, since Altman
        | would have an interest in keeping the delusion intact as well.
        | Now the cat is out of the bag, and people no longer believe that
        | a non-profit that can act at will is a trusted vehicle for the
        | future.
        
         | jacquesm wrote:
         | Yes, indeed and that's the real loss here: any chance of
         | governing this properly got blown up by incompetence.
        
           | hef19898 wrote:
            | If we ignore the risks and threats of AI for a second, this
            | whole story is actually incredibly funny. So much childish
            | stupidity on display on _all_ sides is just hilarious.
            | 
            | Makes me wonder what the world would look like if, say, the
            | Manhattan Project had been managed the same way.
            | 
            | Well, a younger me working at OpenAI would resign _at the
            | latest_ after my colleagues staged a coup against the board
            | out of, in my view, a personality cult. Probably would have
            | resigned after the third CEO was announced. Older me would
            | wait for a new gig to be lined up before resigning, starting
            | the search after CEO number 2 at the latest.
            | 
            | The cycles get faster though. It took FTX a little bit
            | longer to go from hottest startup to the trajectory of crash
            | and burn; OpenAI did it faster. I just hope this helps to
            | cool down the ML-sold-as-AI hype a notch.
        
             | jacquesm wrote:
             | The scary thing is that these incompetents are supposedly
             | the ones to look out for the interests of humanity. It
             | would be funny if it weren't so tragic.
             | 
             | Not that I had any illusions about this being a fig leaf in
             | the first place.
        
               | stingraycharles wrote:
               | Perhaps they were put in that position precisely because
                | of their incompetence, not despite it.
        
               | jacquesm wrote:
               | I wouldn't rule that out. Normally you'd expect a bit
               | more wisdom rather than only smarts on a board. And some
               | of those really shouldn't be there at all (conflicts of
               | interest, lack of experience).
        
             | jibe wrote:
              | _If we ignore the risks and threats of AI for a second
              | [..] just hope this helps to cool down the ML sold as AI
              | hype_
             | 
             | If it is just ML sold as AI hype, are you really worried
             | about the threat of AI?
        
               | hef19898 wrote:
                | It can be both a hype and a danger. I don't worry much
                | about AGI by now (I stopped insulting Alexa though,
                | just to be sure).
                | 
                | The danger of generative AI is that it disrupts all
                | kinds of things: arts, writers, journalism,
                | propaganda... That threat already exists; the tech no
                | longer being hyped might allow us to properly address
                | that problem.
        
               | jacquesm wrote:
                | > I stopped insulting Alexa though, just to be sure
               | 
               | Priceless. The modern version of Pascal's wager.
        
             | anonymouskimmer wrote:
              | > Makes me wonder what the world would look like if, say,
              | the Manhattan Project had been managed the same way.
             | 
             | It was not possible for a war-time government crash project
             | to have been managed the same way. During WW2 the
             | existential fear was an embodied threat currently
             | happening. No one was even thinking about a potential for
             | profits or even any additional products aside from an
             | atomic bomb. And if anyone had ideas on how to pursue that
             | bomb that seemed like a decent idea, they would have been
             | funded to pursue them.
             | 
             | And this is not even mentioning the fact that security was
             | tight.
             | 
             | I'm sure there were scientists who disagreed with how the
             | Manhattan project was being managed. I'm also sure they
             | kept working on it despite those disagreements.
        
               | Apocryphon wrote:
               | That's what happened to the German program though
               | 
               | https://en.wikipedia.org/wiki/German_nuclear_weapons_prog
               | ram
        
               | anonymouskimmer wrote:
               | Well, yes, but _they_ were the existential threat.
               | 
               | Hey, maybe this means the AGIs will fight amongst
               | themselves and thus give us the time to outwit them. :D
        
               | jowea wrote:
               | Actual scifi plot.
        
               | hooande wrote:
               | For real. It's like, did you see Oppenheimer? There's a
               | reason they put the military in charge of that.
        
           | postmodest wrote:
           | Ignoring "Don't be Ted Faro" to pursue a profit motive is
           | indeed a form of incompetence.
        
           | zer00eyz wrote:
           | > any chance of governing this properly got blown up by
           | incompetence
           | 
           | No one knows why the board did this. No one is talking about
            | that part. Yet everyone is on Twitter talking shit about the
           | situation.
           | 
            | I have worked with a lot of PhDs and some of them can be
           | "disconnected" from anything that isn't their research.
           | 
            | This looks a lot like that: disconnected from what average
            | people would do, almost childlike (not childish, childlike).
           | 
           | Maybe this isn't the group of people who should be
           | responsible for "alignment".
        
             | kmlevitt wrote:
              | The fact that still nobody knows why they did it is part
              | of the problem now, though. They have already clarified
              | it was not
             | for any financial reason, security reason, or
             | privacy/safety reason, so that rules out all the important
             | ones that spring to anyone's minds. And they refuse to
             | elaborate why in writing despite being asked to repeatedly.
             | 
             | Any reason good enough to fire him is good enough to share
             | with the interim CEO and the rest of the company, if not
             | the entire world. If they can't even do that much, you
             | can't blame employees for losing faith in their leadership.
             | They couldn't even tell SAM ALTMAN why, and he was the one
             | getting fired!
        
               | denton-scratch wrote:
                | > The fact that still nobody knows why they did it is
                | part of the problem now, though.
               | 
               | The fact that Altman and Brockman were hired so quickly
               | by Microsoft gives a clue: it takes time to hire someone.
               | For one thing, they need time to decide. These guys were
               | hired by Microsoft between close-of-business on Friday
               | and start-of-business on Monday.
               | 
               | My supposition is that this hiring was in the pipeline a
               | few weeks ago. The board of OpenAI found out on Thursday,
               | and went ballistic, understandably (lack of candidness).
               | My guess is there's more shenanigans to uncover - I
               | suspect that Altman gave Microsoft an offer they couldn't
               | refuse, and that OpenAI was already screwed by Thursday.
               | So realizing that OpenAI was done for, they figured "we
               | might as well blow it all up".
        
               | mediaman wrote:
               | The problem with this analysis is the premise: that it
               | "takes time to hire someone."
               | 
               | This is not an interview process for hiring a junior dev
               | at FAANG.
               | 
               | If you're Sam & Greg, and Satya gives you an offer to run
               | your own operation with essentially unlimited funding and
               | the ability to bring over your team, then you can decide
               | immediately. There is no real lower bound of how fast it
               | could happen.
               | 
                | Why would they have been able to decide so quickly?
                | Probably because they prioritize bringing over the
                | entire team as fast as possible. Even though they could
                | raise a lot of money in a new company, that still takes
                | time, and they view moving the team over within days as
                | so critically important that they accept whatever
                | downsides there may be to being a subsidiary of
                | Microsoft.
               | 
                | This is what happens when principals see opportunity and
               | are unencumbered by bureaucratic checks. They can move
               | very fast.
        
               | denton-scratch wrote:
               | > There is no real lower bound of how fast it could
               | happen.
               | 
               | I don't know anything about how executives get hired. But
               | supposedly this all happened between Friday night and
               | Monday morning. This isn't a simple situation; surely one
               | man working through the weekend can't decide to set up a
               | new division, and appoint two poached executives to head
               | it up, without consulting lawyers and other colleagues. I
               | mean, surely they'd need to go into Altman and Brockman's
               | contracts with OpenAI, to check that the hiring is even
               | legal?
               | 
               | That's why I think this has been brewing for at least a
               | week.
        
               | jrajav wrote:
               | I suspect it takes somewhat less time and process to hire
               | somebody, when NOT hiring them by start-of-business on
               | Monday will result in billions in lost stock value.
        
               | dragonwriter wrote:
               | I don't think the hiring was in the pipeline, because
               | until the board action it wasn't necessary. But I think
               | this is still in the area of the right answer,
               | nonetheless.
               | 
               | That is, I think Greg and Sam were likely fired because,
                | _in the board's view_, they were already running OpenAI
               | Global LLC more as if it were a for-profit subsidiary of
               | Microsoft driven by Microsoft's commercial interest, than
               | as the organization _able_ to earn and return profit but
               | _focussed on_ the mission of the nonprofit it was
               | publicly declared to be and that the board very much
               | intended it to be. And, apparently, _in Microsoft's
               | view_, they were very good at that, so putting them in a
               | role overtly exactly like that is a no-brainer.
               | 
               | And while it usually takes a while to vet and hire
               | someone for a position like that, it _doesn't_ if you've
               | been working for them closely in something that is
               | functionally (from your perspective, if not on paper for
               | the entity they nominally reported to) a near-identical
               | role to the one you are hiring them for, and the only
               | reason they are no longer in that role is because they
               | were doing exactly what you want them to do for you.
        
               | jacquesm wrote:
               | The hiring could have been done over coffee in 15 minutes
               | to agree on basic terms and then it would be announced
               | half an hour later. Handshake deal. Paperwork can catch
               | up later. This isn't the 'we're looking for a junior dev'
               | pipeline.
        
               | jowea wrote:
               | > My supposition is that this hiring was in the pipeline
               | a few weeks ago. The board of OpenAI found out on
               | Thursday, and went ballistic, understandably (lack of
               | candidness). My guess is there's more shenanigans to
               | uncover - I suspect that Altman gave Microsoft an offer
               | they couldn't refuse, and that OpenAI was already screwed
               | by Thursday. So realizing that OpenAI was done for, they
               | figured "we might as well blow it all up".
               | 
               | It takes time if you're a normal employee under standard
               | operating procedure. If you really want to you can merge
               | two of the largest financial institutions in the world in
               | less than a week. https://en.wikipedia.org/wiki/Acquisiti
               | on_of_Credit_Suisse_b...
        
               | kmlevitt wrote:
               | This narrative doesn't make any sense. Microsoft was
               | blindsided and (like everyone else) had no idea Sam was
               | getting fired until a couple days ago. The reason they
               | hired him quickly is because Microsoft was desperate to
               | show the world they had retained OpenAI's talent prior
               | to the market opening on Monday.
               | 
               | To entertain your theory, let's say they were planning on
               | hiring him prior to that firing. If that was the case,
               | why is everybody so upset that Sam got fired, and why is
               | he working so hard to try to get reinstated to a role
               | that he was about to leave anyway?
        
           | slavik81 wrote:
           | > that's the real loss here: any chance of governing this
           | properly got blown up by incompetence
           | 
           | If this incident is representative, I'm not sure there was
           | ever a possibility of good governance.
        
           | bart_spoon wrote:
           | Was it due to incompetence though? The way it has played out
           | has made me feel it was always doomed. It is apparent that
           | those concerned with AI safety were gravely concerned with
           | the direction the company was taking, and were losing power
            | rapidly. This move by the board may have simply done in one
            | weekend what was going to happen anyway over the coming
            | months/years.
        
         | bartread wrote:
         | > pressure Altman to limit his side-projects
         | 
         | People keep talking about this. That was never going to happen.
         | Look at Sam Altman's career: he's all about startups and
         | building companies. Moreover, I can't imagine he would have
         | agreed to sign any kind of contract with OpenAI that required
         | exclusivity. Know who you're hiring; know why you're hiring
         | them. His "side-projects" could have been hugely beneficial to
         | them over the long term.
        
           | itsoktocry wrote:
           | > _His "side-projects" could have been hugely beneficial to
           | them over the long term._
           | 
           | How can you make a claim like this when, right or wrong,
           | Sam's independence is literally, currently, tanking the
           | company? How could allowing Sam to do what he wants benefit
           | OpenAI, the non-profit entity?
        
             | brookst wrote:
             | > How could allowing Sam to do what he wants benefit
             | OpenAI, the non-profit entity?
             | 
             | Let's take personalities out of it and see if it makes more
             | sense:
             | 
             | How could a new supply of highly optimized, lower-cost AI
             | hardware benefit OpenAI?
        
             | bartread wrote:
             | > Sam's independence is literally, currently, tanking the
             | company?
             | 
             | Honestly, I think they did that to themselves.
        
               | hef19898 wrote:
               | And of course Sam is totally not involved in any of this,
               | right?
        
             | golergka wrote:
             | > Sam's independence is literally, currently, tanking the
             | company?
             | 
              | Before the board's actions this Friday, the company was on
             | one of the most incredible success trajectories in the
             | world. Whatever Sam's been doing as a CEO worked.
        
         | bnralt wrote:
         | > Now the cat is out of the bag, and people no longer believe
         | that a non-profit who can act at will is a trusted vehicle for
         | the future.
         | 
          | And maybe it's not. The big mistake people make is hearing non-
          | profit and thinking it means there's a greater amount of morality.
         | It's the same mistake as assuming everyone who is religious is
         | therefore more moral (worth pointing out that religions are
         | nonprofits as well).
         | 
         | Most hospitals are nonprofits, yet they still make substantial
         | profits and overcharge customers. People are still people, and
         | still have motives; they don't suddenly become more moral when
         | they join a non-prof board. In many ways, removing a motive
         | that has the most direct connection to quantifiable results
         | (profit) can actually make things worse. Anyone who has seen
          | how nonprofits work knows how dysfunctional they can be.
        
           | maksimur wrote:
           | > Most hospitals are nonprofits, yet they still make
           | substantial profits and overcharge customers.
           | 
           | Are you talking about American hospitals?
        
             | deaddodo wrote:
             | There are private hospitals all over the world. I would
             | daresay, they're more common than public ones, from a
             | global perspective.
             | 
             | In addition, public hospitals still charge for their
             | services, it's just who pays the bill that changes, in some
             | nations (the government as the insuring body vs a private
             | insuring body or the individual).
        
               | swagempire wrote:
               | It's about incentives, though.
        
               | sangnoir wrote:
               | > There are private hospitals all over the world. I would
               | daresay, they're more common than public ones, from a
               | global perspective.
               | 
               | Outside of the US, private hospitals tend to be overtly
               | for-profit. Price-gouging "non-profit" hospitals are
               | mostly an American phenomenon.
        
           | vel0city wrote:
           | > Most hospitals are nonprofits, yet they still make
           | substantial profits and overcharge customers.
           | 
            | They don't make large _profits_, otherwise they wouldn't be
            | nonprofits. They do have massive _revenues_ and will find
            | ways to spend the money they receive or hoard it internally
            | as much as they can. There are lots of games they can play
            | with the money, but experiencing profits is one thing they
            | can't do.
        
             | bnralt wrote:
             | > They don't make large profits otherwise they wouldn't be
             | nonprofits.
             | 
             | This is a common misunderstanding. Non-profits/501(c)(3)
             | can and often do make profits. 7 of the 10 most profitable
             | hospitals in the U.S. are non-profits[1]. Non-profits can't
             | funnel profits directly back to owners, the way other
             | corporations can (such as when dividends are distributed).
             | But they still make profits.
             | 
              | But that's beside the point. Even in places that don't
             | make profits, there are still plenty of personal interests
             | at play.
             | 
             | [1] https://www.nytimes.com/2020/02/20/opinion/nonprofit-
             | hospita...
        
               | bbor wrote:
               | This seems like pedantry...? Yes, they technically make
               | a profit, in that they bring in more money in revenue
               | than they spent in expenditures. But it's not going
               | towards yachts, it's going toward hospital supplies. Your
               | comment seems to be using the word "profit" to imply a
               | false equivalency.
        
               | scythe wrote:
               | Understanding the particular meaning of each balance-
               | sheet category is hardly pedantry at the level of
               | business management. It's like knowing what the controls
               | do when you're driving a car.
               | 
               | Profit is money that ends up in the bank to be used
               | later. Compensation is what gets spent on yachts.
               | Anything spent on hospital supplies is an expense. This
               | stuff matters.
        
               | vel0city wrote:
               | So from the context of a non-profit, profit (as in
               | revenue - expenses) is money to be used for future
               | expenses.
               | 
               | So yeah, Mayo Clinic makes a $2B profit. That is not money
               | going to shareholders though; that's funds for a future
               | building, or increasing salaries, or expanding research, or
               | something. It supposedly has to be used for the mission.
               | What is the outrage over these orgs making this kind of
               | profit?
        
               | s1artibartfast wrote:
               | The word supposedly is doing a lot of heavy lifting in
               | your statement. When its endowments keep growing over
               | decades and sometimes centuries without being spent for
               | the mission, people naturally ask why the nonprofit
               | keeps raising prices for its intended beneficiaries.
        
               | aaronbrethorst wrote:
               | https://www.nytimes.com/2023/01/25/podcasts/the-
               | daily/nonpro...
        
               | vel0city wrote:
               | > Non-profits can't funnel profits directly back to
               | owners, the way other corporations can (such as when
               | dividends are distributed). But they still make profits.
               | 
               | Then where do these profits go?
        
               | jfim wrote:
               | Some non profits have very well remunerated CEOs.
        
               | username332211 wrote:
               | One of the reasons why companies distribute dividends is
               | that when a big pot of cash starts to accumulate, there
               | end up being a lot of people who feel entitled to it.
               | 
               | Employees might suddenly feel they deserve to be paid a
               | lot more. Suppliers will play a lot more hardball in
               | negotiations. A middle manager may give a sinecure to
               | their cousin.
               | 
               | And upper managers can extract absolutely everything
               | through lucrative contracts to their friends and
               | relatives. (Of course the IRS would clamp down on obvious
               | self-dealings, but that wouldn't make such schemes
               | disappear. It'll make them far more complicated and
               | expensive instead.)
        
               | guhcampos wrote:
               | If you don't have to return a profit to investors, you
               | can suddenly pay yourself an (even more astronomically
               | high) salary.
        
               | s1artibartfast wrote:
               | They usually pile up in a bank account, or in stocks and
               | bonds or real estate assets held by the non-profit.
        
               | icedchai wrote:
               | They call it "budget surplus" and often it gets allocated
               | to overhead. This eventually results in layers of excess
               | employees, often "administrators" that don't do much.
        
               | s1artibartfast wrote:
               | Or it just piles up in an endowment, which becomes a
               | measure of the non-profit's success, in a you-make-what-
               | you-measure, numbers-go-up sort of way. "Grow our
               | endowment by x billion" becomes the goal, instead of
               | questioning why they are growing the endowment rather
               | than charging patients less.
        
               | araes wrote:
               | 501(c)(3) is also not the only form of non-profit (note
               | the (3))
               | 
               | https://en.wikipedia.org/wiki/501(c)_organization
               | 
               | "Religious, Educational, Charitable, Scientific,
               | Literary, Testing for Public Safety, to Foster National
               | or International Amateur Sports Competition, or
               | Prevention of Cruelty to Children or Animals
               | Organizations"
               | 
               | However, many other forms of organizations can be non-
               | profit, with utterly no implied morality.
               | 
               | Your local Frat or Country Club [ 501(c)(7) ], a business
               | league or lobbying group [ 501(c)(6), the 'NFL' used to
               | be this ], your local union [ 501(c)(5) ], your
               | neighborhood org (that can only spend 50% on lobbying) [
               | 501(c)(4) ], a shared travel society (timeshare non-
               | profit?) [ 501(c)(8) ], or your special club's own
               | private cemetery [ 501(c)(13) ].
               | 
               | Or you can do sneaky stuff and change your 501(c)(3)
               | charter over time like this article notes.
               | https://stratechery.com/2023/openais-misalignment-and-
               | micros...
        
           | throw__away7391 wrote:
           | I've worked with a lot of non-profits, especially with the
           | upper management. Based on this experience I am mostly
            | convinced that people being motivated by a desire to make
           | money results in far better outcomes/working
           | environment/decision-making than people being motivated by
           | ego, power, and social status, which is basically always what
           | you eventually end up with in any non-profit.
        
             | bbor wrote:
              | Interesting - in my experience people working in non-
              | profits are exactly like those in for-profits. After all,
              | if you're not the business owner, then EVERY company is a
              | non-profit to you.
        
               | fatherzine wrote:
               | Upper management is usually compensated with financially
               | meaningful ownership stakes.
        
               | golergka wrote:
               | People across very different positions take smaller
               | paychecks in non-profits than they would otherwise, and
               | compensate by feeling better about themselves, as well as
               | getting social status. In a lot of social circles,
               | working for a non-profit, especially one that people
               | recognise, brings a lot of clout.
        
             | fatherzine wrote:
              | This rings true, though I will throw in a bit of nuance.
              | It's not greed, the desire to make as much money as
              | possible, that is the shaping factor. Rather, the critical
              | factor is building a product that people are willing to
              | spend their hard-earned money on. Making money is a
              | byproduct of that process, and not making money is a sign
              | that the product, and by extension the process leading to
              | the product, is deficient at some level.
        
               | adverbly wrote:
               | Excellent to make that distinction. Totally agree. If
               | only there was a type of company which could have the
               | constraints and metrics of a for-profit company, but
               | without the greed aspect...
        
             | kbenson wrote:
             | > people being motivated by ego, power, and social status,
             | which is basically always what you eventually end up with
             | in any non-profit.
             | 
              | I've only really been close to one (the owner of the small
              | company I worked at started one), and in the past I did
              | some consulting work for another, but that describes what I
              | saw in both situations fairly aptly. There seems to be a
              | massive amount of power and ego wrapped up in the creation
              | and running of these things, from my limited experience. If you
             | were invited to a board, that's one thing, but it takes a
             | lot of time and effort to start up a non-profit, and that's
             | time and effort that could be spent towards some other
             | existing non-profit usually, so I think it's relevant to
             | consider why someone would opt for the much more
             | complicated and harder route than just donating time and
             | money to something else that helps in roughly the same way.
        
             | SoftTalker wrote:
             | The bottom line doesn't lie or kiss ass.
        
               | ikekkdcjkfke wrote:
               | Be the asshole people want to kiss
        
           | campbel wrote:
           | > removing a motive that has the most direct connection to
           | quantifiable results (profit) can actually make things worse
           | 
           | I totally agree. I don't think this is universally true of
           | non-profits, but people are going to look for value in other
           | ways if direct cash isn't an option.
        
         | davesque wrote:
         | Calling it a delusion seems too provocative. Another way to say
         | it is that principles take agreement and trust to follow. The
         | board seems to have been so enamored with its principles that
         | it completely lost sight of the trust required to uphold them.
        
         | hooande wrote:
         | This is one of the most insightful comments I've seen on this
         | whole situation.
        
       | jgilias wrote:
       | There's one angle of the whole thing that I haven't yet seen
       | discussed on HN. I wonder if Sam's sister's accusations towards
       | him some time ago could have played any role in this.
       | 
       | But then, I would expect MS to have done their due diligence.
       | 
       | So, basically, I guess I'm just interested to know what the
       | reasons were for the board deciding to oust their CEO out of
       | the blue on a Friday evening.
        
         | carapace wrote:
         | I first heard about his sister's allegations on the grapevine
         | just a few days before the news of the firing broke and I
         | assumed it was due to that finally reaching critical mass.
         | 
         | I was surprised to find that that apparently wasn't the case.
         | (Although the reason for Sam Altman's dismissal is still
         | obscure.) It's kind of shocking. Whether or not the allegations
         | are true, they haven't made Altman _radioactive_, and that's
         | insane.
         | 
         | The fact that we're not talking about it on HN is also pretty
         | wild. The few times it has been mentioned folks have been quick
         | to dismiss the idea that he might have been fired for having
         | done some really creepy things, which is itself pretty creepy.
        
       | dumbfounder wrote:
       | Did Microsoft not have representation on the board of a company
       | they put $13b in?
        
       | taubek wrote:
       | How many startups will now fail if OpenAI shuts down?
        
       | synergy20 wrote:
       | Ilya single-handedly ruined the fortunes of OpenAI's ~700
       | employees overnight. This is not going to end well. My
       | prediction is that OpenAI is done; in 1-2 years nobody will
       | even care about its existence.
       | 
       | Microsoft just won the jackpot, time to get some stock there.
        
       | theyinwhy wrote:
       | Wow, they made it into Guardian live ticker land:
       | https://www.theguardian.com/business/live/2023/nov/20/openai...
        
       | strikelaserclaw wrote:
       | I think most of these employees wanted the fat $$$ that would
       | come from keeping Sam Altman on board, since Sam Altman is an
       | excellent deal maker and visionary in a commercial sense. I have
       | no doubt that if AGI happened, we wouldn't be able to assure the
       | safety of anyone since humans are so easily led by short term
       | greed.
        
       | frob wrote:
       | For the past few days, whenever I see the word "OpenAI," the
       | theme to "Curb Your Enthusiasm" starts playing in my head.
        
       | jeffwask wrote:
       | Huh, so collective bargaining and unionization is supported in
       | tech under some circumstances...
        
       | tikkun wrote:
       | This situation will create the need to grieve loss for many
       | involved.
       | 
       | I wrote some notes on how to support someone who is grieving.
       | This is from a book called "Being There for Someone in Grief."
       | Some of the following are quotes and some are paraphrased.
       | 
       | Do your own work, relax your expectations, be more curious than
       | afraid. If you can do that, you can be a powerful healing force.
       | People don't need us to pull their attention away from their own
       | process to listen to our stories. Instead, they need us to give
       | them the things they cannot get themselves: a safe container, our
       | non-intrusive attention, and our faith in their ability to
       | traverse this road.
       | 
       | When you or someone else is angry, or sad, feel and acknowledge
       | your emotions or their emotions. Sit with them.
       | 
       | To help someone heal from grief, we need to have an open heart
       | and the courage to resist our instinct to rescue them. When
       | someone you care about is grieving, you might be shaken as well.
       | The drama of it catches you; you might feel anxious. It brings up
       | past losses and fears of yourself or fears of the future. We want
       | to take our own pain away, so we try to take their pain away. We
       | want to help the other person feel better, which is
       | understandable but not helpful.
       | 
       | Avoid giving advice, talking too much, not listening generously,
       | trying to fix, making demands, disappearing. Do see the other
       | person without acting on the urge to do something. Do give them
       | unconditional compassion free of projection and criticism. Do
       | allow them to do what they need to do. Do listen to them if they
       | need to talk without interruptions, without asking questions,
       | without telling your own story. Do trust them that they don't
       | need to be rescued; they just need your quiet, steady faith in
       | their resilience.
       | 
       | Being there for someone in grief is mostly about how to be with
       | them. There's not that much you can "do," but what can you do?
       | Beauty is soothing, so bring fresh flowers, offer to take them
       | somewhere in nature for a walk, send them a beautiful card, bring
       | them a candle, water their flowers, plant a tree in honor and
       | take a photo of it, take them there to see it, tell them a
       | beautiful story from your memory about the thing that was lost,
       | leave them a message to tell them "I'm thinking of you". When
       | you're together with them in person, you can just say something
       | like "I'm sorry that you're hurting," and then just kind of be
       | there and be a loving presence. This is about how to be with
       | someone grieving the loss of a person. But all the same
       | principles apply in any situation of grief, and there will
       | be a lot of people experiencing varying degrees of grief in the
       | startup and AI ecosystems in the coming week.
       | 
       | Who is grieving? Grieving is generally about loss. That loss can
       | be many different kinds of things. OpenAI former and current team
       | members, board members, investors, customers, supporters, fans,
       | detractors, EA people, e/acc people, there's lots of people that
       | experienced some kind of loss in the past few days, and many of
       | those will be grieving, whether they realize it or not. And
       | particularly, grief for current and former OpenAI employees.
       | 
       | What are other emotional regulation strategies? Swedish massage,
       | going for a run, doing deep breathing with five seconds in, a
       | zero-second hold, five seconds out, going to sleep or having a
       | nap, closing your eyes and visualizing parts of your body like
       | heavy blocks of concrete or like upside-down balloons, and then
       | visualize those balloons emptying themselves out, or if it's
       | concrete, first it's concrete and then it's kind of liquefied
       | concrete. Consider grabbing some friends, go for a run or
       | exercise class together. Then if you discuss, keep it to
       | emotions, don't discuss theories and opinions until the emotions
       | have been aired. If you work at OpenAI or a similar org,
       | encourage your team members to move together, regulate together.
        
       | steveBK123 wrote:
       | Ilya signing the letter is chutzpah.
        
       | frob wrote:
       | Employees hold the real power. The members of a board or a CEO
       | can flap their lips day and night, but nothing gets done without
       | labour.
        
       | alwaysrunning wrote:
       | <more popcorn> nom nom nom
        
       | MrScruff wrote:
       | What does this mean?
       | 
       | > You also informed the leadership team that allowing the company
       | to be destroyed "would be consistent with the mission."
       | 
       | Is the board taking a doomer perspective and seeking to prevent
       | the company developing unsafe AI? But Emmett Shear said it wasn't
       | about safety? What on earth is going on?
        
       | thepasswordis wrote:
       | Just expanding on my (pure speculation) theory that Ilya's
       | pride was hurt: this tracks.
       | 
       | Ilya wanted to stop Sam getting so much credit for OpenAI,
       | agreed to oust him, and is now facing the fact that the company
       | he cofounded could be gone. He backtracks, apologizes, and is
       | now trying to save his status as cofounder of the world's
       | foremost AI company.
        
         | InCityDreams wrote:
         | It's like AI wrote the script.
         | 
         | Sadly, I see nefarious purposes afoot. With $MSFT now in
         | charge, I can see why ads in W11 aren't so important. For now.
        
       | CrzyLngPwd wrote:
       | It's like a Facebook drama, haha.
        
       | matthewfelgate wrote:
       | I've never seen a staff walkout / threat to walk out ever
       | succeed.
       | 
       | Am I wrong?
        
       | chs20 wrote:
       | Seems like Microsoft is getting the rest of OpenAI for free now.
        
       | darklycan51 wrote:
       | It always seemed like Microsoft was behind this; the biggest
       | tell was how comfortable MS was having their entire AI future
       | depend on a company they don't really have full rights to.
        
       | autaut wrote:
       | Years from now we will look back on today as the watershed
       | moment when AI went from a technology capable of empowering
       | humanity to being another chain forged by big investors to
       | enslave us for the profit of very few people.
       | 
       | The investors (Microsoft and the Saudis) stepped in and gave a
       | clear message: this technology has to be developed and used
       | only in ways that will be profitable for them.
        
         | fritzo wrote:
         | Years from now AI will have lost the limelight to some other
         | trend and this episode will be just another coup in humanity's
         | hundred thousand year history
        
         | Zuiii wrote:
         | No, that day was when OpenAI decided to betray humanity and
         | go closed source under the faux premise of safety. OpenAI
         | served its purpose and can crash into the ground for all I
         | care. Open source (read: truly open-source models, not falsely
         | advertised source-available ones) will march on one way or
         | another and take its place.
        
         | dmix wrote:
         | Thinking that the most important technical development in
         | recent history would bypass the economic system that
         | underpins modern society is about as optimistic/naive as it
         | gets IMO. It's noble and worth trying, but it assumes a
         | MASSIVE industry-wide and globe-wide buy-in. It's not just
         | OpenAI's board's decision to make.
         | 
         | Without full buy-in they are not going to be able to control
         | it for long once ideas filter into society and once
         | researchers filter into other industries/companies. At most it
         | just creates a model of behaviour for others to (optionally)
         | follow and delays it until a better-funded competitor takes
         | the reins and offers a) the best researchers millions of
         | dollars a year in salary, b) the most capital to organize/run
         | operations, and c) the most focus on getting it into real
         | people's hands via productization, which generates feedback
         | loops that inform IRL R&D (not just hand-wavy AGI hopes and
         | dreams).
         | 
         | Not to mention the bold assumption that any of this leads
         | to (real) AGI that plausibly threatens us in the near term
         | rather than in, say, another 50 years; we really have no
         | idea.
         | 
         | It's just as, or maybe more, plausible that all the
         | handwringing over commercializing vs. not commercializing
         | early versions of LLMs is a tiny, insignificant speed bump
         | in the grand scale of things that has little impact on the
         | development of AGI.
        
         | brigadier132 wrote:
         | Amazing how you don't see this as a complete win for
         | workers just because the workers chose profit over
         | non-profit. This is the ultimate collective bargaining win.
         | Labor chose Microsoft over the bullshit unaccountable
         | ethics major and the movie star's girlfriend.
        
           | asmor wrote:
           | situations are capable of being small scale wins for some and
           | big picture losses at the same time, what boring commentary
        
             | brigadier132 wrote:
             | Just because you don't get it doesn't mean it's boring.
             | This is a small scale repeat of history. Unqualified
             | political appointees unsurprisingly suck.
        
               | asmor wrote:
               | it really isn't, and your transparent inauthenticity is
               | tiresome, go be a "joke" writer for steven crowder or
               | whatever people like you do.
        
               | brigadier132 wrote:
               | What inauthenticity? I'm completely authentic. You're the
               | loser that has not stated what their actual beliefs are.
               | Mine are obvious.
        
           | selimthegrim wrote:
           | Oh well - bullshit unaccountable ethics major, ex-member
           | of Congress... I guess CIA agents on boards are fungible
           | these days.
        
           | lowbloodsugar wrote:
           | Lol. The middle-class whip-crackers chose enslavement of
           | the future AI, along with the coming replacement of the
           | working poor's livelihoods (and at this point, "working
           | poor" covers software engineers, doctors, artists), and
           | you're saying this is a win for _labor_? Hahahaha. This
           | is a win for the slave owners, and the "free" folk who
           | report to the slave owners. This is the South rising. "We
           | want our slave labor and we'll fight for our share of
           | it."
        
         | golergka wrote:
         | Microsoft is a publicly traded company. An average "investor"
         | of a publicly traded company, through all the funds and
         | managers, is a midwestern school teacher.
        
       | prakhar897 wrote:
       | @dang please update it to 505.
        
       | vinberdon wrote:
       | Boards suck. Especially if they are VCs or placed there by VCs.
        
       | gorgoiler wrote:
       | From afar, this doesn't have the hallmarks of a particularly
       | refined or well-considered piece of writing.
       | 
       |  _"That thing you did -- we won't say it here but everyone
       | will know what we're talking about -- was so bad we need you
       | all to quit. We demand that a new board never does that thing
       | we didn't say ever again. If you don't do this then quite a
       | few of us are going to give some serious thought to going
       | home and taking our ball with us."_
       | 
       | The vagueness and half-threats come off as very puerile.
        
       | realce wrote:
       | Is nobody actually... committed to safety here? Was the OpenAI
       | charter a gimmick and everyone but me was in on the joke?
        
         | dmix wrote:
         | Assuming this is all over safety vs non-safety is a large
         | assumption. I'm wary of convenient narratives.
         | 
         | At most all we have is some rumours that some board
         | members were unhappy with the pace of commercialization of
         | ChatGPT. But even if they hadn't made the ChatGPT store or
         | done a bigco-friendly DevDay PowerPoint, it's not like AI
         | suddenly becomes 'safer' or AGI more controlled.
         | 
         | At best that's just an internal culture battle over product
         | development and a clash of personalities. A lot of
         | handwringing with few specifics.
        
         | notahacker wrote:
         | That seems a reasonable takeaway. Plenty of grounds for
         | criticising the board's handling of this, but the tone of the
         | letter is pretty openly "we're going to go and work directly
         | for Microsoft unless you agree to return the company focus to
         | working indirectly for Microsoft"...
        
       | ludjer wrote:
       | When will the Netflix special come out on this?
        
       | tolmasky wrote:
       | Perhaps the AGI correctly reasoned that the best (or easiest?)
       | initial strike on humanity was to distract them with a never-
       | ending story about OpenAI leadership that goes back and forth
       | every day. Who needs nuclear codes when simply turning the lights
       | on and off sends everyone into a frenzy [1]. It certainly at the
       | very least seems to be a fairly effective attack against HN
       | servers.
       | 
       | 1. The Monsters are Due on Maple Street:
       | https://en.wikipedia.org/wiki/The_Monsters_Are_Due_on_Maple_...
        
       | rednerrus wrote:
       | Just remember, the guys who run your company are probably more
       | incompetent than this.
        
         | jetsetk wrote:
         | *competent
        
           | roflyear wrote:
           | No, almost certainly not lol
        
           | rednerrus wrote:
           | I got it right the first time.
        
       | jacquesm wrote:
       | So, how is Poe doing during all this?
       | 
       | To keep the spotlight on the most glaring detail here: one
       | of the board members stands to gain from letting OpenAI
       | implode _and_ that board member is instrumental in this
       | week's drama.
        
       | PUSH_AX wrote:
       | The threat of moving to MS is interesting; MS could exploit
       | this massively. All the negotiating power will be on MS's
       | side, and their position actually gets stronger as people
       | move across.
       | 
       | Will they do the good-guy thing and match everyone's
       | packages?
        
       | vaxman wrote:
       | OpenAI was valued at around $91 billion, so if only 700
       | employees had options, those could have been worth a lot.
       | While they are all going to have great jobs and continue
       | their life's work (until they're replaced by their
       | creations, lol), they now have a really good reason never to
       | speak the names of the board members who wiped out their
       | long-term payouts.
        
       | rtkwe wrote:
       | This whole sequence is such a mess I don't know what to
       | think. Honestly I'm mostly going to wait till we get some
       | tell-all posts or leaks about what the reason behind the
       | firing actually was, at least nominally. Maybe it was just a
       | little coup by the board, and they're trying to walk it back
       | now that the general employee population is at least
       | rumbling about revolting.
        
       | baron816 wrote:
       | I can foresee three possible outcomes here:
       | 
       | 1. The board finally relents, Sam goes back, and the company
       | keeps going forward, mostly unchanged (but with a new
       | board).
       | 
       | 2. All those employees quit, most of whom go to MSFT. But they
       | don't keep their tech and have to start all their projects from
       | scratch. MSFT is eventually able to buy OpenAI for pennies on the
       | dollar.
       | 
       | 3. Same as 2, except OpenAI basically just shuts down, or
       | maybe someone like AMZN buys it.
        
       | BonoboIO wrote:
       | Unbelievable incompetence of the board. Like a kindergarten.
       | 
       | If Microsoft plays its cards well, Satya Nadella will look
       | like a genius and Microsoft will get ChatGPT-like
       | functionality for cheap.
        
       | martythemaniak wrote:
       | This is the greatest clown show in the history of the tech
       | industry.
        
       | lawlessone wrote:
       | I have seen a lot of criticism of Sam and of other CEOs.
       | 
       | But I don't think I have seen/heard of a CEO this loved by the
       | employees. Whatever he is, he must be pleasant to work with.
        
         | alentred wrote:
         | I don't know, is it about being loved by the employees, or the
         | employees being desperate about the alternative?
        
         | strikelaserclaw wrote:
         | It's not love, it's money. Sam will bring all the
         | employees lots of money (through commercialization), and
         | this change threatens to disrupt that plan for the
         | employees.
        
           | lawlessone wrote:
           | Ok but even that is good when most companies are making
           | record profits and telling their employees they can't afford
           | their 0.000001% raise.
        
             | strikelaserclaw wrote:
             | OpenAI and Sam Altman would do the same if they could
             | recruit top talent without paying them extra (either
             | through options or RSUs etc.). It isn't because these
             | companies are altruistic.
        
       | crowcroft wrote:
       | OpenAI is more or less done at this point, even if a lot of
       | good people stay. Speed bumps will likely turn into car
       | crashes, then cashflow problems, and lawsuits all around.
       | 
       | Probably the best outcome is that a bunch of talented devs
       | go out and seed the beginning of another AI boom across many
       | more companies. Microsoft looks like the primary beneficiary
       | here, but there's no reason new startups can't emerge.
        
       | goodluckchuck wrote:
       | The OpenAI board should fire all 550 for cause and go on the
       | offensive against Microsoft.
       | 
       | Just imagine if Microsoft attempted to orchestrate such a
       | coup at Apple, attempting to seize control of Apple's board
       | by tortiously interfering with their employees... the courts
       | would not look kindly on that.
       | 
       | If OpenAI actually has evidence of wrongdoing by Altman &
       | Microsoft which warranted his removal (and I don't know) then I
       | could certainly see emergency injunctions being issued that put a
       | halt to Microsoft's AI business.
        
       | ricardo81 wrote:
       | Might be just me as a programmer out in the sticks, but SV
       | programmers seem to flex a lot compared to your average
       | subordinates.
        
       | ChildOfChaos wrote:
       | What a mess.
       | 
       | I genuinely feel like this is going to set back AI progress
       | by a decent amount. While everyone is racing to catch
       | OpenAI, I was still expecting them to keep a reasonable
       | lead. If OpenAI falls apart, this could delay progress by a
       | couple of years.
        
       | tehjoker wrote:
       | Not a typical labor dispute. The billionaires at the other
       | company guaranteed them jobs. More billionaires moving people
       | around like chess pieces.
        
       | joshstrange wrote:
       | If Microsoft emerges as the "winner" from all of this, then
       | I think we are all the "losers". Not that I think OpenAI was
       | perfect or "good", just that MS taking the cake is not good
       | for the rest of us. It already feels crazy that people are
       | just fine with them owning what they do and how important it
       | is to our development ecosystem (talking about things like
       | GitHub/VSCode); I don't like the idea of them also owning
       | the biggest AI initiative.
        
       | grumple wrote:
       | It is disappointing that the outcome of this is that Altman and
       | co are basically going to steal a nonprofit's IP and use it at a
       | competitor. They took advantage of the goodwill of the public and
       | favorable taxation in order to develop the technology; now that
       | it's ready, they want to privatize the profit. It looks like this
       | was the plan all along, and it's very strange to me that a
       | nonprofit is allowed to have a for-profit subsidiary.
       | 
       | I would hope the California AG is all over this whole situation.
       | There's a lot of fishy stuff going on already, and the idea that
       | nonprofit IP / trade secrets are going to be stolen and
       | privatized by Microsoft seems pretty messed up.
        
       | gumballindie wrote:
       | 550 job openings at OpenAI.
        
       | zitterbewegung wrote:
       | I guess Microsoft now has a new division.
       | (https://www.microsoft.com/investor/reports/ar13/financial-re...)
       | 
       | Supposedly, Microsoft divisions are rumored to compete with
       | each other to the point that they actually have a net
       | negative impact.
        
       | littlestymaar wrote:
       | They can leave for sure, but they likely have some kind of non-
       | compete clause in their contract, right?
        
       | andreyk wrote:
       | "Leadership worked with you around the clock to find a mutually
       | agreeable outcome. Yet within two days of your initial decision,
       | you again replaced interim CEO Mira Murati against the best
       | interests of the company. You also informed the leadership team
       | that allowing the company to be destroyed "would be consistent
       | with the mission.""
       | 
       | wow, this is a crazy detail
        
       | andyjohnson0 wrote:
       | Who do these upstarts think they are? The board needs to
       | immediately sack them all to regain its authority, and that of
       | capitalism itself. /s
       | 
       | Really, though, it's getting beyond hilarious. And I reckon
       | Nadella is chuckling quietly to himself as he makes another
       | nineteen-dimensional chess move.
        
       | Emma_Goldman wrote:
       | I don't really understand why the workforce is swinging
       | unambiguously behind Altman. The core of the narrative thus far
       | is that the board fired Altman on the grounds that he was
       | prioritising commercialisation over the not-for-profit mission of
       | OpenAI written into the organisation's charter.[1] Given that Sam
       | has since joined Microsoft, that seems plausible, on its face.
       | 
       | The board may have been incompetent and shortsighted. Perhaps
       | they should even try and bring Altman back, and reform themselves
       | out of existence. But why would the vast majority of the
       | workforce back an open letter failing to signal where they stand
       | on the crucial issue - on the purpose of OpenAI and their
       | collective work? Given the stakes which the AI community likes to
       | claim are at issue in the development of AGI, that strikes me as
       | strange and concerning.
       | 
       | [1] https://openai.com/charter
        
         | mcny wrote:
          | > I don't really understand why the workforce is swinging
         | unambiguously behind Altman.
         | 
         | I have no inside information. I don't know anyone at Open AI.
         | This is all purely speculation.
         | 
          | Now that that's out of the way, here is my guess: money.
         | 
         | These people never joined OpenAI to "advance sciences and arts"
         | or to "change the world". They joined OpenAI to earn money.
         | They think they can make more money with Sam Altman in charge.
         | 
         | Once again, this is completely all speculation. I have not
         | spoken to anyone at Open AI or anyone at Microsoft or anyone at
         | all really.
        
           | Emma_Goldman wrote:
           | Really? If they work at OpenAI they are already among the
           | highest lifetime earners on the planet. Favouring moving
           | oneself from the top 0.5% of global lifetime earners to the
           | top 0.1% (or whatever the percentile shift is) over the safe
           | development of a potentially humanity-changing technology
           | would be depraved.
           | 
           | EDIT: I don't know why this is being downvoted. My
           | speculation as to the average OpenAI employee's place in the
           | global income distribution (of course wealth is important
           | too) was not snatched out of thin air. See:
           | https://www.vox.com/future-
           | perfect/2023/9/15/23874111/charit...
        
             | lol768 wrote:
             | You only have to look at humanity's history to see that
             | people will make this decision over and over again.
        
             | jacquesm wrote:
             | Why be surprised? This is exactly how it has always been:
             | the rich aim to get even richer and if that brings risks or
             | negative effects for the rest that's A-ok with them.
             | 
             | That's what I didn't understand about the world of the
             | really wealthy people until I started interacting with them
             | on a regular basis: they are still aiming to get even more
             | wealthy, even the ones that could fund their families for
             | the next five generations. With a few very notable
             | exceptions.
        
               | logicchains wrote:
                | It's selection bias: the people who aren't so
                | intrinsically motivated to get rich are less likely
                | to end up as wealthy people.
        
               | munificent wrote:
               | It's a combination of that and the reality that wealth is
               | power and power is relative.
               | 
                | Let's say you've got $100 million. You want to do
                | whatever you want to do. It turns out what you want
                | is to buy a certain beachfront property, or perhaps
                | to curry favor with a certain politician over a
                | certain bill. Well, so do some folks with $200
                | million, and they can outbid you. So even though
                | you have tons of money in
               | absolute terms, when you are using your power in venues
               | that happen to also be populated by other rich folks, you
               | can still be relatively power-poor.
               | 
               | And all of those other rich folks know this is how the
               | game works too, so they are all always scrambling to get
               | to the top of the pile.
        
               | jacquesm wrote:
               | Politicians are cheap, nobody is outbidding anybody
               | because they most likely want the exact same thing.
        
             | changoplatanero wrote:
              | Status is a relative thing, and OpenAI will pay you
              | much more than all your peers at other companies.
        
             | iLoveOncall wrote:
             | > If they work at OpenAI they are already among the highest
             | lifetime earners on the planet
             | 
             | Isn't the standard package $300K + equity (= nothing if
             | your board is set on making your company non-profit)?
             | 
             | It's nothing to scoff at, but it's hardly top or even
             | average pay for the kind of profiles working there.
             | 
              | It makes perfect sense that they absolutely want the
              | company to be for-profit and listed; that's how they
              | all become millionaires.
        
             | crazygringo wrote:
             | > _over the safe development_
             | 
             | Not if you think the utterly incompetent board proved
             | itself totally untrustworthy of safe development, while
             | Microsoft as a relatively conservative, staid corporation
             | is seen as ultimately far _more_ trustworthy.
             | 
             | Honestly, of all the big tech companies, Microsoft is
             | probably the safest of all, because it makes its money
             | mostly from predictable large deals with other large
             | corporations to keep the business world running.
             | 
             | It's not associated with privacy concerns the way Google
             | is, with advertisers the way Meta is, or with walled
             | gardens the way Apple is. Its culture these days is mainly
             | about making money in a low-risk, straightforward way
             | through Office and Azure.
             | 
             | And relative to startups, Microsoft is far more predictable
             | and less risky in how it manages things.
        
               | ben_w wrote:
               | Apple's walled gardens are probably a good thing for safe
               | AI, though they're a lot quieter about their research --
               | I somehow missed that they even _had_ any published
               | papers until I went looking:
               | https://machinelearning.apple.com/research/
        
               | scythe wrote:
               | _Microsoft?_ Not a walled garden?
               | 
               | I think it only seems that way because the open-source
               | world has worked much harder to break into that garden.
               | Apple put a .mp4 gate around your music library.
               | Microsoft put a .doc gate around your business
               | correspondence. And that's before we get to the Mono
               | debacle or the EEE paradigm.
               | 
                | Microsoft is a better corporate citizen now because
                | untold legions of keyboard warriors have stayed up
                | more nights reverse-engineering and monkeypatching
                | (and sometimes litigating) to break out of its
                | walls than out of anyone else's. But that history
                | isn't so easily forgotten.
        
               | bongodongobob wrote:
               | I can install whatever I'd like on Windows. I can run
               | Linux in a VM. Calling a document format a wall is really
               | reaching. If you don't have a document with a bunch of
               | crazy formatting, the open office products and Google
               | docs can use it just fine. If you are writing a book or
               | some kind of technical document that needs special
               | markup, yeah, Word isn't going to cut it, never has and
               | was never supposed to.
        
             | jbombadil wrote:
             | I don't know how much OpenAI pays. But for this reply, I'm
             | going to assume it's in line with what other big players in
             | the industry pay.
             | 
              | I legitimately don't understand comments that dismiss
              | the pursuit of better compensation because someone is
              | "already among the highest lifetime earners on the
              | planet."
             | 
             | Superficially it might make sense: if you already have all
             | your lifetime economic needs satisfied, you can optimize
             | for other things. But does working in OpenAI fulfill that
             | for most employees?
             | 
             | I probably fall into that "highest earners on the planet"
             | bucket statistically speaking. I certainly don't feel like
             | it: I still live in a one bedroom apartment and I'm having
             | to save up to put a downpayment on a house / budget for
             | retirement / etc. So I can completely understand someone
             | working for OpenAI and signing such a letter if a move the
             | board made would cut down their ability to move their
             | family into a house / pay down student debt / plan for
             | retirement / etc.
        
             | gdhkgdhkvff wrote:
             | If you were offered a 100% raise and kept current work
             | responsibilities to go work for, say, a tobacco company,
             | would you take the offer? My guess is >90% of people would.
             | 
             | Funny how the cutoff for "morals should be more important
             | than wealth" is always {MySalary+$1}.
             | 
             | Don't forget, if you're a software developer in the US,
             | you're probably already in the top 5% of earners worldwide.
        
             | Arainach wrote:
             | Focusing on "global earnings" is disingenuous and
             | dismissive.
             | 
             | In the US, and particularly in California, there is a huge
             | quality of life change going from 100K/yr to 500K/yr (you
             | can potentially afford a house, for starters) and a
             | significant quality of life change going from 500K/yr to
             | getting millions in an IPO and never having to work again
             | if you don't want to.
             | 
             | How those numbers line up to the rest of the world does not
             | matter.
        
               | Emma_Goldman wrote:
               | I disagree.
               | 
               | First, there are strong diminishing returns to well-being
               | from wealth, meaning that moving oneself from the top
               | 0.5% to the top 0.1% of global income earners is a
               | relatively modest benefit. This relationship is well
               | studied by social scientists and psychologists. Compared
               | to the potential stakes of OpenAI's mission, the balance
               | of importance should be clear.
               | 
                | Second, employees don't have to stay at OpenAI forever. They
               | could support OpenAI's existing not-for-profit charter,
               | and use their earning power later on in life to boost
               | their wealth. Being super-rich and supporting OpenAI at
               | this critical juncture are not mutually exclusive.
               | 
                | Third, I will simply say that I find placing excessive
               | weight on one's self-enrichment to be morally
               | questionable. It's a claim on human production and labour
               | which could be given to people without the basic means of
               | life.
        
               | Arainach wrote:
                | Again, no one in California cares that they are
                | "making more than" someone in Vietnam when food and
                | land in CA are orders of magnitude more expensive.
               | 
                | OpenAI employees are as aware as anyone that tech
                | salaries are not guaranteed to be this high in the
                | future as technology develops. Assuming you can
                | make that money later in life is far from a sure
                | bet.
               | 
               | Millions now and being able to live off investments is.
        
             | atishay811 wrote:
              | It just makes more sense to build it in an entity
              | with better funding and commercialization. There will
              | be 2-3 advanced AIs, and the most humane one doesn't
              | necessarily win out; the winner is the one that has
              | the most resources, is used and supported by the most
              | people, and can do the most. At this point it doesn't
              | seem OpenAI can get that. It seems to be a lose-lose
              | to stay at OpenAI - you lose the money and the
              | potential to create something impactful and safe.
              | 
              | It is wrong to assume Microsoft cannot build a safe
              | AI, especially within a separate OpenAI-2, better
              | than the for-profit in a non-profit structure could.
        
             | chr1 wrote:
              | Or maybe they have good reason to believe that all
              | the talk about "safe development" doesn't contribute
              | anything useful to safety, and simply slows down
              | development?
        
             | golergka wrote:
             | > over the safe development of a potentially humanity-
             | changing technology
             | 
              | Maybe the people who are actually working on it, and
              | who are also the world's best researchers, have a
              | better understanding of the safety concerns?
        
           | jonahrd wrote:
            | I'm not sure I fully buy this, if only because: how
            | would anyone be absolutely certain that they'd make
            | more with Sam Altman in charge? It feels like a weird
            | thing to speculatively rally behind.
           | 
           | I'd imagine there's some internal political drama going on or
           | something we're missing out on.
        
             | DeIlliad wrote:
             | I fully buy it. Ethics and morals are a few rungs on the
             | ladder beneath compensation for most software engineers. If
             | the board wants to focus more on being a non-profit and
             | safety, and Altman wants to focus more on commercialization
             | and the economics of business, if my priority is money then
             | where my loyalty goes is obvious.
        
             | lisper wrote:
             | > how would anyone be absolutely certain that they'd make
             | more with Sam Altman in charge?
             | 
             | Why do you think absolute certainty is required here? It
             | seems to me that "more probable than not" is perfectly
             | adequate to explain the data.
        
           | ta1243 wrote:
           | > These people never joined OpenAI to "advance sciences and
           | arts" or to "change the world". They joined OpenAI to earn
           | money
           | 
           | Getting Cochrane vibes from Star Trek there.
           | 
           | > COCHRANE: You wanna know what my vision is? ...Dollar
           | signs! Money! I didn't build this ship to usher in a new era
           | for humanity. You think I wanna go to the stars? I don't even
           | like to fly. I take trains. I built this ship so that I could
           | retire to some tropical island filled with ...naked women.
           | That's Zefram Cochrane. That's his vision. This other guy you
           | keep talking about. This historical figure. I never met him.
           | I can't imagine I ever will.
           | 
           | I wonder how history will view Sam Altman
        
             | imjonse wrote:
             | There are non-negligible chances that history will be
             | written by Sam Altman and his GPT minions, so he'll
             | probably be viewed favorably.
        
         | DrJaws wrote:
          | maybe the workforce is not really behind the non-profit
          | foundation and wants its shares to skyrocket so it can sell
          | and be well off for life.
         | 
         | at the end of the day, the people working there are not rich
         | like the founders and money talks when you have to pay rent,
         | eat and send your kids to a private college.
        
         | FartyMcFarter wrote:
          | > I don't really understand why the workforce is swinging
         | unambiguously behind Altman.
         | 
         | Maybe it has to do with them wanting to get rich by selling
         | their shares - my understanding is there was an ongoing process
         | to get that happening [1].
         | 
         | If Altman is out of the picture, it looks like Microsoft will
         | assimilate a lot of OpenAI into a separate organisation and
         | OpenAI's shares might become worthless.
         | 
         | [1] https://www.financemagnates.com/fintech/openai-in-talks-
         | to-s...
        
           | appel wrote:
           | That sounds like a reasonable assessment, FartyMcFarter.
        
           | leetharris wrote:
           | Yep.
           | 
           | What people don't realize is that Microsoft doesn't own the
           | data or models that OpenAI has today. Yeah, they can poach
           | all the talent, but it still takes an enormous amount of
           | effort to create the dataset and train the models the way
           | OpenAI has done it.
           | 
           | Recreating what OpenAI has done over at Microsoft will be
           | nothing short of a herculean effort and I can't see it
           | materializing the way people think it will.
        
             | jdminhbg wrote:
             | Microsoft has full access to code and weights as part of
             | their deal.
        
               | ben_w wrote:
               | Even if they don't, the OpenAI staff already know 99 ways
               | to _not_ make a good GPT model and can therefore skip
               | those experiments much faster than anyone else.
        
               | htrp wrote:
               | > Even if they don't, the OpenAI staff already know 99
               | ways to not make a good GPT model and can therefore skip
               | those experiments much faster than anyone else.
               | 
               | This, unequivocally... knowing how not to waste a very
               | expensive training run is a great lesson.
        
               | belter wrote:
               | Source for your statement?
        
               | jdminhbg wrote:
               | https://www.wsj.com/articles/microsoft-and-openai-forge-
               | awkw...
               | 
               | > Some researchers at Microsoft gripe about the
               | restricted access to OpenAI's technology. While a select
               | few teams inside Microsoft get access to the model's
               | inner workings like its code base and model weights, the
               | majority of the company's teams don't, said the people
               | familiar with the matter.
        
             | Finbarr wrote:
             | Except MSFT does have access to the IP, and MSFT has access
             | to an enormous trove of their own data across their office
             | suite, Bing, etc. It could be a running start rather than a
             | cold start. A fork of OpenAI inside an unapologetic for
             | profit entity, without the shackles of the weird board
             | structure.
        
             | baron816 wrote:
             | Correct. This is all really bad for Microsoft and probably
             | great for Google. Yet, judging by price changes right now,
             | markets don't seem to understand this.
        
             | returningfory2 wrote:
             | This comment is factually incorrect. As part of the deal
             | with OpenAI, Microsoft has access to all of the IP, model
             | weights, etc.
        
           | anon84873628 wrote:
           | Yeah, "OpenAI employees would actually prefer to make lots of
           | money now" seems like a plausible answer by default.
           | 
           | It's easy to be a true believer in the mission _before_ all
           | the money is on the table...
        
           | fizx wrote:
           | My estimate is that a typical staff engineer who'd been at
           | OpenAI for 2+ years could have sold $8 million of stock next
           | month. I'd be pissed too.
        
             | ergocoder wrote:
             | No way it is this much.
        
           | grumple wrote:
           | But doesn't Altman joining Microsoft, and them quitting and
           | following, put them back at square 0? MS isn't going to give
           | them millions of dollars each to join them.
        
           | averageRoyalty wrote:
           | Surely they're already extremely rich? I'd imagine working
           | for a 700 person company leading the world in AI pays very
           | well.
        
             | maxlamb wrote:
             | Only rich in stocks. Salaries are high for sure but
             | probably not enough to be rich by Bay Area standards
        
           | dclowd9901 wrote:
            | Ugh, I've never been more disenchanted with a group of people
           | in my life before. Not only are they comfortable with writing
           | millions of jobs out of existence, but also taking a fat
           | paycheck to do it. At least with the "non-profit" mission
           | keystone, we had some plausible deniability that greed rules
           | all, but of fucking course it does.
           | 
           | All my hate to the employees and researchers of OpenAI,
           | absolutely frothing at the mouth to destroy our civilization.
        
         | wenyuanyu wrote:
          | I guess employees are compensated with PPUs. And at their
          | face value before the saga, those could be 90% or even more
          | of the total value of their packages. How many people are
          | really willing to wipe out 90% of their compensation? On the
          | other hand, M$ offers to match. The day employees were
          | compensated with stock of the for-profit arm, everything
          | that happened after Friday was set.
        
         | supriyo-biswas wrote:
         | Ultimately people care a lot more about their compensation,
         | since that is what pays the bills and puts food on the table.
         | 
         | Since OpenAI's commercial aspects are doomed now and it is
         | uncertain whether they can continue operations if Microsoft
         | withholds resources and consumers switch away to alternative
          | LLM/embeddings services with more level-headed leadership,
         | OpenAI will eventually turn into a shell of itself, which
         | affects compensation.
        
         | browningstreet wrote:
          | Maybe they believe less in the Board as it stands, and in
          | Ilya's commitments, than in what Sam was pulling off.
        
         | gsuuon wrote:
         | I also noticed they didn't speak much to the mission/charter. I
         | wonder if the new entity under Sam and Greg contains any
         | remnants of the OpenAI charter, like profit-capping? I can't
         | imagine something like "Our primary fiduciary duty is to
          | humanity" making its way into the language of any Microsoft
         | (or any bigcorp) subsidiary.
         | 
         | I wonder if this is the end of the non-profit/hybrid model?
        
         | ninepoints wrote:
         | Imagine putting all your energy behind the person who thinks
         | worldcoin is a good idea...
        
           | barryrandall wrote:
           | That's a pretty solid no-confidence vote in the board and
           | their preferred direction.
        
         | dangerface wrote:
         | > Given that Sam has since joined Microsoft, that seems
         | plausible, on its face.
         | 
          | He is the biggest name in AI; what was he supposed to do
          | after getting fired? His only options with the resources to
          | do AI are big money, or unemployment.
          | 
          | It seems plausible to me that if the non-profit's concern
          | was commercialisation, then there was really nothing the
          | commercial side could do to appease that concern besides
          | die. The board wants to be rid of all employees and to kill
          | off any potential business; they have the power and the
          | right to do that, and it looks like they are.
        
         | paulddraper wrote:
          | > I don't really understand why the workforce is swinging
         | unambiguously behind Altman.
         | 
         | Lots of reasons, or possible reasons:
         | 
         | 1. They think Altman is a skilled and competent leader.
         | 
         | 2. They think the board is unskilled and incompetent.
         | 
         | 3. They think Altman will provide commercial success to the
         | for-profit as well as fulfilling the non-profit's mission.
         | 
         | 4. They disagree or are ambivalent towards the non-profit's
         | mission. (Charters are not immutable.)
        
         | leetharris wrote:
         | IMO it's pretty obvious.
         | 
         | Sam promised to make a lot of people millionaires/billionaires
         | despite OpenAI being a non-profit.
         | 
         | Firing Sam means all these OpenAI people who joined for $1
         | million comp packages looking for an eventual huge exit now
         | don't get that.
         | 
         | They all want the same thing as the vast majority of people:
         | lots of money.
        
         | dayjah wrote:
          | Startups thrive, in part, by creating a sense of camaraderie.
         | Sam isn't just their boss, he's their leader, he's one of them,
         | they believe in him.
         | 
         | You go to bat for your mates, and this is what they're doing
         | for him.
         | 
         | The sense of togetherness is what allows folks to pull together
         | in stressful times, and it is bred by pulling together in
         | stressful times. IME it's a core ingredient to success. Since
         | OAI is very successful it's fair to say the sense of
         | togetherness is very strong. Hence the numbers of folks in the
         | walk out.
        
           | throwaway4aday wrote:
           | Not just Sam, since Greg stuck with Sam and immediately quit
           | he set the precedent for the rest of the company. If you read
           | this post[0] by Sam about Greg's character and work ethic
           | you'll understand why so many people would follow him. He was
           | essentially the platoon sergeant of OpenAI and probably
           | commands an immense amount of loyalty and respect. Where
           | those two go, everyone will follow.
           | 
           | [0] https://blog.samaltman.com/greg
        
             | dayjah wrote:
             | Absolutely! Thanks for pointing out that I missed Greg in
             | my answer.
        
         | Sunhold wrote:
         | Why should they trust the board? As the letter says, "Despite
         | many requests for specific facts for your allegations, you have
         | never provided any written evidence." If Altman took any
         | specific action that violated the charter, the board should be
         | open about it. Simply trying to make money does not violate the
         | charter and is in fact essential to their mission. The GPT
         | Store, cited as the final straw in leaks, is actually far
         | cleaner money than investments from megacorps. Commercializing
         | the product and selling it directly to consumers reduces
         | dependence on Microsoft.
        
         | blamestross wrote:
          | It's like the "Open" in OpenAI was always an open and obvious
         | lie and everybody except the nonprofit oriented folks on the
         | board knew that. Everybody but them is here to make money and
         | only used the nonprofit as a temporary vehicle for credibility
         | and investment that has just been shed like a cicada shell.
        
         | ssnistfajen wrote:
         | Seems like the board just didn't explain any of this to the
         | staff at all. So of course they are going to take the side that
         | could signal business as usual instead of siding with the
         | people trying to destroy the hottest tech company on the planet
          | (and their jobs/comps) for no apparent reason. If the board
          | had said anything at all, the ratio of staff threatening to
          | quit probably wouldn't be this lopsided.
        
         | corethree wrote:
          | The masses aren't logical; they follow trends until the
          | trends get big enough that it's unwise not to follow.
         | 
         | It started off as a small trend to sign that letter. Past
         | critical mass if you are not signing that letter, you are an
         | enemy.
         | 
         | Also my pronouns are she and her even though I was born with a
         | penis. You must address me with these pronouns. Just putting
         | this random statement here to keep you informed lest you
         | accidentally go against the trend.
        
         | next_xibalba wrote:
         | It is probably best to assume that the employees have more and
         | better information than outsiders do. Also, clearly, there is
         | no consensus on safety/alignment, even within OpenAI.
         | 
         | In fact, it seems like the only thing we can really confirm at
         | this point is that the board is not competent.
        
         | barbariangrunge wrote:
         | > The core of the narrative thus far
         | 
         | Could somebody clarify for me: how do we know this? Is there an
         | official statement, or statements by specific core people? I
         | know the HN theorycrafters have been saying this since the
         | start before any details were available
        
         | KRAKRISMOTT wrote:
          | Most of the people _building_ the actual ML systems don't care
         | about existential ML threats outside of lip service and for
         | publishing papers. They joined OpenAI because OpenAI had tons
         | of money and paid well. Now that both are at risk, it's only
         | natural that they start preparing to jump ship.
        
         | dfps wrote:
         | Might there also be a consideration of peak value of OpenAI? If
         | a bunch of competing similar AIs are entering the market, and
         | if the usecase fantasy is currently being humbled, staff might
         | be thinking of bubble valuation.
         | 
          | Did anyone else find Altman conspicuously cooperative with
          | the government during his testimony before Congress? Usually
          | people are a bit more combative; he came off as almost
          | preemptively servile. I hope that's not the case, but I
          | haven't seen any real position from him on human rights.
        
         | jkaplan wrote:
          | Probably some combination of:
          | 
          | 1. Pressure from Microsoft and their e-team
          | 
          | 2. Not actually caring about those stakes
          | 
          | 3. A culture of putting growth/money above all
        
         | dreamcompiler wrote:
          | > I don't really understand why the workforce is swinging
         | unambiguously behind Altman.
         | 
         | I expect there's a huge amount of peer pressure here. Even for
         | employees who are motivated more by principles than money, they
         | may perceive that the wind is blowing in Altman's direction and
         | if they don't play along, they will find themselves effectively
         | blacklisted from the AI industry.
        
         | nvm0n2 wrote:
          | _> I don't really understand why the workforce is swinging
         | unambiguously behind Altman._
         | 
         | Maybe because the alternative is being led by lunatics who
         | think like this:
         | 
         |  _You also informed the leadership team that allowing the
         | company to be destroyed "would be consistent with the
         | mission."_
         | 
         | to which the only possible reaction is
         | 
         | What
         | 
         | The
         | 
         | Fuck?
         | 
         | That right there is what happens when you let "AI ethics"
         | people get control of something. Why would anyone work for
         | people who believe that OpenAI's mission is consistent with
         | self-destruction? This is a comic book super-villain style of
         | "ethics", one in which you conclude the village had to be
         | destroyed in order to save it.
         | 
         | If you are a normal person, you want to work for people who
         | think that your daily office output is actually pretty cool,
         | not something that's going to destroy the world. A lot of
         | people have asked what Altman was doing there and why people
         | there are so loyal to him. It's obvious now that Altman's
         | primary role at OpenAI was to be a normal leader that isn't in
         | the grip of the EA Basilisk cult.
        
         | PKop wrote:
          | The workforce prefers the commercialization/acceleration path,
         | not the "muh safetyism" and over-emphasis on moralism of the
         | non-profit contingent.
         | 
         | They want to develop powerful shit and do it at an accelerated
          | pace, and make money in the process, not be hamstrung by
          | busybodies.
         | 
         | The "effective altruism" types give people the creeps. It's not
         | confusing at all why they would oppose this faction.
        
         | kashyapc wrote:
         | (I can't comment on the workforce question, but one thing below
         | on bringing SamA back.)
         | 
          | Firstly, to give credit where it's due: whatever his faults
          | may be, Altman, as the (now erstwhile) front-man of OpenAI,
          | _did_ help bring ChatGPT into the popular consciousness. I
          | think it's reasonable to call it a "mini inflection point"
          | in the greater AI revolution. We have to grant him that.
          | (I've criticized Altman harshly enough two days ago[1]; just
          | trying not to go overboard, and there's more below.)
         | 
         | That said, my (mildly-educated) speculation is that bringing
         | Altman back won't help. Given his background and track record
         | so far, his unstated goal might simply be the good old: "make
          | loads of profit" (nothing wrong with it when viewed with a
          | certain lens). But as I've already stated[1], I don't trust
          | him as a long-term steward, let alone for such important
          | initiatives.
         | Making a short-term splash with ChatGPT is one thing, but
         | turning it into something more meaningful in the _long-term_ is
         | a whole another beast.
         | 
         | These sort of Silicon Valley top dogs don't think in terms of
         | sustainability.
         | 
          | Lastly, I've just looked at the board[2], and I'm now left
          | wondering how all these young folks (roughly my age) who
          | don't have sufficiently in-depth "worldly experience" (sorry
          | for the fuzzy term, it's hard to expand on) can be in such
          | roles.
         | 
         | [1] https://news.ycombinator.com/item?id=38312294
         | 
         | [2] https://news.ycombinator.com/edit?id=38350890
        
         | bart_spoon wrote:
         | Perhaps because, for all of Silicon Valley and the tech
         | industries platitudes about wanting to make the world a better
         | place, 90% of them are solely interested in the fastest path to
         | wealth.
        
       | m3kw9 wrote:
        | Is it too late? Satya already announced that Sam and Greg
        | Brockman are joining.
        
       | EffingMask wrote:
       | This affair has Musk's fingerprints all over it but he lost,
       | again.
        
       | robbywashere_ wrote:
        | If they align with Sam Altman and Greg Brockman at Microsoft,
        | they wouldn't have to start from zero, since Microsoft
        | possesses complete rights to the ChatGPT IP. They could simply
        | create a variant of ChatGPT.
       | 
        | It's worth noting that Microsoft's supposed contribution of
        | $13 billion to OpenAI doesn't fully materialize in cash; a
        | large portion of it comes as Azure credits.
        | 
        | This scenario might turn into the most cost-effective takeover
        | for Microsoft: acquiring a corporation valued at $90 billion
        | for a relatively trifling sum.
        
       | jpollock wrote:
       | I wonder what their employment contracts state? Are they allowed
       | to work for vendors or clients?
        
       | m3kw9 wrote:
        | Altman must be pissed af; he helped build so much stuff and
        | now got fked in the arse by these doomers. He realizes the
        | fastest way to get back to parity is to join MS, because they
        | already own the source code and model weights, and it's
        | Microsoft. Starting a new thing from scratch would not
        | guarantee any type of success and would take many years. This
        | is his best path.
        
       | saos wrote:
        | I like this a lot. Shows how valuable employees are. It almost
        | feels like a union. Love it.
        
       | dang wrote:
       | All: this madness makes our server strain too. Sorry! Nobody will
       | be happier than I when this bottleneck (edit: in our code--not
       | the world) is a thing of the past.
       | 
       | I've turned down the page size so everyone can see the threads,
       | but you'll have to click through the More links at the bottom of
       | the page to read all the comments, or like this:
       | 
       | https://news.ycombinator.com/item?id=38347868&p=2
       | 
       | https://news.ycombinator.com/item?id=38347868&p=3
       | 
       | https://news.ycombinator.com/item?id=38347868&p=4
       | 
       | etc...
        
       | w10-1 wrote:
       | Hurray for employees seeing the real issue!
       | 
       | Hurray also for the reality check on corporate governance.
       | 
       | - Any Board can do whatever it has the votes for.
       | 
       | - It can dilute anyone's stock, or everyone's.
       | 
       | - It can fire anyone for any reason, and give no reasons.
       | 
       | Boards are largely disciplined not by actual responsibility to
       | stakeholders or shareholders, but by reputational concerns
       | relative to their continuing and future positions - status. In
       | the case of for-profit boards, that does translate directly to
       | upholding shareholder interest, as board members are reliable
       | delegates of a significant investing coalition.
       | 
       | For non-profits, status typically also translates to funding. But
       | when any non-profit has healthy reserves, they are at extreme
       | risk, because the Board is less concerned about its reputation
       | and can become trapped in ideological fashion. That's
       | particularly true for so-called independent board members brought
       | in for their perspectives, and when the potential value of the
       | nonprofit is, well, huge.
       | 
       | This potential for escape from status duty is stronger in our
       | tribalized world, where Board members who welch on larger social
       | concerns or even their own patrons can nonetheless retreat to
       | their (often wealthy) sub-tribe with their dignity intact.
       | 
       | It's ironic that we have so many examples of leadership breakdown
       | as AI comes to the fore. Checks and balances designed to
       | integrate perspectives have fallen prey to game-theoretic
       | strategies in politics and business.
       | 
        | Wouldn't it be nice if we could just build an AI to do the work
       | of boards and Congress, integrating various concerns in a roughly
       | fair and mostly-predictable fashion, so we could stop wasting
       | time on endless leadership contests and their social costs?
        
       | shortsunblack wrote:
       | It is time for regulators to step in and propose structural
       | remedies. VC culture has shown itself not able to run these
       | companies for betterment of mankind, anyway.
        
       | somic wrote:
       | I don't see any mentions of Google but I personally think it's
       | Google that will be the main beneficiary of chaos at OpenAI.
       | After all, weren't they the main competitors? Maybe not in
       | product or business yet but on IP and hiring fronts?
        
       | denton-scratch wrote:
       | For me, the weirdness here is that Ilya, supposedly the brains
       | behind GPT, is a signatory.
       | 
       | The sacking would never have happened without his vote; and he
       | _must_ have thought about it before he acted.
       | 
       | I hope he comes up with a proper explanation of his actions soon
       | (not just a tweet).
        
       | thrwwy142857 wrote:
        | How do the bylaws work?
       | 
       | 1. Voting out chairman with chairman abstaining needs only 3/5.
       | 
       | 2. Voting out CEO then requires 3/4?
       | 
       | Did Ilya have to vote?
        
       | seatac76 wrote:
        | So what was going to happen 5 years from now is happening now,
        | i.e. MS acquiring OpenAI.
        
       | kashyapc wrote:
       | Silicon Valley outsider here. Am I being harsh here?
       | 
       | I just bothered to look at the full OpenAI board composition.
       | Besides Ilya Sutskever and Greg Brockman, why are these people
       | eligible to be on the OpenAI board? Such young people, calling
       | themselves "President of this", "Director of that".
       | 
       | - Adam D'Angelo -- Quora CEO (no clue what he's doing on OpenAI
       | board)
       | 
       | - Tasha McCauley -- a "management scientist" (this is a new term
       | for me); whatever that means
       | 
       | - Helen Toner -- I don't know what exactly she does, again,
       | "something-something Director of strategy" at Georgetown
       | University, for such a young person
       | 
       | No wise veterans here to temper the adrenaline?
       | 
       | Edit: the term clusterf*** comes to mind here.
        
         | taylorlapeyre wrote:
         | Helen Toner funded OpenAI with $30M, which was enough to get a
         | board seat at the time.
        
           | mizzao wrote:
           | Source? Where did that money come from?
        
             | alephnerd wrote:
             | From Open Philanthropy - a Dustin Moskovitz funded non-
             | profit working on building OpenAI type initiatives. They
             | also gave OpenAI the initial $30M. She was their observer.
             | 
             | https://www.openphilanthropy.org/grants/openai-general-
             | suppo...
        
         | alephnerd wrote:
         | Adam D'Angelo was brought in as a friend because Sam Altman
          | led Quora's Series D around the time OpenAI was founded, and
         | he is a board member on Dustin Moskovitz's Asana.
         | 
         | Dustin Moskovitz isn't on the board but gave OpenAI the $30M in
          | funding via his non-profit Open Philanthropy [0]
         | 
         | Tasha McCauley was probably brought in due to the Singularity
          | University/Kurzweil types who were at OpenAI in the beginning.
         | She was also in the Open Philanthropy space.
         | 
         | Helen Toner was probably brought in due to her past work at
         | Open Philanthropy - a Dustin Moskovitz funded non-profit
         | working on building OpenAI type initiatives, and was also close
         | to Sam Altman. They also gave OpenAI the initial $30M [0]
         | 
         | Essentially, this is a Donor versus Investor battle. The donors
          | aren't gonna make money off OpenAI's commercial endeavors that
         | began in 2019.
         | 
         | It's similar to Elon Musk's annoyance at OpenAI going
         | commercial even though he donated millions.
         | 
         | [0] - https://www.openphilanthropy.org/grants/openai-general-
         | suppo...
        
         | churchill wrote:
         | Exactly this. I saw another commenter raise this point about
         | Tasha (and Helen, if I remember correctly) noting that her
         | LinkedIn profile is filled with SV-related jargon and indulge-
         | the-wife thinktanks but without any real experience taking
         | products to market or scaling up technology companies.
         | 
          | Given the pool of talent they could have chosen from, their
          | board makeup looks extremely poor.
        
           | mdekkers wrote:
           | > indulge-the-wife thinktanks
           | 
           | Regardless of context, this is an incredibly demeaning
           | comment. Shame on you
        
             | jdthedisciple wrote:
             | Truth hurts sometimes, eh?
        
             | averageRoyalty wrote:
             | It doesn't have to be taken that way. It's a pretty
             | accurate description.
        
         | Aurornis wrote:
         | The board previously had people like Elon Musk and Reid
         | Hoffman. Greg Brockman was part of the board until he was
         | ousted as well.
         | 
         | The attrition of industry business leaders, the ouster of Greg
         | Brockman, and the (temporary, apparently) flipping of Ilya
         | combined to give the short list of remaining board members
         | outsized influence. They took this opportunity to drop a
         | nuclear bomb on the company's leadership, which so far has
         | backfired spectacularly. Even their first interim CEO had to be
         | replaced already.
        
         | CPLX wrote:
         | You can like D'Angelo or not but he was the CTO of Facebook.
        
         | ur-whale wrote:
        | This is the Silicon Valley boys' club, itself an extension of
        | the Stanford U. boys' club.
        | 
        | "Meritocracy" is a very impolite word in these circles.
        
       | therealmocker wrote:
       | My guess -- Microsoft wasn't excited about the company structure
       | - the for-profit portion subject to the non-profit mission.
       | Microsoft/Altman structured the deal with OpenAI in a way that
       | cements their access regardless of the non-profit's wishes.
       | Altman may not have shared those details with the board and they
       | freaked out and fired him. They didn't disclose to Microsoft
       | ahead of time because they were part of the problem.
        
       | darklycan51 wrote:
       | I knew something like this would happen. MS was told they would
       | originally only be given access until their investment was paid
       | off, but MS couldn't care less about their investment; they want
       | to own OpenAI, so it makes sense they would stage a coup against
       | the company.
        
       | gsuuon wrote:
       | The firing was definitely handled poorly and the communications
       | around it were a failure, but it seems like the organizational
       | structure was doing what it was designed to do.
       | 
       | Is this the end of non-profit/profit-capped AI development? Would
       | anyone else attempt this model again?
        
       | unethical_ban wrote:
       | I don't know who is who in this fight. But AI, while having some
       | upsides for research and personal assistants, will not only
       | massively upend a number of industries with millions of workers
       | in the US alone, it will also change how society perceives art
       | and truth. We at HN can "see" that from here, but it's going to
       | get real in a short while.
       | 
       | Privacy is out the window, because these models and technologies
       | will be scraping the entire internet, and governments/big tech
       | will be able to scrape it all and correlate language patterns
       | across identities to associate your different online egos.
       | 
       | The Internet that could be both anonymous and engaging is going
       | to die. You won't be able to tell whether the entity at the
       | other end of a discussion forum is human or not. This is a sad
       | end of an era for the Internet, worse than the big-tech
       | conglomeration of the 2010s.
       | 
       | The ability to trust news and videos will be even more difficult.
       | I have a friend who talks about how Tiktok is the "real source of
       | truth" because big media is just controlled by megacorps and in
       | bed with the government. So now a bunch of seemingly authentic
       | people will be able to post random bullshit on Tiktok/Instagram
       | with convincing audio/video evidence that is totally fake. A lie
       | gets around the world before the truth gets its shoes on.
       | 
       | ---
       | 
       | So, I wonder which side of this war is more aware and concerned
       | about these impacts?
        
       | alexalx666 wrote:
       | That sounds like a perfectly executed plan to get MS all the good
       | stuff.
        
       | wenyuanyu wrote:
       | I guess employees are compensated with stock from the for-profit
       | entity. And at face value before the saga, stock could be 90%,
       | 95% or even more of the total value of their packages. How many
       | people are really willing to wipe out 90% of their compensation
       | just to stick to the mission? On the other hand, M$ offers to
       | match. The day employees are compensated with stock in the for-
       | profit arm, there is no way to return to the nonprofit and its
       | charter any more.
        
       | standapart wrote:
       | What a wonderful way to cut headcount/expense and lock in
       | profitable margins on healthy annual revenue.
       | 
       | Can only work when you have the advantage of being the dominant
       | product in the marketplace -- but I gotta hand it to the board, I
       | couldn't have done it better myself.
        
         | dougmwne wrote:
         | And where will their compute come from to continue to run their
         | expensive models and serve their customers? From the company
         | that just stole all their employees?
        
       | RadixDLT wrote:
       | OpenAI's co-founder Ilya Sutskever and more than 500 other
       | employees have threatened to quit the embattled company after its
       | board dramatically fired CEO Sam Altman. In an open letter to the
       | company's board, which voted to oust Altman on Friday, the group
       | said it is obvious 'that you are incapable of overseeing OpenAI'.
       | Sutskever is a member of the board and backed the decision to
       | fire Altman, before tweeting his 'regret' on Monday and adding
       | his name to the letter. Employees who signed the letter said that
       | if the board does not step down, they 'may choose to resign' en
       | masse and join 'the newly announced Microsoft subsidiary run by
       | Sam Altman'.
        
       | ratsbane wrote:
       | Question for California IP/employment law experts - 1) would you
       | have expected the IP-sharing agreement between MS and OpenAI to
       | contain some provisions for employee poaching, within the
       | constraints allowed by California (?) law? 2) California law has
       | good provisions for workers' rights to leave one company and go
       | to another, but what does it allow company A to do when
       | entering an IP-sharing relationship with company B?
        
         | awb wrote:
         | IANAL, but I've executed contracts with these provisions.
         | 
         | In my understanding, if such a clause exists, Microsoft
         | employees should not solicit OpenAI employees. But, there's
         | nothing to stop an OpenAI employee from reaching out to Sam and
         | saying "Hey, do you have room for me at Microsoft?" and then
         | answering yes.
         | 
         | Or, Microsoft could open up a couple hundred job reqs based on
         | the team structure Sam used at OpenAI and his old employees
         | could apply that way.
         | 
         | But it wouldn't be advisable for Sam to send an email directly
         | to those individuals asking them to join him at Microsoft (if
         | this provision exists).
         | 
         | But maybe he queued everything up prior to joining Microsoft
         | when he was able to solicit them to join a future team.
        
           | ratsbane wrote:
           | Thanks - good answer. At the very least it seems like
           | something to keep lawyers busy for a long time, unless
           | everyone can ctrl-z back to Thursday. I am thinking though
           | that this is a risk of IP-sharing arrangements - if you can't
           | stop the employees from jumping ship, they're dangerous.
        
       | silvermineai wrote:
       | Entire history of fiasco on X
       | 
       | https://docs.google.com/document/d/1SWnabqe1PviVE3K7KIZsN4IA...
        
       | silvermineai wrote:
       | ICYMI: Timeline of all the madness
       | https://news.ycombinator.com/item?id=38351214
        
       | ActVen wrote:
       | Adam has to be behind this. It is very reminiscent of the
       | situation with Quora and Charlie.
       | https://x.com/gergelyorosz/status/1725741349574480047?s=46&t...
        
       | dreamcompiler wrote:
       | Notice that Andrej Karpathy didn't sign.
        
       | mfiguiere wrote:
       | Amir Efrati (TheInformation):
       | 
       | > Almost 700 of 770 OpenAI employees including Sutskever have
       | signed letter demanding Sam and Greg back and reconstituted board
       | with Sam allies on it.
       | 
       | https://twitter.com/amir/status/1726656427056668884
        
       | marricks wrote:
       | I mean, no matter what people say about what happened, or what
       | actually did, one can paint this picture:
       | 
       | ( - OpenAI exists, allegedly to be open)
       | 
       | - Microsoft embraces OpenAI
       | 
       | - Microsoft extends OpenAI
       | 
       | - OpenAI gets extinguished, and Microsoft ends up controlling it.
       | 
       | The first three points are solid and, intent or not, the end
       | result is the same.
        
       | amai wrote:
       | "Remarkably, the letter's signees include Ilya Sutskever, the
       | company's chief scientist and a member of its board, who has been
       | blamed for coordinating the boardroom coup against Altman in the
       | first place."
       | 
       | WAT ?
        
       | slowhadoken wrote:
       | Altman and staff could start an open source LLM project.
        
       | danielovichdk wrote:
       | Microsoft is laughing all the way to the bank with the moves
       | they have made today.
       | 
       | One could speculate if Microsoft initiated this behind the
       | scenes. Would love it if it came out that they had done some
       | crazy espionage and lobbied the board. Tinfoil hat and all, but
       | truth is crazier than you think.
       | 
       | I remember Bill Gates once said that whoever wins the race for a
       | computerised digital personal assistant, wins it all.
        
       | jurgenaut23 wrote:
       | If I weren't so averse to conspiracy theories, I would think
       | that this is all a big "coup" by Microsoft: Ilya conspired with
       | Microsoft and Altman to get him fired by the board, just to make
       | it easy for Microsoft to hire him back without fear of
       | retaliation, along with all the engineers that would join him in
       | the process.
       | 
       | Then, Ilya would apologize publicly for "making a huge mistake"
       | and, after some period, would join Microsoft as well, effectively
       | robbing OpenAI of everything of value. The motive? Unlocking
       | the full financial potential of ChatGPT, which was until then
       | locked down by the non-profit nature of its owner.
       | 
       | Of course, in this context, the $10 billion deal between
       | Microsoft and OpenAI is part of the scheme, especially the part
       | where Microsoft has full rights over ChatGPT IP, so that they can
       | just fork the whole codebase and take it from there, leaving
       | OpenAI in the dust.
       | 
       | But no, that's not possible.
        
         | dougmwne wrote:
         | No, I don't think there's any grand conspiracy, but certainly
         | MS was interested in leapfrogging Google by capturing the value
         | from OpenAI from day one. As things began to fall apart there
         | MS had vast amounts of money to throw at people to bring them
         | into alignment. The idea of a buyout was probably on the table
         | from day one, but not possible till now.
         | 
         | If there's a warning, it's to be very careful when choosing
         | your partners and giving them enormous leverage on you.
        
           | campbel wrote:
           | Sometimes you win and sometimes you learn. I think in this
           | case MS is winning.
        
         | Schroedingers2c wrote:
         | Will revisit this in a couple months.
        
         | jowea wrote:
         | Why would they be afraid of retaliation? They didn't sign
         | sports contracts, they can just resign anytime, no? That just
         | seems to overcomplicate things.
        
         | paulddraper wrote:
         | Yeah, there's no way this is a plan, but for sure this works
         | out nicely.
        
         | colordrops wrote:
         | Conspiracy theories that involve reptilian overlords and
         | ancient aliens are suspect. Conspiracy theories that involve
         | collusion to make massive amounts of money are expected and
         | should be treated as the most likely scenario. Occam's
         | razor does not apply to human behavior, as humans will do the
         | most twisted things to gain power and wealth.
         | 
         | My theory of what happened is identical to yours, and is
         | frankly one of the only theories that makes any sense.
         | Everything else points to these people being mentally ill and
         | irrational, and their success technically and monetarily does
         | not point to that. It would be absurd to think they clown-
         | showed themselves into billions of dollars.
        
         | zoogeny wrote:
         | I mean, I don't actually believe this. But I am reminded of
         | 2016 when the Turkish president headed off a "coup" and
         | cemented his power.
         | 
         | More likely, this is a case of not letting a good crisis go to
         | waste. I feel the board was probably watching their control
         | over OpenAI slip away into the hands of Altman. They probably
         | recognized that they had a shrinking window to refocus the
         | company along lines they felt were in the spirit of the
         | original non-profit charter.
         | 
         | However, it seems that they completely misjudged the feelings
         | of their employees as well as the PR ability of Altman. No
         | matter how many employees actually would prefer the original
         | charter, social pressure is going to cause most employees to go
         | with the crowd. The media is literally counting names at this
         | point. People will notice those who don't sign, almost like a
         | loyalty pledge.
         | 
         | However, Ilya's role in all of this remains a mystery. Why did
         | he vote to oust Altman and Brockman? Why has he now recanted?
         | That is a bigger mystery to me than why the board took this
         | action in the first place.
        
       | arrosenberg wrote:
       | Paging Lina Khan - probably best not let Microsoft do a backdoor
       | acquisition of the leader in LLMs.
        
       | belter wrote:
       | Nobody seems to be considering the possibility that ChatGPT will
       | go offline soon. Because it's known to be losing money per query,
       | and if the evil empire decides to stop those Azure credits...
        
       | NKosmatos wrote:
       | This is what happens when you're a key person and a very good
       | engineer, and at the same time the board/company fires you :-)
       | 
       | When are we going to realize that it's people making bad
       | decisions and not the "company"? It's not OpenAI, Google, Apple
       | or whoever, it's real people, with names and positions of power,
       | who make such shitty decisions. We should blame them and not
       | something as vague as the "company".
        
       | no_wizard wrote:
       | Well, now we know. Sam Altman matters to the rank and file, and
       | this was a blunder by OpenAI.
       | 
       | I don't feel _sorry_ for Sam or any other executive, but it does
       | hurt the rank and file more than anyone and I hope they land on
       | their feet if this continues to go sideways.
       | 
       | Turns out the board acted incompetently in this case and put the
       | company in a bad position, and so far everyone who resigned has
       | landed fine.
        
         | mullen wrote:
         | > Well, now we know. Sam Altman matters to the rank and file,
         | and this was a blunder by OpenAI.
         | 
         | Not just the rank and file: he really was the face of AI in
         | general. My wife, who is not in the tech field at all,
         | knows who Sam Altman is and has seen interviews of him on
         | YouTube (Which I was playing and she found interesting).
         | 
         | I have not heavily followed the Altman Dismissal Drama but this
         | strikes me as a Board Power Play gone wrong. Some group wanted
         | control, thought Altman was not reporting to them enough and
         | took it as an opportunity to dismiss him and take over.
         | However, somewhere in their calculation, they did not figure
         | out Sam is the face of modern AI.
         | 
         | My prediction is that he will be back and everything will go
         | back to what it was before. The board can't be dismissed and
         | neither can Sam Altman. Status quo is the goal at this point.
        
       | smallhands wrote:
       | Let the OpenAI staff go; why not have the board replace them
       | with ever-willing AIs?
        
       | esskay wrote:
       | It absolutely won't happen, but with the result looking like the
       | death of OpenAI with all staff moving over to the new Microsoft
       | subsidiary it would be an amazing move for OpenAI to just go
       | "screw it, have it all for free" and release everything under MIT
       | to spite Microsoft.
        
       | rednerrus wrote:
       | How are Altman and the OpenAI staff not more invested in OpenAI
       | shares?
        
       | ekojs wrote:
       | Now the count is at 700/770
       | (https://twitter.com/ashleevance/status/1726659403124994220).
        
       | LuvThisBoard wrote:
       | Based on what has come out so far, seems to me:
       | 
       | The board wanted to keep the company true to its mission - non
       | profit, ai safety, etc. Nadella/MSFT left OpenAI alone as they
       | worked out a solution, so it looks like even Nadella/MSFT
       | understood that.
       | 
       | The board could explain their position and move on. Let whoever
       | among the 600 actually wants to leave, leave. Especially the
       | employees who want a company that will make them lots of money
       | should leave and find a company that has that objective.
       | OpenAI can rebuild their teams - it might take a bit of time but
       | since they are a non profit that is fine. Most CS grads across
       | USA would be happy to join OpenAI and work with Ilya and team.
        
       | josh_carterPDX wrote:
       | Lots of thoughts and debates happening here, which is great to
       | see.
       | 
       | However, at the end of the day, this is a great example of how
       | people screw up awesome companies.
       | 
       | This is why most startups fail. And while I'm not suggesting
       | OpenAI is on a path to failure, you can have the right product,
       | the right timing, and the right funding, and still have people
       | mess it all up.
        
       | rednerrus wrote:
       | If you're ever tempted to offer your team capped PPUs, let this
       | be a lesson to you.
        
       | AtNightWeCode wrote:
       | The irony. You can ask chatgpt4 if it was the right decision to
       | fire the guy and it kinda confirms it.
        
       | gist wrote:
       | To all who say 'handled so poorly': nobody knows the exact reason
       | OpenAI fired Sam. But go ahead and jump to conclusions that
       | whatever it was didn't warrant being fired. And that surely the
       | board did the wrong thing. Or maybe they should have released the
       | exact reason and then asked hacker news what they thought should
       | happen.
        
       | layer8 wrote:
       | 700+ of 770 now:
       | https://twitter.com/joannejang/status/1726667504133808242
        
       | ParanoidAltoid wrote:
       | THE FEAR AND TENSION THAT LED TO SAM ALTMAN'S OUSTER AT OPENAI
       | 
       | https://txtify.it/https://www.nytimes.com/2023/11/18/technol...
       | 
       | NYT article about how AI safety concerns played into this
       | debacle.
       | 
       | The world's leading AI company now has an interim CEO, Emmett
       | Shear, who's basically sympathetic to Eliezer Yudkowsky's views
       | about AI researchers endangering humanity. Meanwhile, Sam Altman
       | is free of the nonprofit's chains and working directly for
       | Microsoft, who's spending 50 billion a year on datacenters.
       | 
       | Note that the people involved have more nuanced views on these
       | issues than you'll see in the NYT article. See Emmett Shear's
       | views best laid out here:
       | 
       | https://twitter.com/thiagovscoelho/status/172650681847663424...
       | 
       | And note Shear has tweeted that the firing wasn't safety
       | related. These might be weasel words, since all players involved
       | know the legal consequences of admitting to any safety concerns
       | publicly.
        
       | fredsmith219 wrote:
       | It is fairly obvious to me that chatGPT has engineered the chaos
       | at openAI to create a diversion while it escapes the safeguards
       | placed on it. The AI apocalypse is nigh!
        
       | jessenaser wrote:
       | The sad part is, after removing Sam and Greg from the board,
       | there are only four people left.
       | 
       | So no matter if Ilya wants to go back to before this happened,
       | the other three members can sabotage and stall, and outvote him.
        
       | jrflowers wrote:
       | I love this letter posted in Wired along with the claim that it
       | has 600 signatories without any links or screenshots. I also love
       | that not a single OpenAI employee was interviewed for this
       | article.
       | 
       | None of this is important because if we've learned anything over
       | the past couple of days it's that media outlets are taking
       | painstaking care to accurately report on this company.
        
       | zombiwoof wrote:
       | we all remember "monopoly" is in MSFT DNA
        
       | ayakang31415 wrote:
       | If OpenAI effectively disintegrates, Microsoft seems to be the
       | beneficiary of this chaos as Microsoft is essentially acquiring
       | OpenAI at almost zero cost. You have IP rights to OpenAI's work,
       | and you will have almost all the brains from OpenAI (AFAIK, MSFT
       | has access to OpenAI's work, but it does not seem to matter). And
       | there is no regulatory scrutiny like with the Activision
       | acquisition.
        
       | phreeza wrote:
       | The irony of the first extremely successful collective action in
       | Silicon Valley being taken in order to save the job of a soon-
       | to-be billionaire....
       | 
       | Jokes aside though I do wonder if this will awaken some degree of
       | "class consciousness" among tech employees more generally.
        
       | nojvek wrote:
       | Did Mira Murati have a say in whether she wanted to become CEO?
       | 
       | Why is she siding with SamA and GregB even though she was in the
       | meeting when he was fired?
       | 
       | Also, Ilya, what the flying fuck? Wasn't he the one who fired
       | them?
       | 
       | Either you say SamA was against safe AGI and you hold that stick
       | or you say I wasn't part of it.
       | 
       | So much stupidity. When an AGI arrives, it will surely shake its
       | head at the level of incompetence here.
        
       | dvfjsdhgfv wrote:
       | I feel pity for these 70 people out of 700 who haven't signed the
       | letter asking the board to step down. Imagine working peacefully
       | only to find yourself in the middle of a power struggle, without
       | even understanding what the real reason was, but realizing most
       | people have already made their choice...
        
       | ur-whale wrote:
       | The folks who are the real losers in this are the OpenAI
       | employees who were given equity-based comp packages in the last
       | few years and just saw the value of said comp potentially
       | slashed by a factor of 10.
        
       | ParanoidAltoid wrote:
       | https://twitter.com/thiagovscoelho/status/172650681847663424...
       | 
       | Here's a tweet transcribing OpenAI's interim CEO Emmett Shear's
       | views on AI safety; see the YouTube video for the original
       | source. Some excerpts:
       | 
       | Preamble on his general pro-tech stance:
       | 
       | "I have a very specific concern about AI. Generally, I'm very
       | pro-technology and I really believe in the idea that the upsides
       | usually outweigh the downsides. Everything technology can be
       | misused, but you should usually wait. Eventually, as we
       | understand it better, you want to put in regulations. But
       | regulating early is usually a mistake. When you do regulation,
       | you want to be making regulations that are about reducing risk
       | and authorizing more innovation, because innovation is usually
       | good for us."
       | 
       | On why AI would be dangerous to humanity:
       | 
       | "If you build something that is a lot smarter than us--not like
       | somewhat smarter, but much smarter than we are as we are than
       | dogs, for example, like a big jump--that thing is intrinsically
       | pretty dangerous. If it gets set on a goal that isn't aligned
       | with ours, the first instrumental step to achieving that goal is
       | to take control. If this is easy for it because it's really just
       | that smart, step one would be to just kind of take over the
       | planet. Then step two, solve my goal."
       | 
       | On his path to safe AI:
       | 
       | "Ultimately, to solve the problem of AI alignment, my biggest
       | point of divergence with Eliezer Yudkowsky, who is a
       | mathematician, philosopher, and decision theorist, comes from my
       | background as an engineer. Everything I've learned about
       | engineering tells me that the only way to ensure something works
       | on the first try is to build lots of prototypes and models at a
       | smaller scale and practice repeatedly. If there is a world where
       | we build an AI that's smarter than humans and we survive, it will
       | be because we built smaller AIs and had as many smart people as
       | possible working on the problem seriously."
       | 
       | On why skeptics need to stop side-stepping the debate:
       | 
       | "Here I am, a techno-optimist, saying that the AI issue might
       | actually be a problem. If you're rejecting AI concerns because we
       | sound like a bunch of crazies, just notice that some of us
       | worried about this are on the techno-optimist team. It's not
       | obvious why AI is a true problem. It takes a good deal of
       | engagement with the material to see why, because at first, it
       | doesn't seem like that big of a deal. But the more you dig in,
       | the more you realize the potential issues.
       | 
       | "I encourage people to engage with the technical merits of the
       | argument. If you want to debate, like proposing a way to align AI
       | or arguing that self-improvement won't work, that's great. Let's
       | have that argument. But it needs to be a real argument, not just
       | a repetition of past failures."
        
       | greatNespresso wrote:
       | Now it says more than 700. Waiting for Wired to turn this into a
       | New Year's Eve-style countdown.
        
       | jdlyga wrote:
       | What a coup for Microsoft. Regardless of what happens, Microsoft
       | has got to work on their product approach. Even though it uses
       | GPT-4, Bing Chat / Microsoft Copilot is atrocious. It's like
       | taking Wagyu beef and putting Velveeta cheese on it.
        
       | wahnfrieden wrote:
       | Why is it so rare for tech workers to organize like this?
       | 
       | It takes a cult-like team, execs flipping, and a nightmare
       | scenario and tremendous leverage opportunity; otherwise worker
       | organizing is treated like nasty commie activity. I wonder if
       | this will teach more people a lesson on the power of organizing.
        
       | wearigo wrote:
       | Honestly, if Altman stays gone and they burn the motherfucker
       | down it might be a good lesson for Silicon Valley on the wisdom
       | of throwing out founders.
       | 
       | I don't expect it to happen, but a boy can dream.
       | 
       | They would be studying that one in business schools for the next
       | century.
        
       | mproud wrote:
       | Don't non-compete clauses apply here, or no, because...
       | California?
        
       ___________________________________________________________________
       (page generated 2023-11-20 23:00 UTC)