[HN Gopher] OpenAI Boardroom Battle: Safety First
___________________________________________________________________
OpenAI Boardroom Battle: Safety First
Author : skmurphy
Score : 23 points
Date : 2023-11-19 22:00 UTC (1 hour ago)
(HTM) web link (www.fabricatedknowledge.com)
(TXT) w3m dump (www.fabricatedknowledge.com)
| skmurphy wrote:
| Thoughtful analysis of the dynamics in play at OpenAI that may
| have led to Sam Altman's firing and will likely result in his
| reinstatement.
|
| Key take-aways (excerpts from article):
|
| "The board made a blunder. OpenAI's employees will likely get
| their CEO back by Monday, and Satya Nadella's 10 billion dollars
| in Azure credits will have some vote in the future of OpenAI.
|
| What's clear is that the board is grossly mismanaged. A non-
| profit board should not be running a critically important company
| like OpenAI. Just look at the turnover and lack of transparency
| on re-election.
|
| I think that Ilya [Sutskever] will leave OpenAI when Sam is
| reinstated. He has to be the player who initiated the power play.
| The entire current board will leave, and a new board with fewer
| AI safety people (sadly) will be reinstated.
|
| I would not be surprised to see the OpenAI charity and capped-
| profit structure flipped, with a formal board at the GP that
| becomes the real locus of power.
|
| The boardroom move was amateur and sudden. And as much as boards
| have technical legal power, so do the organizations they rule.
| It's all a construct, and the people of OpenAI will get their
| way. And hopefully, a better governance structure.
|
| I believe that the longer-term problem of safe AI is important,
| but you don't do that with sudden shakeups of the founder and an
| exodus of half of the employees.
|
| OpenAI has been doing something special for a long time, beating
| the likes of better-funded research organizations like Google or
| Microsoft. It's probably in everyone's best interest to keep the
| team together."
| jjoonathan wrote:
| > A non-profit board should not be running a critically
| important company like OpenAI
|
| A for-profit board should not be running a critically important
| company like OpenAI.
| russellbeattie wrote:
| We'll see. Apparently they're meeting right now at OpenAI and
| they made Sam use a guest badge. The level of pettiness is
| mind-blowing.
|
| https://twitter.com/sama/status/1726345564059832609
| layer8 wrote:
| What badge do you suggest they should have given him?
| lucubratory wrote:
| >What's clear is that the board is grossly mismanaged. A non-
| profit board should not be running a critically important
| company like OpenAI. Just look at the turnover and lack of
| transparency on re-election.
|
| This is an _insane_ thing to say. It's an argument against
| OpenAI existing at all, and the alternatives (for-profit boards
| and a military research project) are both much, much worse.
| Davidzheng wrote:
| Strange to see these stated as if they were clear facts. OpenAI
| was founded with the goal of AGI in mind. That's why they chose
| this structure: so that the corporate side would not have
| nearly unbounded power once the technology matures and total
| revenue becomes a sizable proportion of GDP. Weird and
| idealistic? Maybe, but also possibly correct. If the only way
| to get there is through raw capitalism, society is in for a big
| shock. Bigger than it would be under the original model.
| j2bax wrote:
| Let's be honest... the cat is out of the bag and now it's a
| race. If OpenAI tries to regulate itself too much, it will
| eventually be eclipsed by another player that only worries
| about government regulations and possibly just accepts the
| fines, like many capitalists do. I say buckle up and get ready
| for some massive disruption! Better for the US (at least for US
| citizens) to lead the way and maintain the power of AGI than
| the many other alternatives... let's start working on the
| post-capitalist society!
| zarzavat wrote:
| Why would the board agree to all that? If they are worried
| about the direction Altman is taking OpenAI, then surely the
| price for Altman's return is a strengthening of the current
| structure and some guarantees from Altman to slow down, not a
| weakening and hollowing out of the structure.
| chatmasta wrote:
| > Why would the board agree to all that?
|
| Because if they don't agree, then all their best employees,
| and probably also their Azure cloud credits, will leave to
| join Sam and Greg at a new company picking up right where
| they left off.
| akomtu wrote:
| A soap opera for nerds.
| borissk wrote:
| One way or another, the current OpenAI team will split into two
| teams - one headed by Sam and focused on making money, and
| another headed by Ilya and focused on creating a safe AGI.
| ren_engineer wrote:
| pretty sure that's the reason Anthropic exists, so Ilya would
| probably just go there. This isn't the first clash over AI
| safety Altman has caused.
| sgift wrote:
| Only if we define "safe" as "controlled by Ilya and/or some
| clique of 'great men'". But at least they are good at PR. After
| all, who could argue against safety?
|
| And no, I also don't believe that Sam A. has my best interests
| at heart, but at least he doesn't hide behind "safety" to
| pretend otherwise.
| naveen99 wrote:
| If Ilya wants to fight, he will probably just hire Wachtell,
| Lipton, Rosen & Katz. The teams on both sides will be lawyers,
| not VCs or engineers.
___________________________________________________________________
(page generated 2023-11-19 23:01 UTC)