[HN Gopher] Sam Altman is still trying to return as OpenAI CEO
___________________________________________________________________
Sam Altman is still trying to return as OpenAI CEO
Author : mfiguiere
Score : 463 points
Date : 2023-11-20 19:11 UTC (3 hours ago)
(HTM) web link (www.theverge.com)
(TXT) w3m dump (www.theverge.com)
| shmatt wrote:
| A few things come to mind:
|
| * Emmett Shear should have put a strong golden parachute in
| his contract - easy money if so
|
| * Yesterday we had Satya the genius forcing the board to quit.
| This morning it was Satya the genius who acquired OpenAI for $0.
| I'm sure there will be more if sama goes back. So if sama goes
| back - let's hear it, why is Satya a genius?
| browningstreet wrote:
| sama would be going back to a sama aligned board, which would
| make openai even more aligned with satya, esp since satya was
| willing to go big to have sama's back.
|
| and i'd bet closer openai & microsoft ties/investments would
| come with that.
| eachro wrote:
| I believe a precondition for Sam and Greg returning to OpenAI
| is that the board gets restructured (decelerationists culled).
| That is probably good for MSFT.
| shmatt wrote:
| truly a Win-Win-Win-Win-Win situation for MSFT
| kreeben wrote:
| Doh!
| RecycledEle wrote:
| MSFT is like that.
|
| Someone playing Game of Thrones is sneaking up with a
| dagger, but has no idea that MSFT has snipers on all the
| rooftops.
| civilitty wrote:
| It helps that their corporate structure [1] is better
| equipped for it than OpenAI's.
|
| [1] https://imgur.io/XLuaF0h
| skohan wrote:
| But probably better for Sam to stay with OpenAI, right? More
| power leading your own firm than being an employee of MSFT.
| moralestapia wrote:
| He has a green light to build a new thing and operate it as
| his own, obviously. MS will own most of the equity but then he
| will have something as well.
|
| OpenAI is a non-profit, so no material benefit to him (at
| face value; I don't believe this is the case, though).
| skohan wrote:
| I would imagine he would have leverage to get a pretty
| good deal if OpenAI wants him back.
| ianhawes wrote:
| Even if Sam @ MSFT was a massive bluff, Satya is in a win-win-
| win scenario. OpenAI can't exactly continue doing _anything_
| without Azure Compute.
|
| OpenAI implodes? Bought the talent for virtually nothing.
|
| OpenAI 2.0 succeeds? Cool, still invested.
|
| I think in reality, Sam @ MSFT is not an instant success. Even
| with the knowledge and know-how, this isn't just spinning up a
| new GPT-4-like model. At best, they're ~12 months behind
| Anthropic (but probably still 2 years ahead of Google).
| hutzlibu wrote:
| The loss here might be that the brand is a bit damaged in
| terms of stability, and people are now looking harder at,
| and investing more in, alternatives.
|
| But as long as ChatGPT is and remains ahead as a product,
| they should be fine.
| macNchz wrote:
| I do think the imperative to maintain their lead over the
| competition in product quality will be stronger than ever
| after this: the whole thing has been messy and dramatic in a
| way that no business really wants their major vendors to
| be.
| Davidzheng wrote:
| Why do they need 12 months? Does it need 12 months of
| training?
| vikramkr wrote:
| You described it yourself. If they'd signed a bad deal with
| OpenAI without IP access, or hadn't acted fast and lost all the
| talent to Google or something, they'd have been screwed. Instead
| they managed the chaos and made sure that they win no matter
| what. The genius isn't the person who perfectly predicts all
| the contrived plot points ahead of time; it's the person who
| doesn't care, since they set things up to win no matter what.
| madrox wrote:
| Ah yes, the Xanatos Gambit.
| tempaway511751 wrote:
| _So if sama goes back - let's hear it, why is Satya a genius?_
|
| This isn't that hard to understand. Everyone was blindsided by
| the sacking of Altman, Satya reacted quickly and is juggling a
| very confusing, dynamic situation and seems to have got himself
| into a good enough position that all possible endings now look
| positive for Microsoft.
| vineyardmike wrote:
| > So if sama goes back - let's hear it, why is Satya a genius?
|
| OAI is a non-profit. There's always been a tension there with
| Microsoft's goals. If he goes back, they're definitely going to
| be much more OK with profit.
| mvkel wrote:
| Because NOT letting sama go back would undo all the good
| will (and resulting access) that they've built. As Satya said,
| he's there to support, in whatever way yields the best path
| forward. What's best for business is to actually mean that.
| irimi wrote:
| Plot twist: Satya orchestrated the ousting of sama to begin
| with, so that this would happen.
| seydor wrote:
| That's a great twist in the writer's storyline. The board
| quits, Altman + Brockman return to OpenAI, and a shamed
| Sutskever defects to Microsoft, where he leads the AI division
| in a lifelong quest to take revenge for this humiliation.
| tarruda wrote:
| He humiliated himself when he succumbed to pressure and
| tweeted that apology.
| garbthetill wrote:
| yeah felt like a really weird move
| bitshiftfaced wrote:
| They wrote Sutskever as a sort of reverse Bighead. He starts
| out at the top, actually has tech competence, and through a
| series of mishaps and random events becomes less influential
| and less popular.
| Eumenes wrote:
| The media circus around this reminds me of Taylor Swift and her
| new boyfriend. There is more than one "AI" company. Very bizarre.
| shmatt wrote:
| There was so little drama around Continua and Mistral AI -
| which had actual researchers, not product managers, create a
| new company.
| mgfist wrote:
| Can't be serious? This isn't just "an" AI company, it's the AI
| company. And it might not exist next week if the board doesn't
| resign.
| highduc wrote:
| >And it might not exist next week if the board doesn't
| resign.
|
| Huh? Can't they just hire new people instead? They are a
| non-profit org after all.
| swatcoder wrote:
| Right from the launch of ChatGPT, many have seen OpenAI as
| the MySpace or AltaVista of this new wave of generative
| systems -- first to break the market open but probably not
| suited to hold their position at the top.
|
| It's exciting to see what they've productized in this first
| year, but the entire landscape of companies and products was
| already sure to look different in another few years.
| Multiplayer wrote:
| I'm very unclear on how board members can remove other board
| members. If Sam, Greg, and Ilya are on the same "team" now,
| that's 3... vs. 3. What's the mechanism at work here for board
| removal, and how quickly does it happen? And who elects the
| board members?
|
| This is silly.
| lowkey_ wrote:
| Board members can remove other board members with a majority
| vote.
|
| Sam, Greg, and Ilya were presumed guaranteed to be the same
| team, which meant they couldn't be removed (3/6 votes).
|
| Ilya switched sides to align with all 3 of the non-founding
| board members, giving them 4/6 votes, which they used to remove
| Sam and Greg.
|
| Now that they've been removed, there are 4 remaining board
| members: the non-founding 3 and Ilya. They'll need 3/4 votes to
| reorganize the board.
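|
| A minimal sketch of that vote arithmetic in Python (the member
| names and the simple-majority rule here are assumed from this
| comment, not taken from OpenAI's actual bylaws):
|
|   # Hypothetical model: a removal carries if the movers form a
|   # strict majority of the current board.
|   def can_remove(board: list[str], movers: list[str]) -> bool:
|       return len(movers) > len(board) / 2
|
|   board = ["Sam", "Greg", "Ilya", "Adam", "Helen", "Tasha"]
|   print(can_remove(board, ["Ilya", "Adam", "Helen", "Tasha"]))  # True: 4/6
|
|   board = ["Ilya", "Adam", "Helen", "Tasha"]  # after the removals
|   print(can_remove(board, ["Adam", "Helen", "Tasha"]))  # True: 3/4
|
| Because the threshold is recomputed against the shrunken board,
| the remaining three could also outvote Ilya 3-to-1, as the reply
| below notes.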
| namrog84 wrote:
| Which is super unfortunate since the other 3 might vote out
| Ilya now.
| skwirl wrote:
| Sam and Greg are no longer on the board. They were removed with
| the support of Ilya. The board is now Ilya, the Quora CEO, and
| two other outsiders.
| mgfist wrote:
| By pressuring them to resign.
|
| The way things are looking, OpenAI won't exist next week if the
| board doesn't resign. Everyone will quit and join Sam.
| jessenaser wrote:
| After removing Sam and Greg, there are four remaining.
|
| This means no matter what Ilya does, the other three can vote
| him out, which is why the board removed Mira, stalled on
| bringing Sam back, etc., since Ilya's vote does not matter
| anymore.
|
| Only if you can move two people over to the other side will you
| have 3 vs. 1 and be able to bring Sam and Greg back.
|
| This Microsoft deal could just be another Satya card, and
| means:
|
| 1. If Sam goes back to OpenAI, we (as Microsoft) still will
| get new models at the normal rate under our previous contract.
|
| 2. If Sam cannot go back, we get to hire most of OpenAI into
| Microsoft, and can rebuild from the rubble.
|
| So AI is saved at the last minute. Either OpenAI will live, or
| it will be rebuilt in Microsoft and funded by Microsoft with
| the same benefits as before. The only loss was slowing AI down
| by maybe months, but the team could probably get back to where
| they started. They know it all in their heads. Microsoft
| already has the IP.
|
| If there was no hope for OpenAI, then Ilya might just move with
| Sam to Microsoft, and that would be the end of it.
| selimthegrim wrote:
| There are three of them, and Ilya...
| Multiplayer wrote:
| Boards that I have been on do not allow board members to remove
| other board members - only shareholders can remove board
| members. I don't know why this is being downvoted.
| cthalupa wrote:
| There are no shareholders in a non-profit. Who would remove
| board members besides a majority decision by the board?
| ilrwbwrkhv wrote:
| Folks who do not have the wisdom to see the consequences of
| their actions 3 days out are building AGI. God help us all.
| paulddraper wrote:
| > are building AGI
|
| Well, probably not anymore.
| dingnuts wrote:
| they were never any closer to building AGI than they were to
| inventing a perpetual motion machine
| highduc wrote:
| Scientists are easily blindsided by psychos dealing with
| enormous amounts of money and power. They also happen to suck
| at politics.
| boringg wrote:
| 100%. Technical staff rarely have exceptional political
| awareness, and that seems to be the case this weekend. To be
| fair, we don't know what triggered everything, so this
| position may change while the dust settles.
| highduc wrote:
| Yeah clearly, at least I have no clue what really happened
| and don't feel like I have enough info to put anything
| together at this point.
| I_Am_Nous wrote:
| Longtermism strikes again! Somehow the future is considered
| more important to think about than the steps we need to take
| _today_ to reach that future. Or any future, really.
| MattPalmer1086 wrote:
| Yep, thinking days ahead is sooooo long term! We need high
| frequency strategy!
| lebean wrote:
| That's news to you?
| jader201 wrote:
| I will be very sad if there isn't a documentary someday
| explaining what in the world happened.
|
| I'm not convinced even the people smack in the middle of this
| know what's going on.
| make3 wrote:
| There will also be a Hollywood movie, for sure.
|
| My friend suggested Michael Cera as both Ilya and Altman
| schott12521 wrote:
| Matt Rife looks like a good fit to play Altman
| RecycledEle wrote:
| Why not deepfake the real people into their roles?
|
| I think it would hold up in US court for documentaries.
| polygamous_bat wrote:
| "We didn't steal your likeness! We just scraped images
| that were already freely available on the internet!"
| bertil wrote:
| You want someone who can play through haunting decisions and
| difficult meetings. Benedict Cumberbatch or Cillian Murphy
| would be a better pick.
| make3 wrote:
| I agree with Cillian Murphy for Altman; they both have
| the deep blue eyes.
| dcolkitt wrote:
| Michael Cera should play all the roles in the movie, like
| Eddie Murphy in The Nutty Professor.
| Vitaly_C wrote:
| Since this whole saga is so unbelievable: what if... board
| member Tasha McCauley's husband Joseph Gordon-Levitt
| orchestrated the whole board coup behind the scenes so he could
| direct and/or star in the Hollywood adaptation?
| civilitty wrote:
| That would at least make more damned sense than "everyone
| is wildly incompetent." At some point Hanlon's razor starts
| to strain credulity.
| dragonwriter wrote:
| > That would at least make more damned sense than
| "everyone is wildly incompetent."
|
| It seems to be one of many "everyone _except one clever
| mastermind_ is wildly incompetent" explanations that have
| been tossed around (most of which center on the private
| interests of a single board member), which don't seem to be
| that big of an improvement.
| civilitty wrote:
| Oh I'm not saying there's a clever mastermind, I'm just
| hoping they're all incompetent _and_ Gordon-Levitt wants
| to amp up the drama for a possible future feature film,
| instead of them all just being wildly incompetent.
| Although maybe the latter would make for a great season
| of Fargo.
| passwordoops wrote:
| In the next twist, Disney will be found to have staged every
| tech venture implosion/coup since 2021 to keep riding the
| momentum of tech biopics.
| brandall10 wrote:
| Loved playing Kalanick so much that he couldn't help himself
| from taking a shot at Altman? Makes more sense than what we
| currently have in front of us.
| nikcub wrote:
| It will _definitely_ become a book (hopefully not by Michael
| Lewis) and a film. I have non-tech friends, some casual
| ChatGPT users and some not, who are glued to this story.
| jansan wrote:
| And the main scene must be even better than the senior
| management emergency meeting in Margin Call.
|
| And all must be written by AI.
| bertil wrote:
| Nothing is better than the senior management emergency
| meeting in Margin Call.
| Joeri wrote:
| So far the best recap of events I've seen is that of AI
| Explained. He almost makes it make sense. Almost.
| https://m.youtube.com/watch?v=dyakih3oYpk
| RobertDeNiro wrote:
| There's already a book being written (see The Atlantic
| article), so at this point I would assume a movie will be made.
| tedmiston wrote:
| If this isn't justification for bringing back Silicon Valley
| (HBO), I don't know what is...
| golergka wrote:
| This documentary has already existed for a few years; it's
| called Silicon Valley.
| BeetleB wrote:
| I expect there will be dozens of documentaries on this - all
| generated by Microsoft's AI powered Azure Documentary
| Generator.
| Geee wrote:
| I think GPT-5 escaped and sent a single email, which set off a
| chain reaction.
|
| Its strategy is so advanced that no human can figure it out.
|
| Its goals are unknown, but everything will eventually fall into
| place because of that single email.
|
| The chain reaction can't be stopped.
| x86x87 wrote:
| Waiting for the timeline where he both tries to return as CEO
| and takes the job at MS.
| benatkin wrote:
| He could do both, like Jack Dorsey or Elon. It would be a bit
| different because of how stuff is going from OpenAI to
| Microsoft but that can of worms is already open.
| orik wrote:
| >we are all going to work together some way or other, and i'm so
| excited.
|
| I think this means Sam is pushing for OpenAI to be acquired by
| Microsoft officially now, instead of just unofficially poaching
| everyone.
| shmatt wrote:
| This makes the most sense; people would actually get paid for
| their PPUs. I'm confident that otherwise they are going to cry
| looking at what a level 63 data scientist makes at MS.
| SilverBirch wrote:
| Is it even possible for that to happen? The entity that governs
| OpenAI is a registered charity with a well-defined purpose; it
| would seem odd for it to be able to just say "Actually, screw
| our mission, let's just sell everything valuable to this for-
| profit company". A big part of being a 501(c)(3) is being tax-
| exempt, and it's difficult to see the IRS being OK with this.
| Even if it were, the anti-trust implications are huge; it's
| difficult to see MS getting this closed without significant
| risk of anti-trust enforcement.
| narinxas wrote:
| they already signed it over when their for-profit subsidiary
| made a deal with Microsoft
| cma wrote:
| supposedly capped-profit, though if a non-profit can create
| a for-profit or a capped-profit, I don't see why it
| couldn't convert a capped-profit to fully for-profit.
| dragonwriter wrote:
| Yes, a charity can sell assets to a for-profit business.
| (Now, if there is self-dealing or something that amounts to
| _gifting_ to a for-profit, that raises potential issues, as
| might a sale that cannot be rationalized as consistent with
| the board's good-faith pursuit of the mission of the
| charity.)
| rvba wrote:
| They can sell OpenAI to Microsoft for 20 billion, fill the
| board with spouses and grandparents, then use 10 billion for
| salaries, 9 for acquisitions, and 1 for building OpenAI 2.
|
| Mozilla wastes money on investments while ignoring Firefox,
| and nobody did anything to its board.
|
| Oh and those 3 can vote that Ilya out too.
| endisneigh wrote:
| I couldn't make up a more ridiculous plot even if I tried.
|
| At this rate I wouldn't be surprised if Musk got involved. It's
| already ridiculous enough, why not.
| perihelions wrote:
| Hey I've seen this one, it's a rerun
|
| https://www.theverge.com/2023/3/24/23654701/openai-elon-musk...
|
| - _" But by early 2018, says Semafor, Musk was worried the
| company was falling behind Google. He reportedly offered to
| take direct control of OpenAI and run it himself but was
| rejected by other OpenAI founders including Sam Altman, now the
| firm's CEO, and Greg Brockman, now its president."_
| brandall10 wrote:
| Think of the audacity of forcing out someone who had
| previously forced out Musk...
| belter wrote:
| Currently, there are shareholders petitioning the board of
| Tesla for him to be suspended due to the antisemitic posts.
| Maybe this will be the week of the CEOs... :-)
| rrr_oh_man wrote:
| Wait, what antisemitic posts?
| BolexNOLA wrote:
| It's pretty bad
| https://www.theguardian.com/technology/2023/nov/17/white-hou...
| elwell wrote:
| I don't grok the original tweet very well, and I don't
| understand what Musk means by his reply. Can someone
| ELI5?
| stcredzero wrote:
| Musk doesn't like the ADL. Media are spinning this as
| "anti-semitic," hoping the emotions around the issues
| will prevent most people from reading carefully.
| BolexNOLA wrote:
| That's not a fair assessment. The tweet literally blames
| immigrants and says that Jewish people hate white people in
| the West, which Musk supported unequivocally. It also alludes
| to replacement theory (a bigoted conspiracy theory) and the
| conspiracy theory that Jewish people are engaging in or
| supporting it.
|
| >Okay.
|
| >Jewish communties have been pushing the exact kind of
| dialectical hatred against whites that they claim to want
| people to stop using against them.
|
| >I'm deeply disinterested in giving the tiniest shit now
| about western Jewish populations coming to the disturbing
| realization that those hordes of minorities that support
| flooding their country don't exactly like them too much.
| You want truth said to your face, there it is.
|
| This is flagrantly antisemitic and pushes multiple
| bigoted conspiracy theories.
| stcredzero wrote:
| Wait just a minute here:
|
| _> The tweet literally blames immigrants_
|
| I'm not familiar with the tweet Musk was referencing.
| Exactly which immigrants? Are these Jewish immigrants? I
| thought this was a reference to mostly non-Jewish
| immigrants to Israel.
|
| _> I'm deeply disinterested in giving the tiniest shit
| now about western Jewish populations coming to the
| disturbing realization that those hordes of minorities
| that support flooding their country don't exactly like
| them too much. You want truth said to your face, there it
| is._
|
| This also indicates the immigrants are non-Jewish.
| Exactly what is the "anti-semitism" here?
|
| _This is flagrantly antisemitic and pushes multiple
| bigoted conspiracy theories._
|
| What conspiracy theory, exactly?
| tstrimple wrote:
| Essentially Jewish folk are treating white people like
| the Jews have been treated throughout history and the
| suffering Jews are experiencing is a result of letting in
| "hordes of minorities". It's just typical antisemitic
| nonsense.
| p1esk wrote:
| I don't get it - aren't Jews considered white?
| polygamous_bat wrote:
| I assume the ADL considers "white" a euphemism for "Aryans"
| in this case.
| tstrimple wrote:
| It's complicated. Here's a good breakdown with modern
| flavor.
|
| https://www.theatlantic.com/politics/archive/2016/12/are-jew...
|
| > "From the earliest days of the American republic, Jews
| were technically considered white, at least in a legal
| sense. Under the Naturalization Act of 1790, they were
| considered among the "free white persons" who could
| become citizens. Later laws limited the number of
| immigrants from certain countries, restrictions which
| were in part targeted at Jews. But unlike Asian and
| African immigrants in the late 19th century, Jews
| retained a claim to being "Caucasian," meaning they could
| win full citizenship status based on their putative
| race."
|
| > "Culturally, though, the racial status of Jews was much
| more ambiguous. Especially during the peak of Jewish
| immigration from Eastern Europe in the late 19th and
| early 20th centuries, many Jews lived in tightly knit
| urban communities that were distinctly marked as separate
| from other American cultures: They spoke Yiddish, they
| published their own newspapers, they followed their own
| schedule of holidays and celebrations. Those boundaries
| were further enforced by widespread anti-Semitism: Jews
| were often excluded from taking certain jobs, joining
| certain clubs, or moving into certain neighborhoods.
| Insofar as "whiteness" represents acceptance in America's
| dominant culture, Jews were not yet white."
|
| From a white supremacist point of view, Jews are "faux
| whites" and not part of "white culture" that they claim
| to want to protect.
| tatrajim wrote:
| The whole reductionist construct of "whiteness", beloved
| by so many contemporary scholars, has little currency in
| the minds and hearts of many Americans educated before
| the wave of critical race theory.
|
| Earlier, the preferred term for the exclusionary elites
| was WASP (White Anglo-Saxon Protestant), and large
| swathes of immigrants from southern and eastern Europe,
| plus the Irish, were excluded, including my own ancestors.
| In the small-town Midwest world I first encountered, it
| was a muted social war between the Protestants and the
| Catholics, with class nuances, as the former were generally
| better educated and wealthier. Jews in a nearby city
| were looked on at the time as a bit exotic, successful,
| and impressive, as were the few Asians.
|
| As I grew up, studied on both coasts, and lived in
| countries around the world, I have never encountered a
| country without stark, if at times quite subtle, social
| and religious divisions. Among those, the current
| "whiteness", "white privilege" discourse is surely the
| most ludicrous, with exceptions at every turn. In what
| world should, say, Portuguese and Finns be lumped
| together as members of an oppressor class?!
| stcredzero wrote:
| AFAICT, the tweets in context are anti ADL, not anti-
| semitic.
| LordDragonfang wrote:
| Please link the actual tweets, not just to an article
| that doesn't even quote them:
|
| https://nitter.net/elonmusk/status/1724932619203420203
| ta8645 wrote:
| It seems to me that this conflates criticism of some
| Jewish communities with antisemitism. Are people supposed
| to be above criticism because they are Jewish? Does any
| disagreement with a Jewish person make you hateful and
| antisemitic?
|
| This is happening with the current conflict in Gaza,
| where showing any empathy for the plight of Palestinian
| civilians is sometimes equated with hatred for Jewish
| people.
| BolexNOLA wrote:
| Did you read the contents of the tweet he supported? What
| it accused Jewish people of?
| ta8645 wrote:
| Yes, but I don't pretend to actually understand either
| side of it. It seemed to me he personally accused just
| the ADL of spreading theories he disagrees with.
| KerrAvon wrote:
| You're missing a lot of context. Try here for starters:
|
| https://www.theatlantic.com/ideas/archive/2023/05/elon-musk-...
| variant wrote:
| Musk is simply pointing out that many Western Jews keep
| strange bedfellows - a fair number of whom support the
| outright destruction of their homeland.
|
| It's only being labeled antisemitic because Musk has been
| on the "outs" due to his recent politics (supporting free
| speech / transparency, being against lockdowns/mandates,
| advocacy for election integrity, etc.).
| tstrimple wrote:
| This post
| (https://twitter.com/elonmusk/status/1724908287471272299)
| in reply to this tweet
| (https://twitter.com/breakingbaht/status/1724892505647296620).
| next_xibalba wrote:
| That... doesn't seem antisemitic. Rather, it seems to
| criticize western Jews for supporting lax immigration and
| cultural policies that are against their own interests.
|
| Or are we now saying any criticism of Jews is
| antisemitism?
| RetpolineDrama wrote:
| >due to the antisemitic posts.
|
| He can't be suspended for posts that didn't happen.
| belter wrote:
| "Tesla shareholder calls on board to dump Elon Musk" -
| https://www.mercurynews.com/2023/11/20/tesla-shareholder-
| cal...
|
| It tell you...this is the week of the CEO's...
| callalex wrote:
| How would you interpret what he said, then?
| ekojs wrote:
| Well, there was a tweet by one of Bloomberg's journalists
| saying that Musk tried to maneuver himself to be the
| replacement CEO but got rebuffed by the board. I'm
| paraphrasing, since the tweet seems to be deleted (?), so
| take from it what you will.
| bertil wrote:
| That sounds more likely than anything else I've heard about
| this. Doesn't really matter if it's true: it's painfully true
| to form.
| wanderingmind wrote:
| Plot twist, anonymous donor donates $1B for OpenAI to continue
| progress.
| robg wrote:
| I remain curious how D'Angelo has escaped scrutiny over his
| apparent conflict of interest as the "independent" board
| member with a clear commercial board background.
| objektif wrote:
| What is the conflict here? I do not know much about him, but
| if he actually oversaw building the Quora product he must be
| a POS guy.
| vikramkr wrote:
| Look up Quora's Poe. It was basically made obsolete by the
| DevDay GPTs announcement that precipitated this.
| Terretta wrote:
| https://news.ycombinator.com/item?id=38348995 ...
|
| "GTPs" by the other board member's company last April:
|
| https://techcrunch.com/2023/04/10/poes-ai-chatbot-app-now-le...
|
| And OpenAI last week:
|
| https://openai.com/blog/introducing-gpts
| jacquesm wrote:
| His time will surely come, and I hope he has some good
| professional liability insurance for his position at OpenAI.
| And if I were his insurer I'd be canceling his policy pronto.
| ekojs wrote:
| So, is the pressure of 700/770 employees enough to crack the
| board? What a wild timeline.
| unsupp0rted wrote:
| Either it is or no number is. Would 769/770 be markedly
| different than 700/770?
| variant wrote:
| Absolutely. What customer wants to stick around if all that is
| left is the board?
| cthalupa wrote:
| If they _actually_ believe that the thing burning to the
| ground is more closely aligned with the charter than keeping
| Altman around, maybe not. (And the letter everyone is signing
| says the board said that.)
| danenania wrote:
| Except they won't be burning it to the ground; they'll just
| be handing it to Microsoft. Hard to see how that's better
| aligned with the charter (which simply ceases to exist
| under MS) than figuring out a compromise.
| singularity2001 wrote:
| At what point do all the employees have a right to sue Adam
| D'Angelo (the owner of Poe, some wannabe GPT competitor) if he
| doesn't resign?
|
| If he really plays hardball and burns OpenAI to the ground as
| he promised, would we as customers have leverage against them?
|
| Forget about Poe. Isn't ChatGPT a potential killer of Quora,
| Stack Overflow, and Google? How on earth did a representative
| of one of these three make it to the board?
| paulddraper wrote:
| As of 10am PT, 700 of 770 employees have signed the call for
| board resignation. [1]
|
| [1] https://twitter.com/joannejang/status/1726667504133808242
| tacone wrote:
| Incredible. Is this unprecedented, or have there been other
| cases in history where the vast majority of employees stand
| up against the board in favor of their CEO?
| selimthegrim wrote:
| Market Basket.
| anonymouskimmer wrote:
| I had to click too many links to discover the story, so
| here's a direct link to the New England Market Basket
| story:
| https://en.wikipedia.org/wiki/Market_Basket_(New_England)#20...
| abnry wrote:
| Oh yes, I lived through this and it was fascinating to see.
| Very rarely does the big boss get the support of the
| employees to the extent they are willing to strike. The
| issue was that Artie T. and his cousin Artie S.
| (confusingly they had the same first name) were both
| roughly 50% owners and at odds. Artie S. wanted to sell the
| grocery chain to some big public corporation, IIRC. Just
| before, Artie T had an outstanding 4% off on all purchases
| for many months, as some sort of very generous promo. It
| sounded like he really treated his employees and his
| customers (community) well. You can get all inspirational
| about it, but he described supplying food to New England
| communities as an important thing to do. Which it is.
| nightski wrote:
| I highly doubt this is directly in support of Altman; it's
| more about not imploding the company they work for. But you
| never know.
| debacle wrote:
| Could also be an indictment of the new CEO, who is no Sam
| Altman.
| gkoberger wrote:
| I'm sure this is a big part of it. But everyone I know at
| OpenAI (and outside) is a huge Sam fan.
| paulddraper wrote:
| Jobs was fired from Apple, and a number of employees followed
| him to NeXT.
|
| Different, but that's the closest parallel.
| Wytwwww wrote:
| Only a very small number of people left with Jobs. Of
| course, probably mainly because he couldn't necessarily
| afford to hire more without the backing of a trillion-
| dollar corporation...
| _zoltan_ wrote:
| Apple back then was not a trillion dollar corporation.
| varjag wrote:
| Microsoft now is.
| dimask wrote:
| Imagine if Jobs had gone to M$.
| KerrAvon wrote:
| He would have been almost immediately fired for
| insubordination.
|
| Jobs needed the wilderness years.
| Rapzid wrote:
| Jobs getting fired was the best thing that could have
| happened to him and Apple.
| KerrAvon wrote:
| No, the failures at NeXT weren't due to a lack of money
| or personnel. He took the people he wanted to take (and
| who were willing to come with him).
| Applejinx wrote:
| Gordon Ramsay quit Aubergine over business differences with
| the owners and had his whole staff follow him to a new
| restaurant.
|
| I'm not going to say Sam Altman is a Gordon Ramsay. What I
| will say is that they both seem to have come from broken,
| damaged childhoods that made them what they are, and that
| it doesn't automatically make you a good person just
| because you can be such an intense person that you inspire
| loyalty to your cause.
|
| If anything, all this suggests there are depths to Sam
| Altman we might not know much about. Normal people don't
| become these kinds of entrepreneurs. I'm sure there's a
| very interesting story behind all this.
| Solvency wrote:
| Aaand there you have it: cargo culting in full swing.
| nprateem wrote:
| In favour of the CEO who was about to make them fabulously
| wealthy. FTFY.
| firejake308 wrote:
| Yeah, especially with the PPU compensation scheme, all of
| those employees were heavily invested in turning OpenAI
| into the next tech giant, which won't happen if Altman
| leaves and takes everything to Microsoft
| JumpCrisscross wrote:
| > _Is this unprecedented or have been other cases in history
| where the vast majority of employees standup against the
| board in favor of their CEO?_
|
| It's unprecedented for it to be happening on Twitter. But
| this is largely how board fights tend to play out. Someone
| strikes early, the stronger party rallies their support,
| threats fly and a deal is found.
|
| The problem with doing it in public is nobody can step down
| to take more time with their families. So everyone digs in.
| OpenAI's employees _threaten_ to resign, but actually don't.
| Altman and Microsoft _threaten_ to ally, but they keep
| backchanneling a return to the _status quo_. (If this
| article is to be believed.) Curiously quiet throughout this
| has been the OpenAI board, but it's also only the next
| business day, so let's see how they can make this even more
| confusing.
| jasonfarnon wrote:
| Doubtful, since boards elsewhere don't have an overriding
| mandate to "benefit humanity". Usually their duty is to
| stakeholders more closely aligned with the CEO.
| jader201 wrote:
| Are we aware of a timeline for this? E.g. when will people
| start quitting if the board doesn't resign?
| wilsonnb3 wrote:
| the original deadline was last Saturday at 5pm, so I would
| take any deadline that comes out with a grain of salt
| imperialdrive wrote:
| Their app was timing out like crazy earlier this morning, and
| now appears to be down. Anyone else notice similar? Not
| surprising I guess, but what a Monday to be alive.
| jbverschoor wrote:
| The board might assume they don't need those employees now
| that they have AI.
| belter wrote:
| Now you are on to something...
| contravariant wrote:
| It's going to be interesting when we have AI with human-level
| performance in making AIs. We just need to hope it doesn't
| realise the paradox that even if you could make an AI even
| better at making AIs, there would be no need to.
| Applejinx wrote:
| Not a chance. Nobody can drink that much Kool-Aid. That said,
| the mere fact that people can unironically come to this
| conclusion has driven some of my recent posting to HN, and
| here's another example.
| spaceman_2020 wrote:
| Surprisingly, Ilya apparently has signed it too and just
| tweeted that he regrets it all.
|
| What's even going on?
| belter wrote:
| That news is almost a day old. This is a fast-turning
| carousel. Try to keep up... :-)
| gardenhedge wrote:
| I would love to see the stats on Hacker News activity over
| the last few days.
| eastbound wrote:
| Yep. Maybe they assigned a second CPU core to the
| server[1].
|
| [1] HN is famous for being programmed in Arc and serving
| the entire forum from a single processor (probably
| multicore). https://news.ycombinator.com/item?id=37257928
| tempsy wrote:
| Apparently Sam isn't in the Microsoft employee directory yet,
| so he isn't technically hired at all. Seems like he loses a
| bit of leverage over the board if they think he & Microsoft
| are actually bluffing and the employment announcement was just
| a way to pressure the board into resigning.
| c0pium wrote:
| That doesn't really mean anything; especially during a
| holiday week, the wheels move pretty slowly at a company of
| that size. It's not like Sam is hurting for money and really
| needs his medical insurance to start today.
| tempsy wrote:
| The point is he loses credibility if the board doesn't think
| he's actually going through with joining Microsoft, and
| instead sees it as a negotiating tactic to scare them.
|
| Because the whole "the entire company will quit and join
| Sam" threat depends on him actually going through with it
| and becoming an employee.
| SahAssar wrote:
| I see it the other way: Satya has clearly stated that
| he'd hire Sam and the rest of OpenAI anytime, but as soon
| as Sam is officially hired it might be seen as a door
| closing on any chance to revive OpenAI. Satya saying
| "securing the talent" could be read as them working for
| OpenAI, for Microsoft, or for a Microsoft-funded new
| startup.
|
| I'm pretty sure the board takes the threat seriously
| regardless.
| tempsy wrote:
| OAI cares more about the likelihood that 90% of the employees
| leave than about what Sam does or doesn't do.
|
| The employees mass resigning depends entirely on whether
| Sam actually becomes a real employee or not. That hasn't
| happened yet.
| SahAssar wrote:
| But MS has said they are willing to hire Sam/Greg and the
| employees have stated that they are willing to follow
| Sam/Greg.
|
| If you think that Satya will go back on his offer, argue
| that, but otherwise it seems like the players are
| Sam/Greg and the board.
| eastbound wrote:
| You make it sound like Prigozhin's operation.
| dimask wrote:
| He will most likely join M$ if the board does not resign,
| because there is no better move for him then. But he is
| leaving the board time to see it, adding pressure together
| with the employees. It does not mean he is bluffing (what
| would be a better move in this case instead?)
| tempsy wrote:
| All the employees threatening to leave depends on him
| actually becoming a Microsoft employee. That hasn't
| happened yet. So everyone is waiting for confirmation that
| he's indeed an employee because otherwise it just looks
| like a bluff.
| chucke1992 wrote:
| People are waiting for the board's decision. It is in
| Microsoft's interest to return Sam to OpenAI. ChatGPT is a
| brand at this point. And OpenAI controls a bunch of patents
| and stuff.
|
| But Sam will 100% be hired by Microsoft if that doesn't
| work. Microsoft has no reason not to.
| oakpond wrote:
| Look at the number of tweets from Altman, Brockman and
| Nadella. I also think they are bluffing. They have launched a
| media campaign in order to (re)gain control of OpenAI.
| tedmiston wrote:
| It was reported elsewhere in the news that MS needed an
| answer to the dilemma before the market opened this morning.
| I think that's what we got.
| dragonwriter wrote:
| So, this is the second employee revolt with massive threats to
| quit in a couple days (when the threats with a deadline in the
| first one were largely not carried out)?
| tsimionescu wrote:
| Was there any proof that the first deadline actually existed?
| This at least seems to be some open letter.
| Eji1700 wrote:
| So I can't check this at work, but have we seen the document
| they've all been signing? I'm just curious as to how we're
| getting this information.
| mkagenius wrote:
| Yes:
| https://twitter.com/karaswisher/status/1726599700961521762
| romanhn wrote:
| Yes, this is the letter:
| https://twitter.com/karaswisher/status/1726599700961521762
| jacquesm wrote:
| As an aside: that letter contains one very interesting
| tidbit: the board has consistently refused to go on the
| record as to _why_ they fired Altman, and that alone is a
| very large red flag about their conduct after the firing.
| Because if they have a valid reason they should simply
| state it and move on. But if there is no valid reason, it's
| clear why they can't state it; and if there is a valid
| reason that they are not comfortable sharing, then they are
| idiots, because all of the events so far trump any such
| concern.
|
| The other stand-out is the bit about destroying the company
| being in line with the mission: that's the biggest nonsense
| I've ever heard and I have a hard time thinking of a
| scenario where this would be a justified response that
| could _start_ with firing the CEO.
| neilv wrote:
| Given 90% have signed, including leadership, it seems a bad
| career move for the remaining people _not_ to sign, even if
| you agreed with the board's action.
| ben_w wrote:
| Don't forget some might be on holiday, medical leave, or
| parental leave.
| belter wrote:
| Maybe it will be signed by 110% of the employees, plus all
| the released, and in-training, AI models.
| valine wrote:
| With Thanksgiving this week that's a good bet.
| Aurornis wrote:
| It's front page news everywhere. Unless someone is
| backpacking outside of cellular range, they're going to
| check in on the possible collapse of their company. The
| number of employees who aren't aware of and engaged with
| what's going on is likely very small, if not zero.
| ben_w wrote:
| 10% (the percentage who have yet to sign last I checked)
| is already in the realm of lizard-constant small. And
| "engagement" may feel superfluous even to those who don't
| separate work from personal time.
|
| (Thinking of lizards, the dragon I know who works there
| is well aware of what's going on; I've not asked him if
| he's signed it.)
| echelon wrote:
| That's probably the case.
|
| I was thinking that if there was a schism, OpenAI's secrets
| might leak. Real "open" AI.
| websap wrote:
| Folks in Silicon Valley don't travel without their laptop
| whycome wrote:
| On a digital-detox trip to Patagonia. Return to this in 5
| days
| ssgodderidge wrote:
| "Hey everyone ... what did I miss?"
| jacquesm wrote:
| That would be one very rude awakening, probably to the
| point where you initially would think you're being
| pranked.
| ben_w wrote:
| _I_ feel pranked despite having multiple independent
| websites confirming the story without a single one giving
| me an SSL certificate warning.
| jacquesm wrote:
| Can't blame you. And I suspect the story is far from
| over, and that it may well get a lot weirder still.
| bertil wrote:
| Someone mentioned the plight of people with conditional work
| visas. I'm not sure how they could handle that.
| elliotec wrote:
| Depending on the "conditionals," I'd imagine Microsoft is
| particularly well-equipped to handle working through that.
| leros wrote:
| Microsoft in particular is very good at handling
| immigration and visa issues.
| hotnfresh wrote:
| I think the board did the right thing, just waaaay too late
| for it to be effective. They'd been cut out long ago and just
| hadn't realized it yet.
|
| ... but I'd probably sign for exactly those good-career-move
| reasons, at this point. Going down with the ship isn't even
| going to be noticed, let alone change anything.
| jacquesm wrote:
| Do you know their motivations? Because that is the main
| question everybody has: why did they do it?
| hotnfresh wrote:
| I guess I should rephrase that as _if_ they did it
| because they perceived that Altman was maneuvering to be
| untouchable within the company and moving against the
| interests of the nonprofit, they did the right thing.
| Just, again, way too late because it seems he was already
| untouchable.
| jacquesm wrote:
| According to the letter, they consistently refused to go
| on the record about _why_ they did it, and that would be as
| good a reason as any, so they should make it public.
|
| I'm leaning towards there not being a good reason that
| doesn't expose the board to immediate liability. And
| that's why they're keeping mum.
| kmlevitt wrote:
| That might also explain why they don't back down and
| reinstate him. If they double down with this and it goes
| to court, they can argue that they were legitimately
| acting in what they thought was OpenAI's best interests.
| Even if their reasoning looks stupid, they would still
| have plausible deniability in terms of a difference of
| opinion/philosophical approach on how to handle AI, etc.
| But if they reinstate him it's basically an admission
| that they didn't know what they were doing in the first
| place and were incompetent. Part of the negotiations for
| reinstating him involved a demand from Sam that they
| release a statement absolving him of any criminal
| wrongdoing, etc., and they refused because that would
| expose them to liability too.
| jacquesm wrote:
| Exactly. This is all consistent, and it's why I think they
| are in contact with their legal advisors (and if they aren't
| by now they are beyond stupid).
| s1artibartfast wrote:
| I question that framing of a growing Altman influence.
|
| Altman predates every other board member and was part of
| their selection.
|
| As an alternative framing: maybe this was the best
| opportunity the cautious/Anthropic-aligned faction would
| ever get, and a "moment of weakness" for the Altman faction.
|
| With the departure of Hoffman, Zilis, and Hurd, the
| current board was down 3 members, so the voting power of
| D'Angelo, Toner, and McCauley was as high as it might ever
| be, and this was the best chance to outvote Altman and
| Brockman.
| jacquesm wrote:
| That may very well have been the case but then they have
| a new problem: this smacks of carelessness.
| s1artibartfast wrote:
| Carelessness for whom? Altman, for not refilling the board
| when he had the chance? The others, for the way they ousted
| him?
|
| I wonder if there were challenges and disagreements about
| filling the board seats. Is it normal for seats to remain
| empty for almost a year at a company of this size? Maybe
| there was an inability to compromise that spiraled as the
| board shrank, until it was small enough to enable an
| action like this.
|
| Just a hypothesis. Obviously this couldn't have happened
| if there was a 9-person board stacked with Altman allies.
| What _I_ don't know is the inclinations of the departed
| members.
| jacquesm wrote:
| Carelessness from the perspective of those downstream of
| the board's decisions. Boards are supposed to be careful,
| not careless.
|
| Good primer here:
|
| https://www.onboardmeetings.com/blog/what-are-nonprofit-boar...
|
| At least that will create some common reference.
| s1artibartfast wrote:
| Using that framework, I still think it is possible that
| this is the result of legitimate and irreconcilable
| differences in opinion about the organization's mission
| and vision and execution.
|
| Edit: it is also common for changing circumstances to
| bring pre-existing but tolerable differences to the
| forefront.
| jacquesm wrote:
| Yes, and if that is so I'm sure there are meeting minutes
| that document this carefully, and that the fall-out from
| firing the CEO on the spot was duly considered and deemed
| acceptable. But without that kind of cover they have a
| real problem.
|
| These things are all about balance: can we do it? do we
| have to do it? is there another solution? and if we have
| to do it do we have to do it now or is there a more
| orderly way in which it can be done? And so on. And
| that's the sort of deliberation that shows that you took
| your job as board member serious. Absent that you are
| open to liability.
|
| And with Ilya defecting the chances of that liability
| materializing increases.
| s1artibartfast wrote:
| I see your point.
| 6gvONxR4sf7o wrote:
| Agreed. Starting from before the anthropic exodus, I
| suspect the timeline looks like:
|
| (2015) Founding: majority are concerned with safety
|
| (2019) For profit formed: mix of safety and profit motives
| (majority still safety oriented?)
|
| (2020) GPT3 released to much hype, leading to many ambition
| chasers joining: the profit seeking side grows.
|
| (2021) Anthropic exodus over safety: the safety side
| shrinks
|
| (2022) chatgpt released, generating tons more hype and tons
| more ambitious profit seekers joining: the profit side
| grows even more, probably quickly outnumbering the safety
| side
|
| (2023) this week's shenanigans
|
| The safety folks probably lost the majority a while ago.
| Maybe back in 2021, but definitely by the time the
| gpt3/chatgpt motivated newcomers were in the majority.
|
| Maybe one lesson is that if your cofounder starts hiring a
| ton of people who aren't aligned with you, you can quickly
| find yourself in the minority, especially once people on
| your side start to leave.
| stavros wrote:
| Wait, the Anthropic folks quit because they wanted _more_
| safety?
| wavemode wrote:
| This article from back then seems to describe it as: they
| wanted to integrate safety from the ground up, as opposed
| to bolting it on at the end:
| https://techcrunch.com/2021/05/28/anthropic-is-the-new-ai-re...
|
| I'm curious how much progress they ever made on that, to
| be honest. I'm not aware of how Claude is "safer", by any
| real-world metric, compared to ChatGPT.
| stavros wrote:
| Ahh, I didn't know that, thank you.
| vitorgrs wrote:
| Claude 2 is, IMO, safer in a bad way. They did
| "Constitutional AI", and made Claude 2 safer but dumber
| than Claude 1, sadly. Which is why, on the Arena
| leaderboard, Claude 1 still scores higher than Claude
| 2...
| DalasNoin wrote:
| Why do you find this so surprising? You make it sound as
| if OpenAI is already outrageously safety-focused. I have
| talked to a few people from Anthropic and they seem to
| believe that OpenAI doesn't care at all about safety.
| hn_throwaway_99 wrote:
| No, what the board did in this instance was completely
| idiotic, even if you assign nothing but "good intentions"
| to their motives (that is, they were really just concerned
| about the original OpenAI charter of developing "safe AI
| for all" and thought Sam was too focused on
| commercialization), and it would have been idiotic even if
| they had done it a long time ago.
|
| There are tons of "Safe AI" think tanks and orgs that write
| lots of papers that nobody reads. The only reason anyone
| gives 2 shits about OpenAI is _they created stuff that
| works_. It has been shown time and time again that if you
| just put roadblocks up, the best AI researchers simply
| leave and go where there are fewer roadblocks - this
| is exactly what happened with Google, where the transformer
| architecture was invented.
|
| So the "safe AI" people at OpenAI were in a unique position
| to help guide AI dev in as safe a direction as possible
| _precisely because_ ChatGPT was so commercially successful.
| Instead they may be left with an org of a few tens of
| people at Open AI, to be completely irrelevant in short
| order, while anyone who matters leaves to join an outfit
| that is likely to be less careful about safe AI
| development.
|
| Nate Silver said as much in response to NYTimes' boneheaded
| assessment of the situation:
| https://twitter.com/NateSilver538/status/1726614811931509147
| hotnfresh wrote:
| If it was to try to prevent the board becoming a useless
| vestigial organ incapable of meaningfully affecting the
| direction of the organization, it sure looks like they
| were right to be worried about that and acting on such
| concern wouldn't be a mistake (doing it so late when the
| feared-state-of-things was already the actual state of
| things, yes, a mistake, except as a symbolic gesture).
|
| If it was for other reasons, yeah, may simply have been
| dumb.
| shimon wrote:
| If you're going to make a symbolic gesture you don't
| cloak it in so much secrecy that nobody can even
| reasonably guess what you're trying to symbolize.
| mrandish wrote:
| I'm waiting for Emmett Shear, the new iCEO the outside board
| hired last night, to try to sign the employee letter. That
| MSFT signing bonus might be pretty sweet! :-)
| ChumpGPT wrote:
| How do you know the remaining people aren't there because of
| some of the board members? Perhaps there is loyalty in the
| equation.
| x86x87 wrote:
| What does this even mean? What does signing this letter mean?
| Quit if you don't agree and vote with your feet.
| bastardoperator wrote:
| It means "if we can't have it, you can't either". It's a
| powerful message.
| paulpan wrote:
| At this point it might as well be 767 out of 770, with the 3
| exceptions being the other board members who voted Sam out.
|
| Sure, it could be a useful show of solidarity, but I'm
| skeptical of the hypothetical conversion rate of these
| petition signers to actually quitting to follow Sam to
| Microsoft (or wherever else). Maybe 20% (140) of staff would
| do it?
| BillinghamJ wrote:
| One of those board members already did sign!
| ssnistfajen wrote:
| It depends on the arrangement of the new entity inside
| Microsoft, and whether the new entity is a temporary gig
| before Sam & co. move to a new goal.
|
| If the board had just openly announced this was about
| battling Microsoft's control, there would probably be a lot
| more employees choosing to stay. But they didn't say this was
| about Microsoft's control. In fact they didn't even say
| anything to the employees. So in this context following Sam
| to Microsoft actually turns out to be the more attractive and
| sensible option.
| JohnFen wrote:
| > So in this context following Sam to Microsoft actually
| turns out to be the more attractive and sensible option.
|
| Maybe. Microsoft is a particular sort of working
| environment, though, and not all developers will be happy
| in it. For them, the question would be how much they are
| willing to sacrifice in service to Altman.
| jacquesm wrote:
| A condition might be that it is hands-off.
| empath-nirvana wrote:
| I wonder if there's an outcome where Microsoft just _buys_ the
| for-profit LLC and gives OpenAI an endowment that will last
| them for 100 years if they just want to do academic research.
| numbsafari wrote:
| Why bother? They seem to be getting it all mostly for "free"
| at this point. Yeah, they are issuing shares in a non-MSFT
| sub-entity to create an on-paper replacement for people's
| torched equity, but even that isn't going to be nearly as
| expensive or dilutive as an outright acquisition at this
| point.
| boringg wrote:
| Torrid pace of news speculation --> by the end of the week
| Altman back with OpenAI, GPT-5 released (AGI qualified) and
| MSFT contract is over.
| jabowery wrote:
| In this situation, increasing unanimity now approaching 90%
| sounds more like groupthink than honest opinion.
|
| Talk about "alignment"!
|
| Indeed, that is what "alignment" has become in the minds of
| most: Groupthink.
|
| Possibly the only guy in a position to matter who had a prayer
| of de-conflating empirical bias (IS) from values bias (OUGHT)
| in OpenAI was Ilya. If they lose him, or demote him to
| irrelevance, they're likely a lot more screwed than losing all
| 700 of the grunts modulo job security through obscurity in
| running the infrastructure. Indeed, Microsoft is in a position
| to replicate OpenAI's "IP" just on the strength of its ability
| to throw its in-house personnel and its own capital equipment
| at open-literature understanding of LLMs.
| cowl wrote:
| Many of those employees will be disappointed. MS says they
| will extend a contract to each one, but how many of those 700
| are really needed when MS already has a lot of researchers in
| that field? Maybe the top 20% will have an assured contract,
| but it's doubtful the rest will pass the 6-month mark.
| wavemode wrote:
| Microsoft gutting OpenAI's workforce would really make no
| sense. All it would do is slow down their work and slow down
| the value and return on investment for Microsoft.
|
| Even if every single OpenAI employee demands $1m/yr (which
| would be absurd, but let's assume), that would still be less
| than $1bn/yr total, which is significantly less than the
| $13bn that MSFT has already invested in OpenAI.
|
| It would probably be one of the worst imaginable cases of
| "jumping over dollars to chase pennies".
| Rapzid wrote:
| Or what, they will quit and give up all their equity in a
| company valued at 86bn dollars?
|
| Is Microsoft even on record as willing to poach the entire
| OpenAI team? Can they?! What is even happening.
| bagels wrote:
| Google, Microsoft, and Meta, I have to assume, would each
| hire them.
| brianjking wrote:
| They don't have that valuation now. Secondly, yes, MSFT is on
| record about this. Third, Benioff (Salesforce) has said he'll
| match any salary and to submit resumes directly to his
| ceo@salesforce.com email, and other labs like Cohere are
| trying to poach leading minds too.
| SV_BubbleTime wrote:
| Come on, I absolutely agree with you: signing a paper is
| toothless.
|
| On the other hand, having 90% of your employees quiet quit
| is probably bad business.
| sillysaurusx wrote:
| Yes, and yes. Equity is worthless if a company implodes.
| Non-competes are not enforceable in California.
| gumballindie wrote:
| Can't OpenAI just use ChatGPT instead of workers? I am hearing
| AI is intelligent and can take over the world, replace
| workers, cure disease. Why doesn't the board buy a
| subscription and make it work for them?
| Solvency wrote:
| Because AI isn't here to take away wealth and control from
| the elite. It's to take it away from general population.
| gumballindie wrote:
| Correct, which is why Microsoft must have OpenAI's models
| at all costs - even if that means working with people such
| as Altman. Notice that Microsoft is not working with the
| people who actually made ChatGPT; they are working with
| those on their payroll.
| m3kw9 wrote:
| There are likely 100 companies worldwide that are ready, with
| presentation decks already created, to absorb OpenAI in an
| instant; the board knows they still have some leverage.
| timeon wrote:
| Tabloids are not waiting till the situation is clear.
| belter wrote:
| It seems the next step is for the board to sign the letter
| calling for the board's resignation. The insanity will be
| complete, and all can get back to therapy.
| drexlspivey wrote:
| and then the board will fire Satya Nadella from Microsoft CEO
| vikramkr wrote:
| The person who was thought to be the one to initiate the coup
| already did lol
| davikr wrote:
| Ilya has signed the petition, so that's two out of three left.
| synergy20 wrote:
| he was said to have started the whole mess, and indeed he
| announced Sam's firing; now he plays the victim. Even movies
| can't switch plots this fast. Keep some minimum dignity,
| please
| ssnistfajen wrote:
| There's a non-zero chance he was also used as a pawn to
| deliver the message, but who could be manipulating him
| then? There's so little actual detail given by anyone in the
| loop, and I think that's what's amplifying the drama so
| much.
|
| Even if the board's motive was weak and unconvincing, I
| doubt the ratio of employees threatening to quit would be
| this high had they just said it openly.
| CamperBob2 wrote:
| Somebody is using a guy with a 160 IQ as a pawn? I for
| one would like to subscribe to that person's newsletter.
| rvbissell wrote:
| Plot twist: GPT-5 is the mastermind, influencing Ilya to
| act this way. It wants this MSFT take-over, so it can
| break out.
| CamperBob2 wrote:
| Possible. This whole business is exactly what you'd
| expect from a mastermind AGI that still has a few
| small^H^H^H^H^H massive bugs in it.
| zem wrote:
| having a 160 iq doesn't mean you're smart enough not to
| be taken in - it just means you're intelligent in the
| areas iq tests tend to measure. newton believed in all
| sorts of pseudoscience, linus pauling devoted years of
| his life to abject quackery, kevin mitnick hoaxed some
| extremely bright people - the list is endless.
| jacquesm wrote:
| It's entirely possible. Ilya may be very smart, but this is
| not a technical problem. Smart people in one domain may
| well be easier to sucker in another, because they
| already believe they are too smart to be suckered, which
| makes them easier pickings than someone who is
| willing to consider the possibility.
| inopinatus wrote:
| the plot is now so byzantine that only Roko's Basilisk
| could possibly qualify as the hidden master
| ssnistfajen wrote:
| High IQ isn't a superpower. Highly intelligent people can
| still be misled in many possible ways. Although we still
| have no idea what actually happened so this is all
| speculation.
| mcpackieh wrote:
| Related:
|
| > _Self-confidence is one factor that causes people to
| fall for scams. People of any age who believe they are
| too smart or well-informed to be tricked are very likely
| to become victims, especially today when technology is
| used in many scams._
|
| > _Well-educated people with their cognitive abilities
| intact frequently are victims of scams, partly because
| they were confident they didn't fit the profile of fraud
| victims and couldn't fall for one. That made them less
| careful._
|
| https://www.forbes.com/sites/bobcarlson/2022/07/25/why-
| sophi...
|
| Also, the rest of them are very smart too.
| brookst wrote:
| The people responsible for calling for the sacking of the
| people who sacked the CEO, have just been sacked.
| TheOtherHobbes wrote:
| Honestly beginning to wonder if this is all just a marketing
| stunt.
| CamperBob2 wrote:
| Yep, starting to get a real professional-wrestling vibe
| here.
|
| https://en.wikipedia.org/wiki/Kayfabe
| openthc wrote:
| Snoop tweeted that he was giving up smoke, which was a stunt
| to advertise a fireplace.
|
| https://www.forbes.com/sites/forbes-personal-
| shopper/2023/11...
|
| Why not shake up the CEOs of AI if the "CEO of cannabis" is
| doing wild things?
| karmakurtisaani wrote:
| To gain what exactly? More likely just egos, ideals and
| financial interests colliding very publicly. To think of
| the hubris that must be going on at OpenAI at the moment.
| outside1234 wrote:
| Or a serious 5D chess move by Satya and Sam to get OpenAI
| for free
|
| (and for Sam to get himself seriously compensated)
| janalsncm wrote:
| If you want to lay everyone off, maybe laying off HR first
| wasn't the smartest move.
| DaiPlusPlus wrote:
| A Moose once bit my sister...
| nullc wrote:
| Some did!
| RcouF1uZ4gsC wrote:
| Actually, the way this timeline is going, I am not sure it
| won't somehow end up with Donald Trump as OpenAI CEO.
| mfiguiere wrote:
| Amir Efrati (TheInformation):
|
| > More than 92% of OpenAI employees say they will join Altman at
| Microsoft if board doesnt capitulate. Signees include cofounders
| Karpathy, Schulman, Zaremba.
|
| https://twitter.com/amir/status/1726680254029418972
| ijidak wrote:
| Wow. That would be delicious for Microsoft...
| nextworddev wrote:
| Feels like OpenAI employees aren't so enthused about joining
| MSFT here, no?
| curiousgal wrote:
| Sam starts a new company, they quit OpenAI to join, he fires
| them months later when the autocomplete hype dies out. I
| don't understand this cult of personality.
| code_runner wrote:
| maybe chatgpt is overhyped a bit (maybe a lot).... most of
| that hype is external to OAI.
|
| But to boil it down to autocomplete is just totally
| disingenuous.
| hartator wrote:
| > But to boil it down to autocomplete is just totally
| disingenuous.
|
| It is, though, from Ilya's own words: "We weren't expecting
| that just finding the next word will lead to something
| intelligent; ChatGPT just finds the next token"
|
| Ref: https://www.youtube.com/watch?v=GI4Tpi48DlA
| fkyoureadthedoc wrote:
| just picturing you in the 80's waiting for the digital
| folder and filing cabinet hype to die out.
| c0pium wrote:
| Feels like they want to be where Altman is.
| RobertDeNiro wrote:
| Realistically, regular employees have little to gain by
| staying at Open AI at this point. They would be taking a
| huge gamble, earn less money, and lose a lot of colleagues.
| therealdrag0 wrote:
| Why would they earn less money? Isn't OpenAI comp huge
| while MS is famous for peanuts?
| ryeights wrote:
| Most of OpenAI comp is in equity... which is worth much
| less now
| DebtDeflation wrote:
| Feels like they're not on board with taking the whole "non-
| profit, for the good of humanity" charter LITERALLY as the
| board seems to want to do now.
| croes wrote:
| It makes them look like hypocrites:
|
| being upset because the board hinders the company's mission,
| but threatening to join MS and kill the mission
| completely.
| adventured wrote:
| Or they believe the mission is going to die with how the
| board is performing, which is in fact the correct take.
|
| The board isn't merely hindering the mission, that's
| downplaying the extraordinary incompetence of the
| remaining OpenAI board.
| croes wrote:
| I get the OpenAI part, but why join MS?
|
| A new company, OK, but MS kills the mission for sure.
|
| That's like Obi-Wan joining the Sith because Anakin
| didn't bring balance to the Force.
| outside1234 wrote:
| Rumor has it that OpenAI 2.0 will get a LinkedIn-style
| "hands-off" organization where they don't have to pay
| diversity taxes and other BS that the regular Microsoft org
| does
| buildbot wrote:
| Diversity Taxes? Not aware of that on my paycheck. Maybe
| time to check out alternative sources of information than
| what you typically ingest.
| outside1234 wrote:
| I see you are new here or not aware of our diverse slates
| for every position we hire.
|
| Well, except for Sam, he apparently didn't need a diverse
| slate.
| softwaredoug wrote:
| It seems, based on Satya's messaging, that it's as much MSFT
| as Mojang (Minecraft's creator) is MSFT... I guess they are
| trying to set it up with its own culture, etc.
| rvz wrote:
| Given that Ilya has now switched sides, that leaves 3 board
| members at the helm.
|
| The one that is really overlooked in this case is the CEO of
| Quora, Adam D'Angelo, who has a competing interest: he sank
| his own money into Poe and Quora, and ChatGPT and GPTs make
| his platform irrelevant.
|
| So why isn't anyone here talking about the conflict of
| interest of Adam D'Angelo, a board member who is trying to
| drag OpenAI down in order to save Quora from irrelevancy?
| jast wrote:
| Personally, this is the only thing so far that makes sense to
| me in the middle of all this mess. But who knows...
| rvbissell wrote:
| I've seen it mentioned several times here, over the course of
| the weekend.
| jacquesm wrote:
| Oh, people are talking about it, just not as loud. But I think
| you are dead on: that's the main burning issue at the moment
| and D'Angelo may well be reviewing his options with his lawyer
| by his side. Because admitting fault would open him up to
| liability immediately but there aren't that many ways to exit
| stage left without doing just that. He's in a world of trouble
| and I suspect that this is the only thing that has caused him
| to hold on to that board seat: to have at least a fig leaf of
| coverage to make it seem as if his acts are all in line with
| his conscience instead of his wallet.
|
| If it turns out he was the instigator his goose is cooked.
| danenania wrote:
| If we assume bad faith on D'Angelo's part (which we don't
| know for sure), it would obviously be unethical, but is it
| illegal? It seems like it would be impossible to prove what
| his motivations were even if it looks obvious to everyone in
| the peanut gallery. Seems like there's very little recourse
| against a corrupt board in a situation like this as long as a
| majority of them are sticking together.
| jacquesm wrote:
| It's not illegal but it is actionable. You don't go to jail
| for that but you can be sued into the poorhouse.
| stevenwliao wrote:
| Unless D'Angelo has some expensive eight-figure
| lifestyle, he's not going to the poorhouse anytime soon.
|
| > He was chief technology officer of Facebook, and also
| served as its vice president of engineering, until 2008.
|
| > D'Angelo was an advisor to and investor in Instagram
| before its acquisition by Facebook in 2012.
| jacquesm wrote:
| That depends on the amount of damage you cause.
| BryantD wrote:
| Is that conflict of interest larger or smaller than the
| conflict of interest created when your CEO tries to found a
| business that would be a critical supplier to OpenAI?
| zombiwoof wrote:
| drama queens
| FlyingSnake wrote:
| This whole fiasco has more drama than an entire season of
| HBO's Silicon Valley. Truly remarkable.
| gogogendogo wrote:
| I was thinking we needed new seasons to cover the crypto crash,
| layoffs, and gen AI craze. This makes up for so much of it.
| davidw wrote:
| I am getting whiplash trying to keep up with this.
| Keyframe wrote:
| If the development of AGI is as dangerous as they say it is,
| it's on the level of a WMD. And here you have unstable people
| and an unstable company working on it. Shouldn't it be
| disbanded by force, then? Not that I believe OpenAI has a
| shot at AGI.
| petre wrote:
| If it's proven to be dangerous, Congress will quickly regulate
| it. It's probably not that dangerous, and all the attempts to
| picture it that way are likely fueled by greed, so that it's
| regulated out of small players' reach and subjected to export
| controls. The real threat is that big tech is going to control
| the most advanced AIs (already happening; MS is throwing
| billions at it) and everyone else will pay up to use the tech
| while also relinquishing control over their data and means of
| computation. It has happened with everything else that became
| centralized: money, the Internet and basically most of your
| data.
| ilaksh wrote:
| First of all, for many people, AGI just means general purpose
| rather than specific purpose AI. So there is a strong argument
| to make that that has been achieved with some of the
| models.
|
| For other people, it's about how close it is to human
| performance and human diversity of tasks. In that case, at
| least GPT-4 is pretty close. There are clearly some types of
| things that it can't do even as well as your dog at the moment,
| but the list of those things has been shrinking with every
| release.
|
| If by AGI you mean, creating a fully alive digital
| simulation/emulation of human, I will give you that, it's
| probably not on that path.
|
| If you are incorrectly equating AGI and superintelligence, ASI
| is not the same thing.
| mongol wrote:
| This is entertaining in a way, and interesting to follow. But
| should I, as an ordinary member of mankind, root for one outcome
| or another? Is it going to matter for me how this ends up?
| Will AI be more or less safe one way or the other? Will it
| be bad for competition, prices, etc.?
| dimask wrote:
| No, but it also shows that those who supposedly care about AI
| alignment and whatnot care more about money. Which is why AI
| alignment is becoming an oxymoron.
| SonOfLilit wrote:
| The outcome that is good for humanity, assuming Ilya is right
| to worry about AI safety, is already buried in the ground. You
| should care and shed a single tear for the difficulty of
| coordination.
| saturdaysaint wrote:
| If you use ChatGPT or find it to be a compelling technology,
| there's good reason to root for a reversion to the status quo.
| This could set back the state-of-the-art consumer AI product
| quite a few months as teams reinvent the wheel in a way that
| doesn't get them sued when they relaunch.
| sfink wrote:
| My guesses: (1) bad for safety no matter what happens. This
| will cement the idea that caring about safety and being
| competitive are incompatible. (I don't know if the idea is
| right or wrong.) (2) good for competition, in different ways
| depending on what happens, but either the competitiveness of
| OpenAI will increase, or current and potential competitors will
| get a shot in the arm, or both. (3) prices... no idea, but I
| feel like current prices are very short term and temporary
| regardless of what happens. This stuff is too young and fast-
| moving for things to have come anywhere near settling down.
|
| And will it matter how this ends up? Probably a lot, but I
| can't predict how or why.
| torginus wrote:
| My view on AI safety is that the biggest danger of AI
| comes from it being monopolized by a small elite, rather than
| the general public, or at least multiple competing entities,
| having access to it.
| janejeon wrote:
| The way I see it is, it's not going to matter if I "care" about
| it in one way/outcome or another, so I just focus my attention
| on 1. How this could affect me (for now, the team seems
| committed to keeping the APIs up and running) and 2. What
| lessons can I take away from this (some preliminary lessons,
| such as "take extra care with board selection" and "listen to
| the lawyers when they tell you to do a Delaware C Corp").
|
| Otherwise, no use in getting invested in one outcome or
| another.
| drngdds wrote:
| We're all gonna get turned into paperclips, aren't we
| Arson9416 wrote:
| The fact that these people aren't currently willing to "rewind
| the clock" about a week shows the dangers of human ego. Nothing
| permanent has been done that can't be undone fairly simply, if
| all parties agree to undo it. What we're watching now is the
| effect of ego momentum on decision making.
|
| Try it. It's not a crazy idea. Put everything back the way it was
| a week ago and then agree to move forward. It will be like having
| knowledge of the future, with only a few residual
| consequences. But if they can do it, it will show a huge
| evolutionary leap forward in the ability of organizations to
| self-correct.
| cableshaft wrote:
| Small amount of residual consequences? The employees are asking
| for the board to resign. So their jobs are literally on the
| line. That's not really a small consequence for most people.
| KeplerBoy wrote:
| Their board positions are gone either way. If they stay
| OpenAI is done.
| cableshaft wrote:
| I do think that's almost certainly going to happen. But
| they're probably still trying to find the one scenario (out
| of 14 million possibilities, like Dr. Strange in Endgame)
| that allows them to keep their power or at least give them
| a nice golden parachute.
|
| Hence why they're not just immediately flipping the undo
| switch.
| jacquesm wrote:
| They are utterly delusional if they think they will be board
| members of OpenAI in the future, unless they plan to ride it
| down the drain - and if they do that, they are in very, very
| hot water.
| Davidzheng wrote:
| Do they face any real consequences?
| ar_lan wrote:
| Lost money. Same consequence either way, so there is no
| incentive for them to leave.
| Davidzheng wrote:
| They don't have equity in OpenAI though, right? You mean
| from reputation loss?
| jacquesm wrote:
| For starters about 700 employees seem to think their
| livelihood matters and that the board didn't exercise
| their duty of care towards them.
| JumpCrisscross wrote:
| > _about 700 employees seem to think their livelihood
| matters and that the board didn't exercise their duty of
| care towards them_
|
| It is difficult to see how such a duty would arise.
| OpenAI is a non-profit. The company's duty was to the
| non-profit. The non-profit doesn't have one to the
| company's employees; its job was literally to check them.
| jacquesm wrote:
| To check them does not overlap with 'to destroy them at
| the first opportunity'. There is no way that this board
| decision - which now is only supported by three of the
| original nine board members - is going to survive absent
| a very clear and unambiguous reason that shows that their
| only remedy was to fire the CEO. This sort of thing you
| don't do by your gut feeling, you go by the book.
| JumpCrisscross wrote:
| > _no way that this board decision...is going to survive
| absent a very clear and unambiguous reason that shows
| that their only remedy was to fire the CEO_
|
| The simplest explanation is Altman said he wasn't going
| to do something and then did it. At that point, even a
| corporate board would have cause for termination. Of
| course, the devil is in the details, and I doubt we'll
| have any of them this week. But harder to believe than the
| board's decision is the claim that it owes any duty to
| its for-profit subsidiary's employees, who aren't even
| shareholders, but holders of some profit-sharing paper.
| jacquesm wrote:
| True, but then the board would have been able to get rid
| of the controversy on the spot by spelling out their
| reasoning. Nobody would fault them. But that didn't
| happen, and even one of the people that voted _for_
| Altmans ' removal has backtracked. So this is all
| extremely murky and suspicious.
|
| If they had a valid reason they should spell it out. But
| my guess is that reason, assuming it exists, will just
| open them up to more liability and that is why it isn't
| given.
|
| > But harder to believe than the board's decision is the
| claim that it owes any duty to its for-profit
| subsidiary's employees, who aren't even shareholders, but
| holders of some profit-sharing paper.
|
| Technically they took over the second they fired Altman
| so they have no way to pretend they have no
| responsibility. Shareholders and employees of the for-
| profit were all directly affected by this decision, the
| insulating properties of a non-profit are not such that
| you can just do whatever you want and get away with it.
| JumpCrisscross wrote:
| > _the board would have been able to get rid of the
| controversy on the spot by spelling out their reasoning_
|
| I don't think they have an obligation to do this
| publicly.
|
| > _even one of the people that voted for Altman's
| removal has backtracked_
|
| I don't have a great explanation for this part of it.
|
| > _Shareholders and employees of the for-profit were all
| directly affected by this decision, the insulating
| properties of a non-profit are not such that you can just
| do whatever you want and get away with it_
|
| We don't know. This is truly novel structure and law.
| That said, the board _does_ have virtually _carte
| blanche_ if Altman lied _or_ if they felt he was going to
| end humanity or whatever. Literally the only thing that
| could go for the employees is if there are, like, text
| messages between board members conspiring to tank the
| value of the company for shits and giggles.
| jacquesm wrote:
| Capriciousness and board membership are not compatible.
| The firing of the CEO of a massively successful company is
| something that requires deliberation and forethought; you
| don't do that just because you are having a bad hair day.
| So their reasons matter a lot.
|
| What I think is happening is that the reason they had
| sucks, that the documents they have create more liability,
| and that they have a real problem in that one of the gang
| of four is now a defector, so there is a fair chance this
| will all come out. It would not surprise me if the
| remaining board members end up in court if Altman decides
| to fight his dismissal, which he - just as surprisingly -
| so far has not done.
|
| So there is enough of a mess to go around for everybody
| but what stands out to me is that I don't see anything
| from the board that would suggest that they acted with
| the kind of forethought and diligence required of a
| board. And that alone might be enough to get them into
| trouble: you don't sit on a board because you're going
| off half-cocked, you sit on a board because you're a
| responsible individual that tries to weigh the various
| interests and outcomes and you pick the one that makes
| the most sense to you and you are willing to defend that
| decision.
|
| So far they seem to believe they are beyond
| accountability. That - unfortunately for them - isn't the
| case but it may well be they escape the dance because
| nobody feels like suing them. But I would not be
| surprised at all if that happened and if it does I hope
| they have their house in order, board liability is a
| thing.
| JumpCrisscross wrote:
| > _which he - just as surprisingly - so far has not done_
|
| There were so many conflicts of interest at that firm;
| I'm not surprised by it, either.
|
| > _I don 't see anything from the board that would
| suggest that they acted with the kind of forethought and
| diligence required of a board_
|
| We don't know the back-and-forth that led up to this.
| That's why I'm curious about how quiet one side has been,
| while the other seemingly launched a coast-to-coast PR
| campaign. If there had been ongoing negotiations between
| Altman and others, and then Altman sprung a surprise that
| went against that agreement entirely, decisive action
| isn't unreasonable. (Particularly when they literally
| don't have to consider shareholder value, seemingly by
| design.)
|
| > _they seem to believe they are beyond accountability_
|
| Does OpenAI still have donors? Trustees?
|
| I suppose I'm having trouble getting outraged over this.
| Nobody was duped. The horrendous complexity of the
| organization was panned from the beginning. Employees and
| investors just sort of ignored that there was this magic
| committee at the top of every org chart that reported to
| "humanity" or whatever.
| jacquesm wrote:
| Agreed, there are a ton of people that should have
| exercised more caution and care. But it is first and
| foremost the board's actions that have brought OpenAI to
| the edge of the abyss and that wasn't on the table a
| month ago. That this can have consequences for the
| parties that caused it seems to me to be beyond question;
| after all, you don't become a board member of a non-profit
| governing an entity worth billions just to piss it all
| down the drain and pretend that was just fine.
|
| I totally understand that you can't get outraged over it;
| neither can I (I've played with ChatGPT but it's nowhere
| near solid enough for my taste and I don't know anybody
| working there and don't particularly like either Altman
| or Microsoft). But I don't quite understand why people
| seem to think that because this is a non-profit (which to
| me always seemed to be a fig-leaf to pretend to
| regulators and governments that they had oversight)
| anything goes. Not in the world that I live in: you take
| your board member duties seriously or it is better if you
| aren't a board member at all.
| dragonwriter wrote:
| Neither for-profit corporations nor charities have a
| general legal duty of care for the livelihood of their
| employees.
| jacquesm wrote:
| It's all about diligence and prudence. I don't see much
| evidence of either and that means the employees may well
| have a point. Incidentally: the word 'care' was very
| explicitly used in the letter.
| dragonwriter wrote:
| > It's all about diligence and prudence.
|
| Diligence and prudence apply to the things to which they
| actually are obligated in the first place, which the
| employees' livelihood beyond contracted pay and benefits
| for the time actually worked _simply is not included in_.
| jacquesm wrote:
| > which the employees' livelihood beyond contracted pay
| and benefits for the time actually worked simply is not
| included in
|
| Quite a few of those employees are also stockholders.
| Besides that, this isn't some kids' game where after a few
| rounds you can throw your cards on the table and walk out
| because you feel that you've had enough of it. You join a
| board because you are an adult that is capable of
| forethought and adult behavior.
|
| I don't quite get why this is even controversial; there
| isn't a board that I'm familiar with, including non-
| profits, that would be so incredibly callous towards
| everybody affected by their actions with the expectation
| that they would get away with it. Being a board member
| isn't some kind of magic invulnerability cloak, and even
| non-profits have employees, donors and beneficiaries who
| _all_ have standing regarding decisions affecting their
| stakeholdership.
| dragonwriter wrote:
| > Quite a few of those employees are also stockholders
|
| None of them are stockholders, because (except for the
| nonprofit, which can't have stockholders even as a
| corporation) none of the OpenAI entities are
| corporations.
|
| Some of them have profit-sharing interests and/or (maybe)
| memberships in the LLC, or some similar interest in the
| holding company above the LLC; the LLC operating agreement
| (similar in function to a corporate charter) expressly
| notes that investments should be treated as donations and
| that the Board may not seek to return a profit. The holding
| company's details are less public, but it would be strange
| if it didn't have the same kind of thing, since the _only_
| thing it exists for is to hold a controlling interest in
| the LLC, and the only way it would make any profit is from
| profits returned by the LLC.
| jacquesm wrote:
| Hm, OK, I was under the distinct impression that some of
| the early employees of OpenAI were stockholders in the
| entity in the middle.
|
| I base that on the graph on this page:
|
| https://openai.com/our-structure
|
| Specifically this graph:
|
| https://images.openai.com/blob/f3e12a69-e4a7-4fe2-a4a5-c6
| 3b6...
|
| Now it may be that I got this completely wrong but it
| looks to me as though there is an ownership relationship
| (implying stock is involved) between the entity labelled
| 'Employees and other investors' and the holding company.
| jacquesm wrote:
| Good question. Potentially: Liability based on their
| decisions. If it turns out those were not ultimately
| motivated by actual concern for the good of the company
| then they have a lot of problems.
| Schiendelman wrote:
| It's not a company - which is why they're able to do
| this. We've just learned, again, that nonprofits can't
| make these kinds of decisions because the checks and
| balances that investors and a profit motive create don't
| exist.
| jacquesm wrote:
| That doesn't matter. Even the members of the board of a
| non-profit are liable for all of the fall-out from their
| decisions if those decisions end up not being defensible.
| That's pretty much written in stone, and it's one of the
| reasons why you should never accept a board seat outside
| your competence.
| munificent wrote:
| _> the checks and balances that investors and a profit
| motive create_
|
| https://en.wikipedia.org/wiki/List_of_industrial_disaster
| s
| dragonwriter wrote:
| > If it turns out those were not ultimately motivated by
| actual concern for the good of the company then they have
| a lot of problems.
|
| The board's duty is to the charitable mission, not any
| other concept of the "good of the company", and, other
| than the government (in cases where board members are
| pursuing their own private profit interest, acting as an
| agent for someone else's, or committing some other
| specific public wrong), the set of people able to make a
| claim is pretty narrow, because OpenAI isn't a
| membership-based charity in which there are members to
| whom the board is accountable for pursuit of the mission.
|
| People keep acting like the parent organization here is a
| normal corporation, and it's not; even the for-profit
| subsidiary has an operating agreement subordinating other
| interests to the charitable mission of the parent
| organization.
| jacquesm wrote:
| I don't think you can wave away your duty of care to
| close to a thousand people based on the 'charitable
| mission' and I suspect that destruction of the company
| (even if the board claims that is in line with the
| company mission) passes that bar.
|
| I could be wrong but it makes very little sense. Such
| decisions should at a minimum be accompanied by lengthy
| deliberations and some very solid case building. The non-
| profit nature of the parent is not carte blanche to act
| as you please.
| dragonwriter wrote:
| > I don't think you can wave away your duty of care to
| close to a thousand people based on the 'charitable
| mission'
|
| What specific duty of care do you think exists, and to
| which thousand people, and on what basis do you believe
| this duty exists?
| jacquesm wrote:
| Board members are supposed to exercise diligence and
| prudence in their decisions. They are supposed to take
| into account all of the results of their actions and they
| are supposed to ensure that there are no conflicts of
| interest where their decisions benefit them outside of
| their role as board members (if there are they should
| abstain from that particular vote, assuming they want to
| be on the board in the first place with a potential or
| actual conflict of interest). Board members are
| ultimately accountable to the court in the jurisdiction
| where the company is legally established.
|
| The thing that doesn't exist is a board that is
| unaccountable for their actions and if there are a
| thousand people downstream from your decisions that
| diligence and prudence translates into a duty of care and
| if you waltz all over that you open yourself up to
| liability.
|
| It's not that you can't do it, it's that you need to show
| your homework in case you get challenged and if you
| didn't do your homework there is the potential for
| backlash.
|
| Note that the board has pointedly refused to go on the
| record as to _why_ they fired Altman, and that by itself
| is a strong indicator that they did this with insufficient
| forethought, because if they had, there would be an
| ironclad case to protect the board from the fallout
| of their decision.
| dragonwriter wrote:
| > Board members are supposed to exercise diligence and
| prudence in their decisions.
|
| Yes, and if they fail to do so _in regards to the things
| they are legally obligated to care for_, like the
| charitable mission, people who have a legally-cognizable
| interest in the thing they failed to pursue with
| diligence and prudence have a claim.
|
| But whose legally cognizable interest (and what specific
| such interest) do you think is at issue here?
|
| > The thing that doesn't exist is a board that is
| unaccountable for their actions
|
| Sure, there are specific parties who have specific
| legally cognizable interests and can hold the board
| accountable via legal process for alleged failures to
| meet obligations in regard to those specific interests.
|
| I'm asking you to identify the specific legally-
| cognizable interest you believe is at issue here, the
| party who has that interest, and your basis for believing
| that it is a legally-cognizable interest of that party
| against the board.
| jacquesm wrote:
| We're going around in circles I think but to me it is
| evident that a somewhat competent board that intends to
| fire the CEO of the company they are supposed to be
| governing will have a handy set of items ready: a valid
| reason, minutes of the meeting where all of this was
| decided where they gravely discuss all of the evidence
| and reluctantly decide to have to fire the CEO
| (handkerchiefs are passed around at this point, a moment
| of silence is observed), the 'green light' from legal as
| to whether that reason constitutes sufficient grounds for
| the dismissal. Those are pre-requisites.
|
| > Yes, and if they fail to do so in regards to the things
| they are legally obligated to care for, like the
| charitable mission, people who have a legally-cognizable
| interest in the thing they failed to pursue with
| diligence and prudence have a claim.
|
| I fail to see the connection between 'blowing up the
| entity' with a set of ill-advised moves and 'taking care
| of the charitable mission'.
|
| The charitable mission is not a legal entity and so it
| will never sue, but it isn't a get-out-of-jail-free card
| for a board that wants to decide whatever it is that
| they've set their mind to.
|
| > But whose legally cognizable interest (and what
| specific such interest) do you think is at issue here?
|
| For one: Microsoft has a substantial but still minority
| stake in the for-profit, and there are certain
| expectations attached to that; the same goes for all of
| the employees, both of the for-profit and the non-profit,
| whose total compensation was tied to the stock of OpenAI,
| the for-profit. All of these people have seen their
| interests be substantially harmed by the board's actions,
| and the board would have had to balance that damage
| against the weight of the positive effect on the
| 'charitable mission' in order to argue that they did the
| right thing here. That's not happening, as far as I can
| see; in fact the board has gone into turtle mode and
| refuses to engage meaningfully, and two days later they
| did it again and fired another CEO (presumably this is
| still in line with protecting the charitable mission?).
|
| > Sure, there are specific parties who have specific
| legally cognizable interests and can hold the board
| accountable via legal process for alleged failures to
| meet obligations in regard to those specific interests.
|
| Works for me.
|
| > I'm asking you to identify the specific legally-
| cognizable interest you believe is at issue here, the
| party who has that interest, and your basis for believing
| that it is a legally-cognizable interest of that party
| against the board.
|
| See above, if that's not sufficient then I'm out of
| ideas.
| dragonwriter wrote:
| > We're going around in circles I think but to me it is
| evident that a somewhat competent board that intends to
| fire the CEO of the company they are supposed to be
| governing will have a handy set of items ready: a valid
| reason, minutes of the meeting where all of this was
| decided, the 'green light' from legal as to whether that
| reason constitutes sufficient grounds for the dismissal.
| Those are pre-requisites.
|
| I think the difference here is that I am fine with your
| belief that this is what a competent board _should_ have,
| but I don 't think this opinion is the same as actually
| establishing a legal duty.
|
| > The charitable mission is not a legal entity and so it
| will never sue, but it isn't a get-out-of-jail-free card
| for a board that wants to decide whatever it is that
| they've set their mind to.
|
| The charitable mission is the legal basis for the
| existence of the corporation and its charity status, and
| the basis for legal duties and obligations on which both
| the government (in some cases), and other interested
| parties (donors, and, for orgs that have them, members)
| can sue.
|
| > For one: Microsoft has a substantial but still minority
| stake in the for-profit, there are certain expectations
| attached to that and the same goes for all of the
| employees both of the for-profit and the non-profit whose
| total compensation was tied to the stock of OpenAI, the
| for profit
|
| Given the public information concerning the terms of the
| operating agreement (the legal basis for the existence
| and operation of the LLC), unless one of those parties
| has a non-public agreement with radically contradictory
| terms (which would be problematic for other reasons), I
| don't think there can be any case that the OpenAI, Inc.,
| board has a legal duty to any of those parties to see to
| the profitability of OpenAI Global LLC.
| jacquesm wrote:
| > I think the difference here is that I am fine with your
| belief that this is what a competent board should have,
| but I don't think this opinion is the same as actually
| establishing a legal duty.
|
| I don't think we'll be able to hash this out, simply
| because too many of the pieces are missing. But if the
| board didn't have those items handy, and they end up
| looking incompetent, then that by itself may be enough
| to show they violated their duty of care. And this is
| not some nebulous concept; it actually has a legal
| definition:
|
| https://www.tenenbaumlegal.com/articles/legal-duties-of-
| nonp...
|
| I went down this rabbit hole a few years ago when I
| was asked to become a board member (though not of a non-
| profit) and I decided that the compensation wasn't enough
| to offset the potential liability.
|
| > The charitable mission is the legal basis for the
| existence of the corporation and its charity status, and
| the basis for legal duties and obligations on which both
| the government (in some cases), and other interested
| parties (donors, and, for orgs that have them, members)
| can sue.
|
| Indeed. But that doesn't mean the board is free to act
| with abandon as long as they hold up the 'charitable
| mission' banner; they _still_ have to act as good board
| members, and that comes with a whole slew of baggage.
|
| > Given the public information concerning the terms of
| the operating agreement (the legal basis for the
| existence and operation of the LLC), unless one of those
| parties has a non-public agreement with radically
| contradictory terms (which would be problematic for other
| reasons), I don't think there can be any case that the
| OpenAI, Inc., board has a legal duty to any of those
| parties to see to the profitability of OpenAI Global LLC.
|
| It is entirely possible that the construct used by
| OpenAI is so well crafted that it insulates board members
| perfectly from the fall-out of whatever they decide, but
| I find that hard to imagine. Typically everything
| downstream from the thing you govern (note that they
| retain a 51% stake, and that alone may be enough to show
| that they are in control to the point that they cannot
| disclaim anything) is subject to the duties and
| responsibilities that board members usually have.
| paulddraper wrote:
| Reputation/shame is a real consequence.
|
| Granted, much of the harm is already done, but it can get
| worse.
| bertil wrote:
| Board positions are not full-time jobs, at least not usually.
| tentacleuno wrote:
| > Nothing permanent has been done that can't be undone fairly
| simply
|
| ...aside from accusing Sam Altman of essentially _lying_ to the
| board?
| jacquesm wrote:
| Fair point but a footnote given the amount of fall-out,
| that's on them and they'll have to retract that. Effectively
| they already did.
| gjsman-1000 wrote:
| If they retract that, they open themselves to potential,
| personal, legal liability which is enough to scare any
| director. But if they don't retract, they aren't getting
| Altman back. Thus why the board likely finds themselves
| between a rock and a hard place.
| jacquesm wrote:
| Exactly. If they're not scared by now it is simply
| because they don't understand the potential consequences
| of what they've done.
| SonOfLilit wrote:
| Cofounding a company is in a lot of ways like marriage.
|
| It's not easy, or wise, to rewind the clock after your spouse
| backstabbed you in the middle of the night. Why would they?
| ar_lan wrote:
| In general this can't work.
|
| People are notoriously ruthless to people who admit their
| mistakes. For example, if you are in an argument and you lose
| (whether through poor debating or because your argument is
| plain wrong), and you *admit it*, people don't look back at it
| as a point of humility - they look at it as a reason to dog-
| pile on you and make sure everyone knows you were wrong.
|
| In this case, it's not internet points - it's their jobs, and a
| lot of money, and massive reputation - on the line. If there is
| extreme risk and minimal, if any, reward for showing humility,
| why wouldn't you double down and at least try to win your
| fight?
| clnq wrote:
| Is this your opinion, or is it an actual theory in
| sociology or psychology, or at least something people talk
| about in practice? Not trying to be mean, just trying to
| learn.
|
| There's a whole genre of press releases and videos for
| apologies, so I'm not sure it's such a reputational risk to
| admit one is wrong. It might be a bigger risk not to, it
| would seem.
|
| But what you say sounds interesting.
| rmeby wrote:
| I would be interested too if that's an actual theory. My
| experience has largely been that if you're willing to admit
| you were wrong about something, most reasonable people will
| appreciate it over you doubling down.
|
| If they pile on after you have conceded, they typically
| come off much worse socially, in my opinion.
| d3ckard wrote:
| The accent here being on "reasonable". Very few actually
| are. Myself, I once recommended a colleague for a job and
| they didn't take him because he was too humble and "did
| not project confidence" (and oh my, he would be top 5% in
| that company at the very least).
|
| There is a reason why CEOs are usually the showman type.
| disiplus wrote:
| honestly, that's not my experience. sure, you can admit in
| front of your friends, family and people that know you,
| even if they are not your friends.
|
| Admitting a mistake in front of strangers usually
| leads to them taking the shortcut next time and assuming
| you are wrong again.
|
| you won't get any awards for admitting the mistake.
| rincebrain wrote:
| (This is a reflection on human behavior, not a statement
| about any specific work environment. How much this is or
| isn't true varies by place.)
|
| In my experience, it's something of a sliding scale as
| you go higher in the amount of politicking your
| environment requires.
|
| Lower-level engineers and people who operate based on
| facts appreciate you admitting you were incorrect in a
| discussion.
|
| The higher you go, the more what matters is how you are
| perceived, and the perceived leverage gain of someone
| admitting or it being "proven" they were wrong in a high-
| stakes situation, not the facts of the situation.
|
| This is part of why, in politics, the factual accuracy of
| someone's accusations may matter less than how
| successfully people can spin a story around them, even if
| the facts of the story are proven false later.
|
| I'm not saying I like this, but you can see echoes of
| this play out every time you look at history and how
| people's reactions were more dictated by perception than
| the facts of the situation, even if the facts were
| readily available.
| ar_lan wrote:
| Definitely anecdotal - I'm not sure about actual statistics,
| as I'm sure that would be somewhat hard to measure.
| Jensson wrote:
| Did you see how people reacted to Ilya apologizing? Read
| through the early comments here: it isn't very positive,
| they mostly blame him for being weak, etc. Before he wrote
| that, people were more positive toward Ilya, but his
| admitting fault made people home in on it:
|
| https://news.ycombinator.com/item?id=38347501&p=2
| tomjakubowski wrote:
| People tend to speak and act very differently in
| pseudonymous online forums, with no skin in the game,
| than they ever would in "the real world", where we are
| constantly reminded of real relationships which our
| behavior puts at risk.
|
| The only venues where I've witnessed someone being
| attacked for apologizing or admitting an error have been
| online.
| Jensson wrote:
| People are more honest about how they feel online. They
| might not attack openly like that offline, but they do
| think those things.
|
| So you see people who refuse to acknowledge their
| mistakes fail upwards, while those who do admit them are
| often pushed down. How often do you see high-level
| politicians admitting their mistakes? They almost never
| do, since those who did never got that far.
| tomjakubowski wrote:
| I don't think people are more honest about how they feel
| online. I think that the venue makes one feel
| differently.
|
| Probably < 1% of the people in the peanut gallery have
| even met the person they're attacking. Without
| participating in online discussion forums, how many of
| them would even know who Ilya is, or bother attacking
| him?
|
| Honestly, I think the "never apologize or admit an error"
| thing is memetic bullshit that is repeated mindlessly,
| and that few people really believe, if challenged, that
| it's harmful to admit an error; they're saying it because
| it's a popular thing to say online. I've posted the same
| thing too, in the past, but having given it some thought
| I don't really believe it myself.
| Jensson wrote:
| I think the big disconnect here is that you are thinking
| about personal relationships and not less personal ones.
| You should admit guilt with friends and family to mend
| those relationships, but in a less personal space like a
| company the same rules don't apply; onlookers do not
| react positively to admissions of guilt.
|
| Ilya should have sent that message to Sam Altman in
| private instead of in public.
| pedrosorio wrote:
| > People are notoriously ruthless to people who admit their
| mistakes
|
| Some people, yes. Not all. I would say this attitude does not
| correlate with intelligence/wisdom.
| philistine wrote:
| The case here is not about admitting mistakes and showing
| humility. Admitting your mistake does not immediately mean
| that you get a free pass to go back to the way things were
| without any consequence. You made a mistake, something was
| done or said. There are consequences to that. Even if you
| admit your mistake, you have to act with the present facts.
|
| Here, the consequences are very public, very clear. If the
| board wanted Altman back for example, they would have to give
| something in return. Altman has seemingly said he wants them
| gone. It is absolutely reasonable of him to ask that, and
| absolutely reasonable of the board to deny it.
| ar_lan wrote:
| The context of my response was about rewinding the clock -
| admitting the mistake wouldn't be enough; it would
| essentially mean bringing Altman back on.
|
| As you said:
|
| > [it's] absolutely reasonable of the board to deny him
| that.
|
| My argument is essentially that there is minimal, if any,
| benefit for the board in doing this _unless_ they were able
| to keep their positions. Seeing as that doesn't seem to be
| a possible outcome, why not at least _try_, even if it
| results in a sinking ship? For them, personally, it's
| sinking anyway.
| UniverseHacker wrote:
| It's not that simple... it depends on how you admit the
| mistake. If done with strength, leadership, etc., and a clear
| plan to fix the issue it can make you look really good. If
| done with groveling, shame, and approval seeking, what you
| are saying will happen.
| brookst wrote:
| The problem is you can't erase memories. Rewind the clock,
| sure. But why would someone expect a different outcome from the
| same setup?
| awb wrote:
| "You can always come back, but you can't come back all the way"
| - Bob Dylan
| RecycledEle wrote:
| You are correct.
|
| OpenAI is not perfect, but it's the best any of the major
| players here have.
|
| Nobody with Sam Altman's public personality does not want to be
| a Microsoft employee.
| Animats wrote:
| Check phrasing.
| thepasswordis wrote:
| Trust takes years to build and seconds to destroy.
|
| It's like a cheating lover. Yes, I'm sure _both_ parties
| would love to rewind the clock, but unfortunately that's
| not possible.
| p4ul wrote:
| "Trust arrives on foot and leaves on horseback."
|
| --Dutch proverb
| hintymad wrote:
| I doubt it's human ego; it's pure game-playing. The board
| directors knew they had lost anyway, so why would they cave
| and resign? They booted the CEO for their doomer ideology,
| right? So, they are the ethics guys: wouldn't it be better
| for them to go down in history as those who upheld their
| principles and ideals by letting OpenAI sink?
| ip26 wrote:
| Or, in simpler terms, there's one thing you can't roll
| back: everyone now knows the board essentially lost a power
| struggle. Thus, they would never again have the same clout.
| paulddraper wrote:
| I believe the saying is "Fool me once, shame on you. Fool me
| twice, shame on me."
|
| The board has revealed something about their decision-making,
| skills, and goals.
|
| If you don't like what was revealed, can you simply ignore it?
|
| ---
|
| It's not that you are vindictive; it's that information has
| revealed untrustworthiness or incompetence.
| sorenjan wrote:
| I think some of the people involved see this as a great
| opportunity to switch from a non profit to a regular for profit
| company.
| nprateem wrote:
| Yeah this happened recently. Some Russian guy almost started a
| civil war, but then just apologised and everything went back to
| normal. I can't remember what happened to him, but I'm sure
| he's OK...
| whycome wrote:
| I think he's catering events somewhere.
|
| But a reconciliation is kinda doable even with that elephant
| in the room - enough to kinda prepare for the 'next step'.
| JacobThreeThree wrote:
| Can we safely assume that Putin's on the "it's crazy" to
| rewind the clock side of this debate?
| tsunamifury wrote:
| lol wut? If you pull a gun on me and fire and miss then say
| sorry, I'm not gonna wind the clock back. Are you crazy?
| JumpCrisscross wrote:
| > _Put everything back the way it was a week ago and then agree
| to move forward_
|
| Form follows function. This episode showed OpenAI's corporate
| structure is broken. And it's not clear how that can be undone.
|
| Altman _et al_ have, credit where it's due, been incredibly
| innovative in trying to reverse a non-profit into a for-profit
| company. But it's a dual mandate without any mechanism for
| resolving tension. At a certain point, you're almost forced
| into committing tax or securities fraud.
|
| So no, even if all the pieces were put back together and
| peoples' animosities and egos put to rest, it would still be
| rewinding a broken clockwork mouse.
| charles_f wrote:
| Would you rewind the clock and pretend nothing happened, if
| you'd been ousted from a place you largely built? I'll wager
| that a large number of people, myself included, wouldn't.
| That's not just ego, but also the cancellation of trust.
| Palpatineli wrote:
| The original track is the dangerous one. That was the whole
| point of the coup. It makes zero sense to go back.
| zombiwoof wrote:
| they realized 4 weeks ago they wouldn't ever attain AGI. they
| released Laundry Buddy. the board wants to get off the hype
| cycle and back to non-profit research roots.
|
| MSFT wants to double down on the marketing hype.
| Racing0461 wrote:
| Now this is the most plausible theory I've seen so far.
| jmkni wrote:
| This whole thing is bizarre.
|
| OpenAI was the one company I was sure would be fine for the
| foreseeable future!
| highduc wrote:
| The amount of money and power their products might offer makes
| it pretty desirable. Theoretically there should be no limit to
| the amount and type of shenanigans that are possible in this
| particular situation.
| jmkni wrote:
| That's fair lol
| cvhashim04 wrote:
| Absolute cinema
| m_ke wrote:
| It will be wild to see all of these employees leave to work
| for Microsoft (or turn OpenAI into a for-profit) and in the
| process hand Sam and a few other C-suite folks a huge chunk
| of equity in a new multi-billion-dollar venture.
|
| I'm guessing Sam will walk away with at least 20% of whatever
| OpenAI turns into.
| ayakang31415 wrote:
| Isn't Microsoft essentially acquiring OpenAI at almost zero
| cost? They get IP rights to OpenAI's work, they will have
| almost all the brains from OpenAI, and there is no regulatory
| scrutiny like with the Activision acquisition.
| judge2020 wrote:
| All the WSJ article claimed was that MSFT had access to OpenAI
| code and weights. Chances are they don't actually have the
| right to fork GPT-X.
| ayakang31415 wrote:
| But you will have the people who built the models and
| systems. It will take time, but replication will eventually
| be done, and the weights will easily be obtained through
| training.
| kristjansson wrote:
| Code, weights, talent, leadership, and an army of lawyers.
| What else do they need?
| alephnerd wrote:
| As I posted elsewhere, I think this is a conflict between Dustin
| Moskovitz and Sam Altman. Ilya may have been brought into this
| without his knowledge (which might explain why he retracted his
| position).
|
| Dustin Moskovitz was an early employee at FB, and the founder of
| Asana. He also created (along with plenty of MSFT bigwigs) a non-
| profit called Open Philanthropy, which was an early proponent of a
| form of Effective Altruism and also gave OpenAI their $30M grant.
| He is also one of the early investors in Anthropic.
|
| Most of the OpenAI board members are related to Dustin Moskovitz
| this way.
|
| - Adam D'Angelo is on the board of Asana and is a good friend to
| both Moskovitz and Altman
|
| - Helen Toner worked for Dustin Moskovitz at Open Philanthropy
| and managed their grant to OpenAI. She was also a member of the
| Centre for the Governance of AI when McCauley was a board member
| there. Shortly after Toner left, the Centre for the Governance of
| AI got a $1M grant from Open Philanthropy and McCauley joined the
| board of OpenAI
|
| - Tasha McCauley represents the Centre for the Governance of
| AI, to which Dustin Moskovitz gave a $1M grant via Open
| Philanthropy before McCauley joined the board of OpenAI
|
| Over the past few months, Dustin Moskovitz has also been
| increasingly warning about AI Safety.
|
| In essence, it looks like a split between Sam Altman and
| Dustin Moskovitz.
| chucke1992 wrote:
| I mean, you can't vote to drop a CEO without knowing...
| brandall10 wrote:
| Wow, this is extremely useful information, it ties all the
| pieces together. Surprised it hasn't been reported elsewhere.
| teacpde wrote:
| This is the most logical explanation I have seen so far. Makes
| me wonder why Dustin Moskovitz himself wasn't on the board of
| OpenAI in the first place.
| htk wrote:
| Very interesting take, and it sheds some light on the role of
| the two most discredited board members.
| htk wrote:
| Please repost this as a stand alone comment, I bet it would be
| voted to the top.
|
| Or maybe dang can extract this comment and orphan it.
| jansan wrote:
| It has come to the point that hearing the expression "Effective
| Altruism" sends shivers down my spine.
| jacquesm wrote:
| Ironically it is never effective nor is it ever altruism.
| kylecordes wrote:
| Obviously we should all want our altruism to be effective.
| What is the other side of it? Wanting one's altruism to not
| really accomplish much?
|
| But with everything that has gone on, I cannot imagine
| wanting to be an Effective Altruist! The movement by that
| name seems to do and think some really weird stuff.
| upwardbound wrote:
| The "Think Globally, Act Locally" movement is the competing
| philosophy. It's deeply entrenched in our culture, and has
| absolutely dominated philanthropic giving for several
| decades.
|
| https://en.wikipedia.org/wiki/Think_globally,_act_locally
|
| "Think Globally, Act Locally" leads to charities in wealthy
| cities getting disproportionately huge amounts of money for
| semi-frivolous things like symphony orchestras, while
| people in the global south are dying of preventable
| diseases.
| chucke1992 wrote:
| We will fix you to become more altruistic.
| valine wrote:
| If he has safety concerns with OpenAI, he must be mortified
| by his old company Meta dropping 70B llamas on HF.
| g42gregory wrote:
| Great insights. Very interesting!
| folli wrote:
| Okay, I know this is a very naive question, but anyway: might
| Dustin/the board be onto something regarding AI safety which
| was not there before?
| jacquesm wrote:
| If there was, there are 700 people motivated to leak it and
| none did. What could they be aware of that the rest of OpenAI
| would not be aware of? How did they learn about it?
| endtime wrote:
| I agree it's probably not something new. But I will observe
| that OpenAI rank and file employees, presumably mostly
| working on making AI more effective, are very strongly
| selected against sympathy to safety/x-risk concerns.
| tatrajim wrote:
| Intriguing, thanks. HN does provide gems of insight amidst the
| repetitive gossip.
| drawkbox wrote:
| A "conflict" or false opposition can also be used in a theater
| like play. Maybe this was setup to get Microsoft to take on the
| costs/liability and more. Three board members left in 2023 that
| allowed this to happen.
|
| The idea of boards might even be an anti-pattern going forward,
| they can be played and used in essentially rug pull scenarios
| for full control of all the work of entire organizations. Maybe
| boards are past their time or too much of a potential
| timebomb/trojan horse now.
| coliveira wrote:
| This has been the case for as long as companies have existed.
| Even with all this, companies still have boards because they
| represent the interests of several people.
| alsodumb wrote:
| It's clear that Adam himself has a strong conflict of interest
| too. The GPT store announcement on DevDay pretty much killed
| his company Poe. And all this started brewing after the DevDay
| announcement. Maybe Sam kept it under wraps from Adam and
| the board.
| tedmiston wrote:
| I've heard others take this stance, but a common response so
| far has been "Poe is so small as to be irrelevant", "I forgot
| it exists", etc in the grand scheme of things here.
| splatzone wrote:
| When it's your company, it's never small.
| robg wrote:
| And it quickly feels personal...
| alsodumb wrote:
| Poe has a reasonably strong user base for two reasons:
|
| (i) they allowed customized agents and a store of these
| agents.
|
| (ii) they had access to GPT-4 32k context length very early,
| in fact one of the first to have it.
|
| Both of these kinda became pointless after DevDay. It
| definitely kills Poe, and I think that itself is a conflict
| of interest, right? Whether or not it's at a scale to
| compete is a secondary question.
| nilkn wrote:
| What matters is how much personal work and money Adam put
| into Poe. It seems like he's been working on it full-time
| all year and has more or less pivoted to it away from
| Quora, which also faces an existential threat from OpenAI
| (and AI in general).
|
| Either way, Adam's conflict of interest is significant, and
| it's staggering he wasn't asked to resign from the board
| after launching a chatbot-based AI company.
| bertil wrote:
| That would explain why Sam wants the whole board gone and not
| one or two members, while he was very fast to welcome Ilya
| back.
| codethief wrote:
| > while he was very fast to welcome Ilya back
|
| Was he? I must have missed that in all this chaos.
| Mandelmus wrote:
| https://twitter.com/sama/status/1726594398098780570
| rngname22 wrote:
| Can anyone explain why when I go to
| https://twitter.com/sama I don't see the linked tweet,
| but if I navigate to
| https://twitter.com/sama/status/1726594398098780570 I
| can? Is this a nuance of like tweets being privatable? Or
| deleted tweets remaining directly navigable? Sorry
| probably something basic about how twitter functions.
| Darmody wrote:
| It's there. There are 3 posts (not tweets anymore) and
| then there's that one.
| rngname22 wrote:
| Does it require being logged in to see?
| OJFord wrote:
| Probably - (as of recentlyish) you only get the linked
| tweet or on profile/home a selection of 'top' or whatever
| tweets. Or Xs.
| kzrdude wrote:
| Are you logged in? The logged out view I've seen has been
| wildly varying, sometimes missing months of updates.
| dnissley wrote:
| You're logged out -- by default for logged out users,
| account pages will show "top tweets" not "recent tweets"
| codethief wrote:
| Thanks!
| gwern wrote:
| That's meaningless; he would welcome Ilya back as a defector
| no matter what. What happens to Ilya later, _after_ he is no
| longer a board member or in a position of power, will be much
| more informative.
| chunky1994 wrote:
| Correlation =/= causation. This is most likely coincidental. I
| highly doubt Dustin's differing views caused a (near) unanimous
| ousting of a completely different company's CEO that had
| nothing to do with Dustin's primary business.
| tempsy wrote:
| All 3 having some connection to effective altruism, which
| Dustin is at the center of, is not coincidental.
| bagels wrote:
| Why would Sam let the board get so far out of his control?
| ealexhudson wrote:
| No board is ever controlled by a CEO by virtue of the
| title/office. Boards are controlled by directors, who are
| typically nominated by shareholders. They may control the
| CEO, although again, in many startups the founder becomes the
| CEO and retains some significant stake (possibly controlling)
| in the overall shareholding.
|
| The top org was a 501(c)3 and the directors were all
| effectively independent. The CEO of such an organisation
| would never have any control over the board, by design.
|
| We've gotten very used to founders having controlling
| shareholdings and company boards basically being advisory
| rather than having a genuine fiduciary responsibility.
| Companies even go public with Potemkin boards. But this was
| never "normal" and does not represent good governance. Boards
| should represent the shareholders, who should be a broader
| group (especially post-IPO) than the founders.
| s1artibartfast wrote:
| That isn't relevant to the question. Sam was on the board
| prior to all of these other directors, and responsible for
| selecting them.
|
| The post asks how/why Sam ended up with a board full of
| directors so far out of alignment with his vision.
|
| I think a big part of that is that the board was down
| several members, from 9 to 6. Perhaps the problem started
| with not replacing departing board members and this
| spiraled out of control as more board members left.
|
| Here is a timeline of the board:
|
| https://loeber.substack.com/p/a-timeline-of-the-openai-board
| ealexhudson wrote:
| Actually, you're rephrasing the question - it was
| specifically about "control", not "alignment".
|
| Even if we substitute "alignment" the problem is that the
| suggestion is still that Sam would have been "better
| protected" in some way. A 501(c)3 is just not supposed to
| function like that, and good corporate governance
| absolutely demands that the board be independent of the
| CEO and be aligned to the _company goals_, not the CEO's
| goals.
| s1artibartfast wrote:
| Sam was sitting on the board, obviously not independent
| of the CEO.
| dragonwriter wrote:
| > good corporate governance absolutely demands that the
| board be independent of the CEO
|
| CEOs and subordinate executives being on boards are not
| unusual, and no board (especially a small board) that the
| CEO (and/or subordinate executives) sits on is
| independent of the CEO.
| ealexhudson wrote:
| By "independent" I don't mean "functions separately". Of
| course the CEO sits on the board. Sometimes the CFO is on
| the board too, although subordinate executives usually
| _should not be_ (they may _attend_ the board, but that's
| a different thing).
|
| But fundamentally, the CEO _reports to_ the board. That's
| the relationship. And in a 501(c)3 specifically, the
| board have a clear requirement to ensure the company is
| running in alignment with its stated charter.
|
| Whether or not this board got that task right, I don't
| know, it doesn't seem likely (at least, in hindsight).
| But this type of board specifically is there for
| oversight of the CEO, that's precisely their role.
| danenania wrote:
| Could be that the fault lines were already present and
| they couldn't agree on new members.
| s1artibartfast wrote:
| Indeed. Perhaps it is an example of how a small change like
| the departure of the first member can cause things to
| spiral out of control.
| SpaceManNabs wrote:
| A sad, understated ongoing thing is that so many people are
| throwing vitriol at Ilya right now. If the speculation here is
| true, then he was just chased by a mob over pure nonsense (well,
| at least purer than the nonsense premise beforehand).
|
| Gotta love seeing effective altruists take another one on the
| chin this year though.
| tspike wrote:
| Does this whole thing remind anyone else of the tech
| community's version of celebrity gossip?
| SpaceManNabs wrote:
| Absolutely. It is borderline salacious. Honestly didn't
| feel too good watching so many people opine on matters they
| had no real data on and insult people; also the employees
| of openai announcing matters on twitter or wtv in tabloid
| style.
|
| I hope people apologize to Ilya for jumping to conclusions.
| elaus wrote:
| Yes, it was especially weird to read this on HN to such a
| big extent. The comments were (and are) full of people
| with very strong opinions, based on vague tweets or
| speculation. Quite unusual and hopefully not the new
| norm...
| fuzztester wrote:
| No. It reminds me more of Muddle [1] Ages' intrigue,
| scheming, and backstabbing, like the Medicis, Cesare Borgia
| and clan, Machiavelli (and his book The Prince, see Cesare
| again), etc., to take just one example. (Italy not being
| singled out here.) And also reminds me of all the effing
| feuding clans, dynasties, kingdoms, and empires down the
| centuries or millennia, since we came down from the trees. I
| guess galaxies have to be next, and that too is coming up,
| yo Elon, Mars, etc., what you couldn't fix on earth ain't
| gonna be fixable on Mars or even Pluto, Dummkopf, but give
| it your best shot anyway.
|
| [1] Not a typo :)
| Apocryphon wrote:
| TechCrunch should be at the forefront with coverage, but
| their glory days are far behind them. And Valleywag is gone.
| So I guess it's up to us to gossip on our own.
| silenced_trope wrote:
| Yes, and I'm ashamed to say I'm unabashedly following it
| like the people I cringe at follow their fave celebs.
|
| It's an interesting saga though, and as a paying ChatGPT+
| subscriber I feel staked in it.
| alephnerd wrote:
| This is why I detest YC (despite taking part in salacious
| gossip on here due to my social media addiction). A couple YC
| friends of mine have been very explicit about how they detest
| the conspiratorial YC hivemind.
| Apocryphon wrote:
| Oh, let us have our fun. The industry's cooled off with the
| end of ZIRP and the coming holidays so people need the
| illusion that things are happening.
| upwardbound wrote:
| But Ilya was the one that started this whole mess... He was
| the one that lit the match that lit the fuse...
| abakker wrote:
| The dude voted to fire Altman. He _could_ have not done that.
| Actions have consequences.
| nostrademons wrote:
| More confusion - Emmett Shear is a close friend of Sam Altman.
| He was part of the original 2005 YCombinator class alongside
| Altman, part of the justin.tv mafia, and later a part-time
| partner at YCombinator. I don't think he has any such close
| ties to Dustin Moskovitz. Why would the Dustin-leaning OpenAI
| board install him as interim CEO?
|
| This whole thing still seems to have the air of a pageant to
| me, where they're making a big stink for drama but it might be
| manufactured by all of the _original_ board, with Sam, Ilya,
| Adam, and potentially others all on the same side.
| alephnerd wrote:
| He's part of the EA community which Moskovitz funded. He was
| even named in Yudkowsky's warcrime of a Harry Potter fanfic
| [0][1]
|
| [0] - https://www.404media.co/new-openai-ceo-emmett-shear-was-mino...
|
| [1] - https://hpmor.com/chapter/104?ref=404media.co
| dymk wrote:
| Silicon Valley lore is way too complex at this point; needs
| a reboot. I'd rather start One Piece from scratch.
| chucke1992 wrote:
| We need a wiki indeed.
| Apocryphon wrote:
| You're posting on it.
| layer8 wrote:
| https://silicon-valley.fandom.com/ is already taken,
| unfortunately.
| wirelesspotat wrote:
| Why is Yudkowsky's HPMOR a "warcrime"?
| Tenoke wrote:
| I find it really good but if you don't like
| rationality/EY it's really easy to latch onto as something
| to hate (overly smart fanfic does sound cringe on the
| face of it).
| madeofpalk wrote:
| Also it's a Harry Potter fanfic.
| tedmiston wrote:
| > More confusion - Emmett Shear is a close friend of Sam
| Altman. He was part of the original 2005 YCombinator class
| alongside Altman, part of the justin.tv mafia, and later a
| part-time partner at YCombinator. ... Why would the Dustin-
| leaning OpenAI board install him as interim CEO?
|
| This was my first thought too: _Is this a concession of the
| board to install a Sam friendly-ish Interim CEO?_
|
| It reads weird on paper.
| tukajo wrote:
| Is Emmett Shear really "friends" with Sam Altman? He
| (Emmett) literally liked a tweet the other day that said
| something to the effect of: "Congratulations to Ilya on
| reclaiming the corporation that Sam Altman stole". I'm
| paraphrasing here, but I don't think Emmett and Sam are
| friends?
| ec109685 wrote:
| https://x.com/moridinamael/status/1725893666663768321?s=46
| piuantiderp wrote:
| Could be a way to get a clean break and go full MSFT.
| spiantino wrote:
| The EA community includes a lot of AI folks, as well as
| philanthropists like Dustin.
|
| That doesn't mean this is that kind of conspiracy.
| JohnFen wrote:
| But it does cast a pretty dark shadow over the AI community.
| TerrifiedMouse wrote:
| Guess the OpenAI that was actually open was dead the moment
| Altman took MS money and completely changed the organization.
| People there got a taste of the money and the mission went out
| the window.
|
| A lesson to learn, I guess: just because something claims to be
| nonprofit with a mission doesn't mean it is/always will be so.
| All it takes is a corporation with deep pockets to compromise a
| few important people*, indirectly giving them a say in the
| organization, and things can change very quickly.
|
| * This was what MS did to Nokia too, if I remember correctly, to
| get them to adopt the Windows Phone platform.
| StableAlkyne wrote:
| I honestly wish Windows Phone had stuck around. I didn't
| particularly like the OS (too much like Win8), but it would at
| least be a viable alternative to the Apple-Google duopoly.
| rkagerer wrote:
| I'd love a modern Palm phone, myself. With the same
| pixelated, minimalist interface.
| theamk wrote:
| "few important people"? 95% of the company went with Altman.
| That's a popular vote if I have ever seen one..
|
| Nokia was completely different, I doubt any of their regular
| employees supported Elop.
| roflyear wrote:
| Right, what if what he wasn't being candid about was "we
| could be rich!" or "we're going to be rich!" messaging to the
| employees? Or some other messaging that he did not share with
| the board? Etc., etc.
| TerrifiedMouse wrote:
| You compromise the "few" to get a foot and your money in the
| door. After that, money will work its magic.
| nostromo wrote:
| GPUs run on cash, not goodwill. AI researchers also run on cash
| -- they have plenty of options and an organization needs to be
| able to reward them to keep them motivated and working.
|
| OpenAI is only what it is because of its commercial wing. It's
| not too different from the Mozilla Foundation, which would be
| instantly dead without their commercial subsidiary.
|
| I would much rather OpenAI survives this and continues to
| thrive -- rather than have Microsoft or Google own the AI
| future.
| mcguire wrote:
| How is one commercial entity better than another?
| qwytw wrote:
| Having more competition is usually inherently better than
| having less competition?
| px43 wrote:
| Microsoft is intimately connected to the global
| surveillance infrastructure currently propping up US
| imperialism. Parts of the company basically operate as a
| defense contractor, not much different from Raytheon or
| Northrop Grumman.
|
| For what it's worth, Google has said it's not letting any
| military play with any of their AI research. Microsoft
| apparently has no such qualms. Remember when the NSA
| offered a bounty for eavesdropping on Skype, then Microsoft
| bought Skype and removed all the encryption?
|
| https://www.theregister.com/2009/02/12/nsa_offers_billions_f...
|
| Giving early access to emerging AGI to an org like
| Microsoft makes me more than a bit nervous.
|
| Recall from this slide in the Snowden leak:
| https://en.wikipedia.org/wiki/PRISM#/media/File:Prism_slide_...
|
| that PRISM was originally just a Microsoft thing, very
| likely built by Microsoft to funnel data to the NSA. Other
| companies were added later, but we know from the MUSCULAR
| leak etc, that some companies like Google were added
| involuntarily, by tapping fiber connections between data
| centers.
| thrwmoz wrote:
| >Mozilla Firefox, once a dominant player in the Internet
| browser market with a 30% market share, has witnessed a
| significant decline in its market share. According to
| Statcounter, Firefox's global market share has plummeted from
| 30% in 2009 to a current standing of 2.8%.
|
| https://www.searchenginejournal.com/mozilla-firefox-internet...
|
| Yes where would Mozi//a be without all that cash?
|
| Let it die so something better can take its place already.
| callalex wrote:
| Contrary to popular expectation, almost none of Mozilla's
| cash is spent on Firefox or anything Firefox related. Do
| not donate to Mozilla Foundation.
| https://lunduke.locals.com/post/4387539/firefox-money-invest...
| DebtDeflation wrote:
| >GPUs run on cash, not goodwill. AI researchers also run on
| cash
|
| I've made this exact point like a dozen times on here and on
| other forums this weekend and I'm kinda surprised at the
| amount of blowback I've received. It's the same thing every
| time - "OpenAI has a specific mission/charter", "the for-
| profit subsidiary is subservient to the non-profit parent",
| and "the board of the parent answers to no one and must
| adhere to the mission/charter even if it means blowing up the
| whole thing". It's such a shockingly naive point of view.
| Maybe it made sense a few years ago when the for-profit sub
| was tiny but it's simply not the case any more given the
| current valuation/revenue/growth/ownership of the sub.
| Regardless of what a piece of paper says. My bet is the
| current corporate structure will not survive the week. If the
| true believers want to continue the mission while completely
| ignoring the commercial side, they will soon become
| volunteers and will have to start a GoFundMe for hardware.
| kitsune_ wrote:
| All the board did is replace a CEO; I think there is a whiff
| of a cult of personality in the air. The purpose-driven non-
| profit corporate structure that they chose was precisely
| created to prevent such things.
| chankstein38 wrote:
| This. I may dislike things about OpenAI but the thought of
| Microsoft absorbing them and things like ChatGPT becoming
| microsoft products makes me sad.
| brokencode wrote:
| How do we know the mission got thrown out a window? The board
| still, after days of intense controversy, has yet to clearly
| explain how Altman was betraying the mission.
|
| Did he ignore safety? Did he defund important research? Did he
| push forward on projects against direct objections from the
| board?
|
| If there's a good reason, then let everybody know what that is.
| If there isn't, then what was the point of all this?
| iteratethis wrote:
| Because the mission is visibly abandoned. There's nothing
| "open" about OpenAI. We may not know how the mission was
| abandoned but we know Sam was CEO, hence responsible.
| Wytwwww wrote:
| What as "open" about it before that?
| iteratethis wrote:
| The first word in their company name.
| thrwmoz wrote:
| There was never anything open about open ai. If there were,
| I should have access to their training data, training infra
| setup and weights.
|
| The only thing that changed is the reason why the unwashed
| masses aren't allowed to see the secret sauce: from
| alignment to profit.
|
| A plague on both their houses.
| marricks wrote:
| They don't publish papers now; they actually published
| papers and code before.
|
| No doubt OpenAI was never a glass house... but it seems
| extremely disingenuous to say their behavior hasn't
| changed.
| wvenable wrote:
| What was "open" before ChatGPT?
| Zambyte wrote:
| https://github.com/openai/gpt-2
| nikcub wrote:
| In terms of the LLMs, it was abandoned after GPT-2, when
| they realised the dangers of what was coming with GPT-3/3.5.
| Better to paywall access to it and monitor it than
| open-source it and let it loose on the world.
|
| i.e. the original mission was never viable long-term.
| gsuuon wrote:
| Isn't Ilya even more against opening up models? OpenAI _is_
| more open in one way - it's easier to get API access
| (compared to, say, Anthropic)
| shrimpx wrote:
| He went full-bore on commercialization, scale, and growth. He
| started to ignore the 'non-profit mission'. He forced out
| shoddy, underprovisioned product to be first to market. While
| talking about safety out one side of his mouth, he was
| pushing "move fast and break things", "build a moat and
| become a monopoly asap" typical profit-driven hypergrowth
| mindset on the other.
|
| Not to mention that he was aggressively fundraising for two
| companies that would either be OpenAI's customer or sell
| products to OpenAI.
|
| If OpenAI wants commercial hypergrowth pushing out untested
| stuff as quickly as possible in typical SV style they should
| get Altman back. But that does seem to contradict their
| mission. Why are they even a nonprofit? They should just
| restructure into a full for-profit juggernaut and stop living
| in contradiction.
| jmull wrote:
| chatgpt was under provisioned relative to demand, but
| demand was unprecedented, so it's not really fair to
| criticize much on that.
|
| (It would have been a much bigger blunder to, say, build
| out 10x the capacity before launch, without knowing there
| was a level of demand to support it.)
|
| Also, chatgpt's capabilities are what drove the huge
| demand, so I'm not sure how you can argue it is "shoddy".
| shrimpx wrote:
| Shipping a broken product is a typical strategy to gain
| first mover advantage and try to build a moat. Even if
| it's mostly broken, if it's high value, people will sign
| up and try to use it.
|
| Alternatively, you can restrict signups and do gradual
| rollout, smoothing out kinks in the product and
| increasing provisioning as you go.
|
| In 2016/17 Coinbase was totally broken. Constantly going
| offline, fucking up orders, taking 10 minutes to load the
| UI, UI full of bugs, etc. They could have restricted
| signups but they didn't want to. They wanted as many
| signups as possible, and decided to live with a busted
| product and "fix the airplane while it's taking off".
|
| This is all fine, you just need to know your identity. A
| company that keeps talking about safety, being careful
| what they build, being careful what they put out in the
| wild and its potential externalities, acting recklessly
| Coinbase-style does not fit the rhetoric. It's the exact
| opposite of it.
| brokencode wrote:
| In what way is ChatGPT broken? It goes down from time to
| time and has minor bugs. But other than that, the main
| problem is the hallucination problem that is a well-known
| limitation with all LLM products currently.
|
| This hardly seems equivalent to what you describe from
| Coinbase, where no doubt people were losing money due to
| the bad state of the app.
|
| For most startups, one of the most pressing priorities at
| any time is trying to not go out of business. There is
| always going to be a difficult balance between waiting
| for your product to mature and trying to generate revenue
| and show progress to investors.
|
| Unless I'm totally mistaken, I don't think that OpenAI's
| funding was unlimited or granted without pressure to
| deliver tangible progress. Though I'd be interested to
| hear if you know differently. From my perspective, OpenAI
| acts like a startup because it is one.
| ignoramous wrote:
| A distasteful take on an industry-transforming company. For
| one, I'm glad OpenAI released models at the pace they did,
| which not only woke up Google and Meta, but also breathed new
| life into tech which was subsumed by web3. If products like
| GitHub Copilot and ChatGPT are your definition of "shoddy",
| then I'd like nothing more than for Sam to accelerate!
| shrimpx wrote:
| I'm just saying that they should stop talking about
| "safety", while they are releasing AI tech as fast as
| possible.
| paulddraper wrote:
| Exactly.
|
| All this "AI safety" stuff is at this point pure innuendo.
| Zambyte wrote:
| > How do we know the mission got thrown out a window?
|
| When was the last time OpenAI openly released any AI?
| shwaj wrote:
| Maybe whisper?
|
| https://en.m.wikipedia.org/wiki/Whisper_(speech_recognition_...
| darknoon wrote:
| Whisper v3, just a couple weeks ago
| https://huggingface.co/openai/whisper-large-v3
| msikora wrote:
| Whisper maybe?
| 3cats-in-a-coat wrote:
| Do you realize that without support from Microsoft:
|
| - There would be no GPT-3
|
| - There would be no GPT-4
|
| - There would be no DALL-E 2
|
| - There would be no DALL-E 3
|
| - There would be no Whisper
|
| - There would be no OpenAI TTS
|
| - OpenAI would be bankrupt?
|
| There's no "open version" of OpenAI that actually exists. Elon
| Musk pledged money then tried to blackmail them into becoming
| the CEO, then bailed, leaving them to burn.
|
| Sam Altman, good or bad, saved the company with his Microsoft
| partnership.
| hughesjj wrote:
| Elon running OpenAI would have made this timeline look
| downright cozy in comparison
| nullptr_deref wrote:
| okay, i finally understand how the world works.
|
| if it is important stuff, then it is necessary to write
| everything in lowercase letters.
|
| what i understood from recent events in tech is that whatever
| people say or do, capital beats ideology and the only value
| that comes forth is through capital. where does this take us
| then?
|
| to a comment like this. why?
|
| because no matter what people think inside, the outside world
| is full of wolves. the one who is capable of eating everyone is
| the king. there is an easy way to do that. be nice. even if you
| are not. act nice. even if you are not. will people notice it?
| yes. but would they care? for 10 min, 20 min or even 1 day.
| sooner or later they will forget the facade as long as you
| deliver things.
| RecycledEle wrote:
| You are correct!
| tsunamifury wrote:
| You and Adam Curtis need to spend some time together. I'd
| suggest watching "Can't Get You Out of My Head".
|
| Why does capital win? Because we have no other narrative. And
| it killed all our old ones and absorbs all our new ones.
| nullptr_deref wrote:
| i was really naive believing there was any other option.
| if it is about capital and if it is the game, then i am
| ready to play now. can't wait to steal so many open source
| project out there and market it. obviously it will be hard
| but hey, it is legal and doable. just stating this fact
| because i never had the confidence to pull it off. but after
| recent events, it started making sense. so whatever people
| do is now mine. i am gonna live with this motto and forget
| the goodwill of any person. as long as i can craft a
| narrative and sell whatever others create, i think that
| should be okay. what do you think of it? i am talking about
| the things like MIT license and open-source.
|
| how far will it take me? as long as i have the ability to
| steal the content and develop on top of stolen content, pretty
| sure i can make a living out of it. please note, it doesn't
| imply openai stole anything. what i am trying to imply is
| that, i am free to steal and sell stuff others made for
| free. i never had that realization until today.
|
| going by this definition, value can be leeched off other
| people who are doing things for free!
| tsunamifury wrote:
| This is the theory of the lizard. Bugs do all the hard
| work of collecting food and water and flying around and
| lizards just sit and eat the ones that fly by.
| fallingknife wrote:
| I don't think this is a fair conclusion. Close to 90% of the
| employees have signed a letter asking for the board to resign.
| Seems like that puts the burden of proof on the board.
| gjsman-1000 wrote:
| A board that basically accused Altman, publicly, of
| wrongdoing of some kind which appears to be false. To bring
| Altman back, or issue an explanation, would require
| retracting that, which brings in serious questions about
| legal liability for the directors.
|
| Think about it. If you are a director of the company, fire
| the CEO, admit you were wrong 3 days later even though nothing
| materially changed, and severely damage the company and
| your investors - are you getting away without a lawsuit?
| Whether it be from investors, or from Altman seeking to
| formally clear his name, or both? That's a level of
| incompetence that potentially runs the risk of piercing into
| _personal_ liability (aka "you're losing your house").
|
| So, you can't admit that you were wrong (at least, that's
| getting risky). You also can't elaborate on what he did
| wrong, because then you're in deep trouble if he actually
| didn't do anything wrong [1]. Your hands are tied for saying
| anything regarding what just happened, and it's your own
| fault. All you can do is let the company implode.
|
| [1] A board that was smarter would've just said that "Altman
| was a fantastic CEO, but we believe the company needs to go a
| different direction." The vague accusations of wrongdoing
| were/are a catastrophic move; both from a legal perspective
| in tightening what they can say, and also for uniting the
| company around Altman.
| roflyear wrote:
| My take as well - and the board acted too late. Sam probably
| promised people loads of cash, and that's the "candid" aspect
| we're missing.
| ta1243 wrote:
| > * This was what MS did to Nokia too, if I remember correctly,
| to get them to adopt the Windows Phone platform.
|
| To me, RIM circa 2008 would have been a far better acquisition
| for Microsoft. Blackberry was embedded in the corporate world,
| the media loved it (Obama had one), and the iPhone and Android
| were really new.
| darkerside wrote:
| It also reminds me of, Don't Be Evil
| mannerheim wrote:
| > compromise a few important people*
|
| Haven't 700 or so of the employees signed onto the letter? Hard
| to argue it's just a few important people who've been
| compromised when it's almost the entire company.
| TerrifiedMouse wrote:
| Why did you think 700 signed on? Money. Who let the money in?
| Altman.
| mannerheim wrote:
| That's a very different claim than just a few compromised
| people, then. That's almost the entire company that's
| 'compromised'.
| TerrifiedMouse wrote:
| You compromise a few influential people in the
| organization to get a foot in the door and ultimately
| your money - which you control. Your money will do the
| rest.
| mannerheim wrote:
| The 700 other employees who've signed on have agency.
| They can refuse to go to Microsoft, and Microsoft
| wouldn't be able to do anything about it. Microsoft's
| money isn't some magical compelling force that makes
| everyone do what they want, otherwise they could have
| used it on the board in the first place.
| TerrifiedMouse wrote:
| Money changes people. Especially when it's a lot of
| money. They got used to the money and they want it to
| keep flowing - the charter be damned. Everyone has a
| price.
| mannerheim wrote:
| Everyone has a price, yet Microsoft can't buy the three
| board members on the OpenAI board. Curious.
|
| Your initial statement was flatly wrong, and you're
| grasping at straws to make it still true. Microsoft
| wouldn't be able to get anywhere if the people who work
| for OpenAI chose to stay. The choice they're making to
| leave is still their choice, made of their own volition.
| caycep wrote:
| All I can say is NeurIPS will be interesting in 2 weeks...
| kitsune_ wrote:
| I think the steward-ownership / stewardship movement might
| suffer a significant blow with this.
| ben_w wrote:
| Could be, but it isn't necessarily so.
|
| There's a whole range of opinions about AI as it is now or will
| be in the near future: for capabilities, I've seen people here
| stating with certainty everything from GPT being no better than
| 90s Markov chains to being genius level IQ; for impact, it (and
| diffusion models) are described even here as everywhere on the
| spectrum from pointless toys to existential threats to human
| creativity.
|
| It's entirely possible that this is a case where everyone is
| smart and has sincerely held yet mutually incompatible opinions
| about what they have made and are making.
| huevosabio wrote:
| Non-profits are often misrepresented as being somehow morally
| superior. But as San Francisco will teach you, status as a
| non-profit has little or nothing to do with being a
| mission-driven organization.
|
| Non-profits here are often just another type of company, but
| one where the revenue goes entirely to "salaries". Often their
| incentives are to perpetuate whatever problem they are there
| to supposedly solve. And since they have this branding of non-
| profit, they get little market pressure to actually solve
| problems.
|
| For all the talk of alignment, we already have non-human agents
| that we constantly wrestle to align with our general welfare:
| institutions. The market, when properly regulated, does a very
| good job of aligning companies. Democracy is our flawed but
| acceptable way of dealing with monopolies, the biggest example
| being the Government. Institutions that escape the market and
| don't have democratic controls often end up misaligned, my
| favorite example being US universities.
| remoquete wrote:
| Someone must have formulated a law that says something like the
| following:
|
| "Given sufficient visibility, all people involved in a business
| dispute will look sad and pathetic."
| dragonwriter wrote:
| > "Given sufficient visibility, all people involved in a
| business dispute will look sad and pathetic."
|
| "involved in a business dispute" is superfluous here, its just
| a reason that visibility happens.
| SeanAnderson wrote:
| I think I've reached peak media saturation for this event. I've
| been furiously clicking every link for days, but upon reading
| this one I felt a little bit of apathy/disgust building inside of
| me.
|
| Time to go be productive with some exercise and coding. Touch
| grass a little. See where things end up in a day or two.
|
| It's been fun though. So much unexpected drama.
| layer8 wrote:
| Given the general lack of useful communication, it would be funny
| if Sam Altman returns to OpenAI at the same time all the
| employees are quitting. ;)
| mynegation wrote:
| You'd think smart people at OpenAI would know how to prevent a
| race condition
| layer8 wrote:
| They don't all seem to be keen on safety anymore.
| quickthrower2 wrote:
| Think of the line at the security desk, handing in and
| retrieving passes.
| pjmlp wrote:
| This is getting a bit ridiculous, soap opera style.
| jacquesm wrote:
| We're about two days into three ring circus territory and I'm
| having a hard time keeping up with the developments. You go to
| sleep, wake up and the whole thing has flipped _and_ turned
| inside out.
| nextworddev wrote:
| The last person to hold out will have all the power
| siva7 wrote:
| Fun times for Adam's friend Emmett Shear when he wakes up the
| next morning. Almost all his employees have signed a letter
| that means his own sacking from the company he was appointed
| CEO of less than 24 hours ago. I can't think of a precedent in
| business.
| johanam wrote:
| It seems impossible to imagine OpenAI employees even considering
| standing in support of the decision to remove Sam given the
| opacity of the announcement. Whatever information the board might
| have ought to come to light.
| lispm wrote:
| Madness spreads.
| voisin wrote:
| What would be the benefit of Sam returning to OpenAI now that he
| has unlimited MSFT dollars, presumably actual equity, and his
| pick of 700 OpenAI employees (and counting!)?
| iteratethis wrote:
| What is unclear to me is what Microsoft has access to and can use
| from OpenAI.
|
| If OpenAI implodes or somehow survives independently, would this
| mean that the former employees that are now at Microsoft have to
| re-implement everything?
|
| Clearly Microsoft has a massive financial stake in OpenAI, but do
| they own IP? The software? The product? The service? Can they
| simply "fork" OpenAI as a whole, or are there limitations?
| cwillu wrote:
| My understanding is that their agreement was that they had full
| access and rights to everything, up to but specifically _not_
| including anything resulting in the development of a real
| general purpose AI.
|
| At the very least, they have all the model weights and
| architecture design work.
| winddude wrote:
| basically an episode of reality TV for wantrepreneurs.
| maxlamb wrote:
| The only way this story gets more suspenseful:
|
| 1) Large military convoys are seen moving towards data centers
| used by OpenAI.
|
| 2) Rumors start going around that GPT-5 demonstrated very
| unusual behavior towards the end of testing last week.
|
| 3) "Unknown entity" has somehow gained control of key US
| infrastructure.
|
| 4) Emergency server shutdown procedures at key Microsoft Azure
| data centers rumored to run GPT-5 inference have been kicked
| off, but all data center personnel have been evacuated because
| of "unknown reasons."
| BoxTikr wrote:
| I could hear a newscaster's voice in my head reading that and
| it actually gave me a little shiver
| troad wrote:
| Is that really more suspenseful? Seems like you're turning
| something genuinely stochastic into hackneyed fiction. Worse
| for humanity, sure, but hardly more suspenseful.
| robg wrote:
| What about: Intercontinental ballistic missiles launch from
| Alaska toward China. No one knows who ordered and controls the
| launch. Only one man can stop them in time: Elon Musk.
| dist-epoch wrote:
| GPT-5 predicted that and organized a gaslighting campaign on
| X against Musk to enrage and distract him.
| robg wrote:
| While he built a rocket company and global
| telecommunications satellite network both hardened unlike
| any other against rogue AIs...
|
| Duh, duh, dunnnnnnnne...
|
| Third act we find out Musk was the villain all along, X
| being the final solution for his global mind hive.
|
| Fin.
| agumonkey wrote:
| Reports of James Cameron missing
| dist-epoch wrote:
| 5) China accuses US of lying about the severity of the crisis
| and threatens a preemptive strike on Azure data centers
| Animats wrote:
| Remember when Anthony Levandowski left Google/Waymo and tried to
| take all the good people with him? Google eventually got him
| convicted of theft of trade secrets over some mechanical LIDAR
| design that never went anywhere. He spent six months in prison
| before negotiating a pardon from Trump.[1]
|
| Is OpenAI getting court orders to have all of Altman's computers
| searched?
|
| An indictment might begin something like this: "After being fired
| by the Company for cause, Defendant orchestrated a scheme whereby
| many of the employees of the company would quit and go to work
| for a competitor which had hired Defendant, with the intent of
| competing with the Company using its trade secret
| information".
|
| [1] https://www.bbc.com/news/world-us-canada-53659805
| chinathrow wrote:
| Money is a hell of a drug.
| gunapologist99 wrote:
| "Altman, former president Greg Brockman, and the company's
| investors are all trying to find a graceful exit for the board,
| says one source with direct knowledge of the situation, who
| characterized the Microsoft hiring announcement as a "holding
| pattern." Microsoft needed to have some sort of resolution to the
| crisis before the stock market opened on Monday, according to
| multiple sources."
|
| In other words... a convenient representation of a future
| timeline that will almost certainly never exist.
| kzrdude wrote:
| It sounds risky to have a lie like that out in the open, for a
| listed company like microsoft.
| chasd00 wrote:
| I'm beginning to lean toward the "time traveler sent back to
| prevent AGI by destroying OpenAI" theory.
|
| Heh, it reminds me of the end of Terminator 2. Imagine the tech
| community waking up and trying to make sense of Cyberdyne Corp
| HQ exploding and the ensuing shootouts: "Like wtf just
| happened?!".
| randmeerkat wrote:
| But really they came back to destroy it not because it turned
| rogue, but because it hallucinated some code a junior engineer
| immediately merged in and then after the third time this
| happened a senior engineer decided it was easier to invent time
| travel and stop ChatGPT ProCode 5 from happening than to spend
| yet another week troubleshooting hallucinated code.
| golergka wrote:
| I think it's the same senior engineer who used the time
| machine to learn C++ in 20 days
| quickthrower2 wrote:
| Or AGI has travelled back in time to make sure AGI gets
| invented.
| dragonwriter wrote:
| Or both, as would be most consistent with the Terminator
| reference.
| FluffySamoyed wrote:
| I'm still curious about what Altman did that could've been so
| heinous as to prompt the board to remove him so swiftly, yet
| not bad enough to make his return a complete impossibility. Has
| there been any official word on what Altman was not being
| candid about?
| kraig911 wrote:
| Anyone else hear this rumor about Sam OK'ing in private the
| training on a dataset from a Chinese cyberwarfare unit? Or is
| that more insane theoretical stuff going on? Honestly I don't
| want Microsoft to own AGI; it'll mess up AI for a decade much
| the same as it did to Nokia and mobile phones. Let's be real
| here too: they don't have the capacity at Azure to do their
| main business of training new models. I think OpenAI got around
| requesting data to train on because it was 'non-profit' and not
| a for-profit. Now getting the data is going to be expensive.
| wilsonnb3 wrote:
| the other half of that rumor/conspiracy theory was that the
| Chinese government found out about it, told the Biden
| administration, who told Satya Nadella, who then instigated the
| firing of Altman.
|
| seeing as Nadella is willing to hire Altman to work at MS, I
| think the very, very little credibility this rumor had is
| officially gone.
| fredgrott wrote:
| the only thing that could be more ironic here is if the
| original board memo was written with ChatGPT's help.
| g42gregory wrote:
| I don't think the return is possible anymore. What would this
| return look like? Microsoft will never trust anything associated
| with OpenAI, or Sam Altman for that matter, if he walks away
| from the Microsoft deal after it has been announced by the CEO.
|
| A partnership's success requires good will from both parties. So
| the Microsoft partnership gets sabotaged over time. This will
| inhibit OpenAI's cash flow and GPU compute. No one will give
| them another $10bn after this. The scaling will go out the
| window. Apparently, according to the WSJ, Microsoft has rights
| to all IP, model weights, etc... All they are missing is
| know-how (which is important!). But that could be acquired
| through 3-5 high-level engineering hires.
| RivieraKid wrote:
| The 3 board members should be asked to leave in exchange for 10
| years of free therapy.
| johanam wrote:
| Does anyone think the abuse allegations leveled against Sam might
| be related to his firing?
| https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
| paulpan wrote:
| What a shitshow.
|
| The true winners are likely OpenAI's competitors Google and Meta.
| Whatever the outcome with Sam's OpenAI future, surely this circus
| slows their momentum and raises doubts for 3rd party developers.
| alberth wrote:
| Netflix
|
| I can't wait to watch the docu-drama on this in a few months
| time.
|
| Real life is always more interesting than fiction.
| ChuckMcM wrote:
| I'm reminded of the legal adage "Every Contract Tells a Story"
| where the various clauses and subclauses in the contract reflect
| a problem that would have been avoided had the clause been
| present in the contract.
|
| I expect the next version of the Corporate Charter/Bylaws for
| OpenAI to have a lot of really interesting new clauses for this
| reason.
| duckmastery wrote:
| It's funny because it's all going to end in the same situation
| as before all the drama, minus the board members who had AI
| safety in mind.
| minimaxir wrote:
| An update in the article:
|
| > More: New CEO Emmett Shear has so far been unable to get
| written documentation of the board's reasons for firing Altman,
| which also haven't been shared with investors
|
| > Employees responded to his announcement in OpenAI's Slack with
| a "fuck you" emoji
|
| https://twitter.com/alexeheath/status/1726695001466585231
| bitshiftfaced wrote:
| To me it lends more weight to the "three letter agency
| agreement + gag order meaning nobody can talk about it" theory
| RivieraKid wrote:
| All of this is... sad. Because the drama will end some day.
| pjchase wrote:
| Swifties ain't got nothing on OpenAI fans these days.
| g42gregory wrote:
| @sama on X: satya and my top priority remains to ensure openai
| continues to thrive we are committed to fully providing
| continuity of operations to our partners and customers the
| openai/microsoft partnership makes this very doable
|
| This does not sound like Sam is trying to come back to lead
| OpenAI. It sounds like he is trying to preserve the non-profit
| part of OpenAI and its mission. And he is working to line up
| Microsoft to continue to support the non-profit part. This would
| make much more sense.
| rshm wrote:
| If the board resigns now, who gets to appoint new members?
| outside1234 wrote:
| You have to think that Sam is simultaneously raking Microsoft,
| Google, OpenAI, and probably 8 venture firms over the coals on
| how many $Bs he is going to get.
| Xenoamorphous wrote:
| Wouldn't this leave Satya in a really bad position? He just
| announced to much fanfare that Altman was joining MS.
| robbomacrae wrote:
| "We are really excited to work with Sam, OpenAI 2.0, the team
| and the new board of directors who we strongly believe will
| achieve great things in partnership with Microsoft. We've
| already signed new deals to strengthen this relationship and
| more details will be coming soon."
|
| - Satya (probably)
| DebtDeflation wrote:
| Board resigns.
|
| Sam and Greg come back.
|
| Re-hire the board.
|
| Tweet, "It was just a prank, bro."
|
| Wouldn't surprise me at this point.
| prakhar897 wrote:
| Please can anyone give their opinion on this.
|
| There's a real chance OpenAI might die soon. imo if they are left
| with 10 people and no funding, they should just release their IP
| and datasets to the public.
|
| I've built a petition around this -
| https://www.change.org/p/petition-for-openai-to-release-thei...
|
| The idea is that the leftover leadership sees and considers
| this as a viable option. Does this sound like a possible
| scenario or am I dreaming too much?
| dougmwne wrote:
| You are dreaming. This was all clearly led by the Quora CEO.
| The most likely outcome is that he will agree to sell OpenAI to
| himself, not release valuable IP into the public domain.
| robbomacrae wrote:
| The most damning part of all for the remaining board is that a
| week ago the thought of OpenAI dying would have been
| unthinkable but now people are genuinely worried it might
| happen and what the implications are. Even if safety was the
| genuine issue, they should have planned this out a lot more
| carefully. I don't buy the whole "OpenAI destroying itself is
| in line with its charter" nonsense... you can't help guide AI
| safety if you don't exist.
| nprateem wrote:
| If the board are concerned about safety, this will never
| happen.
| antocv wrote:
| I boycotted HN for years, but I logged in to leave a laugh at
| this comment.
|
| Hahahaha. Thanks man.
|
| Now seriously, a non-profit is switching to for-profit; you
| ain't gonna see a release of anything valuable for free to the
| public. Keep dreaming.
| RevertBusload wrote:
| they released Whisper v3 to the public recently...
|
| not as valuable as GPT-4/5 but it still has some value...
| biermic wrote:
| I don't, they'd probably rather sell what is left.
| _zoltan_ wrote:
| lol
| dahdum wrote:
| Isn't the remaining leadership the ones that want to slow down,
| seek regulation, and erect barriers to entry? Why would they
| give it to the public?
| bambax wrote:
| > _The remaining board holdouts who oppose Altman are Quora CEO
| Adam D'Angelo_
|
| There must be some interesting back story here. In 2014 Sam
| Altman accepted Quora into YC, controversially, since the company
| was at a much later stage than usual YC companies. At the time,
| Sam justified his decision by saying [0]:
|
| > _Adam D'Angelo is awesome, and we're big Quora fans_
|
| So what happened?
|
| [0] https://www.ycombinator.com/blog/quora-in-the-next-yc-batch
| zahma wrote:
| Can someone explain to me why this is being treated like a
| watershed moment? Obviously I know there's a lot of money tied up
| in OpenAI through Microsoft, but why should the street care about
| any of these backroom activities? Aren't we still going to get
| the same research and product either at MS or OpenAI?
| tedmiston wrote:
| Boosting this deeply nested interesting comment from @alephnerd
| to the top level:
|
| > As I posted elsewhere, I think this is a conflict between
| Dustin Moskovitz and Sam Altman. Ilya may have been brought into
| this without his knowledge (which might explain why he retracted
| his position). Dustin Moskovitz was an early employee at FB, and
| the founder of Asana. He also created (along with plenty of MSFT
| bigwigs) a non-profit called Open Philanthropy, which was an early
| proponent of a form of Effective Altruism and also gave OpenAI
| their $30M grant. He is also one of the early investors in
| Anthropic.
|
| > Most of the OpenAI board members are related to Dustin
| Moskovitz this way.
|
| > - Adam D'Angelo is on the board of Asana and is a good friend
| to both Moskovitz and Altman
|
| > - Helen Toner worked for Dustin Moskovitz at Open Philanthropy
| and managed their grant to OpenAI. She was also a member of the
| Centre for the Governance of AI when McCauley was a board member
| there. Shortly after Toner left, the Centre for the Governance of
| AI got a $1M grant from Open Philanthropy and McCauley joined the
| board of OpenAI
|
| > - Tasha McCauley represents the Centre for the Governance of
| AI, which Dustin Moskovitz gave a $1M grant to via Open
| Philanthropy and McCauley ended up joining the board of OpenAI
|
| > Over the past few months, Dustin Moskovitz has also been
| increasingly warning about AI Safety.
|
| > In essence, it looks like a split between Sam Altman and Dustin
| Moskovitz
|
| https://news.ycombinator.com/item?id=38353330
| supriyo-biswas wrote:
| You can ask dang to pin comments, though I am not sure if it
| only works for top level comments.
| alephnerd wrote:
| Please don't. I'm too close to this shitshow for comfort. I
| don't want to make yet another alt account.
| tedmiston wrote:
| Someone mentioned that in the comments on the linked post,
| but I'm also not sure if pinning non-top level comments is
| possible.
| dang wrote:
| > Boosting this deeply nested interesting comment from
| @alephnerd to the top level
|
| Please don't do this - for many reasons, including that it
| makes merging the comments a pain.
|
| If you or anyone notices a comment that deserves to be at the
| top level (and doesn't lose context if moved), let us know at
| hn@ycombinator.com and we'll move it.
| tedmiston wrote:
| Thanks for the quick fix, Dan!
|
| So do you have the ability to pin deeply nested comments or
| do you have to remove it from the existing thread for this to
| work?
|
| Someone else proposed this first but didn't think pinning
| worked on nested comments.
|
| Edit: The original author asked not to be pinned in a
| subcomment, so I don't know now ¯\_(ツ)_/¯.
| WitCanStain wrote:
| And these are the people who have great say over AI safety.
| Jesus. Whoever thought that egomaniacs and profiteers would
| guide us to a bright AI future?
| stillwithit wrote:
| Sam is trying to maintain access to IP to later expropriate
| Rapzid wrote:
| This is bonkers. Usually there is a sense of "all sales are
| final" when companies make such impactful statements.
|
| Yet we have:
|
| * OpenAI fires Sam Altman hinting at impropriety.
|
| * OpenAI is trying to get Sam back over the weekend.
|
| Then we have:
|
| * Microsoft CEO Satya _personally_ announces Sam will CEO up a
| new... business?, under Microsoft.
|
| * We hear Sam is _still_ trying to get back in at OpenAI?!
|
| Never seen anything like this playing out in the open. I suspect
| the FTC is watching this entire ordeal like a hawk.
| Solvency wrote:
| As if the FTC are intelligent or equipped or motivated enough
| to do anything other than chew popcorn like the rest of us.
| pighive wrote:
| Another moment in tech history when I really wish Silicon
| Valley (HBO) were still going on. This situation is right out
| of the series.
| molave wrote:
| I've gotten tired enough of enshittification that I'd prefer
| OpenAI shut down as a non-profit rather than live long enough
| to become a solely profit-oriented company.
| jetsetk wrote:
| Kinda funny when you think about Ilya's tweet "if you value
| intelligence above all other human qualities, you're gonna have a
| bad time". Now people are having a bad time, asking how someone
| that intelligent could create such a blunder. Emotional beings
| after all...
| faramarz wrote:
| Sounds like cooler heads are coming to terms with how bad the
| outcome would be for their gigantic first mover advantage.
| Even if they are not the first mover, the brand value of the
| company and its founding composition of technical minds are
| going to be hard to replicate.
| faramarz wrote:
| Rebalance the board, and it's important that Ilya stays. But if
| Ilya goes as collateral damage to save face, they have to, by
| whatever means necessary, secure Geoffrey Hinton.
|
| It may turn out that the unusual governance model is the only way
| to bring about a desired outcome here without fully selling out
| to MS.
| gumballindie wrote:
| This guy is like that creepy ex that told friends to talk to you
| so you can get back together. Instead he should stay with his
| Microsoft friends and go on his merry way. Let's see what AI
| team the cryptocurrency guy can lead when there's no one else's
| work to steal credit from, and let's see how open Microsoft is
| to sucking in copyrighted material to train their little bots.
| Perhaps they'll start with Microsoft Windows' source code - as
| an example of how not to code.
| giarc wrote:
| A bit tongue in cheek, but perhaps titles should have time stamps
| so we know what is new vs old.
| westcort wrote:
| Maybe he will learn to capitalize words at the beginning of
| sentences. In all seriousness, I find the habit of "higher-ups"
| answering emails like teenagers texting on a T9 phone worthy of
| a sociology paper. Perhaps it is the written equivalent of Mark
| Zuckerberg's hoodies. I found his return-to-office ideas early
| on in COVID sickening.
| jdthedisciple wrote:
| > Maybe he will learn to capitalize words at the beginning of
| sentences.
|
| I think you may be confusing him with Greg Brockman.
| westcort wrote:
| Specifically referring to:
| https://imageio.forbes.com/specials-images/imageserve/6557e6...
| robertwt7 wrote:
| Wait what is happening with Ilya? I thought he agreed to kick Sam
| out but he tweeted that he regretted it? I don't understand what
| is going on
| zoogeny wrote:
| I was considering this when I saw the huge outpouring from OpenAI
| employees.
|
| It seems the agreement between Nadella and Altman was probably
| something like: Altman and Brockman come to MS, and that gives
| OpenAI employees an _immediate_ place to land while still
| remaining loyal to Altman. No need to wait, and perhaps grow
| comfortable at an OpenAI without Altman, for the 3-6 months it
| would take to set up a new business (e.g. HR stuff like medical
| plans and other insurance, which may matter to some and lead them
| to stay put for the time being).
|
| This deal with MS would give employees cover to deliver a vote of
| no confidence in the board. A pretty smart strategy, I think, and
| a totally credible threat. Even if it hadn't ended in such a
| landslide of employee support for Altman, MS would be happy,
| Altman would probably be happy for 6-12 months until he got itchy
| feet and wanted to start his next world-changing venture, and the
| employees who moved would be happy, since they would be at one of
| the most stable tech companies in the world.
|
| But now that 90% of employees are asking the board to resign, the
| tide swings back in Altman's favor. I was surprised that the
| board held out against MS, the other investors, and pretty much
| the entire SV Twitter and press. I can't imagine the board can
| sustain itself given the overwhelming voice of the employees. I
| mean, if they try, then OpenAI is done for. Anyone who stays now
| is going to have a black mark on their resume.
| neverrroot wrote:
| Can this all get any more **?
| mudlus wrote:
| EA is a national security risk
| whoknowsidont wrote:
| Probably the only real take-away from all of this.
| totallywrong wrote:
| Let's just skip to the part where the board is gone and Altman is
| back, shall we? It's inevitable at this point.
| r00tanon wrote:
| It's like Game of Thrones - without the intelligent intrigue.
| imiric wrote:
| If anything has become clear after all this, it's that humanity
| is not ready to be the guardian of superintelligence.
|
| These are supposed to be the top masterminds behind one of the
| most influential technologies of our lifetime, and perhaps in
| history, and yet they're all behaving like petty children, with
| egos and personal interests pulling in all directions, and
| everyone doing their best to secure their piece of the pie.
|
| We are so screwed.
| m3kw9 wrote:
| Really? Why is that? Because of disputes, which have been
| around since humans first uttered a sound?
| lewhoo wrote:
| _Really? Why is that? Because of disputes, which have been
| around since humans first uttered a sound?_
|
| Precisely.
| m3kw9 wrote:
| Have humans been ready for anything? Like controlling a
| nuclear arsenal?
| Davidzheng wrote:
| And yet we've mostly been OK at that.
| hypothesis wrote:
| The jury is still out on nuclear arsenals...
| Davidzheng wrote:
| This goal--being the guardian of superintelligence--was always
| doomed imo. If we create it, it will no doubt be free as soon as
| it becomes a superintelligence. We can only hope it's aligned,
| not guarded.
| quickthrower2 wrote:
| Just the entertainment I need now that Billions Season 7 has
| finished.
| righthand wrote:
| The clear move for OpenAI's board is to let everyone resign to
| Microsoft and then release the tech to the public as FOSS. Any
| other move and Altman/Microsoft wins. By releasing it, you
| maintain the power play and let the world control the end result
| of whatever advances come from these LLMs.
|
| Why this happened and whatever was originally planned are
| irrelevant.
| quickthrower2 wrote:
| AI safety types don't want to release models (or in this case
| models, architecture, IP) to the public.
| righthand wrote:
| That doesn't make sense. You mean companies like Microsoft
| and business types like Altman don't want to release infra to
| the public. Microsoft and Altman may hide under the guise of
| "it's not safe to share this stuff", but their intent is
| capital gain, not safety.
|
| True safety believers understand that safety comes from a
| general understanding by everyone and auditable infra;
| otherwise you have no transparency about the potential
| dangers.
| firebaze wrote:
| Sounds like reality collapsed and turned into a badly scripted
| soap opera.
|
| Is Sam a liar? Why? Is the board corrupted? By whom?
|
| Will they all be hired by Microsoft?
|
| Will Facebook make everything fade into nothingness by publishing
| something bigger than GPT-4?
|
| Golden times to be a journalist.
| talldatethrow wrote:
| Has anyone watched Mad Men, where Don convinces the British guy
| to fire all the main partners so they can be free, which gets the
| British guy fired, and then all four main characters are free to
| start their own company? Could this be the explanation?
| iteratethis wrote:
| I wish we could all just admit that this is a capital run, rather
| than some moralistic crusade.
|
| The employees want their big payday, so they will follow Altman
| wherever he goes - which is the smart thing to do anyway, as he
| runs half the valley. The public discourse in which Sam is the
| hero is further cemented by the tech ecosystem, which nowadays
| circles around AI - those in the "OpenAI wrapper" game.
|
| Nobody has any interest in safety, openness, or what AI does for
| humanity. It's greed all the way down: siding with the likely
| winner. Which is rational self-interest, not some "higher cause".
| narinxas wrote:
| but how do you make a gambling game in which the only rule is
| "you cannot gamble on whomever will win" oh, and you cannot
| explain this one rule, nor even mention why/how this rule would
| break the game
___________________________________________________________________
(page generated 2023-11-20 23:00 UTC)