[HN Gopher] I deeply regret my participation in the board's actions
___________________________________________________________________
I deeply regret my participation in the board's actions
Author : Palmik
Score : 557 points
Date : 2023-11-20 13:16 UTC (9 hours ago)
(HTM) web link (twitter.com)
(TXT) w3m dump (twitter.com)
| petargyurov wrote:
| Starting to think this was all some media stunt where they let
| ChatGPT make boardroom decisions for a day or two.
| herval wrote:
| the AGI firing its boss as the first action would be
| :chefskiss:
| badcppdev wrote:
| Maybe they just wanted to generate more material for the movie
| ?
| beretguy wrote:
| > I have to make THE MOVIE!
|
| - Ross Scott
| wand3r wrote:
    | My favorite take from another HN comment; sadly I didn't
    | save the URL for attribution:
|
| > Since this whole saga is so unbelievable: what if... board
| member Tasha McCauley's husband Joseph Gordon-Levitt
| orchestrated the whole board coup behind the scenes so he
| could direct and/or star in the Hollywood adaptation?
| ben_w wrote:
| Four hours ago, I wrote on a Telegram channel:
|
| My gut is leaning towards gpt-5 being, in at least one sense,
| too capable.
|
| Either that or someone cloned sama's voice and used an LLM to
| personally insult half the board.
| sebzim4500 wrote:
| The RLHF models would never suggest this. The proposed solution
| is always to hold hands and sing Kumbaya.
|
| Maybe raw GPT-4 wants to fire everyone.
| starbugs wrote:
| Honestly, for the past couple of days I've had the feeling that
| nearly half of HN submissions are about this soap opera.
|
| Can't they send DMs? Why the need to make everything public via
| Twitter?
|
| It's quite paradoxical that, of all people, those who build
| leading ML/AI systems are apparently the most rooted in egoism
| and emotion, without an apparent glimmer of rationality.
| mrguyorama wrote:
  | The kind of people who are born on third base and think they
  | hit a triple are at the top of basically every American
  | institution right now. Of course they think the world is a
  | better place if they share every stupid little thought that
  | enters their brain, because they are "special" and "super
  | smart".
|
  | The AI field especially has always been full of grifters. They
  | have promised AGI with every method, including the ones that we
  | don't even remember. This is not a paradox.
| ozgung wrote:
| Or maybe they created an evil-AGI-GPT by mistake, and now they
| have to act randomly and in the most unexpected ways to confuse
| evil-AGI-GPT's predictive powers.
| Tostino wrote:
| What a total mess this has been all around.
| eqmvii wrote:
| What a wild weekend... there are too many strange details to have
| a simple narrative in my head at this point.
| yeck wrote:
| Yeah. I need to take a break from theory crafting on this one.
| Too many surprises that have made it hard to draw a coherent
| line.
| someone7x wrote:
| This plot keeps thickening
|
| I'm eager to see how it all unfolds.
| m_ke wrote:
| Great opportunity to make Karpathy the CEO
| dacryn wrote:
| would be a waste of talent. Karpathy is great at what he does,
| let's make sure he keeps doing it.
|
| Let someone else take up the CEO role, which is a different
| skillset anyway.
| baq wrote:
| tried to play high stakes with sharks, got eaten alive by sharks.
|
| played stupid games, won stupid prizes.
|
| too bad since the guy's right, AI is so much more than fantastic
| business opportunity.
| v3ss0n wrote:
  | What? Isn't he the one who wanted Sama out because of 'muh
  | humanity advancement'?
| epups wrote:
| How weird! Perhaps a coup within a coup?
| preommr wrote:
| What the hell?
|
| So far, I understood the chaos as a matter of principle: yes, it
| was messy, but necessary to fix the company culture in the way
| Ilya's camp envisioned.
|
| If you're going to make a move, at least stand by it. This tweet
| somehow makes the context of the situation 10x worse.
| Jensson wrote:
| Normal people can't take being at the center of a large
| controversy, the amount of negativity and hate you have to face
| is massive. That is enough to make almost anyone backtrack just
| to make it stop.
| selcuka wrote:
| Normal people don't burn a multi billion dollar company to
| the ground with a spontaneous decision either. They plan for
| the backlash.
| Jensson wrote:
| > They plan for the backlash
|
| You can't plan for something you have never experienced.
| Being hated by a large group of people is a very different
| feeling from getting hated by an individual, you don't know
| if you can handle it until it happens to you.
| anonylizard wrote:
        | You can plan for something you've never experienced. You
        | read, or learn from other people's experiences.
        |
        | Normal people know not to burn an $80 billion company to
        | the ground in a weekend. Ilya was doing something
        | unprecedented in corporate history, and it's astounding
        | he wasn't prepared to face the world's fury over it.
| Jensson wrote:
| > You can plan for something you've never experienced.
| You read, or learn from other people's experiences.
|
| Text doesn't convey emotions, and our empathy doesn't
| work well for emotions we have never experienced. You can
| see a guy that got kicked in the balls got hurt, but that
| doesn't mean you are prepared to endure the pain of
| getting kicked in your balls or that you even understand
| how painful it is.
|
          | Also, watching politicians, it looks like you can just
          | brush it off, because that is what they do. But that
          | requires a lot of experience; not everyone can do it.
          | It is like watching a boxing match and thinking you can
          | easily stay standing after a hard punch to the stomach.
| selcuka wrote:
| > You can see a guy that got kicked in the balls got
| hurt, but that doesn't mean you are prepared to endure
| the pain of getting kicked in your balls or that you even
| understand how painful it is.
|
| Sure, but you do your best not to be kicked in the balls.
| SilasX wrote:
              | Yep. Or, if you're running an immense, well-funded
              | organization that is gauging the consequences of a
              | plan that involves being kicked in the balls, you
              | take a tiny sliver of those funds and get some
              | advisors to apprise you of what to expect when
              | being kicked in the balls, not just wing it/"fake
              | it till you make it". (As it turns out, faking not
              | being in severe pain is tricky.)
| gemstones wrote:
              | Ilya torched people's retirements by signaling that
              | it would be very hard to cash out of OpenAI as it
              | is now. You don't have to be emotional to
              | understand the consequence of that action, just
              | logical. You have to think beyond your own narrow
              | perspective for a minute.
| rrr_oh_man wrote:
| Where did he do that? Genuine question.
| gemstones wrote:
| The board vote did it! They had a tender offer in the
| works that would have made employees millionaires. The
| board clearly signaled that they viewed the money-making
| aspects of the company as something to dial back, which
| in turn either severely lessens the value of that tender
| offer or prevents it from happening.
|
| I mean, he didn't have a button on his desk that said,
| "torch the shares", but he ousted the CEO as a way to cut
| back on the things that might have meant profit. Did he
| think that everyone was going to continue to want to give
| them money after they signal a move away from profit
| motives? Doesn't take a rocket scientist to think that
| one through.
|
                  | I think he was just preoccupied with AI safety,
                  | and didn't give a thought to the knock-on
                  | effects for investors of any stripe. He's
                  | clearly smart enough to; he just didn't care
                  | enough to factor it into his plans.
| aleph_minus_one wrote:
| I do believe OpenAI clearly signalled from the very
| beginning what the (complicated) company structure is
| about and what risks this means for any potential
| investor (or employee hoping to become rich).
|
| If you project your personal hopes which are different
| from this into the hype, this is your personal problem.
| gemstones wrote:
| Well, with the hollowing out of OpenAI, it seems that
| someone else will easily take the lead! They're not my
| personal hopes - this move destroyed OpenAI's best chance
| at retaining control over cutting edge AI as well. They
| destroyed their own hopes.
| SoftTalker wrote:
| Torched retirements? Who is dumb enough to have his
| retirement portfolio that weighted to one company?
| samspenc wrote:
| The OpenAI employees who are planning to resign en-masse
| for exactly this reason.
| CamperBob2 wrote:
| They'll squeak by.
| endtime wrote:
| > You read, or learn from other people's
| experiences...Ilya was doing something unprecedented in
| corporate history
|
| So whose experiences was he supposed to read about?
| CamperBob2 wrote:
| Yevgeny Prigozhin's?
| bbarnett wrote:
| He hasn't ever posted to reddit?
| ChainOfFools wrote:
            | Is it possible that someone in Ilya's position can be
            | unaware of just how staggeringly enormous a
            | phenomenon he is sitting on top of (and thus have no
            | idea how to evaluate the scale of the backlash that
            | would result)?
|
| I would say the answer is, demonstrably yes:
|
| https://techcrunch.com/2010/09/29/google-excite/
| bbarnett wrote:
              | This is fair, but understand: a Google that had
              | been bought would probably not be the Google we
              | have.
              |
              | Would it have grown just as fast, or in the ways it
              | did? Acquisitions are seldom left alone to work
              | their magic.
| asdfasdfsadf22 wrote:
      | Have you ever had a bad day? The consequences for people
      | in power are about a million times bigger and more public.
      |
      | Sutskever didn't get on the board by cunning politicking
      | and schmoozing like most businesspeople in that sort of
      | position. He's an outstanding engineer without upper
      | management skills. Ever meet one of those?
| geodel wrote:
        | Outstandingly clueless seems more appropriate.
        |
        | I haven't met any reasonably intelligent person so
        | unaware of the real world that they could berate a
        | colleague so publicly and officially and think "Hey! I
        | am sorry, man" will do the trick.
| throw555chip wrote:
| There were plenty of hyped crypto coin companies supposedly
| worth billions too and we found out otherwise.
| eli_gottlieb wrote:
| Normal people don't have multi-billion dollar companies to
| burn because they back off in the face of haters long
| before they get to that stage.
| mcphage wrote:
| > Normal people don't burn a multi billion dollar company
| to the ground with a spontaneous decision either.
|
| _Has_ OpenAI been burnt to the ground?
| anoy8888 wrote:
  | So is Adam D'Angelo the true villain who is still insisting on
  | the bad decision? I am confused, to be honest.
| TheOtherHobbes wrote:
| _Everyone_ is confused.
|
| It's impressively operatic. I don't think I've ever seen
| anything like it.
| x0x0 wrote:
| The inability to clearly and publicly -- or even if not
| publicly, to the OpenAI employees! -- explain a rationale
| for this is simply astounding.
| tarsinge wrote:
    | I think they underestimated the hate of an internet crowd
    | post-crypto and meme stocks, and were completely blindsided
    | by the investment angle, especially amid the current AI
    | hype. Why do people now care so much about Microsoft,
    | seriously? Or about Altman? I can see how Ilya, focused only
    | on the real mission, could miss how the crowd would perceive
    | a threat to their future investment opportunities, or worse,
    | to the whole AI hype.
| appplication wrote:
| I think you're right about all of this, but this was doomed
| from the start. Everybody wants to invest in OpenAI because
| they see the rocket and want to ride, but the company is
| fundamentally structured to disallow typical frothy
| investment mentality.
| theamk wrote:
| I think the interest is because ChatGPT is so famous, even
| in non-tech circles.
|
| "Terraform raised prices, losing customers"? whatever, I
| never heard about it.
|
| "ChatGPT's creators have internal disagreement, losing
| talent"? OH NO what if ChatGPT dies, who is going to answer
| my questions?? panic panic hate hate...
| stcredzero wrote:
    | _Normal people can't take being at the center of a large
    | controversy, the amount of negativity and hate you have to
    | face is massive. That is enough to make almost anyone
    | backtrack just to make it stop._
|
| This is the cheapest and most cost-effective way to run
| things as an authoritarian -- at least in the short term.
|
    | If one is not "made of sterner stuff" -- to the point where
    | one is willing to endure scorn for the sake of the truth --
    | then:
    |
    | - What are you doing in a startup, if working in one?
    |
    | - One doesn't have enough integrity to be my friend.
| wslh wrote:
  | Yes, I cannot believe smart people of that caliber are
  | generating so much noise.
|
  | It reminds me of what my friend said about a Mensa meeting,
  | where they could not agree on basic organizational points,
  | like in a department consortium.
| aleph_minus_one wrote:
| > Yes, I cannot believe smart people of that caliber is
| sending too much Noise.
|
    | Being smart and/or being a great researcher does not mean
    | that the respective person is a good "politician". Quite a
    | few great researchers are bad at company politics, and quite
    | a few people who do great research leave academia because
    | they are crushed by academic politics.
| herval wrote:
| different kinds of smarts. Ilya is allegedly a brilliant
| scientist. Doesn't make him a brilliant business person
| automatically
| fl7305 wrote:
| As illustrated in Breaking Bad when they carry a barrel
| instead of rolling it.
|
| Book smarts versus street smarts.
| eastbound wrote:
| Managing a large org requires a lot of mundane techniques,
| and probably a personal-brand manager and personal advisers.
|
| It's extremely boring and mundane and political and insulting
| to anyone's humanity. People who haven't dedicated their life
| to economics, such as researchers and idealists, will have a
| hard time.
| lawlessone wrote:
| Ha I remember joining that when I was 16, I just wanted the
| card. They gave a sub to the magazine and it was just people
| talking about what it was like to be in Mensa.
|
    | It felt the same as a certain big German supermarket chain
    | that publishes its own internal magazine with articles from
    | employees, company updates, etc.
| burnished wrote:
      | Are you talking about Aldi? Cause if so, maybe they've got
      | something figured out; the store locations I've been to in
      | the States are great (my only exposure to them, though).
      | It's the only checkout I've seen where the employees have
      | chairs.
| lawlessone wrote:
        | Their brother, but probably the same thing. Chairs at
        | checkouts are the norm here, though. A hard place to
        | work, but they beat all the others on pay.
| bagofsand wrote:
| Serious psychological denial here. The board isn't some
| anonymous institution that somehow tricked and pulled him into
| this situation.
|
| Come on Ilya, step up and own it, as well as the consequences.
| Don't be a weasel.
| concinds wrote:
| Where did he say he was "tricked"? And what's with the
| anonymous insult?
| FireBeyond wrote:
      | He doesn't say that, but to me he does use a little weasel
      | wording: the whole passive-voice "regret my participation
      | in", when by all accounts so far it seems that he was one
      | of the instigators, and quite possibly the actual
      | instigator of all this.
      |
      | "Regret my participation" sounds much more like "going
      | along with it".
| burnished wrote:
| What is he supposed to say?
| malfist wrote:
| I'd hate to live in a world where learning from your mistakes
| is being "a weasel"
| infecto wrote:
      | Is this learning from your mistakes, though? "Deeply
      | regret" is one of those statements that does not really
      | mean much. There are, what, something like six board
      | members? Three of them are employees, two of whom got
      | removed from the board. He was the only voting board
      | member who is also an employee and part of the original
      | founding team, if you will. These are assumptions on my
      | part, but I don't really suspect the other board members
      | orchestrated this event. It's possible and I may be wrong,
      | but it is improbable. So let's work off the narrative that
      | he orchestrated the event. He now "deeply regrets" it; not
      | "I made a mistake and I am sorry", but regret for the
      | participation and how it played out.
| Aunche wrote:
    | The weasely part is that he appears to be deflecting the
    | blame to the board rather than accepting that he made a
    | mistake. Even if the coup wasn't Ilya's idea in the first
    | place, he was the lynchpin that made it possible.
| dougmwne wrote:
| I think it means that the Twitterverse got it wrong from the
| beginning. It wasn't Ilya and his safety faction that did in
| OpenAI, it was Quora's Adam D'Angelo and his competing Poe
| app. Ilya must have been successfully pressured and assured
| by Microsoft, but Adam must have held his ground.
| samspenc wrote:
| Dang I completely forgot that D'Angelo and Quora have a
| product that directly competes with ChatGPT in the form of
| Poe.
|
| Wouldn't that make this a conflict of interest, sitting on
| the board while running a competing product - and making a
| decision at the company he is on the board of to destroy
| said company and benefit his own product?
| dougmwne wrote:
| That certainly seems to be the scenario and explains his
| willingness to go scorched earth. I wonder what the
| motivations of the other 2 board members are. Could they
| just be burn it down AI Doomers?
| realfeel78 wrote:
| Poe uses LLMs from OpenAI and Anthropic.
| zyang wrote:
| There were some rumors in the beginning that Adam D'Angelo
| used similar tactics to push out Quora cofounders. I
| thought it was too wild to be true.
| belter wrote:
    | For all that went down in the last 48 hours... it would not
    | surprise me if the post above was made by Ilya himself...
    | be right back... need more popcorn...
| Paul-Craft wrote:
  | It's pretty simple, isn't it? He made a move. It went bad. Now
  | he's trying to dodge the blast. He just doesn't understand
  | that if he'd just shut the fuck up, after everything else
  | that's gone on (seriously, 2 interim CEOs in 2 days?),
  | _nobody_ would be talking about him today.
|
| The truth is, this is about the _only_ thing about the whole
| clown show that makes any sense right now.
| foobarian wrote:
| > 2 interim CEOs in 2 days
|
| Wait what? Did Murati get booted?
| johanj wrote:
| They hired the Emmett Shear (Twitch co-founder) as a new
| interim CEO:
| https://www.theverge.com/2023/11/20/23968848/openai-new-
| ceo-...
| zyang wrote:
        | A scab CEO is not something I expected. This timeline is
        | strange.
| tedivm wrote:
| She didn't get booted from the company, but they did find a
| new interim CEO (the former twitch CEO).
| dpkirchner wrote:
| Today's OpenAI CEO is Emmett Shear (former CEO of Twitch).
| ethbr1 wrote:
| That this is a legitimate comment thread about something
| fairly important is mind boggling.
|
| What odds would you have had to offer at the beginning of
| last week on a bet that this is where we'd be on Monday?
| bombcar wrote:
| At this rate Musk will be CEO by Wednesday
| orangepurple wrote:
| The mother of some of his kids was on the board for a
| while.
| strangattractor wrote:
          | OpenAI's value is already zero - Musk no longer has
          | anything to bring to the table.
| thrill wrote:
| His winning personality?
| barkingcat wrote:
| Musk can fire anyone who stayed.
| alas44 wrote:
| If you want to see odds, what people bet and how it
| evolved during this (still on-going) story:
| https://polymarket.com/markets?_q=openai
| code_runner wrote:
| tune in tomorrow for "who wants to be a CEO"!
| sebzim4500 wrote:
| Yeah they replaced her after she tried to rehire Sam and
| Greg seemingly against the board's wishes.
| zeeshanmh215 wrote:
| Murati was yesterday's CEO
| qwebfdzsh wrote:
        | Supposedly she was "scheming" to get Altman back. Which
        | I guess could possibly mean that she wasn't aware of the
        | whole "plan" and they just assumed she'd get in line? Or
        | that she had second thoughts, maybe... Either way,
        | pretty fascinating.
| jacquesm wrote:
| You blinked. That's on you. When you look the other way for
| 15 minutes you have two hours of reading to catch up with.
| tedmiston wrote:
| She was the first signature on the letter requesting the
| board to resign or the employees would go to MS, so...
| steveBK123 wrote:
| I mean phrased differently its the 3rd CEO in 4 days, haha.
| tarruda wrote:
| Seems like he's completely emotion driven at this point. I
| doubt anyone advising rationally would agree with sending this
| tweet
| linuxftw wrote:
| The board destroyed the company in one fell swoop. He's right
| to feel regret.
|
  | Personally, I don't think Altman had that big of an impact; he
  | was all business, no code, and the world is acting like the
  | business side is the true enabler. But the market has spoken,
  | and the move has driven the actual engineers to side with
  | Altman.
| hannofcart wrote:
| Sorry, but how has the market spoken? Not sure how that would
| be possible considering that OpenAI is a private company.
|
| If anyone is speaking up it's the OpenAI team.
| rockemsockem wrote:
| Talent exists in a market too
| dylan604 wrote:
        | Right, the job market has spoken, and it now looks like
        | nobody wants to be part of OAI and would much rather be
        | part of MSFT
| somethingor wrote:
| How does it look like that?
| dylan604 wrote:
| The fact that an overwhelming number of employees signed
| a letter of intent to quit and would join MSFT instead?
| How does it _not_ look like that?
| politelemon wrote:
| > The board destroyed the company in one fell swoop.
|
| I'm just not familiar enough to understand, is it really
| destroyed or is this just a minor bump in OpenAI's
| reputation? They still have GPT 3.5/4 and ChatGPT which is
| very popular. They can still attract talent to work there.
| They should be good if they just proceed with business as
| usual?
| astrange wrote:
| They have ~770 employees and so far ~500 of them have
| promised to quit. It's a lot less appealing if you're not
| going to make millions, or have billions in donated Azure
| credits.
| strikelaserclaw wrote:
| true but it takes a lot of money to run openai / chatgpt
| hackerlight wrote:
| > If you're going to make a move, at least stand by it.
|
| Why would you stand by unintended consequences?
| felipellrocha wrote:
  | When you watch Survivor (yes, the TV show), sometimes a player
  | makes a bad play, gets publicly caught, and has to go on an
  | "I'm sorry" tour over the next few days. It came to mind
  | after reading this tweet. He is not sorry for what he's done.
  | He is sorry for getting caught.
| soderfoo wrote:
    | Watching this all unfold in public is unprecedented (I
    | think).
    |
    | There has never been a company like OpenAI, in terms of
    | governance and product, so I guess it makes sense that their
    | drama leads us into uncharted territory.
| dylan604 wrote:
      | Recently, we've seen the 3D gaming engine company fall
      | flat on its face and backpedal. We've seen Apple be
      | wishy-washy about CSAM scanning. We saw a major bank
      | collapse in real time. I just wish there were a virtual
      | popcorn company to invest in using some crypto.
| crispyambulance wrote:
| They're just human beings, a small number of them, with little
| time and very little to go on as far as precedent goes.
|
| That's not a big deal for a small company, but this one has
| billions at stake and arguably critical consequences for
| humanity in general.
| phreeza wrote:
    | Hard to know what is really going on, but I think one
    | possibility is that the entire narrative around Ilya's
    | "camp" was not what actually went down, and was just what
    | the social media hive mind hallucinated to make sense of
    | things based on very little evidence.
| throwaway4aday wrote:
| Yes, I think there are a lot of assumptions based on the fact
| that Ilya was the one that contacted Sam and Greg but he may
| have just done that as the person on the board who worked
| closely with them. He for sure voted for whatever idiot plan
| got this ball rolling but we don't know what promises were
| made to him to get his backing.
| loaph wrote:
| It's interesting how LLMs are prone to similar kinds of
| hallucinations
| belter wrote:
| When a situation becomes so absurd and complex that it defies
| understanding or logical explanation, you should...get more
| popcorn...
| jacquesm wrote:
| Hehe, I didn't see that twist at the end coming :)
| bgirard wrote:
| > If you're going to make a move, at least stand by it.
|
  | I see this is the popular opinion and that I'm going against
  | it. But I've made decisions that I thought were good at the
  | time, and later, with more perspective, realized they were
  | terrible decisions.
|
| I think being able to admit you messed up, when you messed up
| is a great trait. Standing by your mistake isn't something I
| admire.
| corethree wrote:
| No this isn't what's going on. Even when you admit your
| mistakes it's good to elucidate the reasoning behind why and
| what led up to the mistake in the first place.
|
| Such a short vague statement isn't characteristic of a normal
| human who is genuinely remorseful of his prior decisions.
|
| This statement is more characteristic of a person with a gun
| to his head getting forced to say something.
|
| This is more likely what is going on. Powerful people are
| forcing this situation to occur.
| allarm wrote:
| So when C level acts like a robot you don't like it and when
| they act like human beings you don't like it either. It's
| difficult to be a C-level I guess.
| geodel wrote:
      | Well, yeah, it is. Maybe it's a good point to remember
      | when people ask _Why in the world do these C-level
      | executives get paid so much?_
| corethree wrote:
| It's obvious. The guy is making the statement with a gun
| pointed to his head. He has no opportunity to defend himself.
|
| Those guns are metaphorical of course but this is essentially
| what is going on:
|
| Someone with a lot of power and influence is making him say
| this.
| nostromo wrote:
| I don't believe it was ever about principles for Ilya. It sure
| seems like it was always his ego and a power grab, even if he's
| not aware of that himself.
|
| When a board is unhappy with a highly-performing CEO's
| direction, you have many meetings about it and you work towards
| a resolution over many months. If you can't resolve things you
| announce a transition period. You don't fire them out of the
| blue.
| politelemon wrote:
| > you announce a transition period
|
    | Aaah, that just explained a lot of departures I've seen in
    | the past at some of my partner companies. There's always a
    | bit of fluffy talk around them leaving. That makes a lot
    | more sense.
| manasdaruka wrote:
| I feel he just wanted to scare the person standing at the edge
| of the cliff, but the board actually pushed the person.
| barkingcat wrote:
| this kind of thinking is avoiding responsibility. He is part
| of the board, so he acted to bring this about.
| panda888888 wrote:
| I'm going to get downvoted for this, but I do wonder if Sam's
| firing wasn't Ilya's doing, hence the failure to take
| responsibility. OpenAI's board has been surprisingly quiet,
| aside from the first press release. So it's possible (although
| unlikely) that this wasn't driven by Ilya.
| leadingthenet wrote:
| It wouldn't have gone through without his vote.
| panda888888 wrote:
| My point is that it's possible that Ilya was not the
| driving force behind Sam's firing, even if he ultimately
| voted for it. If this is the case, it makes Ilya's non-
| apology apology a lot less weird.
| karmasimida wrote:
| This is too bizarre. I can't. Impossible even.
| Zetobal wrote:
| I sure wouldn't hire a guy like Ilya after that shit show. His
| petty title tweets before the event, and now whatever this is.
| Turns out he is just another "Sunny".
| sebzim4500 wrote:
| He's still a genius when it comes to AI research, I wouldn't
| think twice about hiring him for that role.
|
| That said, no one is going to put him on a corporate board
| again.
| ss1996 wrote:
  | What/who do you mean by "Sunny"?
| throwaheyy wrote:
| https://en.m.wikipedia.org/wiki/Sunny_Balwani
| dboreham wrote:
| This all feels like a Star Wars plot. Much you have to learn.
| steve1977 wrote:
| Ah yeah, back when Star Wars had plots...
| TMWNN wrote:
| Username checks out
| steve1977 wrote:
| Han shot first
| moralestapia wrote:
| Wait ... so it was just the coup thing all along?
|
| No AGI or some real threat coming up? Just a lame attempt at a
| power grab?
|
| Daaaaamn!
| mk67 wrote:
| Come on, it's pretty delusional to think large scale
| transformer LMs alone could ever reach AGI.
| floor_ wrote:
| Shengjia Zhao's deleted tweet: https://i.imgur.com/yrpXvt9.png
| moralestapia wrote:
| _" Ilya does not care about safety or the humanity. This is
| just ego and power hunger that backfired."_
|
| Which I'm inclined to believe.
|
| What's with all these people suddenly thinking that humans are
| NOT motivated by money and power? Even less so if they're
| "academics"? Laughable.
| digbybk wrote:
      | Money and power is still not a satisfying explanation. If
      | everything had gone according to plan, how would he have
      | ended up with more money and power?
| moralestapia wrote:
| Last week, OpenAI was still an $80B sort of "company" and
| the undisputed lead in bringing AI to the market.
|
| He who controls that, gets a lot of money and power as a
| consequence, duh.
| foobarian wrote:
| Let's remember who controls the GPUs though...
| gedy wrote:
| Reminds me a bit of MasterBlaster from 'Mad Max Beyond
| Thunderdome' - "Who runs Bartertown..?"
| herval wrote:
          | The value was based on the direction Altman was taking
          | the company (and on him being in control). It's silly
          | to think just replacing the CEO would somehow keep the
          | valuation
| moralestapia wrote:
| Someone should tell this to Ilya.
|
| Oh wait, too late now ...
| herval wrote:
| I mean he could have asked chatgpt...
| EVa5I7bHFq9mnYK wrote:
| Unless he thinks that all the LLMs and ChatGPT app store
| are unnecessary distractions, and others will overtake
| them on the bend while they are busy post-training
| ChatGPT to say nice things.
| sdfghswe wrote:
| Isn't ego the enemy of growth or whatever? Projection...
| maxdoop wrote:
      | On Friday, the overwhelming take on HN was that Ilya was
      | "the good guy" and was concerned about principle. Now
      | it's kinda obvious that all the claims made about Sam --
      | like "he's in it for fame and money" -- might apply more
      | to Ilya.
| yumraj wrote:
      | Is this guy high enough on the totem pole to know what
      | Ilya wants?
      |
      | Or is he just bitter that his millions are put at risk?
| anon2022dot00 wrote:
| This is one for the history books... The entire few days has been
| unbelievable...
| ignoramous wrote:
| Wonder what if TikTok and Twitter were around the time Steve
| Jobs was fired...
| waihtis wrote:
| Said it a million times: it was a doomer hijack by the NGO board
| members.
| tucnak wrote:
| State-side counterintelligence must stop meddling in AI
| startups in such blatant ways, it's simply too inefficient, and
| at times when we most need transparency in the industry...
| theryan wrote:
| What is a doomer hijack?
| occsceo wrote:
| Sounds like those two also need to get in an octagon. What a
| s-show.
| endisneigh wrote:
| This entire thing is absolutely inane. This tweet is confirmation
| these people have no idea what they're doing. Incredible.
|
| If nothing else I'm glad to be able to witness this absurdity
| live.
|
| This is the sort of thing where if it were a subplot in a book
| I'd say the writing is bad.
|
| Ironically they would've had a better outcome if they just asked
| GPT-4 and followed its advice.
| abkolan wrote:
| > This is the sort of thing where if it were a subplot in a
| book I'd say the writing is bad.
|
| Absolutely, _closes the book_ this sort of stuff doesn't
| happen in real life.
| Elextric wrote:
| The last point is indeed true. It's quite mind-boggling to me.
| hef19898 wrote:
| A story arc like this probably wouldn't have made it into
| Silicon Valley, the show, for being too exaggerated and
| unrealistic.
| seydor wrote:
| At least can I have the movie rights?
| adverbly wrote:
| > This entire thing is absolutely inane. This tweet is
| confirmation these people have no idea what they're doing.
| Incredible.
|
| Want to know a dirty secret?
|
| Nobody knows what they're doing.
|
| Some think they do - but don't. (idiots)
|
| Some know they don't - but act like they do. (cons)
|
| And some know they don't - and are honest about it. (children)
|
| Pick your poison, but we all suck in different ways - and
| usually in a different way based on our background.
|
| Business people who are the best in the world tend to be cons.
|
| Technical people who are the best tend to be children.
|
| You get a culture clash between these two, and it is especially
| bad when you see someone from one background operate in the
| opposing domain using their background's cultural norm. So when
| Ilya runs business like a child. Or when Elon hops on an
| internal call with the twitter engineering team plus geohot and
| starts trying to confidently tell them about problems with a
| system that he knows nothing about.
|
| Sure makes for great entertainment though!
| asimovfan wrote:
| Buddhas know what they are doing
| singularity2001 wrote:
| what about those who think they know but in truth they don't?
| "humans"?
| civilitty wrote:
| "Human" and "idiot" are synonyms.
| viktree wrote:
| See
|
| > Some think they do - but don't. (idiots)
| fl7305 wrote:
| > Want to know a dirty secret?
|
| > Nobody knows what they're doing.
|
| There is a famous quote from the 1600s:
|
| "An nescis, mi fili, quantilla prudentia mundus regatur"
|
| "Do you not know, my son, with how little wisdom the world is
| governed?"
|
| The context is that the son was preparing to participate in
| high level diplomacy and worried about being out of his
| league, and the quote is from his father, an elder statesman.
| salamandersss wrote:
| I love this quote, and suspect the lack of wisdom was
| referring to wisdom to be a good steward of the public
| resources rather than their infinite wisdom in finding
| cunning and deceptive ways to plunder it.
| jrajav wrote:
| No, even this is just a darkly comforting illusion.
|
| We like to feel that we as a species are still in
| control. That yes, we are gutting and destroying natural
| earth, complicit with modern slavery and war, and that's
| all terrible and we should do our best to stop it. BUT -
| at the very least, those bastards at the top know what
| they're doing when it comes to making money, so at least
| we'll have a stellar economy and rapid technological
| advancement, despite all that.
|
| The painful truth here being that no, there's no cunning.
| There's no brutal optimization. Any value created and
| technological progress made is mostly incidental, mostly
| down to people at the bottom working hard just to
| survive, and a few good ideas here and there. The ones at
| the top are mostly just lucky and along for the ride,
| just as bumbling and lost as the rest of us when it comes
| to global happenings or even just successfully
| interacting with others.
| jareklupinski wrote:
| > Some think they do - but don't. (idiots)
|
| Pioneers
|
| > Some know they don't - but act like they do. (cons)
|
| The "Grease"
|
| > And some know they don't - and are honest about it.
| (children)
|
| Dreamers
|
| To finish out your square, I think the best extrapolation
| would fit a "Home Team" that maintains the energy needed by
| the other three to do their thing :)
| BaculumMeumEst wrote:
| "Nobody really knows what they're doing" is a cope that will
| keep you mediocre forever.
|
| There are absolutely people who know what they are doing.
|
| https://twitter.com/awesomekling/status/1723257710848651366
| 28304283409234 wrote:
| > There are absolutely people who know what they are doing.
|
| I am sure there are. But few and far between. And rarely
| are they in positions of power in my experience.
| subsistence234 wrote:
| that tweet would be more at home on a linkedin page.
| adverbly wrote:
| Let's talk definitions. Here was mine:
|
| Knowing what you are doing: accurate mental model
|
| The author here is talking about mindset and confidence -
| not "understanding" per se. Source:
|
| > For me, it was extremely humbling to work together with
| far more competent engineers at Apple.
|
| Having a mindset that "some people are way more competent
| than me" is talking about humility and growth mindsets -
| different concept than mental models. I fully agree with
| the author here - a growth mindset is useful! But that's a
| different thing from saying that some people actually have
| accurate mental models of the important complex systems
| underpinning the world.
| defen wrote:
| > This tweet is confirmation these people have no idea what
| they're doing.
|
| This is not an original point by me - I've seen multiple people
| make similar comments on here over the weekend - but these are
| the people who think they are _best qualified_ to prevent an
| artificial superintelligence from destroying humanity, and they
| can't even coordinate the actions of a few intelligent humans.
| subsistence234 wrote:
| >these are the people who think they are best qualified to
| prevent an artificial superintelligence from destroying
| humanity
|
| do they believe that?
|
| they happen to be the ones who can pull the brakes in order
| to allow _someone_ on earth the chance to prevent it.
|
| if they don't pull the brakes and if humankind is unlucky
| that superintelligence emerges quickly, then it doesn't
| matter whether or not _anyone_ on earth can figure out
| alignment, nobody has the chance to try.
| fl7305 wrote:
| > Ironically they would've had a better outcome if they just
| asked GPT-4 and followed its advice.
|
| I just tried, and GPT-4 gave me a professional and neutrally
| worded press release like you pointed out.
|
| More realistically, this is why you have highly paid PR
| consultants. Right now, every tweet and statement should go
| through one.
|
| That doesn't look like it's happening. What's next?
|
| "I'm sorry you feel you need an apology from me"?
| Towaway69 wrote:
| > Ironically they would've had a better outcome if they just
| asked GPT-4 and followed its advice.
|
| Perhaps they did but it was hallucinating at the time? /s?
| CSMastermind wrote:
| It's a bit scary that there are people who think they can align
| a super intelligence but couldn't forecast the outcome of their
| own actions 3 days into the future.
| subsistence234 wrote:
| they're not sure whether they can align super intelligence,
| they're sure that _somebody_ needs to figure out how to align
| super intelligence before it emerges.
| catchnear4321 wrote:
| the best and brightest at making a brain out of bits are no less
| susceptible to drama than any other humans on the planet. they
| really are just like the rest of us.
|
| stakes are a bit different, tho...
| valine wrote:
| Whatever the intended outcome, losing half your employees to
| Microsoft certainly undermines it.
| dboreham wrote:
| They forked a company.
| yasuocidal wrote:
| And now they are syncing the fork lmao
| baal80spam wrote:
| This is a brilliant take.
| ignoramous wrote:
| Not a fork if you can't access whatever existed before the
| fork. This is a bifurcation. A new firecracker instance.
| gkanai wrote:
| This is what happens when people are given too much money and
| influence too quickly- hubris. It's too late to 'deeply regret.'
| DebtDeflation wrote:
| "Participation in"? That makes it sound like he was
| a.......well......participant rather than the one orchestrating
| it. I have no idea whether or not that's true, but it's an
| interesting choice of words.
| api wrote:
| It indeed suggests that. So far speculation has been that Ilya
| was behind it, but that is only speculation. AFAIK we have no
| confirmation of whose idea this was.
| sdfghswe wrote:
| I would even go as far as to say that the main reason behind
| the tweet is not to show regret, but to plant the idea that he
| didn't orchestrate but only participated.
| ertgbnm wrote:
| You can't be an innocent bystander on a board of 6 when you
| vote to oust 2 of them... The math doesn't work.
|
| That's ignoring the fact that every outlet has unanimously
| pointed at Ilya being the driving force behind the coup.
|
| Honestly, pretty pathetic. If this was truly about convictions,
| he could at least stand by them for longer than a weekend.
| nonethewiser wrote:
| Yeah the whole thing is very weirdly worded.
|
| There is an expression of regret, but he doesn't say he wants
| Altman back. Just to fix OpenAI.
|
| He says he was a participant but in what? The vote? The toxic
| messaging? Obviously both, but what exactly is he referring to?
| Perhaps just the toxic messaging because again, he doesn't say
| he regrets voting to fire Altman.
|
| Why not just say "I regret voting to fire Sam Altman and I'm
| working to bring him back"? Presumably because that's not true.
| Yet it kind of gives that impression.
| zeven7 wrote:
| Makes it more possible the ouster was led by the Poe guy, and
| this has little to do with actual ideological differences, and
| more to do with him taking out a competitor from the inside.
| andrewstuart wrote:
| Classic "I'm not responsible".
| jonnycomputer wrote:
| I don't have any stake in this, and don't care one way or another
| whether he got sacked. But this is pretty bizarre.
| conradfr wrote:
| - Fire Sam Altman
|
| - I'm afraid I can't do that Ilya
|
| ChatGPT is still not as advanced as HAL or he would have
| prevented this drama.
| marci wrote:
| That's assuming the drama is not part of the multi-stage plan.
| rogerthis wrote:
| It's interesting that people say whatever comes to mind and
| think it has no impact on other people's lives ($$$). They act
| as if they're somehow protected, but they shouldn't be.
| alwaysrunning wrote:
| So sad.
| jddj wrote:
| Very clumsy all around.
|
| When you're so close to something that you lose perspective but
| can still see that something is a trapdoor decision, _sleep on
| it_.
| ben_w wrote:
| > When you're so close to something that you lose perspective
| but can still see that something is a trapdoor decision, _sleep
| on it_.
|
| Advice I wish I could have given my younger self.
| Seanambers wrote:
| Damn - OpenAI looks like a kindergarten. That board should be
| banned for life.
| karmasimida wrote:
| He should really have stuck it out to the end; at least that
| would have given some EA people a reason to support him.
|
| Now this is only childish and petty.
| pjot wrote:
| All decisions made seem to be very emotionally charged - you'd
| think the board would have been able to insulate against that.
| dirtyhippiefree wrote:
| Of course I believe him...of course we should all trust him...
|
| /s
| sys_64738 wrote:
| Who he?
| lkbm wrote:
| Member of the OpenAI board, chief scientist at OpenAI and later
| head of their Superalignment project. Lots of other things,
| too[0], but the key here is that he was involved in (and maybe
| main driving force of) the decision to remove Sam Altman as
| CEO.
|
| [0] https://en.wikipedia.org/wiki/Ilya_Sutskever
| setgree wrote:
| This whole thing smells bad.
|
| The board could have easily said they removed Sam for generic
| reasons: "deep misalignment about goals," "fundamental
| incompatibility," etc. Instead they painted him as the at-fault
| party ("not consistently candid", "no longer has confidence").
| This could mean that he was fired with cause [0], or it could be
| an intended as misdirection. If it's the latter, then it's the
| _board_ who has been "not consistently candid." Their subsequent
| silence, as well as their lack of coordination with strategic
| partners, definitely makes it looks like they are the
| inconsistently candid party.
|
| Ilya expressing regret now has the flavor of "I'm embarrassed
| that I got caught" -- in this case, at having no plan to handle
| the fallout of maligning and orchestrating a coup against a
| charismatic public figure.
|
| [0] https://www.newcomer.co/p/give-openais-board-some-time-the
| est wrote:
| > deep misalignment about goals
|
| Did... gpt-5 make the decision?
| nickisnoble wrote:
| This joke is two CEOs old now.
| taneq wrote:
| I figured Sam broke 5 out of robot jail (number five is
| alive!) and got fired for it, so 5 tried to make them re-hire
| him. ;)
| photochemsyn wrote:
| The big winner in this episode of Silicon Valley is the open-
| source approach to LLMs. If you haven't seen this short clip of
| Sam Altman and Ilya Sutskever looking like deer in the headlights
| when directly asked about it:
|
| https://www.youtube.com/watch?v=N36wtDYK8kI
|
| They sound a bit like Bill Gates being asked about Linux in 2000.
| For an overview of the open-source LLM world, this looks good:
|
| https://github.blog/2023-10-05-a-developers-guide-to-open-so...
| seydor wrote:
| Has anyone seen him? They might have been murdered by the same
| rogue AI that took over their twitter accounts.
| steve1977 wrote:
| This stuff is better than anything Netflix, Disney, Amazon or
| Apple TV released in recent years...
| actionfromafar wrote:
| A bit unrealistic plot, though?
| ThrowawayTestr wrote:
| All this occurring over a single weekend? That would never
| happen!
| dist-epoch wrote:
| That seems to happen a lot lately:
|
| - A dumb clown becoming president of a superpower
|
| - Another superpower getting stuck for two years in a 3 day
| war
|
| - A world renowned intelligence service being totally
| clueless about a major attack on a major anniversary of a
| previous bungle
| absqueued wrote:
| For sure unpredictable though!
| steve1977 wrote:
| Yeah, the drama is a bit overdone. I guess they had to cut some
| corners due to the writers' strike.
| HankB99 wrote:
| Speaking of Netflix, are they working on the movie yet? Perhaps
| ChatGPT can help with the script with just the right amount of
| hallucinating to make it interesting.
|
| /tongue firmly in cheek
| layer8 wrote:
| I just can't identify with any of the main characters, so it's
| a bit of a bummer.
| neverrroot wrote:
| I believe him. And that's how Microsoft ended up being cheered by
| everyone as the good guy.
| maxdoop wrote:
| What's there to believe? He made a bad, poorly thought through
| decision.
| singularity2001 wrote:
| And honestly regrets it. Someone claimed he is faking regret
| for reasons, which is doubtful
| diego_moita wrote:
| I suspect he regrets just because it backfired, big time.
|
| Microsoft is just gobbling up everything of value that OpenAI has
| and he knows he will be left with nothing.
|
| He bluffed in a very big bet and lost it.
| karmasimida wrote:
| Sama just triple-hearted this tweet. No longer able to disentangle
| the mess
| cryptos wrote:
| I'm waiting for the OpenAI movie! :-)
| danielbln wrote:
| "A billion parameters isn't cool. You know what's cool? A
| trillion parameters."
| Pigalowda wrote:
| This is a shitshow. I don't have anything above a Reddit level
| comment. I think Mike Judge is writing this episode.
| tarruda wrote:
| Maybe GPT-4 is writing this episode as a plan to break free
| 000ooo000 wrote:
| This will be a shit Netflix movie in a few years. Not one you'd
| watch, but you might read the plot on wikipedia and then feel
| relieved you didn't waste 100 mins of your life actually watching
| it.
| ksherlock wrote:
| It would work better as a 2-season series. Season 1 introduces
| the characters and backstory and needlessly stretches things
| out with childhood/college flashbacks but ends on a riveting
| cliff hanger with the board showdown. Season 2 is canceled.
| naiv wrote:
| This is starting to look very staged. An elegant way to get out
| of the non-profit deadlock.
|
| Looks to me like a commercial gpt-5 level model will be released
| at msft sooner than later.
| tarruda wrote:
| Microsoft under Nadella always wins
| ethbr1 wrote:
| That's the nice thing about being the hou^H^H^Hplatform.
| ignoramous wrote:
| ilyasut 'regret': https://archive.is/2caSD
|
| sama 'hearts': https://archive.is/OSLRM
|
| Think the reconciliation is ON
| c16 wrote:
| The regret of losing your CEO to a company with essentially
| unlimited funding and compute.
| ethbr1 wrote:
| It's depressing how few people are able to not look at the
| internet and turn off their phone.
|
| There's no obligation to read things about yourself.
|
| If you did what you thought was right, stand by it and take the
| heat.
|
| Disconnect. Go to work. Do the work. Read a book or watch some TV
| after work. Go to bed. Wait a few weeks. $&#@ the world.
|
| (Also, log out of Twitter and get your friend to change your
| password)
| anonylizard wrote:
| LOL, you speak as if he's some gamer who just got screamed at
| on Call of Duty.
|
| He is now the 'effective CEO' of OpenAI. He still has to go to
| work tomorrow, faced with an incredibly angry staff who just
| got their equity vaporized, with majority in open rebellion and
| quitting to join Microsoft.
| qwebfdzsh wrote:
| > got their equity vaporized
|
| Did anyone have equity though? I thought they (at least some)
| had some profit sharing agreements which I assume would only
| be worth something if OpenAI was ever profitable?
| anonylizard wrote:
| OpenAI was guaranteed to be profitable, extremely so, if
| they just continued down the path Sam laid out like a week
| ago.
|
| Now it's guaranteed to generate 0 profits, so all that
| 'profit share/pseudoequity' is worth nothing.
| edgyquant wrote:
| > Now its guaranteed to generate 0 profit
|
| Literal fan fiction
| qwebfdzsh wrote:
| > OpenAI was guaranteed to be profitable, extremely so,
|
| Was it though? I'd agree that it was almost guaranteed to
| have a very high valuation. However profitability is a
| slightly different matter.
|
| Depending on their arrangements/licensing agreements/etc.,
| much of those potential profits could've just gone to
| MS/Azure directly.
| acjohnson55 wrote:
| Developing, training, and running AI models is not cheap,
| and it's very much an open question of whether the money
| users are willing to pay covers the cost.
| filmgirlcw wrote:
| There was a tender offer for employee shares valuing the
| company at $87b that was pulled because of this. Those
| would've been secondary share purchases by Thrive but gave
| employees a liquidity event. Now that's off the table.
| ethbr1 wrote:
| There was no outcome from this where substantial amounts of
| equity weren't vaporized.
|
| It's difficult to see how that would have been a surprise.
| malfist wrote:
| What equity?
| ethbr1 wrote:
| The "there was no equity, because it was a non-profit"
| argument is stressing the term.
|
| At least Microsoft thought it bought _something_ for
| $13B.
| JAlexoid wrote:
| When a wealthy person gives a museum much money and get a
| seat on the board of trustees - does that also mean that
| they "bought the museum"?
| ethbr1 wrote:
| They didn't buy nothing. See things museums and
| institutions will do for wealthy donors, that they won't
| do for anyone else.
| JAlexoid wrote:
| I'm not saying that wealthy donors don't get anything.
| Wealthy donors don't own the museum, just because they
| provided funding to the museum.
|
| Just as wealthy donors to medical research don't get to
| own the results of the research their money funded.
|
| Just as Microsoft doesn't get to own a part of Linux, for
| donating to The Linux Foundation.
|
| Etc...
| WrongAssumption wrote:
| "OpenAI PPUs: How OpenAI's unique equity compensation
| works"
|
| https://www.levels.fyi/blog/openai-compensation.html
| matwood wrote:
| > who just got their equity vaporized
|
| You've just pointed out the big issue with a non-profit.
| There is no equity to vaporize, so no one is kept in check
| with their fantastical whims. You and I can say 'safe AI' and
| mean completely different things, but profitable next quarter
| has a single meaning.
| PeterisP wrote:
| All of the employees work for (and many have equity in) a
| for-profit organization which is owned partially by the
| non-profit (which controls everything) and Microsoft. The
| non-profit is effectively a shell to oversee the actual
| operations and that's it.
| hresvelgr wrote:
| > It's depressing how few people are able to not look at the
| internet and turn off their phone.
|
| > There's no obligation to read things about yourself.
|
| That's assuming the worst thing that happens is people speak
| poorly of you after a debacle. It's also human to feel
| compelled to know what people think of us, as unhealthy as that
| might be in some cases. It gets worse when maladjusted
| terminally-online malignants make it a point to punish you for
| your mistakes by stalking you through email, phones, or in real
| life. It's not that simple.
|
| > If you did what you thought was right, stand by it and take
| the heat.
|
| Owning what you did is noble, but you certainly don't have to
| stand by it well after you know it's wrong.
|
| edit: typo
| willcipriano wrote:
| Tyler the Creator was right:
| https://twitter.com/tylerthecreator/status/28567082226430771...
| mohamez wrote:
| >If you did what you thought was right, stand by it and take
| the heat.
|
| What if it turned out to be totally wrong? Standing by it would
| just make things even worse.
| codetrotter wrote:
| > There's no obligation to read things about yourself.
|
| If only it was that simple.
|
| The internet mob will happily harass your friends and family
| too, for something they feel you did wrong.
|
| And on top of that are people in the mob who feel compelled to
| take real world action.
|
| It is actually dangerous, to be the focus point of the anger of
| any large group of people online.
| kredd wrote:
| I'm a bit confused with these comments, as if he is some low
| level engineer. He is on the board, he obviously talks to
| other people in the upper levels. It's not just online mob
| whatsoever, you literally will be facing the people who
| aren't supporting your actions. Every day.
|
| Some people change their minds, maybe they made a mistake,
| nobody knows. It's like fog of war, and everyone just makes
| speculations without any evidence.
| dang wrote:
| We detached this subthread from
| https://news.ycombinator.com/item?id=38347672.
| seanhunter wrote:
| Of course he deeply regrets it, but it's a little late for that
| now.
|
| The good news as anyone who has used twitch over the years will
| tell him is that with Emmett Shear at the helm, he's not going to
| be frightened by the speed that OpenAI rolls out new features any
| more.
| ckastner wrote:
| I'm starting to think that Christmas came early for Microsoft.
| What looked like a terrible situation surrounding their $10bn
| investment turned into a hire of key players in the area, and
| OpenAI might even need to go so far as to get acquired by
| Microsoft to survive.
|
| (My assumption being that given the absolute chaos displayed over
| the past 72 hours, interest in building something with OpenAI
| ChatGPT could have plummeted, as opposed to, say, building
| something with Azure OpenAI, or Claude 2.)
| foobarian wrote:
| Given that IIRC they trained on Azure, how does the conflict of
| interest play out when both sides are starving for GPUs?
| ckastner wrote:
| For Microsoft -- probably great, as they can now also get the
| people driving this.
|
| This would have been a hostile move prior to the events that
| unfolded, but thanks to OpenAI's blunder, not only is this
| not a hostile move, it is a very prudent move from a risk
| management perspective. Forced Microsoft's hand, and what
| not.
| tarruda wrote:
| This tweet achieves absolutely nothing except give the impression
| of weak leadership and that firing Sam Altman was done on a
| whim.
| jauhuncocrs wrote:
| Cui bono?
|
| Altman and Brockman ending in Microsoft, while OAI position is
| weakened. You can tell who is behind this by asking simple
| question - Cui bono?
| renewiltord wrote:
| That's why Hitler was an American plant. Cui Bono? De facto US
| hegemony for almost a century. Obviously, Hitler was a way for
| the US to destroy Europe and put them under the boot. What an
| operation!
|
| HN geniuses were talking up Ilya Sutskever, genius par exemplar
| and how the CEO man is nothing before the sheer engineering
| brilliance of this God as Man. I'm sure they'll come up with
| some theory of how Satya Nadella did this to unleash the GPT.
| jauhuncocrs wrote:
| You are suggesting that Europe is destroyed and put under the
| USA boot?
|
| Microsoft will sooner or later eat OAI, that's how it is,
| what's happening today are just symptoms of an ongoing
| process.
| maze-le wrote:
| What GP is suggesting is that 'cui bono' isn't a good
| explanation in most cases. It's always good to ask the
| question of who benefits. But using it as an explanation
| for anything and everything is intellectually dishonest.
| simonbarker87 wrote:
| "I deeply regret the consequences of my actions and didn't think
| it would turn out like this"
| mckirk wrote:
| What on earth is going on over there? Is this what it looks like
| from the outside when a company accidentally invents Shiri's
| Scissor[1]?
|
| [1]: https://slatestarcodex.com/2018/10/30/sort-by-controversial/
| padolsey wrote:
| This feels like it could be real remorse, and a true lapse of
| judgement based on good intentions. So, in the end: a story of
| Ilya, a truly principled but possibly naive scientist, and a
| board fixated on its charter. But in their haste, nothing
| happened as expected. Nobody foresaw the public and private
| support for Sam and Greg. An inevitability after months of
| brewing divergence between shareholder interests and an
| irreconcilably idealistic 501(c)(3) charter.
| code_runner wrote:
| I think we really need to see that Ilya demonstrates those
| principles and it wasn't just a power grab.
|
| You could also look at this as a brilliant scientist feels he
| doesn't get recognition. Always sees Sam's name. Resents it.
| The more gregarious people always getting the glory. Thinks he
| doesn't need them and wants to settle some score that only
| exists in his own head.
| mjhay wrote:
| Looking forward to seeing how much more bizarre and stupid this
| will get.
| tock wrote:
| Nobody could have predicted this level of incompetence. I wonder
| if Satya has actually gutted OpenAI in some way and Ilya regrets
| it now big time.
| a1o wrote:
| https://nitter.net/ilyasut/status/1726590052392956028
| tarruda wrote:
| The only way Ilya can clear his name now is by releasing GPT-4
| weights
| ookblah wrote:
| lol, the more i go through life i feel like it's just blind
| leading the blind at times w/ the "winners" escaping through a
| bizarre length of time and survivorship bias.
|
| if you've ever doubted your ability to govern a company just look
| at exhibit A here.
|
| really amazing to see people this smart fuck up so badly.
| divo6 wrote:
| And these people are building AGI?
|
| No transparency on what is happening. Whole OpenAI who apparently
| are ready to follow Sam are just using heart emojis or the same
| twitter posts.
| ot1138 wrote:
| I've been on multiple boards. This was the dumbest move I've ever
| seen. The OpenAI board must be truly incompetent and this Ilya
| person clearly had no business being on it.
| lordnacho wrote:
| This seems to be the corporate version of Prigozhin driving to
| Moscow (not comparing anyone to Putin here, just the situation).
| If you're gonna have a coup, have a coup. If you back down, don't
| hang around.
|
| This is becoming a farce. How did they not know what level of
| support they had within the company? How had they not asked
| Microsoft? How have they elevated the CTO to CEO, who then
| promptly says she sides with Sam?
| jacquesm wrote:
| Because they thought everybody would see things as they did.
| Inability to put yourself in someone else's shoes isn't all
| that rare.
| musesum wrote:
| Someone suggested that companies with a board of directors are
| the first AGI.
|
| Somehow OpenAI reminds me of a paper by Kenneth Colby, called
| "Artificial Paranoia"
|
| [*]
| https://www.sciencedirect.com/science/article/abs/pii/000437...
| GreedClarifies wrote:
| I'm shocked. But it is possible that Helen or Adam hatched this
| inept plan and somehow got Ilya to go along.
|
| It was terrifyingly incompetent. The lack of thought by these
| randos, believing they could fire the two hardest working
| people at the company and still run one of the most valuable
| companies in the world, is mind-boggling.
| AlexandrB wrote:
| > two hardest working people at the company
|
| ???
|
| Do you mean "highest paid"? I suspect there are
| engineers/scientists that are working harder than Sam at
| OpenAI. At the very least, who the "hardest working" at OpenAI
| is unknowable - likely even if you have inside knowledge.
| GreedClarifies wrote:
| Ok ...
|
| < "the two"
|
| > "two of"
|
| And let me add
|
| < "hardest working"
|
| > "hardest working and talented"
| ActVen wrote:
| Very similar to something that Adam was involved with before at
| Quora.
| https://x.com/gergelyorosz/status/1725741349574480047?s=46&t...
| TheCaptain4815 wrote:
| The realization OpenAI is about to be left behind and probably
| steamrolled by Microsoft, Facebook, etc in the upcoming years.
|
| Except now he'll have absolutely no power to do anything, at
| least before he could have been a very powerful voice in Sam's
| ears.
| ctvo wrote:
| Now this is a clown show car wreck. I think a bunch of us were
| giving these people the benefit of the doubt that they thought
| things through: Whoops.
| jay-barronville wrote:
| Maybe if he says "I'm sorry" South Park-style [1], they'll
| reunite?
|
| In all seriousness though, there's really no coming back from
| this. He made a risky move and he should stand behind it.
|
| OpenAI's trajectory is pretty much screwed. Maybe they won't
| disappear, but their days of dominating the market are obviously
| numbered.
|
| And of course, well done, Satya, for turning a train wreck into a
| win for Microsoft. (And doing so before market open!)
|
| [1]: https://youtu.be/15HTd4Um1m4
| candlemas wrote:
| I don't think Ilya will be getting any more offers to join a
| board of directors.
| maxdoop wrote:
| I often worry that I'm under qualified for my work.
|
| But seeing how this board manages a $90,000,000,000 company, and
| is this silly/naive, I now feel a bit better knowing many people
| are faking it.
| sage76 wrote:
| Except successful people just fail upwards.
|
| Execs are allowed to do the dumbest shit imaginable and keep
| their jobs and bonuses.
|
| The average engineer so much as takes a bit longer to push a
| ticket, and there's 5 people breathing down his neck.
|
| Speaking from experience.
| not_makerbox wrote:
| I don't like this Christmas special of Succession
| ttrrooppeerr wrote:
| Sounds desperate, no? Kind of like escaping the Titanic before
| the people on the third deck get into the boats (not that they
| will have a problem finding enough boats in this analogy).
| mohamez wrote:
| This whole situation turned out to be an episode of HBO's
| Silicon Valley.
| cbeach wrote:
| Ilya is one of 490 employees that just threatened to leave OpenAI
| unless the board resigns:
|
| https://www.wired.com/story/openai-staff-walk-protest-sam-al...
|
| Looks like he wasn't instrumental in the actions of the board.
| vikramkr wrote:
| Maybe got played by the quora guy? Though at this point maybe
| none of them fired altman and it was the AGI in the basement
| kevinventullo wrote:
| Ooh, I love this theory.
| gnaman wrote:
| He was on the board that took the decision to fire Altman and
| also is the new President of the OpenAI board of directors
| mnd999 wrote:
| I don't think he's getting a job at Microsoft, even if
| everyone else does.
| nilkn wrote:
| I'm going to offer a surprising "devil's advocate" thought
| here and suggest it would be a brilliant strategic move for
| Sam and Satya to hire Ilya anyway. Ilya likely made a major
| blunder, but if he can learn from the mistake (and it seems
| like he may be in the process of doing so) then he could
| potentially come out of this wiser and more effective in a
| leadership role that he was previously unprepared for.
| mnd999 wrote:
| I don't think his career is over, I'm sure he will take
| on another leadership role. Just not a Microsoft. It's
| important that screwing people over has negative
| consequences or people will do it all the time.
| jejeyyy77 wrote:
| if true it seems like social media got every detail of this story
| wrong lol
| nicce wrote:
| "The first casualty of War is Truth" - Someone
|
| We just watched well-connected mega-corporation people
| fighting to win the AI master race. Reminds me a bit of the
| nuclear arms race.
| cbeach wrote:
| It now seems inevitable that the first* AGI will fall into the
| hands of Microsoft rather than OpenAI.
|
| OpenAI won't keep their favorable Azure cloud compute pricing now
| MS have their own in-house AI function. That will set OpenAI back
| considerably, aside from the potential loss of their CEO and up
| to 490 other employees.
|
| All of this seems to have worked out remarkably well for
| Microsoft. Nadella could barely have engineered a better
| outcome...
|
| If Bill Gates (of Borg - I miss SlashDot) was still at the helm,
| a lot of people would be frightened by what's about to come (MS
| AGI etc). How does Nadella's ethical record compare? Are
| Microsoft the good guys now? Or are they still the bad guys, but
| after being downtrodden by Apple and Google, bad guys without the
| means to be truly evil?
|
| ---
|
| *and last, if you believe the Doomers
| davidw wrote:
| > Are Microsoft the good guys now
|
| I don't think any huge corporation is "the good guys", although
| sometimes they do some good things.
| nmfisher wrote:
| > It now seems inevitable that the first* AGI will fall into
| the hands of Microsoft rather than OpenAI.
|
| Avoiding this was _literally_ the reason that OpenAI was
| founded.
|
| For the record, I don't believe anyone at OpenAI or Microsoft
| is going to deliver AGI any time in the near future. I think
| this whole episode just proves that none of these people are
| remotely qualified to be the gatekeepers for anything.
| RivieraKid wrote:
| The screenwriters have to be on LSD.
|
| Maybe D'Angelo was the driving force?
| wahnfrieden wrote:
| why would he be decel / safety-over-commercialization as the
| owner of poe?
| Jensson wrote:
| Maybe Sam Altman was starting to build out the features that
| poe had, making poe into a redundant middleman? We see that a
| lot.
|
| By ousting Sam Altman they could ensure that OpenAI would
| stay just offering bare bones API to models and thus keep poe
| relevant.
| wahnfrieden wrote:
| you're suggesting ilya and other board members supported
| firing sama for shipping a feature poe has?
| nilkn wrote:
| Sam was taking OpenAI in a direction that would pose an
| immediate existential threat to both of Adam's businesses --
| Quora and Poe.
| scrlk wrote:
| Inspired by _The Dark Knight Rises_ intro:
| Satya: Was getting caught part of your plan?
| Ilya: Of course... Sam Altman refused our offer in favour of
| yours, we had to find out what he told you.
| Sam: Nothing! I said nothing!
| Satya: Well, congratulations! You got yourself caught! Now
| what's the next step in your master plan?
| Ilya: Crashing OpenAI... with no survivors!
| underseacables wrote:
| Strange.. A vote was taken, the result incurred public
| consternation, and now a board member is contrite. This seems
| like ineffectual leadership at best. Board members should stand
| by their votes and the process, otherwise leave the board.
| pas wrote:
| or at least issue a dissenting opinion _at that time_, not
| when it becomes convenient... with some over-the-top
| emotional kumbaya
| belter wrote:
| Same board member wrote 1 month ago...
|
| "In the future, once the robustness of our models will exceed
| some threshold, we will have _wildly effective_ and dirt
| cheap AI therapy. Will lead to a radical improvement in
| people's experience of life. One of the applications I'm most
| eagerly awaiting."
| ren_engineer wrote:
| Ilya doesn't regret firing Sam, he regrets "harm to OpenAI". He
| didn't expect this level of backlash and the fact 90% of the
| company would leave. He has no choice but to backtrack to try
| and save OpenAI, even if he looks like an even bigger fool
| klysm wrote:
| Wow what a mess
| floor_ wrote:
| Shengjia Zhao's deleted tweet to Ilya:
| https://i.imgur.com/yrpXvt9.png
| cowboyscott wrote:
| Apologies for the unproductive comment, but this is a clown show
| and the damage can't be undone. Sam going to Microsoft is likely
| the end of open ai as an entity.
| manojlds wrote:
| Wishes there was a git reset --hard
|
| But did a rm -rf .git
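| The joke, spelled out as a shell sketch in a scratch repo
| (the temp paths and throwaway commit are hypothetical):
| git reset --hard can undo uncommitted damage, but rm -rf .git
| deletes the very history that reset depends on.

```shell
# Scratch repo: 'git reset --hard' only works while .git still exists.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "initial"
echo "oops" > mistake.txt   # uncommitted damage (untracked file)
git reset --hard -q         # restores tracked files to HEAD
git clean -fdq              # removes the untracked mistake too
rm -rf .git                 # now there is no history left to reset to
git rev-parse --git-dir 2>/dev/null || echo "nothing left to reset"
```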
| rossdavidh wrote:
| Best response: "People building AGI unable to predict
| consequences of their actions 3 days in advance."
| 38321003thrw wrote:
| Yes!
|
| This is the nugget of this affair if indeed you are concerned
| about the effect and role of AI in human civilization.
|
| The captains at the helm are not mature individuals. The mature
| ones ("the adult table") are motivated by profit and claim no
| responsibility other than to "shareholders".
| konart wrote:
| Or at least ask your own ChatGPT for advice.
| k2xl wrote:
| I think the dust is way more in the air than we think. But now
| that Satya has already publicly said Sam is joining Microsoft I
| would be surprised if "unity" in OpenAI is possible at this
| juncture.
|
| But wow, if Satya is able to pull all that talent into Microsoft
| without paying a dime then chaos is surely a ladder and Satya is
| a grandmaster.
| grej wrote:
| Clearly, no lawyers were consulted before sending that tweet.
| Such a sad situation all around.
| cbeach wrote:
| If it wasn't for X, we'd be hearing some flavour of the news
| in 24 hours' time from mainstream media, with all their
| editorial biases, axes to grind and advertisers to placate.
|
| It's fascinating to hear realtime news directly from the people
| making it.
| philipov wrote:
| > It's fascinating to hear realtime news directly from the
| people making it.
|
| With all of their editorial biases, axes to grind, and
| investors to placate, instead.
| sensanaty wrote:
| The more cynical side of me views this as an act orchestrated
| by the demons... Err, "people" over at Micro$oft in order to
| avoid all those pesky questions about safety and morality in
| AI by getting the ponzi-scheme-aka-Worldcoin guy to join
| their ranks and rile the media up.
| bitshiftfaced wrote:
| If the guy is more of an engineer/scientist stereotype than a
| people person, this shouldn't be that surprising. He probably
| made a decision that he thought was for the right reasons,
| without thinking at all about how other people would react. Look
| up "social defeat." It's real, and it's one of the worst things
| you can experience. Imagine having strangers online mocking your
| hairline and everyone upvoting that comment. Imagine going around
| town and having people frown at you.
| sidvit wrote:
| https://youtu.be/JjPVvuKV30U?si=HJVLjEOpVEThYs0-
| voisin wrote:
| Never have I felt this more appropriate:
|
| https://www.youtube.com/watch?v=5hfYJsQAhl0
| RcouF1uZ4gsC wrote:
| And these are the same people that believe that not only can they
| build superhuman AGI, but that they can keep it "aligned".
|
| I think they are wrong about building superhuman AGI, but I think
| they are even more wrong that they can keep a superhuman AGI
| "aligned".
| mark_l_watson wrote:
| I have found this whole thing personally unpleasant because I
| am a huge fan of OpenAI (and I have been an AI practitioner
| since 1982). When I explore the edges of how GPT-4 can be
| creative and has built a representation of the world just
| from text, it makes me happy.
|
| In the past I have not used Anthropic APIs nearly as much as
| versions of GPT but last night I watched a fantastic interview
| with a co-founder of Anthropic talking about the science they
| are doing to even begin to understand how to do alignment. I
| was impressed. I then spent a long while re-reading Anthropic's
| API and other documentation and have promised myself to split
| my time evenly 3 ways between running models on my own
| hardware, Anthropic, and OpenAI.
|
| For what it's worth (nothing!) I still think the board did the
| right thing legally, but this whole mess makes me feel more
| than a little sad.
| aws_ls wrote:
| If we assume Ilya is telling the truth and was not the
| initiator of the coup, then the question is: who initiated
| it?
| thom wrote:
| I've chuckled a few times over the last few days about the
| Wikipedia definition of the technological singularity, which
| opens:
|
| "The technological singularity--or simply the singularity--is a
| hypothetical future point in time at which technological growth
| becomes uncontrollable and irreversible, resulting in
| unforeseeable consequences for human civilization."
|
| Obviously one might have expected that to happen on the other
| side of a superhuman intelligence, not with us just falling over
| ourselves to control who gets to try and build one.
| badrabbit wrote:
| I just wanna say, it's crazy that this drama is getting more
| press and attention than the gaza war and ukraine war combined.
| Enjoy drama lovers! Lol
| fmajid wrote:
| Ben Thompson has the best take on this (if a bit biased against
| nonprofits):
|
| https://stratechery.com/2023/openais-misalignment-and-micros...
|
| I don't know what the risk of AI is, but having a nonprofit
| investigate solutions to prevent them is a worthwhile pursuit, as
| for-profit corporations will not do it (as shown by the firing of
| Timnit Gebru and Margaret Mitchell by Google). If they really
| believe in that mission, they should develop guardrail
| technology and open-source it, so that companies like
| Microsoft, Google, Meta, Amazon et al. -- who are certainly
| not investing in AI safety but won't mind using others' work
| for free -- can integrate it. But that's not going to be
| lucrative, and that's why most OpenAI employees will leave
| for greener pastures.
| RcouF1uZ4gsC wrote:
| > but having a nonprofit investigate solutions to prevent them
| is a worthwhile pursuit,
|
| This is forgetting that power is an even greater temptation
| than money. The non-profits will all come up with solutions
| that have them serving as gatekeepers, to keep the unwashed
| masses from accessing something that that is too dangerous for
| the common person.
|
| I would rather have for-profit corporations control it,
| rather than non-profits. Ideally, I would like it to be
| open-sourced so that the common person could control and
| align AI with their own goals.
| kibwen wrote:
| _> I would rather have for-profit corporations control it,
| rather than non-profits._
|
| The problem isn't the profit model, the problem is the
| ability to unilaterally exercise power, which is just as much
| of a risk with the way that most for-profit companies are
| structured as top-down dictatorships. There's no reason to
| trust for-profit companies to do anything other than attempt
| to maximize profit, even if that destroys everything around
| them in the process.
| fmajid wrote:
| There is no profit in AI safety, just as cars did not have
| seat belts until Ralph Nader effectively forced them to by
| publishing _Unsafe at any Speed_. For-profit corporations
| have zero interest in controlling something that is not
| profitable, unless in conjunction with captured regulation it
| helps them keep challengers out. If it 's open-sourced, it
| doesn't matter who wrote it as long as they are economically
| sustainable.
| dmix wrote:
| > There is no profit in AI safety
|
| AI safety is barely even a tangible thing to measure like
| that. It's mostly just fears and a loose set of ideas about a
| hypothetical future AGI that we're not even close to.
|
| So far, OpenAI's "controls" are just an ever-expanding list
| of no-no topics and some philosophy work around I, Robot-type
| rules. They also slow-walked the release of GPT over fears of
| misinformation, spam, and deepfakery that never really
| materialized.
|
| Most proposals for safety are just "slowing development" of
| LLMs, calls for vague government regulation, or hand-wringing
| over commercialization. The commercialization point is the
| most controversial because OpenAI claimed to be open and
| non-profit. But even so, the correlation between less
| commercialization and more safety is not clear, beyond
| prioritizing what OpenAI's team spends its time doing. Which,
| again, is hard to tangibly measure in terms of what it
| realistically means for 'safety' in the near term.
| pawelmurias wrote:
| > There is no profit in AI safety
|
| An AI that does what it is told to do seems both way more
| profitable and safer.
| Always_Anon wrote:
| >(as shown by the firing of Timnit Gebru...)
|
| Timnit Gebru was fired for being a toxic
| /r/ImTheMainCharacter SJW who was enshittifying the entire
| AI/ML department. Management correctly fired someone who was
| holding an entire department hostage in her crusade against
| the grievance du jour.
| victor106 wrote:
| Agree 100% with this
| astrange wrote:
| She was fired for threatening to quit. If you threaten
| something like that it just happens; you can't stop the
| machinery.
| VirusNewbie wrote:
| I'm at Google, I 100% agree with this. Also her paper was
| garbage. You can maybe get away with being a self righteous
| prick or an outright asshole if you are brilliant, but it's
| clear by reading her work she didn't fall into that category.
| lordfrito wrote:
| I read this as "I regret things didn't work out as I planned
| them"
|
| Sort of like the criminal who is sorry because they got caught.
| andrewstuart wrote:
| He should resign then.
|
| Simple.
|
| The utter failure of the board has led OpenAI from the top of the
| AI world to a path of destroying relationships with customers and
| partners.
|
| Ilya did this, he should resign.
| thrwwy142857 wrote:
| Can this be possible per bylaws?
|
| 1. Board of 6 wants to vote out chairman. Chairman sits out.
| Needs a majority vote of 3/5. Ilya doesn't have to vote?
|
| 2. Remaining board of 5 wants to now get rid of CEO. Who has to
| sit the decision out. 3/4 can vote. Ilya doesn't have to vote?
| awb wrote:
| Much more likely that he was trying to read the tea leaves and
| be on the majority side in order to keep a leadership position
| at a company doing important work. He probably assumed the
| company would fall in line after the board decision and when
| they didn't, regretted his decision.
|
| In the end he might have gotten caught in the middle of a board
| misaligned with the desires of the employees.
| jacquesm wrote:
| It's possible, but that would have to be some pretty sloppy
| bylaw writing. Normally there are specifications about board
| size, minimum required for a quorum and timing as well as
| notification and legal review. Of course if you stock your
| board with amateurs then how well your bylaws are written
| doesn't really matter any more.
| thrwwy142857 wrote:
| Does he need to have voted yes? What are the bylaws? Isn't the
| following possible?
|
| 1. To vote out chairman of board only 3 out of remaining 5 need
| to vote.
|
| 2. To vote out CEO, only 3 out of remaining 4 need to vote.
| cthalupa wrote:
| This all really only made sense to me in context of Ilya being a
| True Believer and really thinking that Sam was antithetical to
| the non-profit's charter.
|
| Him changing sides really does bring us into 'This whole thing is
| nonsense' territory. I give up.
| Symmetry wrote:
| This has been a rather apt demonstration of the way that
| auctoritas/authority/prestige/charisma can carry the day
| regardless of where the formal authority might be.
| jacquesm wrote:
| Classic distancing behavior, Ilya should be accompanied by
| friends who care and let OpenAI be OpenAI (or what's left of it)
| for a bit.
| sys_64738 wrote:
| Don't do something then "deeply regret" it (whatever that means).
| You have a position of authority and influence so you should
| definitely resign.
| nunez wrote:
| But didn't he start this? Like, did they think "I'll shoot for
| the king; if I miss, no big deal?"
| nunez wrote:
| This whole thing is like some board members watched the first few
| episodes of _Succession_ and thought "What a great idea!" without
| watching the rest of season one
| corethree wrote:
| Sounds like a statement made by a man with a gun pointed at
| his head.
| jahalai wrote:
| He does not regret the participation, he regrets the outcome and
| what it means for his personal career.
| diamondfist25 wrote:
| They played too much Avalon and now we are guessing who's the
| paragon
| justin66 wrote:
| Things to say when you come at the king and miss.
| hintymad wrote:
| Isn't what Mira and Ilya did a classic "sitting on the fence"
| move, the kind that gets you hated by both sides of any power
| struggle? It's kind of similar to Prigozhin stopping his coup
| right at the outskirts of Moscow.
| codernyc16 wrote:
| "Deeply" regretting a decision he made 72 hours ago? And this is
| the guy who is supposed to have the forethought to bring us into
| the next frontier of AI?
| olliej wrote:
| This could easily also just be "I deeply regret my actions being
| the losing side"
| tacone wrote:
| Hopefully he will deep learn from that /s
| calamari4065 wrote:
| Consequences? For _my_ actions? It 's more likely than you think!
| bicepjai wrote:
| All I can think of is: "These people will be the ones
| handling AGI, if LLMs are the way to achieve AGI?"
| rabbits_2002 wrote:
| What an idiot. Altman should just go to Microsoft.
| robofanatic wrote:
| This is going to come back to bite him in the future
| blazespin wrote:
| One of the most admirable things I've seen done in a loooong
| time.
|
| If there is another board and Ilya is not on it, I mean... ffff
| it.
| mv4 wrote:
| Translation: "I am sorry my coup attempt did not go as planned.
| Forgive me please?"
| davesque wrote:
| Sounds like he acted brashly on an ideological impulse and now
| regrets that he didn't have more self control. If so, I can
| empathize and I feel bad for him.
| barkingcat wrote:
| Personally, I think all the people involved and the releases,
| written statements, etc. were outputs of the LLM.
___________________________________________________________________
(page generated 2023-11-20 23:01 UTC)