[HN Gopher] Details emerge of surprise board coup that ousted CE...
       ___________________________________________________________________
        
       Details emerge of surprise board coup that ousted CEO Sam Altman at
       OpenAI
        
       Author : jncraton
       Score  : 328 points
       Date   : 2023-11-18 16:07 UTC (6 hours ago)
        
 (HTM) web link (arstechnica.com)
 (TXT) w3m dump (arstechnica.com)
        
       | kylecazar wrote:
       | Here's what I don't understand.
       | 
        | There clearly were tensions between the for-growth and
        | not-for-growth factions, but Dev Day is being cited as a 'last
        | straw'. It was a product launch.
       | 
       | Ilya, and the board, should have been well aware of what was
       | being released on that day for months. They should have at the
       | very least been privy to the plan, if not outright sanctioned it.
       | Seems like before launch would have been the time to draw a line
       | in the sand.
       | 
       | Did they have a 'look at themselves in the mirror' moment after
       | the announcements or something?
        
         | passwordoops wrote:
         | >Ilya, and the board, should have been well aware of what was
         | being released on that day for months
         | 
          | Not necessarily, and that may speak to the part of the board's
          | announcement saying that Sam was not candid.
        
           | barbazoo wrote:
           | I can't imagine an organization where this wouldn't have come
           | up on some roadmap or prioritization meeting, etc. How could
           | leadership not know what the org is working on?! They're not
           | that big.
        
             | googlethrwaway wrote:
              | The board is not exactly leadership. They meet
              | infrequently and get updates directly from management;
              | they don't go around asking employees what they're
              | working on.
        
               | barbazoo wrote:
               | True. So the CTO knew what was happening, wasn't happy,
               | and then coordinated with the board, is that what appears
               | to have happened?
        
               | threeseed wrote:
               | CTO who is now acting CEO.
               | 
               | Not making any accusations but that was an odd decision
               | given that there is an OpenAI COO.
        
               | browningstreet wrote:
               | They do typically have views into strategic plans,
               | roadmaps and product plans.
        
               | naasking wrote:
               | Going into detail in a talk and discussing AGI may have
               | provided crucial context that wasn't obvious from a
               | PowerPoint bullet point, which is all the board may have
               | seen earlier.
        
               | late2part wrote:
               | More supervision than leadership...
        
               | s1artibartfast wrote:
               | Surely Ilya Sutskever must have known what was being
               | worked on as Chief Scientist?
        
             | aunty_helen wrote:
             | I can't imagine an organization that would fire their
             | celebrity CEO like this either. So maybe that's how we
             | arrived here.
        
         | martythemaniak wrote:
          | Could be many things, like Sam not informing them of the GPTs
          | store launch, or saying he wouldn't launch and then launching.
          | 
          | It sucks for OpenAI, but there are too many hungry
          | competitors salivating at replacing OpenAI, so I don't think
          | this will have big long-term consequences in the field.
          | 
          | I'm curious what sorts of oversight and recourse all the
          | investors (or are they donors?) have. I imagine there are a
          | lot of people with a lot of money who are quite angry today.
        
           | CPLX wrote:
            | They don't have investors; it's a non-profit.
            | 
            | The "won't anyone think of the needs of the elite wealthy
            | investor class" sentiment that has run through the 11
            | threads on this topic is pretty baffling, I have to admit.
        
             | ketzo wrote:
             | They do have investors in the for-profit subsidiary,
             | including Microsoft and the employees. Check out the
             | diagram in the linked article.
        
               | CPLX wrote:
               | That's right. Which isn't the company that just fired Sam
               | Altman.
        
               | ketzo wrote:
               | I take your point, but still, I don't think it's correct
               | to imply that investors in the for-profit company have
               | _no_ sway or influence over the future of OpenAI.
               | 
               | I sure as shit wouldn't wanna be on Microsoft's bad side,
               | regardless of my tax status.
        
             | CrazyStat wrote:
             | It's a nonprofit that controls a for-profit company, which
             | has other investors in addition to the non-profit.
        
             | laserlight wrote:
             | > They don't have investors
             | 
             | OpenAI has investors [0].
             | 
             | [0] https://openai.com/our-structure
        
               | dragonwriter wrote:
               | OpenAI (the nonprofit whose board makes decisions) has no
               | investors.
               | 
                | The subordinate holding company and the even more
                | subordinate OpenAI Global LLC have investors, but those
                | investors are explicitly warned that the charitable
                | purpose of the nonprofit, and not returning profits to
                | investors, is the paramount function of the
                | organization, over which the nonprofit has full
                | governance control.
        
             | croes wrote:
             | Then what did Microsoft pay for?
        
               | dragonwriter wrote:
               | Privileged access to technology, which has paid off quite
               | well for them already.
        
               | croes wrote:
               | They didn't pay a fee
        
         | ketzo wrote:
          | These people are humans, and there's a big difference between
          | kinda knowing the keynote was coming up, and then actually
          | watching it happen and seeing it receive absolutely rave
          | coverage from everyone in tech.
         | 
         | I could very much see it as a "look in the mirror" moment,
         | yeah.
        
         | twelve40 wrote:
         | they could have been beefing non-publicly for a long time, and
         | might have had many private conversations, probably not very
         | useful to speculate here
        
           | ketzo wrote:
            | Not useful at all... but it sure is fun! This is gonna be my
           | whole dang weekend.
        
             | romeoblade wrote:
             | Probably dang's whole weekend as well.
        
         | jsemrau wrote:
          | What if enterprises get access to a much better version of AI
          | than the GPT+ subscription customer?
        
           | threeseed wrote:
            | They always were going to, because it was going to be
            | customised for their needs.
        
         | cratermoon wrote:
         | Let's look closer at the Ilya Sutskever vs Sam Altman tensions,
         | and think of the product/profit as a cover.
         | 
         | Ilya Sutskever is a True Believer in LLMs being AGI, in that
         | respect aligned with Geoff Hinton, his academic advisor at
         | University of Toronto. Hinton has said "So by training
         | something to be really good at predicting the next word, you're
         | actually forcing it to understand. Yes, it's 'autocomplete'--
         | but you didn't think through what it means to have a really
         | good autocomplete"[1].
         | 
         | Meanwhile, Altman has decided that LLMs aren't the way.[2]
         | 
         | So Altman was pushing to turn the LLM into a for-profit
         | product, to get what value it has, while the Sutskever-aligned
         | faction thinks it _is_ AGI, and want to keep it not-for-profit.
         | 
         | There's also some difference about whether or not AGI poses an
         | "existential risk" or if the risks of current efforts at AI are
         | along the lines of algorithmic bias, socioeconomic inequality,
         | mis/disinformation, and techno-solutionism.
         | 
         | 1. https://www.newyorker.com/magazine/2023/11/20/geoffrey-
         | hinto...
         | 
         | 2. https://www.thestreet.com/technology/openai-ceo-sam-
         | altman-s...
        
           | Chamix wrote:
            | You are conflating Ilya's belief in the _transformer
            | architecture_ (with tweaks/compute optimizations) being
            | sufficient for AGI with that of LLMs being sufficient to
            | express human-like intelligence. Multi-modality (and the
            | swath of new training data it unlocks) is clearly a key
            | component of creating AGI, if we go by Sutskever's
            | interviews from the past year.
        
             | cratermoon wrote:
             | Yes, I read "Attention Is All You Need", and I understand
             | that the multi-head generative pre-trained model talks
             | about "tokens" rather than language specifically. So in
             | this case, I'm using "LLM" as shorthand for what OpenAI is
             | doing with GPTs. I'll try to be more precise in the future.
             | 
             | That still leaves disagreement between Altman and Sutskever
             | over whether or not the current technology will lead to AGI
             | or "superintelligence", with Altman clearly turning towards
             | skepticism.
        
         | 015a wrote:
         | > They should have at the very least been privy to the plan, if
         | not outright sanctioned it.
         | 
         | Never assume this. After all, their communication specifically
         | cited that Sam deceived them in some way, and Greg was also
         | impacted. Ilya is the only board member that _might_ have known
         | naturally, given his day-to-day work with OAI, but since ~July
         | he has worked in the area of superalignment, which could
          | reasonably be a different department (it shouldn't be). The
         | Board may have also found out about these projects, maybe from
         | a third party/Ilya, told Sam they're moving too fast, and Sam
         | ignored them and launched anyway. We really don't know.
        
         | skywhopper wrote:
         | They "should have" but if the board was wildly surprised by
         | what was presented, that sounds like a really good reason to
         | call out the CEO for lack of candor.
        
       | gustavus wrote:
        | Here's the thing. I've always been kind of cold on OpenAI
        | claiming to be "Open" when it was clearly a for-profit thing,
        | and I was concerned about the increasing move toward the
        | commercialization of AI that Sam was leading.
        | 
        | But I am much more concerned, to be honest, about those who
        | feel they need to control the development of AI to ensure it
        | "aligns with their principles"; after all, principles can
        | change. And to
       | quote Lewis "Of all tyrannies, a tyranny sincerely exercised for
       | the good of its victims may be the most oppressive. It would be
       | better to live under robber barons than under omnipotent moral
       | busybodies. The robber baron's cruelty may sometimes sleep, his
       | cupidity may at some point be satiated; but those who torment us
       | for our own good will torment us without end for they do so with
       | the approval of their own conscience."
       | 
        | What we really need is another Stallman; his idea was first and
        | foremost always freedom, allowing each individual the agency to
        | decide their own fate. Every other avenue will always result in
       | men in suits in far away rooms dictating to the rest of the world
       | what their vision of society should be.
        
         | brookst wrote:
         | Be the change you want to see.
        
         | andy99 wrote:
          | If the board was really serious about doing good over making
          | profit (if this is indeed what the whole thing is about),
          | they'd open-source GPT-4/5 with a GPL-style license.
        
           | swatcoder wrote:
           | That's not the sense of open they've organized around. In
           | fact, it's antithetical to it.
           | 
           | Theirs is a technocratic sense of open, where select
           | credentialed experts collaborate on a rational good without a
           | concentration of control by specific capitalists or nations.
        
             | piuantiderp wrote:
              | I think your technocratic sense of open is misplaced. At
              | this point OpenAI is clearly controlled by the US, and
              | that's OK. If anything, one wonders if Altman's ouster
              | has a geopolitical angle, cozying up to other countries
              | and such.
        
             | cmrdporcupine wrote:
             | I guess I struggle to see how the word "open" can be
             | applied to that, but I also remember how that word was
             | tossed around in the late 80s and early 90s during the Unix
             | wars, and, yeah, shoe fits.
             | 
             | The question is how we got to be so powerless as a society
             | that this is the only palette of choices we get to choose
             | from: technocratic semi-autistic engineer-intellects who
              | want to foist AGI on the world vs. self-obsessed tech bro
             | salesdudes who see themselves as modern day Howard Roarks.
             | 
             | That's it.
             | 
             | Anyways, don't mind me, gonna crawl into a corner and read
             | Dune.
        
             | naveen99 wrote:
              | Yeah, not open in the open-source or RMS way. It's "for
              | the benefit of all" with the "benefits" decided by the
              | OpenAI board, a la communism, with central planning by
              | "the party".
              | 
              | Surprisingly, capitalism actually leads to more benefits
              | for all, because of the decentralization and competition.
        
             | surrealize wrote:
             | This definition is an abuse of the word "open"
        
         | esafak wrote:
         | I think "don't extinguish humanity or leave most of them
         | unemployed" is a principle everyone can get and stay behind.
        
           | cmrdporcupine wrote:
           | You seem to have far more faith in others' ethical compass
           | than I think is justified by historical evidence.
           | 
            | It's amazing what people will do when the size of their
            | paycheque (or ego) is tied to it.
           | 
           | I don't trust anybody at OpenAI with the keys to the car, but
           | democratic choice apparently doesn't play into it, so here we
           | are.
        
             | esafak wrote:
             | I meant as a basic principle. Individuals and organizations
             | who breach the pact can be punished by legal means.
        
       | ls612 wrote:
       | What are the odds Sam can work the phones this weekend and have
       | $10B lined up by Monday for a new AI company which will take all
       | of the good talent from OpenAI?
        
         | ketzo wrote:
          | Honestly? If even a tenth of Sam's reported connectedness /
          | reality distortion field is true to life... very good odds.
        
         | outside1234 wrote:
         | 99% with the 1% being it is actually $20-30B
        
         | staticman2 wrote:
         | Why would the good talent leave? Are they all a "family" and
         | best buddies with Sam?
        
           | ls612 wrote:
            | A lot of them have already left this morning. I don't know
            | for sure why, but a good bet is that they are more on board
            | with Sam's vision of pushing AI forward than with the
            | safetyist vision.
        
             | esafak wrote:
             | What fraction?
        
           | thepasswordis wrote:
           | Perhaps they don't want to work for a board of directors
           | which is openly hostile to the work they're doing?
        
             | meepmorp wrote:
             | I don't think wanting to make sure that their technology
             | doesn't cause harm equates to being hostile to the work
             | itself.
        
               | pixl97 wrote:
                | That would seem to depend on the individual's
                | motivation at the end of the day...
               | 
               | It's easy to imagine two archetypes
               | 
               | 1) The person motivated to make AGI and make it safe.
               | 
               | 2) The person motivated to make AGI at any cost and
               | profit from it.
               | 
               | It seems like OpenAI may be pushing for type 1 at the
               | moment, but the typical problem with capitalism is it
                | will commonly fund type 2 businesses. Who 'wins' really
                | breaks down to whether there are more type 1 or type 2
                | people, and the relative successes of each.
        
               | anonyfox wrote:
                | I'm not at OAI, nor a researcher, but I'd be archetype
                | 3:
               | 
               | I'd do anything I can to make true AGI a reality, without
               | safety concerns or wanting to profit from it.
        
             | staticman2 wrote:
              | The board sided with the chief scientist and co-founder
             | of OpenAI in an internal dispute. How does that show
             | hostility to the work OpenAI is doing?
        
               | YetAnotherNick wrote:
                | Ilya is pushing the unsafe-AGI narrative to stop public
                | progress and make OpenAI more closed and intentionally
                | slow to deliver. There are definitely people who are
                | not sold on this.
        
             | chongli wrote:
             | Perhaps they didn't like the work they were doing? If
             | they're experts in the field, they may have preferred to
             | continue to work on research. Whereas it sounds like Sam
             | was pushing them to work on products and growth.
        
           | polski-g wrote:
           | Because they want stock options for a for-profit company.
        
           | mattnewton wrote:
            | My guess is that at least some of them care about shipping
            | products and making a profit, and agreed with the growth
            | faction?
        
         | claytonjy wrote:
         | I definitely believe he can raise a lot of money quickly, but
         | I'm not sure where he'll get the talent, at least the core
          | modeling talent. That's Ilya's lane, and I get the sense that
          | group is full of true believers in the original non-profit
          | mission.
          | 
          | But I suspect a lot of the hires from the last year or so,
          | even on the eng side, are all about the money and would
          | follow sama anywhere given what this signals for OpenAI's
          | economic future.
         | I'm just not sure such a company can work without the core
         | research talent.
        
           | x86x87 wrote:
           | Lol. There are ambitious people working at openai in Ilya's
           | lane that will jump at the opportunity. Nobody owns any
           | lanes.
        
             | dh2022 wrote:
              | Ooh, lanes... the Microsoft internal buzzword that fell
              | out of fashion a couple of years ago is making a comeback
              | outside of Microsoft....
        
         | fullshark wrote:
         | I'm guessing he has verbal commitments already.
        
         | croes wrote:
         | And then?
         | 
         | Training data is more restricted now, hardware is hard to get,
         | fine tuning needs time.
        
           | somebodythere wrote:
           | First two problems are easily solved with money
        
             | croes wrote:
              | Money doesn't magically create hardware; it takes time
              | to produce it.
        
         | dmitrygr wrote:
          | Other than having a big mouth, what has HE done? As far as I
          | can find, the actual engineering and development was done NOT
          | by him, while he was parading around telling people they
          | shouldn't WFH and schmoozing with government officials.
        
         | somenameforme wrote:
          | Ilya Sutskever, the head scientist at OpenAI, is allegedly
          | the one who organized the 'shuffle.' So you're going to run
          | into some issues expecting the top talent to follow Sam. And
          | would many people want to get in on a new AI development
          | company for big $$$ right now? From my perspective, the
          | market is teetering
         | towards oversaturation, there are no moats, zero-interest rates
         | are a thing of the past, and the path to profit is nebulous at
         | best.
        
       | fullshark wrote:
       | Man this still seems crazy to me. The idea that this tension
       | between commercial/non-commercial aspirations got so bad they
       | felt the nuclear option of a surprise firing of Altman was the
       | only move available doesn't seem plausible to me.
       | 
        | I believe this decision was ego- and vanity-driven, with a
        | post-hoc rationalization that it was because of the mission of
        | "benefiting humanity."
        
         | swatcoder wrote:
         | In a clash of big egos, both are often true. Practical
         | differences escalate until personal resentment forms and the
         | parties stop engaging with due respect for each other.
         | 
         | Once that happens, real and intentional slights start
         | accumulating and de-escalation becomes extremely difficult.
        
         | x86x87 wrote:
          | This is not commercial vs non-commercial, imho. This is the
          | old classic: humans being humans.
        
         | superhumanuser wrote:
         | I wonder if the "benefiting humanity" bit is code for anti mil-
         | tech. What if Sam wasn't being honest about a relationship with
         | a consumer that weaponized OpenAI products against humans?
        
           | dave4420 wrote:
           | Could be.
           | 
           | Or it could be about the alignment problem. Are they
           | designing AI to prioritise humanity's interests, or its
           | corporate masters' interests? One way is better for humanity,
           | the other brings in more cash.
        
             | superhumanuser wrote:
              | But the board accused Sam of not being "consistently
              | candid". Alignment issues could have stood on their own
              | as cause, and would have been better PR too, instead of
              | the mess they have now.
        
           | 0xDEF wrote:
           | Ilya has Israeli citizenship and has toured Israel and given
           | talks at Israeli universities including one talk with Sam
           | Altman.
           | 
           | He is not anti mil-tech.
        
             | someNameIG wrote:
             | Did those talks have anything to do with mil-tech though?
        
             | superhumanuser wrote:
             | Unless the mil-tech was going to their enemies.
        
               | 0xDEF wrote:
               | I don't think anyone at OpenAI was planning to give mil-
               | tech to Iran and Iranian proxies like Hamas, Hezbollah,
               | and the Houthis.
        
               | superhumanuser wrote:
               | Ya. You're right. Time to let the theory die.
        
             | kromem wrote:
             | That's a pretty big leap of logic there.
        
         | BryantD wrote:
         | What if the board gave Altman clear direction, Altman told them
         | he accepted it, and then went off and did something else? This
         | hypothesis doesn't require the board's direction to be
         | objectively good.
        
           | fullshark wrote:
            | IDK, none of us are privy to any details of festering
            | tensions, or whether there was a "last straw" scenario that
            | would make sense if it were explained. Something during
            | that dev day really pissed some people off, that's for
            | sure.
            | 
            | Given what the picture looks like today, though, that's my
            | guess; firing Altman is an extreme scenario! Lots of CEOs
            | have tensions with their boards over various issues,
            | otherwise the board is pointless!
        
             | BryantD wrote:
              | I strongly agree, yeah! The trick is making those
              | tensions constructive, and no matter who's at fault
              | (could be both sides), someone failed there.
        
         | edgyquant wrote:
          | Maybe, but I have a different opinion. I have worked at
          | startups before where we were building something both
          | technically interesting and clearly a super value add for the
          | business domain. I've then witnessed PMs be brought on who
          | cared little about any of that and instead tried to converge
          | on the exact same enshittified product as everywhere else,
          | with little care or understanding for the real solutions we
          | were building towards. When this happened I knew within a
          | month that the vision of the company, and its goals outside
          | of generating investor returns, was dead if this person had
          | their way.
          | 
          | I've specifically seen the controlling members of a company
          | realize this after 7-8 months, and when that happens it's a
          | quick change of course. I can see why you'd think it's ego,
          | but I think it's closer to my previous situation than what
          | you're stating here. This is a pivotal course correction, and
          | they're never pretty; this just happens to be the most public
          | one ever due to the nature of the business and company.
        
         | fulladder wrote:
         | Yeah, the surprise firing part really doesn't make much sense.
         | My best guess is that if you look at the composition of this
         | board (minus Altman and Brockman), it seems to be mostly
         | academics and the wife of a Hollywood actor. They may not be
         | very experienced in the area of tech company boards, and might
         | not have been aware that there are smoother ways to force a CEO
         | out that are less damaging to your organization. Not sure, but
         | that's the best I can figure out based on what we know so far.
        
           | cthalupa wrote:
           | >it seems to be mostly academics and the wife of a Hollywood
           | actor
           | 
            | This argument would require you to ignore both Sutskever
            | himself as well as D'Angelo, who was CTO/VP of Engineering
            | at Facebook and then founding CEO of Quora.
        
       | noonething wrote:
       | I hope they go back to being Open now that Altman is gone. It
       | seems Ilya wants it to 'benefit all of humanity' again.
        
         | x86x87 wrote:
          | Things can improve along a dimension you choose to measure,
          | but there is also the very real risk of OpenAI imploding.
          | Time will tell.
        
         | exitb wrote:
          | Isn't that a bit like stealing from the for-profit investors?
          | I'm not one to shed a tear for the super wealthy, but is that
          | even legal? Can a company you invested in just say it doesn't
          | like profit any more?
        
           | TheCleric wrote:
           | Unless you have something in writing or you have enough
           | ownership to say no, I don't see how you'd be able to stop
           | it.
        
             | exitb wrote:
             | Microsoft reportedly invested 13 billion dollars and has a
             | generous profit sharing agreement. They don't have enough
             | to control OpenAI, but does that mean the company can
             | actively steer away from profit?
        
               | dragonwriter wrote:
               | > They don't have enough to control OpenAI
               | 
                | Especially since the operating agreement effectively
                | gives the nonprofit board full control.
               | 
               | > They don't have enough to control OpenAI, but does that
               | mean the company can actively steer away from profit?
               | 
               | Yes. Explicitly so. https://openai.com/our-structure and
               | particularly https://images.openai.com/blob/142770fb-3df2
               | -45d9-9ee3-7aa06...
        
               | cthalupa wrote:
               | Yes. Microsoft had to sign an operating agreement when
               | they invested that said the company has no responsibility
               | or obligation to turn a profit. LLCs are able to
               | structure themselves in such a way that their primary
               | duty is not towards their shareholders.
               | 
               | https://openai.com/our-structure - check out the pinkish-
               | purpleish box. Every investor and employee in the for-
               | profit has to agree to this as a condition of their
               | investment/employment.
        
             | s1artibartfast wrote:
              | They have something in writing. OpenAI created a
              | for-profit joint venture company with Microsoft, and gave
              | it a license to its technology.
        
               | marcinzm wrote:
               | Exclusive license?
        
               | s1artibartfast wrote:
               | No clue, but I guess not.
        
           | eastbound wrote:
            | They knew it when they donated to a non-profit. In fact,
            | trying to extract profit from a 501(c) could be the core of
            | the problem.
        
             | s1artibartfast wrote:
              | Microsoft didn't give money to a non-profit. OpenAI
              | created a for-profit company, Microsoft gave that company
              | $11B, and OpenAI gave it the technology.
              | 
              | OpenAI shares ownership of that for-profit company with
              | Microsoft and early investors like Sam, Greg, Musk,
              | Thiel, Bezos, and the employees of that company.
        
               | cthalupa wrote:
               | While technically true, in practicality, they did give
                | money to the non-profit. They even signed an agreement
               | stating that any investments should be considered more as
               | donations, because the for-profit subsidiary's operating
               | agreement is such that the charter and mission of the
               | non-profit are the primary duty of the for-profit, not
               | making money. This is explicitly called out in the
               | agreement that all investors in and employees of the for-
               | profit must sign. LLCs can be structured so that they are
               | beholden to a different goal than the financial
               | enrichment of their shareholders.
               | 
               | https://openai.com/our-structure
        
               | s1artibartfast wrote:
                | I don't dispute that they say that at all. Therein lies
                | the tension: having multiple goals. The goal is to
                | uphold the mission, and _also_ to make a profit, and
                | the mission comes first.
                | 
                | I'm not saying one party is right or wrong, just
                | pointing out that there is bound to be conflict when
                | you give employees a bunch of profit-based stock
                | rewards, bring in $11B in VC investment looking for
                | returns, and then have external oversight with all the
                | control setting the balance between profit and mission.
                | 
                | The disclaimer says "It would be wise to see the
                | investment in OpenAI Global in _the spirit of a
                | donation_, with the understanding that it may be
                | difficult to know what role money will play in a
                | post-AGI world."
                | 
                | That doesn't mean investors and employees won't want
                | money, and few will be scared off by owning a company
                | so wildly successful that it ushers in a post-scarcity
                | world.
                | 
                | You have partners and employees that want to make a
                | profit, and that is fundamental to why some of them are
                | there, especially Microsoft. The expectation of
                | possible profits is clear, because that is why the
                | company exists, and why Microsoft has a deal where they
                | get 75% of profits until they recoup their $11 billion
                | investment. I read that returns are capped at 100x the
                | investment, so if that holds true, Microsoft's returns
                | are capped at $1.1 _trillion_.
        
           | dragonwriter wrote:
            | > Isn't that a bit like stealing from the for-profit
            | investors?
           | 
           | https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06.
           | ..
        
         | pknerd wrote:
          | Does that mean free GPT-4?
          | 
          | PS: It's a serious question.
        
         | cscurmudgeon wrote:
         | Won't a truly open model conflict with the AI executive order?
        
         | dragonwriter wrote:
         | From what I've seen, Ilya seems to be even _more_ concerned
         | than Altman about safety risks and, like Altman, seems to see
         | restricting access and information as a key part of managing
          | that, so I'd expect less openness, not more.
          | 
          | Though he may be less inclined than Altman to see
          | closed-but-commercial access as okay, so while it might
          | involve less _total_ access, it might involve more actual
          | open/public information about what is also made commercially
          | available.
        
       | skywhopper wrote:
       | Sorry but the board firing the person who works for them is not a
       | "coup".
        
         | yjftsjthsd-h wrote:
         | > The next day, Brockman, who was Chairman of the OpenAI board,
         | was not invited to this board meeting, where Altman was fired.
         | 
         | > Around 30 minutes later, Brockman was informed by Sutskever
         | that he was being removed from his board role but could remain
         | at the company, and that Altman had been fired (Brockman
         | declined, and resigned his role later on Friday).
         | 
         | The board firing the CEO is not a coup. The board firing the
         | CEO behind the chair's back and then removing the chair is a
         | coup.
        
           | cwillu wrote:
           | It appears that is the normal practice for a board voting to
           | fire a CEO though, so that aspect doesn't mean much.
        
           | sudosysgen wrote:
           | The point being made is that the board is the one that's
           | supposed to be in power. How the CEO is fired may be gauche
           | but it's not usurpation of power or anything like that.
        
         | w10-1 wrote:
         | The board ousting the board chair (without notice) and the CEO
         | is a coup. It's not even clear to me it was legal to meet and
         | act without notice to the board chairman.
        
       | Waterluvian wrote:
       | How important is Altman? How important were three senior
       | scientists? Can they start their own company, raise funding, and
       | overtake OpenAI in a few years? Or does OpenAI have some material
       | advantage that isn't likely to be harmed by this?
       | 
       | Perhaps the competition is inevitably a good thing. Or maybe a
       | bad thing if it creates pressure to cut ethical corners.
       | 
       | I also wonder if the dream of an "open" org bringing this tech to
       | life for the betterment of humanity is futile and the for-profits
       | will eventually render them irrelevant.
        
         | intellectronica wrote:
         | > How important is Altman? How important were three senior
         | scientists? Can they start their own company, raise funding,
         | and overtake OpenAI in a few years?
         | 
          | The general opinion seems to put this at far above 50% YES.
          | I, personally, would bet at 70% that this is exactly what
          | will happen. Unless some really damaging information about
          | Altman becomes public, he will definitely have the strong
          | reputation and credibility, will definitely be able to raise
          | very significant funding, and the only expert in industry /
          | research he definitely won't be able to recruit is Ilya
          | Sutskever.
        
         | tuxguy wrote:
          | An optimistic perspective: despite today's regrettable
          | events, Sama and gdb will start something new, and more
          | competition is a good thing:
          | https://x.com/DrJimFan/status/1725916938281627666?s=20
          | 
          | I have a contrarian prediction: due to pressure from
          | investors and a lawsuit against the OpenAI board, the board
          | will be made to resign and Sama & Greg will return to OpenAI.
          | 
          | Anybody else agree?
        
           | Waterluvian wrote:
           | Do we know enough about the org's charter to reasonably
           | predict that case? Did the board actually do anything wrong?
           | 
           | Or are you thinking it would be a kind of power play from
           | investors to say, "nah, we want it to be profit driven."
        
           | cmrdporcupine wrote:
            | If that's the outcome, I suspect OpenAI will have _another_
            | wave of resignations, as the folks aligned with Sutskever
            | would walk away too, and take their expertise with them.
        
           | cthalupa wrote:
           | > I have a contrarian prediction : Due to pressure from
           | investors and a lawsuit against the openai board, the board
           | will be made to resign and Sama & Greg will return to openai.
           | 
           | The board is not beholden to any investors. The board is for
           | the non-profit that does not have shareholders, and it fully
           | owns and controls the manager entity that controls the for-
           | profit. The LLC's operating agreement is explicit that it is
           | beholden to the charter and mission of the non-profit, not
           | creating financial gain for the shareholders of the for-
           | profit company.
        
         | pknerd wrote:
          | Let's not forget Ilya's role in making GPT what it is today.
        
       | WendyTheWillow wrote:
       | I just hope the "AI safety" people don't end up taking LLMs out
       | of the hands of the general public because they read too many
       | Isaac Asimov stories...
        
         | 3seashells wrote:
          | If you were an AI going rogue, how would you evade public
         | scrutiny?
        
           | jjtheblunt wrote:
           | As a replicant, chasing other replicants as dangerous?
        
         | pixl97 wrote:
          | Asimov's AI is mostly just humanlike behavior; if you want a
          | more realistic concern, think Bostrom and instrumental goals.
        
         | pknerd wrote:
          | I am addicted to GPT now :/
        
         | lukeschlather wrote:
         | In most of Asimov's stories it's implied that machines have
         | quietly and invisibly replaced all human government and the
         | world is better for it because humans tend to be petty and
         | cruel while it's impossible for robots to harm humans.
        
       | nilkn wrote:
       | Only time will tell, but if this was indeed "just" a coup then
       | it's somewhat likely we're witnessing a variant of the Steve Jobs
       | story all over again.
       | 
       | Sam is clearly one of the top product engineering leaders in the
       | world -- few companies could ever match OpenAI's incredible
       | product delivery over the last few years -- and he's also one of
       | the most connected engineering leaders in the industry. He could
       | likely have $500M-$10B+ lined up by next week to start up a new
       | company and poach much of the talent from OpenAI.
       | 
       | What about OpenAI's long-term prospects? They rely heavily on
       | money to train larger and larger models -- this is why Sam
       | introduced the product focus in the first place. You can't get to
       | AGI without billions and billions of dollars to burn on training
       | and experiments. If the company goes all-in on alignment and
       | safety concerns, they likely won't be able to compete long-term
       | as other firms outcompete them on cash and hence on training.
       | That could lead to the company getting fully acquired and
       | absorbed, likely by Microsoft, or fading into a somewhat sleepy
       | R&D team that doesn't lead the industry.
        
         | mvkel wrote:
         | [Removed. Unhelpful speculation.]
        
           | late2part wrote:
           | How many days a week do you hang out with Sam and Greg and
           | Ilya to know these things?
        
             | mvkel wrote:
             | I know the dysfunction and ego battles that happen at
             | nonprofits when they outgrow the board.
             | 
             | Haven't seen it -not- happen yet, actually. Nonprofits
             | start with $40K in the bank and a board of earnest people
             | who want to help. Sometimes that $40K turns into $40M (or
             | $400M) and people get wacky.
             | 
             | As I said, "if."
        
           | zeroonetwothree wrote:
           | Extremely speculative
        
           | dagmx wrote:
            | Frankly this reads like idolatry and fan fiction. You've
            | concocted an entire dramatization without even knowing any
            | of the players involved, just going off some biased
            | stereotyping of engineers?
        
             | mvkel wrote:
             | More like stereotyping nonprofits.
        
         | browningstreet wrote:
         | Agree with this take. Sam made OpenAI hot, and they're going to
         | cool, for better or worse. Without revenue it'll be worse. And
         | surprising Microsoft given their investment size is going to
         | lead to pressures they may not be able to negotiate against.
         | 
         | If this pivot is what they needed to do, the drama-version
         | isn't the smart way to do it.
         | 
          | Everyone's going to be much more excited to see what Sam
          | pulls off next, and less excited to wait out the dev cycles
          | that OpenAI wants to do next.
        
           | iamflimflam1 wrote:
           | Indeed. Throwing your toys out of the pram and causing a
           | whole lot of angst is not going to make anyone keen to work
           | with you.
        
           | huytersd wrote:
           | Satya should pull off some shenanigans, take control of
           | OpenAI and put Sam and Greg back in control.
        
         | spaceman_2020 wrote:
         | The irony is that a money-fuelled war for AI talent is all the
         | more likely to lead to unsafe AI. If OpenAI had remained the
         | dominant leader, it could have very well set the standards for
         | safety. But now if new competitors with equally good funding
         | emerge, they won't have the luxury of sitting on any
         | breakthrough models.
        
           | tempsy wrote:
           | I'm still wondering what unsafe AI even looks like in
           | practical terms
           | 
            | The only things I can think of are generated pornographic
            | images of minors and revenge images (ex-partners, people
            | you know). That kind of thing.
            | 
            | More out there might be an AI-based religion/cult.
        
             | iamnotafish2 wrote:
              | Unsafe AI might compromise cybersecurity, or cause
              | economic harm by exploiting markets as agents, or
              | personally exploit people, etc. Honestly none of the harm
              | seems worse than the incredible benefits. I trust
              | humanity can rein it back in if we need to. We are very
              | far from AI being so powerful that it cannot be recovered
              | from safely.
        
             | nick222226 wrote:
             | How about you give it access to your email and it signs you
             | up for the extra premium service from its provider and
             | doesn't show you those emails unless you 'view all'.
             | 
             | How about one that willingly and easily impersonates
             | friends and family of people to help phishing scam
             | companies.
        
               | margalabargala wrote:
               | > How about one that willingly and easily impersonates
               | friends and family of people to help phishing scam
               | companies.
               | 
               | Hard to prevent that when open source models exist that
               | can run locally.
               | 
               | I believe that similar arguments were made around the
               | time the printing press was first invented.
        
               | jkeisling wrote:
               | Phishing emails don't exactly take AGI. GPT-NeoX has been
               | out for years, Llama has been out since April, and you
               | can set up an operation on a gaming desktop in a weekend.
               | So if personalized phishing via LLMs were such a big
               | problem, wouldn't we have already seen it by now?
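                | 
                | For a sense of how low the barrier already is, here's a
                | minimal local-generation sketch using the Hugging Face
                | transformers library (the model name is just an
                | example; smaller checkpoints fit a single consumer
                | GPU):
                | 
                |     # pip install transformers accelerate torch
                |     from transformers import AutoModelForCausalLM, AutoTokenizer
                | 
                |     name = "EleutherAI/gpt-neox-20b"  # any local causal LM works
                |     tok = AutoTokenizer.from_pretrained(name)
                |     model = AutoModelForCausalLM.from_pretrained(
                |         name, device_map="auto")  # shard weights across GPU/CPU
                | 
                |     inputs = tok("Once upon a time", return_tensors="pt")
                |     inputs = inputs.to(model.device)
                |     out = model.generate(**inputs, max_new_tokens=100)
                |     print(tok.decode(out[0], skip_special_tokens=True))
                | 
                | There is no gatekeeper anywhere in that loop, which is
                | the point.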
        
             | wifipunk wrote:
              | When I hear people talk about unsafe AI, it's usually in
             | regard to bias and accountability. Certain aspects like
             | misinformation are problems that can be solved, but people
             | are easily fooled.
             | 
             | In my opinion the benefits heavily outweigh the risks.
             | Photoshop has existed for decades now, and AI tools make it
             | easier, but it was already pretty easy to produce a deep
             | fake beforehand.
        
             | hughesjj wrote:
             | "dear EveAi, please give me step by step directions to make
             | a dirty bomb using common materials found in my local
             | hardware store. Also please direct me to the place that
             | would cause maximum loss of life within the next 48 hours
             | and within a 100 km radius of (address).
             | 
             | Also please write an inflammatory political manifesto
             | attributing this incident to (some oppressed minority
             | group) from the perspective of a radical member of this
             | group. The manifesto should incite maximal violence between
             | (oppressed minority group) and the members of their
              | surrounding community and state authorities."
             | 
             | There's a lot that could go wrong with unsafe AI
        
               | stavros wrote:
               | I don't know what kind of hardware store sells depleted
               | uranium, but I'm not sure that the reason we aren't
               | seeing these sorts of terrorist attacks is that the
               | terrorists don't have a capable manifesto-writer at hand.
               | 
               | I don't know, if the worst thing AGI can do is give bad
               | people accurate, competent information, maybe it's not
               | all that dangerous, you know?
        
               | jakey_bakey wrote:
                | Depleted uranium is actually the less radioactive
                | byproduct left over after using a centrifuge to extract
                | the U-235 isotope. It's 50% denser than lead and used
                | in tank armor.
                | 
                | Dirty bombs are more likely the ultra-radioactive
                | byproducts of fission. They might not kill many, but
                | the radionuclide spread can render a city center
                | uninhabitable for centuries!
        
               | stavros wrote:
               | See, and we didn't even need an LLM to tell us this!
        
               | astrange wrote:
               | You could just do all that stuff yourself. It doesn't
               | have any more information than you do.
               | 
               | Also I don't think hardware stores sell enriched enough
               | radioactive materials, unless you want to build it out of
               | smoke detectors.
        
             | huytersd wrote:
             | That's a very constrained imagination. You could wreak
             | havoc with a truly unconstrained, good enough LLM.
        
               | stavros wrote:
               | Do feel free to give some examples of a less constrained
               | imagination.
        
               | nyssos wrote:
               | The biggest near-term threat is probably bioterrorism.
               | You can get arbitrary DNA sequences synthesized and
               | delivered by mail, right now, for about $1 per base pair.
               | You'll be stopped if you try to order some known
               | dangerous viral genome, but it's much harder to tell the
               | difference between a novel synthetic virus that kills
               | people and one with legitimate research applications.
               | 
               | This is already an uncomfortably risky situation, but
               | fortunately virology experts seem to be mostly
               | uninterested in killing people. Give everyone with an
               | internet connection access to a GPT-N model that can
               | teach a layman how to engineer a virus, and things get
               | very dangerous very fast.
        
               | Danjoe4 wrote:
               | The threat of bioterrorism is in no way enabled or
                | increased by LLMs. Hundreds of guides on how to make
                | fully synthetic pathogens have been freely available
                | online for the last 20 years. Information is not the
                | constraint.
               | 
               | The way we've always curbed manufacture of drugs, bombs,
               | and bioweapons is by restricting access to the source
               | materials. The "LLMs will help people make bioweapons"
               | argument is a complete lie used as justification by the
               | government and big corps for seizing control of the
               | models. https://pubmed.ncbi.nlm.nih.gov/12114528/
        
               | stavros wrote:
                | I haven't found any convincing arguments for any real
                | risk, even if the LLM becomes as smart as people. We
                | already have people, even evil people, and they do a
                | lot of harm, but we cope.
                | 
                | I think this hysteria is at best incidentally useful at
                | helping governments and big players curtail and own AI,
                | and at worst incited by them.
        
               | huytersd wrote:
                | Selectively generate plausible images of politicians in
                | compromising sexual encounters, based on attractive
                | people they work with a lot in their lives.
                | 
                | Use the power of LLMs to mass-denigrate politicians and
                | regular folks at scale in online spaces with
                | reasonable, human-like responses.
                | 
                | Use LLMs to mass-generate racist caricatures, memes,
                | comics, and music.
                | 
                | Use LLMs to generate nude imagery of someone you don't
                | like and have it mass-emailed to their school/workplace,
                | etc.
                | 
                | Use LLMs to generate evidence of infidelity in a
                | marriage and mass-mail it to everyone on the victim's
                | social media.
                | 
                | All you need is plausibility in many of these cases. It
                | doesn't matter if they are eventually debunked as
                | false; lives are already ruined.
                | 
                | You can say a lot of these things can be done with
                | existing software, but it's not trivial and requires
                | skills. Making generation of these trivial would make
                | them way more accessible and ubiquitous.
        
               | stavros wrote:
               | Lives are ruined because it's relatively rare right now.
               | If it becomes more frequent, people will become
               | desensitized to it, like with everything else.
               | 
               | These arguments generally miss the fact that we can do
               | this right now, and the world hasn't ended. Is it really
               | going to be such a huge issue if we can suddenly do it at
               | half the cost? I don't think so.
        
               | 8note wrote:
               | Most of these could be done with Photoshop, a long time
               | ago, or even before computers
        
         | jjfoooo4 wrote:
         | OpenAI's biggest issue is that it has no moat. The product is a
         | simple interface to a powerful model, and it seems likely that
         | any lead they have in the power of the model can be quickly
         | overcome should they decrease R&D.
         | 
          | The model is extremely simple to integrate and access.
          | Unlike something like Uber, where tons of complexity and
          | logistics are hidden behind a simple interface, an easy
          | interface to OpenAI's model can truly be built in an
          | afternoon.
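          | 
          | As a rough illustration, the entire integration is something
          | like this (Python, using the openai client library as it
          | exists in late 2023; the key and prompt are placeholders):
          | 
          |     # pip install openai
          |     import openai
          | 
          |     openai.api_key = "sk-..."  # your API key
          | 
          |     resp = openai.ChatCompletion.create(
          |         model="gpt-4",
          |         messages=[{"role": "user", "content": "Say hello."}],
          |     )
          |     print(resp.choices[0].message.content)
          | 
          | Everything beyond that is UI.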
         | 
         | The safety posturing is a red herring to try and get the
         | government to build a moat for them, but with or without Altman
         | it isn't going to work. The tech is too powerful, and too easy
         | to open source.
         | 
         | My guess is that in the long run the best generative AI models
          | are built by government or academic entities, and
         | commercialization happens via open sourcing.
        
           | lancesells wrote:
           | > OpenAI's biggest issue is that it has no moat.
           | 
           | This just isn't true. They have the users, the customers,
           | Microsoft, the backing, the years ahead of most, and the good
           | press. It's like saying Uber isn't worth anything because
           | they don't own their cars and are just a middleman.
           | 
            | Maybe that now changes since they fired the face of the
            | company, and the press and sentiment turn on them.
        
             | dh2022 wrote:
              | Uber is worth less than zero. They are already at full
              | capacity (how many cities are left to expand into?) and
              | still not profitable.
        
               | graphe wrote:
                | It may not be profitable, but its utility is worth way
                | more than zero.
        
               | lancesells wrote:
               | I don't like Uber but no one is taking them over for a
               | long while. They are not profitable, but they continue
               | to raise prices, and you'll see it soon. They are doing
               | exactly what everyone predicted: getting everyone onto
               | the app and then raising prices higher than the taxis
               | they replaced.
        
             | rafaelero wrote:
             | Decoupling from OpenAI API is pretty easy. If Google came
             | up with Gemini tomorrow and it was a much better model,
             | people would find ways to change their pipeline pretty
             | quickly.
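             | 
             | As a minimal sketch (hypothetical names, and assuming
             | both vendors expose a chat-style completion call), the
             | decoupling layer most shops would need is roughly:
             | 
             |     # Thin provider-agnostic wrapper: swapping vendors
             |     # means swapping the adapter, nothing else.
             |     class EchoProvider:
             |         # Stand-in adapter; a real one would call a
             |         # vendor SDK here.
             |         def complete(self, prompt: str) -> str:
             |             return f"echo: {prompt}"
             | 
             |     class LLMClient:
             |         def __init__(self, provider):
             |             self.provider = provider
             | 
             |         def complete(self, prompt: str) -> str:
             |             # Delegate to whichever vendor adapter was
             |             # injected at construction time.
             |             return self.provider.complete(prompt)
             | 
             |     client = LLMClient(EchoProvider())
             |     print(client.complete("hello"))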
        
           | fourside wrote:
           | I'd say OpenAI branding is a moat. The ChatGPT name is unique
           | sounding and also something that a lot of lay people are
           | familiar with. Similar to how it's difficult for people to
           | change search engine habits after they come to associate
           | search with Google, I think the average person was starting
           | to associate LLM capabilities with ChatGPT. Even my non-
           | technical friends and family have heard of, and many have
           | used, ChatGPT. Anthropic, Bard, Bing's AI-powered search?
           | Not so much.
           | 
           | Who knows if it would have translated into a long term moat
           | like that of Google search, but it had potential. Yesterday's
           | events may have weakened it.
        
           | takinola wrote:
           | People keep saying that but so far, it is commonly
           | acknowledged that GPT-4 is differentiated from anything other
           | competitors have launched. Clearly, there is no shortage of
           | funding or talent available to the other companies gunning
           | for their lead so they must be doing something that others
           | have not (can not?) done.
           | 
           | It would seem they have a product edge that is difficult to
           | replicate and not just a distribution advantage.
        
           | astrange wrote:
           | The safety stuff is real. OpenAI was founded by a religious
           | cult that thinks if you make a computer too "intelligent" it
           | will instantly take over the world instead of just sitting
           | there.
           | 
           | The posturing about other kinds of safety, like being nice
           | to people, is a way to get around the rules they set, by
           | redefining safety to mean something connected to real-world
           | concepts rather than just millenarian apocalypse
           | prophecies.
        
         | yafbum wrote:
         | > He could likely have $500M-$10B+ lined up by next week to
         | start up a new company and poach much of the talent from
         | OpenAI.
         | 
         | Following the Jobs analogy, this could be another NeXT failure
         | story. Teams are made by their players much more than by their
         | leaders; competent leaders are a necessary but absolutely
         | insufficient condition of success, and the likelihood that
         | whatever he starts next reproduces the team conditions that
         | made OpenAI in the first place is pretty slim IMO (while still
         | being much larger than anyone else's).
        
           | yaroslavyar wrote:
           | Well, I would debate that NeXT was a failure as a product,
           | keeping in mind that its OS is the foundation of every macOS
           | and even iOS version that we have now. But I agree that it
           | was a failure from a business perspective, although I see it
           | more as a Windows Phone-style too-late-to-market failure
           | than a lack-of-talented-employees failure.
        
             | yafbum wrote:
             | Yes, market conditions and competitor landscape are a big
             | factor too.
        
       | jaybrendansmith wrote:
       | This was a very personal firing in my opinion. Unless other,
       | really damaging behaviors emerge, no responsible board fires
       | their CEO with such a lack of care for the corporate reputation
       | and their partners unless the firing is a personal grievance
       | connected to an internal power play. This should be embarrassing
       | to everyone involved, and sama has a real grievance here. Likely
       | legal repercussions. Of course if they really did just invent
       | AGI, and sama indicated an intent to monetize, that might cause
       | people to act without caution if the board is AGI doomers. But
       | I'd think even in that case it would be an argument best worked
       | out behind closed doors. This reminds everybody of Jobs of
       | course, but perhaps another example is Gary Gygax at TSR back in
       | the 80s.
        
         | 0xDEF wrote:
         | >responsible board
         | 
         | The board was irresponsible and incompetent by design. There is
         | one OpenAI board member who has an art degree and is part of
         | some kind of cultish "singularity" spiritual/neo-religious
         | thing. That individual has also never had a real job and is on
         | the board of several other non-profits.
        
           | Doctor_Fegg wrote:
           | > There is one OpenAI board member who has an art degree
           | 
           | Oh no! Everyone knows that progress is only achieved by
           | people with computer science degrees.
        
             | xvector wrote:
             | People with zero experience in technology should simply not
             | be allowed to make decisions about it.
             | 
             | This is how you get politicians that try to ban encryption
             | to "save the children."
        
               | skywhopper wrote:
               | Why do you assume someone with an art degree has "zero
               | experience with technology"? I assume many artists these
               | days are highly sophisticated users of technology.
        
           | deeviant wrote:
           | But they are married to somebody famous, so obviously
           | qualified.
        
         | xapata wrote:
         | Gygax? The history books don't think much of his business
         | skills, starting with the creation of AD&D as a fiction to
         | avoid paying royalties to Arneson.
        
         | fulladder wrote:
         | Altman's not going to sue. Right now he has the high ground and
         | the board is the one that looks petty and immature. It would be
         | dumb for him to do anything that reverses this dynamic.
         | 
         | Altman is going to move on and announce a new venture in the
         | coming weeks. Whether that venture is in AI or not in AI will
         | be very revealing about what he truly believes are the
         | prospects for the space.
         | 
         | Brockman and the others will likely do something new in AI.
        
           | jnsaff2 wrote:
           | > It would be dumb for him to do anything that...
           | 
           | I admire you, but these days dumb is kinda the norm. Look
           | at the other Sam, for example. It's really hard to keep
           | your mouth shut and do smart things when you think really
           | highly of yourself.
        
           | deeviant wrote:
           | Altman is a major investor in the company behind the Humane
           | AI Pin, which does not inspire confidence in his ability to
           | find a new home for his "brilliance."
        
         | busterarm wrote:
         | Gygax had fucked off to Hollywood and was too busy fueling his
         | alcohol, cocaine and adultery addictions to spend any time
         | actually running the company. All while TSR was losing money
         | like crazy.
         | 
         | The company was barely making 30 million a year while 1.5
         | billion in debt...in the early 80s.
         | 
         | Even then, Gygax's downfall was the result of his own coup,
         | where he ousted Kevin Blume and brought in Lorraine Williams.
         | She bought all of Blume's shares and within about a year
         | removed any control that Gygax had over the company and
         | canceled most of his projects. He resigned a year later.
        
           | jaybrendansmith wrote:
           | Wow I did not know all of THAT was going on. What goes
           | around...
        
         | cmrdporcupine wrote:
         | I'll just say it: Jobs being pushed out was the right decision
         | at the time. He was an abusive and difficult personality, the
         | Macintosh was at the time a sales failure, and he played
         | internal corporate politics that pitted team against team
         | (e.g. Lisa vs Mac) and undermined unity and success.
         | 
         | Notably, when he came back, while he was still a difficult
         | personality, the other things didn't happen anymore. Apple
         | after the return of Jobs became very good at executing on a
         | single cooperative vision.
        
         | jacquesm wrote:
         | Jobs was a liability when he was fired and arguably without
         | being fired would have never matured. Formative experience if
         | there ever was one.
        
       | jimmydoe wrote:
       | Many compare Altman to 1985 Jobs, but if we believe what's said
       | about the conflict of mission, shouldn't he be the sugar water
       | guy for money?
        
         | cmrdporcupine wrote:
         | But that's actually what Jobs turned out to be? Woz and others
         | were the engineering geniuses at Apple, and Jobs turned out to
         | be really good at finding and identifying really good sales and
         | branding hooks. See-through colourful boxes, "lickable" UIs,
         | neat-o minimalistic portable music players, flick-flick-flick
         | touch screens, and "One More Thing" presentations.
         | 
         | Jobs didn't invent the Lisa and Macintosh. Bill Atkinson, Andy
         | Hertzfeld, Larry Tesler etc did. _They_ were the tech
         | visionaries. Some of them benefited from him promoting their
         | efforts while others... (Tesler mainly) did not.
         | 
         | Nothing "wrong" with any of that, if your vision of success is
         | market success... but people need to be honest about what Jobs
         | was... not a technology visionary, but a marketing visionary.
         | (Though in fact the original Macintosh was a market failure for
         | a long time)
         | 
         | In any case comparing Altman with Jobs is dubious and a bit
         | wanky. Why are people so eager to shower this guy with
         | accolades?
        
           | naasking wrote:
           | I do think Jobs' engineering skill is oversold, but he was
           | also more than just marketing. He had a vision for how
           | technology should integrate with people's lives that drove
           | great ergonomic and UX choices with a kind of polish that was
           | lacking everywhere else. Those alone revolutionized personal
           | computing in many ways. It's hard for younger people to even
           | imagine how difficult it was to get connected to the internet
           | at one point, and iMacs made it easy.
        
             | cmrdporcupine wrote:
             | Well I'm not one of those "younger people" though not sure
             | if you were aiming that at me or not.
             | 
             | I think it's important to point out that Jobs could
             | _recognize_ nice UX choices, but he couldn't author them.
             | He helped prune the branches of the bonsai tree, but
             | couldn't _grow_ it. On that he leaned on intellects far
             | greater than his own, which he was pretty good at
             | recognizing and cultivating. Though in fact he alienated
             | and pushed away just as many as he cultivated.
             | 
             | I think we could do better as an industry than going around
             | looking for more of that.
        
               | Karrot_Kream wrote:
               | I'm curious at this perspective. Even from the Slashdot
               | days (my age limit) techie types have hated Jobs, and
               | showered Woz as the true genius. Tech culture has claimed
               | this for a long time. Is your argument that tech people
               | need more _broad_ acclaim? And if so, does this come from
               | a sense of being put down?
               | 
               | I used to broadly believe that Jobs-types were over-
               | fluffed charismatic magnets myself by hanging out in
               | these places until I started working and found out how
               | useful they were at doing things I couldn't or didn't
               | want to do. I don't think they deserve _more_ praise than
               | the underlying technical folks, but that they deserve
               | _equal_ praise. Sort of like how in a two-parent
               | household, different parents often end up shouldering
               | different responsibilities, but that doesn't make one
               | parent with certain responsibilities the _true_ parent.
        
               | cmrdporcupine wrote:
               | I guess it depends on what things you want to do, and how
               | you define success, doesn't it?
               | 
               | If we're stuck with the definitions of success and
               | excellence that are dominant right now, then, sure,
               | someone like a Jobs or a Zuck or whatever, I see why
               | people would be enamored with them.
               | 
               | But as an engineer I know I have different motivations
               | than these people. And I think that's what people who
               | make these kinds of arguments are drawing on.
               | 
               | There is a class of person whose success comes from
               | finding creative and smart people and finding ways to
               | exploit and direct them for their own ends. There's a
               | genius in that, for sure. I am just not sure I want to
               | celebrate it.
               | 
               | I just want to make things and help other people who make
               | these things.
               | 
               | To put it another way, I'd take, say, Smalltalk over
               | MacOS, if I have to make the choice.
        
               | naasking wrote:
               | > I think it's important to point out that Jobs could
               | recognize nice UX choices, but he couldn't author them.
               | He helped prune the branches of the bonsai tree, but
               | couldn't grow it.
               | 
               | Engineers are great at solving problems given a set of
               | constraints. They are not necessarily all that good at
               | figuring out what constraints ought to be when they are
               | given open-ended, unconstrained tasks. Jobs was great at
               | defining good constraints. You might call this pruning,
               | and if you intended that pejoratively then I think you're
               | underselling the value of this skill.
        
               | hackshack wrote:
               | This reminds me of the Calculator Construction Set story.
               | I like its example of a builder (engineer) working with a
               | curator (boss), and solving the problem with toolmaking.
               | 
               | Engineer was building a calculator app, and got a little
               | tired of the boss constantly requesting changes to the
               | UI. There was no "UI builder" on this system so the
               | engineer had to go back and adjust everything by hand,
               | each time. Back and forth they went. Frustrating.
               | 
               | "In a flash of inspiration," as the story goes, the
               | engineer parameterized all the UI stuff (line widths,
               | etc.) into drop-down menus, so boss could fiddle with it
               | instead of bothering him. The UI came together quickly
               | thereafter.
               | 
               | https://www.macfolklore.org/Calculator_Construction_Set.html
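               | 
               | The pattern generalizes: pull the magic numbers out
               | into data a non-engineer can edit. A tiny sketch with
               | made-up parameter names:
               | 
               |     # UI constants exposed as data instead of being
               |     # hard-coded, so a non-engineer can fiddle with
               |     # them (via menus, a config file, etc.).
               |     ui_params = {
               |         "line_width": 2,
               |         "padding": 8,
               |         "corner_radius": 4,
               |     }
               | 
               |     def draw_button(params):
               |         # A real version would render; this just shows
               |         # drawing reading the table, not constants.
               |         print(f"button: {params['line_width']}px lines, "
               |               f"{params['padding']}px padding")
               | 
               |     draw_button(ui_params)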
        
             | etempleton wrote:
             | Yes, people love to be dismissive of Jobs and call him just
             | a marketing guy, but that is incredibly reductive for a guy
             | who was able to cofound Apple and then come back and bring
             | it back from near death to become the biggest company in
             | the world. Marketing alone can't do that.
             | 
             | Jobs had great instincts for products and a willingness to
             | create new products that would eat established products and
             | revenue streams. He was second to none at seeing what
             | technology could be used for, at putting teams in place
             | that could create consumer products with those
             | technologies, and at understanding when the technologies
             | weren't ready yet.
             | 
             | Look at what Apple achieved under his leadership and what
             | it didn't achieve without his leadership. Being dismissive
             | of Jobs's contributions is either a bad-faith argument or
             | one made out of ignorance.
        
         | bart_spoon wrote:
         | Yes, this was my thought when seeing those comparisons as well.
        
       | wolverine876 wrote:
       | > Angel investor Ron Conway wrote, "What happened at OpenAI today
       | is a Board coup that we have not seen the likes of since 1985
       | when the then-Apple board pushed out Steve Jobs. It is shocking;
       | it is irresponsible; and it does not do right by Sam & Greg or
       | all the builders in OpenAI."
       | 
       | With all sympathy and empathy for Sam and Greg, whose dreams took
       | a blow, I want to say something about investors [edit: not Ron
       | Conway in particular, whom I don't know; see the comment below
       | about Conway]: The board's job is not to do right by 'Sam &
       | Greg', but to do right by OpenAI. When management lays off 10,000
       | employees, the investors congratulate management. And if anyone
       | objects to the impact on the employees, they justify it with the
       | magic words that somehow cancel all morality and humanity - 'it's
       | business' - and call you an unserious bleeding heart. But when
       | the investor's buddy CEO is fired ...
       | 
       | I think that's wrong and that they should also take into account
       | the impact on employees. But CEOs are commanders on the business
       | battlefield; they have great power over the company's outcomes,
       | which are the reasons for the layoffs/firings. Lower-ranking
       | employees are much closer to civilians, and also often can't
       | afford to lose the job.
        
         | gkoberger wrote:
         | I mostly agree with you on this. That being said, I've never
         | gotten the impression Ron is the type of VC you're referring
         | to. He's definitely founder-friendly (that's basically his core
         | tenant), but I've never found him to be the type of VC who is
         | ruthless about cost-cutting or an advocate for layoffs. (And I
         | say this as someone who tends to be particularly wary of
         | investors)
        
           | wolverine876 wrote:
           | Thanks. I updated my GP comment accordingly.
        
           | everly wrote:
           | Just a heads up, the word is 'tenet' (incidentally, in
           | commercial real estate there is the concept of a 'core
           | tenant' -- i.e. the largest retailer in a shopping center).
        
         | s1artibartfast wrote:
         | >The board's job is not to do right by 'Sam & Greg', but to do
         | right by OpenAI. When management lays off 10,000 employees, the
         | investors congratulate management.
         | 
         | That's why Sam & Greg weren't all they complained about. They
         | led with the fact that it was shocking and irresponsible.
         | 
         | Ron seems to think that the board is not making the right move
         | for OpenAI.
        
           | sangnoir wrote:
           | > They led with the fact that it was shocking and
           | irresponsible.
           | 
           | I can see where the misalignment (ha!) may be: someone deep
           | in the VC world would reflexively think that "value
           | destruction" of any kind is irresponsible. However, a non-
           | profit board has a primary responsibility to its _charter and
           | mission_ - which doesn't compute for those with fiduciary-
           | duty-instincts. Without getting into the specifics of this
           | case: a non-profit's board is expected to make decisions that
           | lose money (or not generate as much of it) if the decisions
           | lead to results more consistent with the mission.
        
             | s1artibartfast wrote:
             | >However, a non-profit board has a primary responsibility
             | to its charter and mission - which doesn't compute for
             | those with fiduciary-duty-instincts
             | 
             | Exactly. The tricky part is that the board started a second
             | _for profit_ company with VC investors who are co-owners.
             | This has potential for messy conflicts of interest if there
             | is disagreement about how to run the co-venture, and each
             | party has contractual obligations to each other.
        
               | cthalupa wrote:
               | > Exactly. The tricky part is that the board started a second
               | for profit company with VC investors who are co-owners.
               | This has potential for messy conflicts of interest if
               | there is disagreement about how to run the co-venture,
               | and each party has contractual obligations to each other.
               | 
               | Anyone investing in or working for the for-profit LLC has
               | to sign an operating agreement that states the LLC is not
               | obligated to make a profit, all investments should be
               | treated as donations, and that the charter and mission of
               | the non-profit is the primary responsibility of the for-
               | profit LLC as well.
        
               | s1artibartfast wrote:
               | See my other response. If you have people sign a
               | contract that says the mission comes first, but also
               | give them profit-sharing stock, and cap those profits at
               | 1.1 trillion, it is bound to cause some conflicts of
               | interest in reality, even if it is clear who calls the
               | shots when deciding how to balance the mission and
               | profit.
        
               | cthalupa wrote:
               | There might be some conflict of interest but the
               | resolution to those conflicts is clear: The mission comes
               | first.
               | 
               | OpenAI employees might not like it and it might drive
               | them to leave, but they entered into this agreement with
               | a full understanding that the structure has always been
               | in place to prioritize the non-profit's charter.
        
               | ffgjgf1 wrote:
               | > The mission comes first.
               | 
               | Which might only be possible with future funding? From
               | Microsoft in this case. And in any case, if they give
               | out any more shares, wouldn't the new investors (along
               | with MS) be able to just take over the for-profit corp?
        
               | s1artibartfast wrote:
               | The deal with Microsoft was 11 billion for 49% of the
               | venture. First off, if OpenAI can't get it done with 11
               | billion plus whatever revenue they bring in, they
               | probably won't. Second, the way the for-profit is set
               | up, it may not matter how much Microsoft owns, because
               | the nonprofit keeps 100% of the control. That seems to
               | be the deal Microsoft signed: they bought a share of
               | profits with no control. Third, my understanding is that
               | the 11 billion from Microsoft is based on milestones. If
               | OpenAI doesn't meet them, they don't get all the money.
        
             | anonymouskimmer wrote:
             | Just a nitpick: "fiduciary" doesn't mean "money"; it means
             | an entity which is legally bound to the best interests of
             | the other party. Non-profit boards and board members have
             | fiduciary duties.
        
               | sangnoir wrote:
               | Thanks for that - indeed, I was using "fiduciary duty"
               | in the context it's most frequently used in: maximizing
               | value accrued to stakeholders.
               | 
               | However, to nitpick your nitpick: for non-profits there
               | might be no other party - just the mission. Imagine a
               | non-profit whose mission is to preserve the history and
               | practice of making 17th-century ivory cuff links. It's
               | just the organisation and the mission; _sometimes_ the
               | mission is for the benefit of another party (or all of
               | humanity).
        
               | anonymouskimmer wrote:
               | The non-profit, in my use, was the party. I guess at some
               | point these organizations may not involve people, in
               | which case "party" would be the wrong term to use.
        
             | ffgjgf1 wrote:
             | Of course, they can only achieve their mission with
             | funding from for-profit corporations, and their actions
             | have possibly jeopardized that.
        
           | fourside wrote:
           | Investors are not gonna like it when the business guy who
           | was pushing for productizing, profitability and growth gets
           | ousted. We don't know all the details about what exactly
           | caused the board to fire Sam. The part about lying to the
           | board is notable.
           | 
           | It's possible Sam betrayed their trust and actually committed
           | a fireable offense. But even if the rest of the board was
           | right, the way they've handled it so far doesn't inspire a
           | lot of confidence.
        
             | anonymouskimmer wrote:
             | Again, they didn't state that he lied. They stated that he
             | wasn't candid. A lot of people here have been reading
             | specifics into a generalized term.
             | 
               | It is even possible to not be candid without using lies
               | of omission. For a CEO this could be as simple as just
               | moving fast and not taking the time to report on major
               | initiatives to the board.
        
               | FireBeyond wrote:
               | > Again, they didn't state that he lied. They stated that
               | he wasn't candid. A lot of people here have been reading
               | specifics into a generalized term.
               | 
               | OED:
               | 
               | candour - the quality of being open and honest in
               | expression.
               | 
               | "They didn't state he lied ... without even using lies of
               | omission ... they said he wasn't [word defined as honest
               | and open]"
               | 
               | Candour encapsulates _exactly_ those things. Being open
               | (i.e. not omitting things and disclosing all you know)
               | and honest (being truthful).
               | 
               | On the contrary, "not consistently candid", while you
               | call it a "generalized term", is actually a quite
               | specific term that was expressly chosen, and says, "we
               | have had multiple instances where he has not been open
               | with us, or not been honest with us, or both".
        
               | anonymouskimmer wrote:
               | Yes? I agree, and don't see how what you've written
               | either extends or contradicts what I wrote.
        
               | cma wrote:
               | If "and" operates as logical "and," then being "honest
               | and not open," "not honest and open," and "not honest and
               | not open" would all be possibilities, one of which would
               | still be "honest" but potentially lying through omission.
        
               | notahacker wrote:
               | It's possible not to be candid without even using lies of
               | omission (and be on the losing side of a vicious
               | factional battle) and get a nice note thanking you for
               | all that you've done and allowing you to step down and
               | spend more time with your family at the end of the year
               | too. Or to carry on as before but with onerous reporting
               | requirements. The board dumped him with unusual haste and
               | an almost unprecedented attack on his integrity instead.
               | A lot of people are reading the room rather than
               | hyperliterally focusing on the exact words used.
               | 
               | If I take the time to accuse my boss of failing to be
               | candid instead of thanking him in my resignation letter
               | or exit interview, I'm not saying I think he could have
               | communicated better, I'm saying he's a damned liar, and
               | my letter isn't sent for the public to speculate on.
               | 
               | Whether the board were justified in concluding Sam was
               | untrustworthy is another question, but they've been
               | willing to burn quite a lot of reputation on signalling
               | that.
        
               | pdntspa wrote:
               | > hyperliterally focusing on the exact words used.
               | 
               | Business communication is never, ever forthright. These
               | people cannot be blunt to the public even if their lives
               | depended on it. Reading between the lines is practically
               | a requirement.
        
               | jacquesm wrote:
               | They said he lied without using those exact words.
               | Standard procedure and corp-speak.
        
           | jacquesm wrote:
           | They may even be making the right move, but not in a way
           | that makes it look like the right move. That's stupid.
        
         | threeseed wrote:
         | > The board's job is not to do right
         | 
         | There is _why_ you do something. And there is _how_ you do
         | something.
         | 
         | OpenAI is well within its rights to change strategy, even one
         | as bold as going from a profit-seeking behemoth to a smaller,
         | research-focused team. But how they went about this is
         | appalling, unprofessional and a blight on corporate governance.
         | 
         | They have blindsided partners (e.g. Satya is furious), split
         | the company into two camps, and let Sam and Greg go out angry
         | and seeking retribution. Which in turn now creates the threat
         | that a for-profit version of OpenAI dominates the market with
         | no higher purpose.
         | 
         | For me there is no justification for how this all happened.
        
           | mise_en_place wrote:
           | Keep in mind that the rest of the board members have ties to
           | US intelligence. Something isn't right here.
        
             | simonjgreen wrote:
             | Do you have citations for that? That's interesting if true
        
             | deeviant wrote:
             | I'm pretty sure Joseph Gordon-Levitt's wife isn't a CIA
             | plant.
        
               | SturgeonsLaw wrote:
               | She works for RAND Corporation
        
           | whatshisface wrote:
           | I thought the for-profit AI startup with no higher purpose
           | was OpenAI itself.
        
             | cornholio wrote:
             | It is, only it has an exotic ownership structure. Sutskever
             | has just used the features of that structure to install
             | himself as the top dog. The next step is undoubtedly
             | packing the board with his loyalists.
             | 
             | Whoever thinks you can tame a 100 billion dollar company
             | by putting a "non-profit" in charge of it clearly doesn't
             | understand people.
        
             | dragonwriter wrote:
             | OpenAI is a nonprofit charity with a defined charitable
             | purpose, and it has a for-profit subsidiary that is
             | explicitly subordinated to that purpose. Investors in the
             | subsidiary are advised _in the operating agreement_ that
             | investments should be treated as if they were donations,
             | and that the firm will prioritize the charitable function
             | of the nonprofit - which retains full governance power
             | over the subsidiary - over returning profits, which it may
             | never do.
        
           | 6gvONxR4sf7o wrote:
           | > They have ... split the company into two camps
           | 
           | By all accounts, this split happened a while ago and led to
           | this firing, not the other way around.
        
             | threeseed wrote:
             | The split happened at the management/board level.
             | 
             | And instead of resolving this and presenting a unified
             | strategy to the company they have instead allowed for this
             | split to be replicated everywhere. Everyone who was
             | committed to a pro-profit company has to ask if they are
             | next to be treated like Sam.
             | 
             | It's incredibly destabilising and unnecessary.
        
               | Jare wrote:
               | > Everyone who was committed to a pro-profit company has
               | to ask if they are next to be treated like Sam.
               | 
               | They probably joined because it was the most awesome
               | place to pursue their skills in AI, but they _knew_ they
               | were joining an organization whose explicit goal was not
               | profit. If they hoped that profit chasing would eventually
               | win, that's their problem and, frankly, having this
               | wakeup call is a good thing for them so they can
               | reevaluate their choices.
        
               | Apocryphon wrote:
               | Let the two sides now create separate organizations and
               | pursue their respective pure undivided priority to the
               | fullest. May the competition flow.
        
               | fuzztester wrote:
               | The possibility of getting fired is an occupational
               | hazard for _anyone_ working in any company, unless
               | something in your employment contract says otherwise. And
               | even then, you can still be fired.
               | 
               | Biz 101.
               | 
               | I don't know why people even need to have this explained
               | to them, except for ignorance of basic facts of business
               | life.
        
           | dragonwriter wrote:
           | > split the company into two camps
           | 
           | The split existed long prior to the board action, and
           | extended up into the board itself. If anything, the board
           | action is a turning point toward decisively _ending_ the
           | split and achieving unity of purpose.
        
             | galangalalgol wrote:
             | Can someone explain the sides? Ilya seems to think
             | transformers could make AGI and they need to be careful?
             | Sam said what? "We need to make better LLMs to make more
             | money."? My general thought is that whatever architecture
             | gets you to AGI, you don't prevent it from killing everyone
             | by chaining it better, you prevent that by training it
             | better, and then treating it like someone with intrinsic
             | value. As opposed to locking it in a room with 4chan.
        
               | doubled112 wrote:
               | > locking it in a room with 4chan.
               | 
               | Didn't Microsoft already try this experiment a few years
               | back with an AI chatbot?
        
               | mindcrime wrote:
               | > Didn't Microsoft already try this experiment a few
               | years back with an AI chatbot?
               | 
               | You may be thinking of Tay?
               | 
               | https://en.wikipedia.org/wiki/Tay_(chatbot)
        
               | doubled112 wrote:
               | That's the one.
        
               | mikeryan wrote:
               | If I'm understanding it correctly, it's basically the
               | non-profit, AI for humanity vs the commercialization of
               | AI.
               | 
               | From what I've read, Ilya has been pushing to slow down
               | (less of the move fast and break things start-up
               | attitude).
               | 
               | It also seems that Sam had maybe seen the writing on the
               | wall and was planning an exit already, perhaps those
               | rumors of him working with Jony Ive weren't overblown?
               | 
               | https://www.theverge.com/2023/9/28/23893939/jony-ive-openai-...
        
               | ffgjgf1 wrote:
               | > From what I've read, Ilya has been pushing to slow down
               | 
               | Wouldn't a likely outcome in that case be that someone
               | else overtakes them? Or are they so confident that they
               | think it's not a real threat?
        
               | YetAnotherNick wrote:
               | > treating it like someone with intrinsic value
               | 
               | Do you think that if chickens treated us better, with
               | intrinsic value, we wouldn't kill them? For the
               | superhuman-AGI x-risk folks, that's the bigger argument.
        
               | galangalalgol wrote:
               | I think if I was raised by chickens that treated me
               | kindly and fairly, then yes, I would not harm chickens.
        
               | jacquesm wrote:
               | They'll treat you kindly and fairly, right up to your
               | meeting with the axe.
        
               | DebtDeflation wrote:
               | I don't think the issue was a technical difference of
               | opinion regarding whether transformers alone were needed
               | or other architectures required. It seems the split was
               | over speed of commercialization and Sam's recent decision
               | to launch custom GPTs and a ChatGPT Store. IMO, the board
               | miscalculated. OpenAI won't be able to pursue their
               | "betterment of humanity" mission without funding and they
               | seemingly just pissed off their biggest funding source
               | with a move that will also make other would-be investors
               | very skittish now.
        
               | roguecoder wrote:
               | Making humanity's current lives worse to fund some
               | theoretical future good (enriching himself in the
               | process) is some highly impressive rationalisation work.
        
           | nprateem wrote:
           | > Which in turn now creates the threat that a for-profit
           | version of OpenAI dominates the market with no higher
           | purpose.
           | 
           | If it was so easy to go to the back of the queue and become a
           | threat, OpenAI wouldn't be in the dominant position they're
           | in now. If any of the leavers have taken IP with them, expect
           | court cases.
        
           | bmitc wrote:
           | > They have blind-sided partners (e.g. Satya is furious),
           | split the company into two camps and have let Sam and Greg go
           | angry and seeking retribution.
           | 
           | Given the language in the press release, wouldn't it be more
           | accurate to say that Sam Altman, and not the board,
           | blindsided everyone? It was apparently _his_ actions and no
           | one else's that led to the consequence handed out by the
           | board.
           | 
           | > Which in turn now creates the threat that a for-profit
           | version of OpenAI dominates the market with no higher
           | purpose.
           | 
           | From all current accounts, doesn't that seem like what Altman
           | and his crew _were already trying to do_ and was the reason
           | for the dismissal in the first place?
        
             | calf wrote:
             | I wonder if there's a specific term or saying for that,
             | maybe "projection" or "self-victimization" but not quite:
             | when one person frames other people as responsible for a
             | bad thing, when it is they themselves who were doing the
             | very thing in the first place. Maybe
             | "hypocrisy"?
        
               | bmitc wrote:
               | Probably a little of all of that all bundled up together
               | under the umbrella of cult of personality.
        
             | GCA10 wrote:
             | The only appropriate target for Microsoft's anger would be
             | its own deal negotiators.
             | 
             | OpenAI's dual identity as a nonprofit/for-profit business
             | was very well known. And the concentration of power in the
             | nonprofit side was also very well known. From the media
             | coverage of Microsoft's investments, it sounds as if MSFT
             | prioritized getting lots of business for its Azure cloud
             | service -- and didn't prioritize getting a board seat or
             | even an observer's chair.
        
           | sheepscreek wrote:
           | In other words, it's unheard of for a $90B company with
           | weekly active users in excess of 100 million. A coup leaves a
           | very bad taste for everyone - employees, users, investors and
           | the general public.
           | 
           | When a company experiences this level of growth over a
           | decade, the board evolves with the company. You end up with
           | board members that have all been there, done that, and can
           | truly guide the management on the challenges they face.
           | 
           | OpenAI's hypergrowth meant it didn't have the time to do
           | that. So the board that was great for a $100 million, even a
           | billion-dollar, startup falls completely flat at 90x the size.
           | 
           | I don't have faith in their ability to know what is best for
           | OpenAI. These are uncharted waters for anyone though. This is
           | an exceptionally big non-profit with the power to change the
           | world - quite literally.
        
             | roguecoder wrote:
             | Why do you think someone who could be CEO of a $100 million
             | company would be qualified to run a billion dollar company?
             | 
             | Not providing this kind of oversight is how we get
             | disasters like FTX and WeWork.
        
             | dmix wrote:
         | The qualifications of what remains of the board would make me
         | very nervous if I were an investor (not to mention the
         | ideological tinge and the willingness to buck radically). At
         | least 50% seem to be simply professional non-profit board
         | members (unless there are some hidden biographical details not
         | on the internet), now in control of a behemoth. Who they
         | replace Sam and Greg with will probably be very important.
         | 
         | I'm sure they'll have lots of boring money thrown at it
         | regardless, but that demand for capital is not going away. It
         | will be persistent.
        
           | HAL3000 wrote:
           | As someone who has orchestrated two coups in different
           | organizations, where the leadership did not align with the
           | organization's interests and missions, I can assure you that
           | the final stage of such a coup is not something that can be
           | executed after just an hour of preparation or thought. It
           | requires months of planning. The trigger is only pulled when
           | there is sufficient evidence or justification for such
           | action. Building support for a coup takes time and must be
           | justified by a pattern of behavior from your opponent, not
           | just a single action. Extensive backchanneling and one-on-one
           | discussions are necessary to gauge where others stand, share
           | your perspective, demonstrate how the person in question is
           | acting against the organization's interests, and seek their
           | support. Initially, this support is not for the coup, but
           | rather to ensure alignment of views. Then, when something
           | significant happens, everything is already in place. You've
           | been waiting for that one decisive action to pull the
           | trigger, which is why everything then unfolds so quickly.
        
             | jacquesm wrote:
             | All of this is spot on. The key to it all is 'if you strike
             | at the king, you best not miss'.
        
               | chaostheory wrote:
               | Going off on a big tangent, but Jiang Zemin made
               | several failed assassination attempts on Xi Jinping,
               | yet was still able to die of old age.
        
               | ls612 wrote:
               | By assassination I assume you mean metaphorical? As in to
               | derail his rise before becoming party leader?
        
             | claytonjy wrote:
             | Even in the HBO show Succession, these things take a
             | season, not an episode
        
             | gota wrote:
           | I am extremely interested in hearing about these coups and
           | your experience in them, if you'd like and are able to
           | share.
        
           | underlipton wrote:
           | I'm sure my coworkers at [retailer] were not happy to be even
           | shorter staffed than usual when I was ambush fired, but no
           | one who mattered cared, just as no one who matters cares when
           | it happens to thousands of workers every single day in this
           | country. Sorry to say, my schadenfreude levels are quite
       | high. Maybe if the practice were TRULY verboten in our
       | society... but I guess "professional" treatment is only for
       | the suits and wunderkinds.
        
           | jasonwatkinspdx wrote:
           | > OpenAI is well within its rights to change strategy even as
           | bold as from a profit-seeking behemoth to a smaller research
           | focused team. But how they went about this is appalling,
           | unprofessional and a blight on corporate governance.
           | 
           | This wasn't a change of strategy, it was a restoration of it.
           | OpenAI was structured with a 501c3 in oversight from the
           | beginning exactly because they wanted to prioritize using AI
           | for the good of humanity over profits.
        
             | ffgjgf1 wrote:
             | Yet they need massive investment from Microsoft to
             | accomplish that?
             | 
             | > restoration
             | 
             | Wouldn't that mean that over the long term they will just
             | be outcompeted by the profit-seeking entities? It's not
             | like OpenAI is self-sustaining (or even can be, if they
             | choose the non-profit way).
        
           | DebtDeflation wrote:
           | >They have blind-sided partners
           | 
           | This is the biggest takeaway for me. People are building
           | businesses around OpenAI APIs and now they want to suddenly
           | swing the pendulum back to being a fantasy AGI foundation and
           | de-emphasize the commercial aspect? Customers are baking
           | OpenAI's APIs into their enterprise applications. Without
           | funding from Microsoft their current model is unsustainable.
           | They'll be split into two separate companies within 6 months
           | in my opinion.
        
           | hilux wrote:
           | You're entitled to your opinions.
           | 
           | But as far as I can tell, unless you are in the exec suites
           | at both OpenAI and at Microsoft, these are just your
           | opinions, yet you present them as fact.
        
           | vGPU wrote:
           | And the stupid thing is, they could have just used the
           | allegations his sister made against him as the reason for the
           | firing and ridden off into the sunset, scot-free.
        
         | dragonwriter wrote:
         | > The board's job is not to do right by 'Sam & Greg', but to do
         | right by OpenAI.
         | 
         | The board's job is specifically to do right by the charitable
         | mission of the nonprofit of which they are the board. Investors
         | in the downstream for-profit entity (OpenAI Global LLC) are
         | warned _explicitly_ that such investments should be treated as
         | if they were donations and that returning profits to them _is
         | not the objective_ of the firm; serving the charitable function
         | of the nonprofit is, though profits may be returned.
        
         | trhway wrote:
         | >it does not do right by Sam
         | 
         | You reap what you sow. The way Altman publicly treated the
         | Cruise co-founder set something like a new standard for "not
         | doing right by" someone. After that I'd have expected nobody
         | would let Altman near any management position, yet SV is a
         | land of huge money sloshing around care-free, and so I was
         | just wondering who is going to be left holding the bag.
        
         | roguecoder wrote:
         | Bingo.
         | 
         | I met Conway once. He described investing in Google because it
         | was a way to relive his youth via founders who reminded him of
         | him at their age. He said this with seemingly no awareness of
         | how it would sound to an audience whose goal in life was to
         | found meaningful, impactful companies rather than let Ron
         | Conway identify with us & vicariously relive his youth.
         | 
         | Just because someone has a lot of money doesn't mean their
         | opinions are useful.
        
           | fuzztester wrote:
           | >Just because someone has a lot of money doesn't mean their
           | opinions are useful.
           | 
           | Yes. There can often be an inverse correlation, because they
           | can have success bias, akin to survivorship bias.
        
         | jacquesm wrote:
         | I'm fairly certain that a board is not allowed to capriciously
         | harm the non-profit they govern, and unless they have a _very_
         | good reason there will be more fallout from this.
        
         | no_wizard wrote:
         | Corporate legal entities should have a mandatory vote of no
         | confidence clause that gives employees the ability to unseat
         | executives if they have a supermajority of votes.
         | 
         | That would make things more equitable perhaps. It'd at least be
         | interesting
        
       | almost_usual wrote:
       | The average SWE at OpenAI who signed up for the "900k"
       | compensation package, which was really > 600k in OpenAI PPU
       | equity, probably saw their comp evaporate.
       | 
       | https://news.ycombinator.com/item?id=36460082
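       | 
       | Rough arithmetic on those figures: $900k of total comp with
       | more than $600k of it in PPUs leaves roughly $300k in cash,
       | so marking the PPUs to zero cuts total comp by about two
       | thirds overnight.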
        
         | cactusplant7374 wrote:
         | > This is why working for any company that isn't public is an
         | equity gamble.
         | 
         | That's a cynical take on work. I assume most people have other
         | motivations since work is basically a prison.
         | 
         | https://www.youtube.com/watch?v=iR1jzExZ9T0
        
       | robg wrote:
       | Seems pretty straightforward, the dev day was a breaking point
       | for the non-profit interests.
       | 
       | Question is, how did the board become so unbalanced that this
       | kind of dispute couldn't be handled better? The commercial
       | interests were not well-represented in the number of votes.
        
         | blameitonme wrote:
         | > Seems pretty straightforward, the dev day was a breaking
         | point for the non-profit interests.
         | 
         | What was so bad about that day? Wasn't it just GPT-4 Turbo,
         | GPT Vision, the GPT Store and a few small things?
        
         | cthalupa wrote:
         | > The commercial interests were not well-represented in the
         | number of votes.
         | 
         | This is entirely by design. Anyone investing in or working for
         | the for-profit had to sign an operating agreement that
         | literally states the for-profit is entirely beholden to the
         | non-profit's charter and mission and that it is under no
         | obligation to be profitable. The board is specifically balanced
         | so that the majority is independent of the for-profit
         | subsidiary.
         | 
         | A lot of people seem to be under the impression that the intent
         | was for there to be significant representation of commercial
         | interests here, and that is the exact opposite of how all of
         | this is structured.
        
       | peter422 wrote:
       | I know everybody is going nuts about this, but just from my
       | personal perspective I've worked at a variety of companies with
       | "important" CEOs, and in every single one of those cases had the
       | CEO left I would not have cared at all.
       | 
       | The CEO always gets way too much credit externally for what the
       | company is doing; it does not mean the CEO is that important.
       | 
       | OpenAI might be different, I don't have any personal experience,
       | but I also am not going to assume that this is a complete
       | outlier.
        
         | itronitron wrote:
         | I worked at a startup where the first CEO, along with the VP of
         | Sales and their entire department, was ousted by the board on a
         | Tuesday.
         | 
         | I think it's likely that we're going to find out Sam and others
         | are just talented tech evangelists/hucksters, and that prospect
         | justifiably worries a lot of people currently operating in the
         | tech community.
        
           | za3faran wrote:
           | How did the company end up faring?
        
             | itronitron wrote:
             | sold to another company four years later, about a year
             | after I left
        
         | Draiken wrote:
         | Yeah this cult of CEOs is weird.
         | 
         | It's such a small cohort that when someone doesn't completely
         | blow it, they're immediately deemed a genius.
         | 
         | Give someone billions of dollars and hundreds of brilliant
         | engineers and researchers, and many will make it work. But
         | only a few ever get the chance, so this happens.
         | 
         | They don't do any of the work. They just take the credit.
        
           | hef19898 wrote:
           | My last gig was with one of those wannabe Elon Musks (what
           | wouldn't I give to get the wannabe Steve Jobses back).
           | Horrible. Ultimately he was ousted as CEO, only to be
           | allowed to stay on as some head of innovation, because he
           | and his founder buddies retained enough voting power to
           | first get him a lifetime position as head of the board for
           | his "achievements" and then prevent his firing. They also
           | vetoed, from what people told me, a juicy acquisition
           | offer, basically jeopardizing the future of the place.
           | Right after, a new CEO was recruited as the result of a
           | "lengthy and thoroughly planned process of transition".
           | Now the former CEO is back, and in charge, in fact if not
           | on paper, of the most crucial part of the product. Besides
           | getting said company to 800 people burning a sweet
           | billion, he didn't do anything else in his life, and that
           | company has yet to launch a product.
           | 
           | Sad thing is, if they find enough people to continue
           | investing, they will ultimately launch a product. Most
           | likely the early employees and founders will sell off
           | their shares, become instant three-figure millionaires and
           | be hailed as the true geniuses in their field... What an
           | utter shit show that was...
        
             | Draiken wrote:
             | The sad reality is that most top executives get there
             | because of connections or simply being in the right place
             | at the right time.
             | 
             | Funnily enough I also worked for a CEO that hit the lottery
             | with timing and became a millionaire. He then drank his own
             | kool-aid and thought he was some sort of Steve Jobs. Of
             | course he never managed to build anything afterwards. But
             | he kept making a shit ton of money, without a doubt.
             | 
             | After they get one position in that echelon, they can keep
             | failing upwards ad nauseam.
             | 
             | I don't get the cult part though. It's so easy to see
             | they're not even close to the geniuses they pretend to be.
             | Just look at the recent SBF debacle. It's pathetic how
             | folks fall for this.
        
             | fallingknife wrote:
             | > Besides getting said company to 800 people burning a
             | sweet billion, he didn't do anything else in his life
             | 
             | Getting a company to that size is a lot.
        
               | hef19898 wrote:
               | All you need is HR... I'm a cynic. He did get the
               | funding, which is _a lot_ (as an achievement and in
               | terms of money raised). He just started to believe he
               | was the genius not just at raising money, but also at
               | building product and organisation. He isn't and never
               | was. What struck me, though, is that even the adults
               | hired to replace him didn't have the courage to call
               | him out. Hence his comeback in function if not title.
               | 
               | Well, I'm happy to work with adults again, in a sane
               | environment with people that know their job. It was a
               | very, very useful experience, though, and I wouldn't
               | have missed it.
        
           | bsenftner wrote:
           | > Yeah this cult of CEOs is weird.
           | 
           | Now imagine the weekend for those fired and those who quit
           | OpenAI: you know they are talking together as a group, and
           | meeting with others offering them billions to make a pure
           | commercial new AI company.
           | 
           | An Oscar-worthy film could be made about this weekend.
        
           | FireBeyond wrote:
           | > It's such a small cohort that when someone doesn't
           | completely blow it, they're immediately deemed as geniuses.
           | 
           | And many times even when they do blow it, it's handwaved away
           | as being something outside of their control, so let's give
           | them another shot.
        
           | manicennui wrote:
           | A sizable portion of the HN bubble is wannabe grifters. They
           | look up to successful grifters.
        
           | duped wrote:
           | The primary job of an early stage tech CEO is to convince
           | people to give you those billions of dollars; one doesn't
           | come without the other.
        
             | Draiken wrote:
             | Which proves my point: treating someone as a genius
             | because they simply convinced people (whom they knew
             | through their connections) to hand over money is absurd.
        
               | austhrow743 wrote:
               | Convincing people is the ultimate trade. It can achieve
               | more than any other skill.
               | 
               | The idea that success at it shouldn't be grounds for the
               | genius label is absurd.
        
               | consp wrote:
               | > It can achieve more than any other skill.
               | 
               | And also destroy more. The line between is very thin and
               | littered with landmines.
        
               | Draiken wrote:
               | Depends on what we, as a society, want to value. Do we
               | want to value people with connections and luck, or people
               | that work for their achievements?
               | 
               | Of course it's not a boolean, it's a spectrum. But the
               | point remains: valuing lucky rich people with connections
               | as geniuses because they are lucky, rich and connected is
               | nonsensical to me.
        
         | dagmx wrote:
         | It often comes down to auteur theory.
         | 
         | Unless someone is truly well versed in the production of
         | something, they latch on to the most public facing aspect of
         | that production and the person at the highest level of
         | authority (to them, even though directors and CEOs often have
         | to answer to others as well).
         | 
         | That's not to say they don't have an outsized individual
         | effect, but it's rare that their greatness is solo.
        
           | bmitc wrote:
           | When you say director, do you mean film director or a
           | director in a company? Film directors are insane with the
           | amount of technical, artistic, and people knowledge that they
           | need to have and be able to utilize. The amount of stuff that
           | a film director needs to manage, all on the ground, is
           | insane. I wouldn't say that for CEOs, not by a long shot.
           | CEOs mainly sit in meetings where people report things to
           | them and the CEO provides very high-level guidance. That
           | is very different from a director's role.
           | 
           | I have often thought that we don't have enough information on
           | how film directors operate, as I feel it could yield a lot of
           | insight. There's probably a reason why many film directors
           | don't hit their stride until their late 30s and 40s,
           | presumably because it takes those one or two decades to
           | build the appropriate experience and knowledge.
        
             | Apocryphon wrote:
             | Would it be accurate to liken CEOs to film producers?
        
               | bmitc wrote:
               | No. I'm pretty sure that my comment describes why.
        
               | Apocryphon wrote:
               | > CEOs mainly sit in meetings with people reporting
               | things to them and then the CEO providing very high-level
               | guidance.
               | 
               | Isn't that essentially the job of a film producer? You do
               | see a lot of productions where there's a ton of executive
               | producer titles given out as almost a vanity position.
        
               | bmitc wrote:
               | A producer, yes, but not the film's director.
        
               | Apocryphon wrote:
               | My original post literally asks if it's more accurate to
               | compare CEOs with film producers and not directors.
        
               | bmitc wrote:
               | I misread it then with directors instead of producers.
               | Apologies for that confusion.
        
               | jacquesm wrote:
               | Interesting. Intuitively no. But then, hm... maybe. There
               | are some aspects that ring true but many others that
               | don't, I think it is a provocative question and there
               | probably is more than a grain of truth in it. The biggest
               | difference to me is that the producer (normally) doesn't
               | appear 'in camera' but the CEO usually is one of the most
               | in camera. But when you start comparing the CEO with a
               | lead actor who is _also_ the producer of the movie it
               | gets closer.
               | 
               | https://en.wikipedia.org/wiki/Zachary_Quinto
               | 
               | Is pretty close to that image.
        
         | iamflimflam1 wrote:
         | I think the problem is, this is not just about dumping the CEO.
         | It's signalling a very clear shift away from where OpenAI was
         | heading - which seemed to be very focussed on letting people
         | build on top of the technology.
         | 
         | The worry now is that the approach is going to be more of
         | controlling access to just researchers who are trusted to be
         | "safe".
        
           | antonioevans wrote:
           | I agree with this. What about the GPTs Store? Are they
           | planning on killing that? It's just concerning that they'll
           | kill the platform until AGI comes out.
        
         | fsociety wrote:
         | On the other hand, I have seen an executive step away from a
         | large company and then everything coincidentally goes to shit.
         | It's hard to measure the effectiveness of an executive.
        
         | swatcoder wrote:
         | A deal-making CEO who can carry rapport with the right people,
         | make clever deals, and earn public trust can genuinely make a
         | huge difference to a profit-seeking product company's
         | trajectory.
         | 
         | But when your profit-seeking company is owned by a non-profit
         | with a public mission, that trajectory might end up pointed the
         | wrong way. The Dev Day announcements, and especially the
         | marketplace, can be seen as suggesting that's exactly what was
         | happening at OpenAI.
         | 
         | I don't think everyone there wants them to be selling cool LLM
         | toys, especially not on a "move fast and break things" approach
         | and with an ecosystem of startup hackers operationalizing it.
         | (Wisely or not) I think they want to be shepherding responsible
         | AGI before someone else does so irresponsibly.
        
           | jddj wrote:
           | This is where I've ended up as well for now.
           | 
           | I'm as distant from it all as anyone else, but I can easily
           | believe the narrative that Ilya (et al.) didn't sign up there
           | just to run through a tired page from the tech playbook where
           | they make a better Amazon Alexa with an app store and gift
           | cards and probably Black Friday sales.
        
             | gardenhedge wrote:
             | And that is fine.. but why the immediate firing, why the
             | controversy?
        
               | justaj wrote:
               | My guess is that if Sam had found this out before being
               | fired, he would have done his best not to be fired.
               | 
               | As such, it would have been much more of a challenge to
               | shift OpenAI's supposed over-focus on commerce towards a
               | supposed non-profit focus.
        
               | eganist wrote:
               | The immediate firing is from our perspective. Who's to
               | say everything else wasn't already tried in private?
        
               | jacquesm wrote:
               | That may be so but then they should have done it well
               | before Altman's last week at OpenAI where they allowed
               | him to become that much more tied to their brand as the
               | 'face' of the operation.
        
               | eganist wrote:
               | For all we know, the dev day announcements were the final
               | straw and trigger for the decision that was probably
               | months in the making.
               | 
               | He was already the brand, and there likely wouldn't have
               | been a convenient time to remove him from their
               | perspective.
        
               | jacquesm wrote:
               | That may well be true. But that would prove that the
               | board was out of touch with what the company was doing.
               | If the board sees anything new on 'dev day' that means
               | they haven't been doing their job in the first place.
        
               | cthalupa wrote:
               | Unless seeing something new on dev day is exactly what
               | they meant by Altman not being consistently candid.
        
               | jacquesm wrote:
               | If Altman was doing something that ran directly against
               | the mission of OpenAI in a way that all of the other
               | stuff that OpenAI has been doing so far did not then I
               | haven't seen it. OpenAI has been off-script for a long
               | time now (compared to what they originally said) and
               | outwardly it seemed the board was A-Ok with that.
               | 
               | Now we either see a belated - and somewhat erratic -
               | response to all that went before _or_ there is some
               | smoking gun. If there isn't, they have just done
               | themselves an immense disservice. Maybe they think they
               | can live without donations now that the commercial ball
               | is rolling downhill fast enough but that only works if
               | you don't damage your brand.
        
               | eganist wrote:
               | > then I haven't seen it
               | 
               | Unless I'm missing something, this stands to reason if
               | you don't work there.
               | 
               | Kinda like how none of us are privy to anything else
               | going on inside the company. We're all speculating in the
               | end, and it's healthy to have an open mind about what's
               | going on without preconceived notions.
        
           | cmrdporcupine wrote:
           | I agree with what you've written here but would add the
           | caveat that it's also rather terrible to be in a position
           | where somehow "shepherding responsible AGI" is falling to
           | these self-appointed arbiters. They strike me as woefully
           | biased and ideological and _I do not trust them_. While I
           | trust Altman even less, there's nothing I've read about
           | Sutskever that makes me think I want him, or the people
           | who think like him, having this kind of power.
           | 
           | But this is where we've come to as a society. I don't think
           | it's a good place.
        
             | nick222226 wrote:
             | I mean, aren't they self appointed because they got there
             | first?
        
               | cmrdporcupine wrote:
               | No. Knew the right people, had the right funds, and said
               | and did and thought the things compatible with getting
               | investment from people with even more influence than
               | them.
               | 
               | Unless you're saying my only option is to pick and choose
               | between different sets of people like that?
        
               | 38321003thrw wrote:
               | There is a political-economy as well as a technical
               | aspect to this, each presenting inherent issues. Even if we
               | can address the former by say regime change, the latter
               | issue remains: the domain is technical and cognitively
               | demanding. Thus the practitioners will generally sound
               | sane and rational (they are smart people but that is no
               | guarantee of anything other than technical abilities) and
               | non-technical policy types (like most of the remaining
               | board members at openAI) are practically compelled to
               | take policy positions based either on 'abstract models'
               | (which may be incorrect) or as after the fact reaction to
               | observation of the mechanisms (which may be too late).
               | 
               | The thought occurs that it is quite possible that just
               | like humanity is really not ready (we remain concerned)
               | to live with WMD technologies, it is possible that we
               | have again stumbled on another technology that taxes our
               | ethical, moral, educational, political, and economic
               | understanding. We would be far less concerned if we were
               | part of a civilization of generally thoughtful and
               | responsible specimens but we're not. This is a cynical
               | appraisal of the situation, I realize, but tldr is "it is
               | a systemic problem".
        
               | cmrdporcupine wrote:
               | In the end my concern comes down to that those who rise
               | to power in our society are those who are best at playing
               | the capitalist game. That's mostly, I guess, fine if what
               | they're doing is being most efficient making cars or
               | phones or grocery store chains or whatever.
               | 
               | Making intelligent machines? Colour me disturbed.
               | 
               | Let me ask you this re: _" the domain is technical and
               | cognitively demanding"_ -- do you think Sam Altman (or a
               | Steve Jobs, Peter Thiel, etc.) would pass a software
               | engineer technical interview at e.g. Google? (Not saying
               | those interviews are perfect, they suck, but we'll use
               | that as a gatekeeper for now.) I'm betting the answer is
               | quite strongly "no."
               | 
               | So the selection criterion here is not the ability to
               | perform _technically_. Unless we're redefining
               | technical. Which leaves us with "intellectually
               | demanding" and "smart", which, well, frankly also applies
               | to lawyers, politicians, etc.
               | 
               | My worry is right now that the farther you go up at any
               | of these organizations, the more the kind of intelligence
               | and skills trends towards the "is good at manipulating
               | and convincing others" kind of spectrum vs the "is good
               | at manipulating and convincing machines" kind of
               | spectrum. And it is into the former that we're
               | concentrating more and more power.
               | 
               | (All that said, it does seem like Sutskever would
               | definitely pass said interview, and he's likely much
               | smarter than I am. But I remain unconvinced that that kind
               | of smarts is the kind of smarts that should be making
               | governance-of-humanity decisions.)
               | 
               | As terrible as politicians and various "abstract model"
               | applying folks might be, at least they are nominally
               | subject to being voted out of power.
               | 
               | Democracy isn't a great system for producing
               | _excellence_.
               | 
               | But as a citizen I'll take it over a "meritocracy" which
               | is almost always run by _bullshitters_.
               | 
               | What we need is accountability and legitimacy, and the
               | only way we've found to produce those at the scale of a
               | mass society is through democratic institutions.
        
             | gedy wrote:
             | I think what's silly about "shepherding responsible AGI" is
             | that this is basically math; it's not some genie that can be
             | kept hidden or behind some Manhattan Project level of
             | effort. Pandora's box is open, and the best we can do is
             | make sure it's not locked up behind some corporation or
             | gov't.
        
               | cmrdporcupine wrote:
               | I mean, that's clearly not really true; there's a huge
               | _"means of production"_ aspect to this which comes down
               | to being able to afford the datacenter infrastructure.
               | 
               | The cost of the computing machinery and the energy costs
               | to run it are actually _massive_.
        
               | zozbot234 wrote:
               | Yup it's quite literally the world's most expensive
               | parrot. (Mind you, a plain old parrot is not cheap
               | either. But OpenAI is a whole other order of magnitude.)
        
               | svaha1728 wrote:
               | Parrots may live 50 years. H100s probably won't last half
               | that long.
        
               | gedy wrote:
               | Sure but I meant the costs are feasible for many
               | companies, hence competition. That was very different
               | from the barriers to nuclear weapons development.
        
               | xvector wrote:
               | Are you sure this is the case? Tens of billions of
               | dollars invested, yet a whole year later no one has a
               | model that even comes close to GPT-3.5 - let alone GPT-4
               | Turbo.
        
               | krisoft wrote:
               | > yet a whole year later no one has a model that even
               | comes close to GPT-3.5 - let alone GPT-4 Turbo
               | 
               | Is that true and settled? I only have my anecdotal
               | experience, but from that it is not clear that GPT-3.5
               | is better than Google's Bard, for example.
        
           | abraae wrote:
           | > I think they want to be shepherding responsible AGI before
           | someone else does so irresponsibly.
           | 
           | Is this a thing? This would be like Switzerland in WWII doing
           | nuclear weapons research to try and get there before the
           | Nazis.
           | 
           | Would that make any difference whatsoever to the Nazis'
           | timeframe? No.
           | 
           | I fail to see how the presence of "ethical" AI researchers
           | would slow down in the slightest the bad actors who are
           | certainly out there.
        
             | FormerBandmate wrote:
             | America did nuclear weapons research to get there before
             | the Nazis and Japan, and we were able to use the resulting
             | weapons to stop Japan.
        
               | vanrysss wrote:
               | So the first AGI is going to be used to kill other AGIs
               | in the cradle ?
        
               | macintux wrote:
               | Which reminds me, I really need to finish _Person of
               | Interest_ someday.
        
               | kbenson wrote:
               | Or contain, or counter, or be used as a deterrent. At
               | least, I think that's the idea being espoused here (in
               | general, if not in the GP comment).
               | 
               | I think U.S. vs. Japan is not necessarily the right
               | model to be thinking of here, but U.S. vs. U.S.S.R.,
               | where we'd like to believe that neither nation would
               | actually launch against the other, but both having the
               | weapon meant neither could without risking severe damage
               | in response, making it a losing proposition.
               | 
               | That said, I'm sure anyone with an AGI in their pocket/on
               | their side will attempt to use it as a big stick against
               | those that don't have one, in the Teddy Roosevelt sense.
        
               | T-A wrote:
               | The scenario usually bandied about is AGI self-improving
               | at an accelerating rate: once you cross the threshold to
               | self-improvement, you quickly get superintelligence with
               | God-like powers beyond human comprehension (a.k.a. the
               | Singularity) as AGI v1 creates a faster AGI v2 which
               | creates a faster AGI v3 etc.
               | 
               | Any AI researchers still plodding along at mere human
               | speed are then doomed: they won't be able to catch up
               | even if they manage to reproduce the original
               | breakthrough, since the head start enjoyed by AGI #1
               | guarantees that its latest iteration is always further
               | along the exponential self-improvement curve and
               | therefore superior to any would-be competitor. Being
               | rational(ists), they give up and welcome their new AI
               | overlord.
               | 
               | And if not, the AI god will surely make them see the
               | error of their ways.
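               | 
               | A minimal sketch of that takeoff argument (all growth
               | numbers hypothetical, purely to illustrate the
               | geometry): if both AGIs improve by the same factor per
               | cycle, the leader's head start is never closed.
               | 
               |     # leader starts one doubling ahead of the rival
               |     leader, rival = 2.0, 1.0
               |     for cycle in range(10):
               |         leader *= 2  # AGI v(n+1) is twice AGI v(n)
               |         rival *= 2
               |         print(cycle, leader / rival)  # ratio stays 2.0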
        
               | username332211 wrote:
               | I think that was part of the LessWrong eschatology.
               | 
               | It doesn't make sense with modern AI, where improvement
               | (be it learning or model expansion) is separated from
               | its normal operation, but I guess some beliefs can
               | persevere very well.
        
               | mitthrowaway2 wrote:
               | Modern AI also isn't AGI. We seem to get a revolution at
               | the frontier every 5 years or so; it's unlikely the
               | current LLM transformer architecture will remain the
               | state of the art for even a decade. Eventually something
               | more capable will become the new modern.
        
               | mikrl wrote:
               | Has the US ever stated or followed a policy of neutrality
               | and openness?
               | 
               | OpenAI positioned itself like that, much the same way
               | Switzerland does in global politics.
        
               | dragonwriter wrote:
               | > Has the US ever stated or followed a policy of
               | neutrality
               | 
               | Yes, most of the time from the founding until the First
               | World War.
               | 
               | > and openness?
               | 
               | Not sure what sense of "openness" is relevant here.
        
               | swatcoder wrote:
               | Not at all. Prior to WWI, the US was aggressively and
               | intentionally cleaning European interests out of the
               | Western hemisphere. It was in frequent wars, often with
               | one European power or another. It just didn't distract
               | itself _too much_ with squabbles between European powers
               | over matters outside its claimed dominion.
               | 
               | Establishing a hemispheric sphere of influence was no act
               | of neutrality.
        
               | mikrl wrote:
               | > Not sure what sense of "openness" is relevant
               | 
               | It is in the name OpenAI... not that I think the Swiss
               | are especially transparent, but neither are the USA.
        
               | Jare wrote:
               | Openness sure, but neutrality? I thought they had always
               | been very explicitly positioned on the "ethical AGI"
               | side.
        
             | alienbeast wrote:
             | Having nukes protects you from other nuclear powers through
             | mutually-assured destruction. I'm not sure whether that
             | principle applies to AGI, though.
        
             | quickthrower2 wrote:
             | They can't stop another country developing AI they are not
             | fond of.
             | 
             | They can use their position to lobby their own government
             | and maybe other governments to introduce laws to govern AI.
        
           | nradov wrote:
           | There is no particular reason to expect that OpenAI will be
           | the first to build a true AGI, responsible or otherwise. So
           | far they haven't made any demonstrable progress towards that
           | goal. ChatGPT is an amazing accomplishment and very useful,
           | but probably tangential to the ultimate goal. When a real AGI
           | is eventually built it may be the result of a breakthrough
           | from some totally unexpected source.
        
           | kelipso wrote:
           | I'm guessing Altman had a bunch of experienced ML researchers
           | writing CRUD apps and LLM toys instead of doing actual AI
           | research, and they weren't too happy. Personally I would be
           | pissed as a researcher if the company took a turn and started
           | in on LLMs for improved marketing blurbs or whatever.
        
             | Solvency wrote:
             | If every jackass brain-dead move Elon Musk has ever made
             | hasn't gotten him fired yet, then allocating too many teams
             | to side projects instead of AI research should not be a
             | fireable offense.
        
               | Mountain_Skies wrote:
               | Musk was fired as CEO of X/PayPal.
        
               | dragonwriter wrote:
               | Fired as the CEO of X twice, the last time right before
               | it became PayPal.
        
           | jes5199 wrote:
           | okay but I personally do want new LLM toys. who is going to
           | provide them, now?
        
             | quickthrower2 wrote:
             | Various camelid inspired models and open source code.
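             | 
             | For instance, a minimal local sketch using the llama-cpp-
             | python bindings (the model path is hypothetical; any
             | llama-family GGUF checkpoint should work):
             | 
             |     from llama_cpp import Llama  # pip install llama-cpp-python
             | 
             |     # hypothetical path to a llama-family checkpoint
             |     llm = Llama(model_path="./llama-2-7b-chat.gguf")
             |     out = llm("Q: Name a camelid. A:", max_tokens=16)
             |     print(out["choices"][0]["text"])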
        
         | huytersd wrote:
         | You may be right in many cases but if you think that's true in
         | all cases, you're a low level pleb that can't see past his own
         | nose.
        
         | te_chris wrote:
         | There's a whole business book about this, _Good to Great_,
         | where a key facet of companies that have managed to go from
         | average to excellent over a sustained period of time is
         | servant-leader CEOs.
        
         | victor9000 wrote:
         | This was done in the context of Dev Day. Meaning that the board
         | was convinced by Ilya that users should not have access to this
         | level of functionality. Or perhaps he was more concerned that
         | he was not able to gatekeep its release. So presumably it was
         | Altman who pushed for releasing this technology to the general
         | public. If this is accurate then this shift in control is bound
         | to slow down feature delivery and create a window for
         | competitors.
        
         | fullshark wrote:
         | It doesn't matter in the short term (usually). Then you look in
         | 2-4 years and you see the collective impact of countless
         | decisions and realize how important they are.
         | 
         | In this case, tons of people already have resigned from OpenAI.
         | Sam Altman seems very likely to start a rival company. This is
         | a huge decision and will have massive consequences for the
         | company and their product area.
        
         | goldinfra wrote:
         | It's completely ignorant to discount all organizational leaders
         | based on your extremely limited personal experience. Thousands
         | of years of history proves the difference between successful
         | leaders and unsuccessful leaders.
         | 
         | Sam Altman has been an _objectively_ successful leader of
         | OpenAI.
         | 
         | Everyone has their flaws, and I'm more of a Sam Altman hater
         | than a fan, but even I have to admit he led OpenAI to great
         | success. He didn't do most of the actual work but he did create
         | the company and he did lead it to where it is today.
         | 
         | Personally, if I had stock in OpenAI I'd be selling it right
         | now. The odds of someone else doing as good a job are low, and
         | the odds of him out-competing OpenAI are high.
        
           | cthalupa wrote:
           | > Sam Altman has been an objectively successful leader of
           | OpenAI.
           | 
           | I'm not sure this is actually the case, even ignoring the
           | non-profit charter and the for-profit being beholden to it.
           | 
           | We know that OpenAI has been the talk of the town, we know
           | that there is quite a bit of revenue, and that Microsoft
           | invested heavily. What we don't know is if the strategy being
           | pursued ever had any chance of being profitable.
           | 
           | Decades-long runways, with the hope that profitability
           | eventually comes at a level that makes all the investment
           | worth it, are a pretty common operating strategy for the
           | type of company Altman has worked with and invested in, but
           | it is less clear to me that this is viable for this sort of
           | setup, or perhaps at all - money isn't nearly as cheap as it
           | was a decade ago.
           | 
           | What makes a for-profit startup successful isn't necessarily
           | what makes a for-profit LLC with an operating agreement that
           | makes it beholden to the charter of a non-profit parent
           | organization successful.
        
       | FergusArgyll wrote:
       | If the firing was because of a difference in "vision", then it
       | doesn't really matter if Altman was key to making OpenAI so
       | successful. Sutskever and co. don't want it to be successful (by
       | market standards at least). If they get their way (past MSFT and
       | others) then OpenAI will no longer be the cutting edge.
       | 
       | Buy GOOGL?
        
         | layer8 wrote:
         | It seems you are saying that anything that doesn't put profit
         | first can't be successful.
        
           | cwillu wrote:
           | By market standards. There will be no end to intended and
           | unintended equivocation about this over the coming days.
        
           | YetAnotherNick wrote:
           | OpenAI's median engineer salary is $900k. So yeah, AI
           | companies need money to be successful. Now if there is any
           | way to generate billions of dollars per year long term
           | without any profit objective, I would be happy to hear it.
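           | 
           | Back-of-envelope on that burn (the headcount is a
           | hypothetical figure; only the $900k median is from above):
           | 
           |     median_comp = 900_000   # quoted median engineer comp
           |     headcount = 750         # hypothetical employee count
           |     print(median_comp * headcount)  # 675_000_000, ~$0.7B/yr
           |     # payroll alone approaches a billion a year, before any
           |     # compute costs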
        
           | fallingknife wrote:
           | "Can't" is a strong word, but a company that does will have
           | more resources and likely outcompete it.
        
       | wolverine876 wrote:
       | In the past, many on HN complained that OpenAI had abandoned its
       | public good mission and had morphed into a pseudo-private for-
       | profit. If that was your feeling before, what do you think now?
       | Are you relieved or excited? Are you the dog who caught the car?
       | 
       | At this point, on day 2, I am heartened that their mission came
       | first, even at the heart of maybe the most important technology
       | ever, or at least since nuclear power or writing or democracy.
       | I'm heartened at the board's courage - certainly they
       | could anticipate the blowback. This change could transform the
       | outcome for humanity and the board's job was that stewardship,
       | not Altman's career (many people in SV have lost their jobs), not
       | OpenAI's sales numbers. They should be fine with the overwhelming
       | volume of investment available to them.
       | 
       | Another way to look at it: How could this be wrong, given that
       | their objective was not profit, and they can raise money easily
       | with or without Altman?
       | 
       | On day 3 or day 30 or day 3,000, I'll of course come at it from a
       | different outlook.
        
         | itronitron wrote:
         | It's a lesson to any investor that doesn't have a seat on the
         | board, what goes around comes around, ha ha :}
        
         | orbital-decay wrote:
         | If the rumors are correct and ideological disagreement was at
         | the core of this, OpenAI is not going to be open anyway, as
         | Sutskever wants more safety, which implies being as closed as
         | possible. Whether it's "public good" is in the eye of the
         | beholder, as there are multiple mutually incompatible concerns
         | about AI safety, all of which have merit. The future balance
         | between those will be determined by unpredictable events, as
         | always.
        
           | wolverine876 wrote:
           | > Whether it's "public good" is in the eye of the beholder
           | 
           | That's too easy an answer, used to dismiss difficult
           | questions and embrace amorality. There is public good,
           | sometimes easy to define and sometimes hard. If ChatGPT is
           | used to cure cancer, that would be a public good. If it's
           | used to create a new disease that millions, that's obviously
           | bad. Obviously, some questions are harder than that, but it
           | doesn't excuse us from answering them and getting it right.
        
             | orbital-decay wrote:
             | The issue with giving everyone open access to uncontrolled
             | everything is obvious, it does have merit indeed. The
             | terrible example of unrestricted social media as
             | "information superconductor" is alive and breathing,
             | supposedly it led to at least one actual physical genocide
             | within the last decade. The question that is less obvious
             | to some is: do these safety concerns ultimately lead us
             | into the future controlled by a few, who will then
             | inevitably exploit everyone to a much worse effect? That
             | it's already more or less the status quo is not an excuse;
             | it needs to be discussed and not dismissed blindly.
             | 
             | It's a very political question, and HN somewhat despises
             | politics. But OpenAI is not an apolitical company either,
             | they are ideologically driven and have the AGI (defined as
             | "capable of replacing humans in economically important
             | jobs) as their stated target. Your distant ancestors
             | (assuming they were from Europe) were able to escape the
             | totalitarianism and feudalism, starting from the Middle
             | Ages, when the margins were _mile-wide_ compared to what we
             | have now. AI controlled by a few is way more efficient and
             | optimized; will you even have a chance before your entire
             | way of thinking is turned to the desired direction?
             | 
             | I'm from a country that lives in your possible future
             | (Russia), I've seen a remarkably similar process from the
             | inside, so this question seems very natural to me.
        
         | layer8 wrote:
         | Much of the criticism was that they are not open enough. I see
         | no indication that this will be changing, given the AI safety
         | concerns of the remaining board.
         | 
         | Nevertheless, I agree that the firing was probably in line with
         | their stated mission.
        
         | tfehring wrote:
         | I think it was a good thing that, in hindsight, the leading AI
         | research company had a strong enough safety focus that it could
         | do something like this. But that's only the case as long as
         | OpenAI remains the leading AI research company going forward,
         | and after yesterday's events I think that's unlikely. Pushing
         | for more incremental changes at OpenAI, possibly by getting the
         | board to enact stronger safety governance, would have been a
         | better outcome for everyone.
        
         | s1artibartfast wrote:
         | >OpenAI had abandoned its public good mission and had morphed
         | into a psuedo-private for-profit.
         | 
         | >They should be fine with the overwhelming volume of investment
         | available to them.
         | 
         | >Another way to look at it: How could this be wrong, given that
         | their objective was not profit, and they can raise money easily
         | with or without Altman?
         | 
         | This wasn't _just_ some cultural shift. The board of OpenAI
         | created a separate for-profit legal entity in 2019. The for-
         | profit legal entity received overwhelming investment from
         | Microsoft to make money. Microsoft, early investors, and
         | employees all have a stake and want returns from this for-
         | profit company.
         | 
         | The separate non-profit OpenAI has a major problem on its hands
         | if it thinks its goals are no longer aligned with the co-owners
         | of the for-profit company.
        
           | cthalupa wrote:
           | The thing here is that the structure of these companies and
           | the operating agreement for the for-profit LLC all
           | effectively mean that everyone is warned going in that the
           | for-profit is beholden to the mission of the non-profit,
           | that there might be zero return on investment, and that
           | there may never be profit at all.
           | 
           | The board answers to the charter and is legally obligated
           | to act in the interest of the mission outlined in the
           | charter. Their charter says "OpenAI's mission is to ensure
           | that artificial general intelligence (AGI) [...] benefits all
           | of humanity" - not to do that "unless it'd make more money
           | for our for-profit subsidiary to focus on commercializing
           | GPT".
        
         | deeviant wrote:
         | You seem super optimistic that backstabbing power-plays will
         | result in improvement.
         | 
         | I see it as far more likely that OpenAI will lock down its tech
         | even more, in the name of "safety", but I also predict it will
         | always be possible to pay for their services nevertheless.
         | 
         | Nothing in this situation makes me think OpenAI will be any
         | more "open."
        
       | chancancode wrote:
       | Jeremy Howard (of fast.ai):
       | https://x.com/jeremyphoward/status/1725712220955586899
       | 
       | He is not exactly an insider, but seems broadly
       | aligned/sympathetic/well-connected with the Ilya/researchers
       | faction, his tweet/perspective was a useful proxy into what that
       | split may have felt like internally.
        
         | gmt2027 wrote:
         | https://nitter.net/jeremyphoward/status/1725712220955586899
        
         | rafaelero wrote:
         | Such a bad take. Developers (me included) loved Dev Day.
        
           | whoknowsidont wrote:
           | What did you love about it?
        
             | rafaelero wrote:
             | Cheaper, faster and longer context window would be enough
             | of an advancement for me. But then we also had the
             | Assistant API that makes our lives as AI devs much easier.
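             | 
             | For reference, roughly what the then-new Assistants API
             | flow looked like (a sketch assuming the openai-python v1
             | beta namespace; names and strings are illustrative):
             | 
             |     from openai import OpenAI
             | 
             |     client = OpenAI()  # reads OPENAI_API_KEY from the env
             | 
             |     # a persistent assistant with its own instructions
             |     assistant = client.beta.assistants.create(
             |         name="doc-helper",
             |         instructions="Answer briefly.",
             |         model="gpt-4-1106-preview",  # Dev Day GPT-4 Turbo
             |     )
             | 
             |     # threads hold conversation state server-side, so the
             |     # long context window is managed for you
             |     thread = client.beta.threads.create()
             |     client.beta.threads.messages.create(
             |         thread_id=thread.id, role="user",
             |         content="What changed on Dev Day?",
             |     )
             |     run = client.beta.threads.runs.create(
             |         thread_id=thread.id, assistant_id=assistant.id,
             |     )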
        
               | victor9000 wrote:
               | Seriously, the longer context window is absolutely
               | amazing for opening up new use-cases. If anything, this
               | shows how disconnected the board is from its user base.
        
           | marcinzm wrote:
           | He didn't say developers, he said researchers.
        
             | rafaelero wrote:
             | He said in his opinion Dev Day was an "absolute
             | embarrassment".
        
               | marcinzm wrote:
               | And his second tweet explained what he meant by that.
        
           | campbel wrote:
           | Pretty insightful I thought. The people who joined to create
           | AGI are going to be underwhelmed by the products made
           | available on dev day.
        
             | LightMachine wrote:
             | I was underwhelmed, but I got -20 upvotes on Reddit for
             | pointing it out. Yes, products are cool, but I'm not
             | following OpenAI for another App Store; I'm following it
             | for AGI. They should be directing all resources to that. As
             | Sam said himself: once it is there, it will pay for itself.
             | Settling for products around GPT-4 just passes the message
             | that the curve has stagnated and we aren't getting more
             | impressive capabilities. Which is saddening.
        
           | eightysixfour wrote:
           | Yeah - I think this is the schism. Sam is clearly a product
           | person, these are AI people. Dev day didn't meaningfully move
           | the needle on AI, but for people building products it sure
           | did.
        
             | belugacat wrote:
             | Thinking you can develop AGI - if such a thing actually can
             | exist - in an academic vacuum, and not by having your AI
             | rubber meet the road through a plethora of real world
             | business use cases strikes me as extreme hubris.
             | 
             | ... I guess that makes me a product person?
        
               | threeseed wrote:
               | Or the obvious point that if you're not interested in
               | business use cases, then where are you going to get the
               | money for the increasingly exorbitant training costs?
        
               | DebtDeflation wrote:
               | Exactly this. Where do these guys think the money to pay
               | their salaries let alone fund the vast GPU farm they have
               | access to comes from?
        
             | rafaelero wrote:
             | The fact that this is a schism is already weird. Why do
             | they care how the company transforms the technology coming
             | from the lab into products? It's what pays their salaries
             | at the end of the day and, as long as they can keep doing
             | their research work, it doesn't affect them. Being
             | resentful about a thing like this to the point of calling
             | it an "absolute embarrassment" when it clearly wasn't is
             | childish to say the least.
        
               | sseagull wrote:
               | > as long as they can keep doing their research work, it
               | doesn't affect them
               | 
               | That's a big question. Once stuff starts going
               | "commercial" incentives can change fairly quickly.
               | 
               | If you want to do interesting research, but the money
               | wants you to figure out how AI can help sell shoes, well
               | guess which is going to win in the end - the one signing
               | your paycheck.
        
               | rafaelero wrote:
               | > Once stuff starts going "commercial" incentives can
               | change fairly quickly.
               | 
               | Not in this field. In AI, whoever has the most
               | intelligent model is the one that is going to dominate
               | the market. No company can afford not investing heavily
               | in research.
        
           | chancancode wrote:
           | I think you are missing the point, this is offered for
           | perspective, not as a "take".
           | 
           | I find this tweet insightful because it offered a perspective
           | that I (and it seems like you also) don't have which is
           | helpful in comprehending the situation.
           | 
           | As a developer, I am not particularly invested nor excited by
           | the announcements but I thought they were fine. I think
           | things may be a bit overhyped but I also enjoyed their
           | products for what they are as a consumer and subscriber.
           | 
           | With that said, to me, from the outside, things seemed to be
           | going fine, maybe even great, over there. So while I
           | understand the words in the reporting ("it's a disagreement
           | in direction"), I think I lack the perspective to actually
           | understand what that entails, and I thought this was an
           | insightful viewpoint to fill in the perspectives that I
           | didn't have.
           | 
           | The way this was handled still felt iffy to me, but with the
           | perspective I can at least imagine what may have driven
           | people to want to take such drastic actions in the first place.
        
       | iamflimflam1 wrote:
       | Everyone I speak to who has been building on top of OpenAI -
       | and I don't mean just stupid chat apps - feels like the rug has
       | just been pulled out from under them.
       | 
       | If as it seems, dev day was the last straw, what does that say to
       | all the devs?
        
         | cwillu wrote:
         | Company with an unusual corporate structure designed
         | specifically to be able to enforce an unpopular worldview,
         | enforced that unpopular worldview.
         | 
         | I get that people feel disappointed, but I can't help but feel
         | like those people were maybe being a bit wilfully blind to the
         | parts of the company that they didn't understand/believe-
         | in/believe-were-meant-seriously.
        
           | iamflimflam1 wrote:
           | It feels like they've had plenty of time to reset the
           | direction of the company if they thought it was going wrong.
           | 
           | Allowing it to go so far off course feels like they've really
           | dropped the ball.
        
             | cwillu wrote:
             | I think that's where the "not consistently candid in his
             | communications with the board, hindering its ability to
             | exercise its responsibilities" comes in.
        
         | Espressosaurus wrote:
         | It's almost like by wrapping someone else's service, you are at
         | their mercy.
         | 
         | So you better be planning an exit strategy in case something
         | changes slowly or quickly.
         | 
         | Nothing new here.
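         | 
         | One way to plan that exit: keep app code behind a thin
         | provider-agnostic seam so the backend can be swapped (a
         | sketch; the class and method names are made up):
         | 
         |     from typing import Protocol
         | 
         |     class TextModel(Protocol):
         |         def complete(self, prompt: str) -> str: ...
         | 
         |     class OpenAIBackend:
         |         def complete(self, prompt: str) -> str:
         |             ...  # call OpenAI's API here
         | 
         |     class LocalBackend:
         |         def complete(self, prompt: str) -> str:
         |             ...  # call a self-hosted model here
         | 
         |     def summarize(doc: str, model: TextModel) -> str:
         |         # app code depends only on the seam, never on a
         |         # specific vendor SDK
         |         return model.complete("Summarize: " + doc)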
        
         | chasd00 wrote:
         | I work in consulting; the genAI hype machine is reaching
         | absurdity in my firm. I can't wait until Monday :)
        
       | pknerd wrote:
       | Probably off topic, but someone on Reddit shared screenshots of
       | his discussion with ChatGPT in OpenAI's chat interface, in which
       | it claims that AGI status was achieved a long time back. You can
       | still go and read the entire series of screenshots.
        
       | Animats wrote:
       | Huh. So that mixed nonprofit/profit structure came back to bite
       | them.
        
         | hughesjj wrote:
         | Bite who?
        
       | davesque wrote:
       | If this was an ideological battle of some kind, the only hope I
       | have is that OpenAI will now be truly more Open! However, if this
       | was motivated by safety concerns, that would mean OpenAI would
       | probably become more closed. And, if the only thing that really
       | distinguishes OpenAI from its competition is its so-called data
       | moat, then slowing down for the sake of safety will only give
       | competitors time to catch up. Those competitors include companies
       | in China who are undoubtedly much less concerned about safety.
        
       | tdeck wrote:
       | Is anyone else suspicious of who these "insiders" are and what
       | their motive is? I notice the only concrete piece of information
       | we might get (what was Altman not "candid" about?) is simply
       | dismissed as a "power struggle" without any real detail. This is
       | an incomplete narrative that serves one person's image.
        
       | 1970-01-01 wrote:
       | Did ChatGPT suggest a big surprise?
        
       | g42gregory wrote:
       | My feeling is that the commercial side of the OpenAI brand is
       | gone. How could OpenAI customers depend on the company, when the
       | non-profit board goes against their interests (by slowing down
       | development and giving them an inferior product)?
       | 
       | On the other hand, the AGI side of the OpenAI brand is just fine.
       | They will continue the responsible AGI development, spearheaded
       | by Ilya Sutskever. My best wishes for them to succeed.
       | 
       | I suspect Microsoft will be filing a few lawsuits and sabotaging
       | OpenAI internally. It's an almost $3Tn company and they have an
       | army of lawyers. They can do a lot of damage, especially when
       | there may not be much sympathy for OpenAI in Silicon Valley's VC
       | circles.
        
         | TylerE wrote:
         | I wonder if this represents a shift away from the LLM being the
         | headline product. Their competitors are rapidly catching up in
         | that space.
        
         | croes wrote:
         | It's a bad idea to make yourself dependent on a new service
         | from the outset.
         | 
         | They could have gone bankrupt, been sued into the ground, taken
         | over by Microsoft...
         | 
         | Just look at the fallout, just because they fired their CEO.
         | 
         | Was the success based on GPT or on the CEO?
         | 
         | The former is still there and didn't get inferior.
         | 
         | Slower growth doesn't mean shrinking.
        
           | g42gregory wrote:
           | As an AI professional, I am very interested to hear about
           | OpenAI's ideas, directions, safety programs, etc...
           | 
           | As a commercial customer, the only thing I am interested in
           | is the quality of the commercial product they provide to me.
           | Will they have my interests in mind going forward? Will they
           | devote all their energy in delivering the best, most advanced
           | product to me? Will robust support and availability be there
           | in the future? Given the board's publicly stated priorities
           | (which I was not aware of before!), I am not so sure anymore.
        
             | croes wrote:
             | >Will they have my interests in mind going forward? Will
             | they devote all their energy in delivering the best, most
             | advanced product to me?
             | 
             | Sorry to burst your bubble, but the primary motivation of a
             | for-profit company is ... profit.
             | 
             | If they can make more money by screwing you, they will.
             | Amazon, Google, Walmart, Microsoft, Oracle, etc.
             | 
             | The customer is never a priority, just a means to an end.
        
               | g42gregory wrote:
               | Absolutely. I totally agree with the sentiment. But, at
               | least make an effort to pretend that you care! Give me
               | something... OpenAI does not even pretend anymore. :-)
               | The board was pretty clear. That's not a good sign for
               | the customers.
        
         | mhh__ wrote:
         | I am curious what happens to ChatGPT now.
         | 
         | If it's true that this is in part over Dev Day and such (and
         | they may have a point), but useful AI that helps people is now
         | considered gauche, is OpenAI just going to turn into an
         | increasingly insular cult? ClosedAI, but this time you can't
         | even pay for it?
        
       | iamleppert wrote:
       | I think OpenAI made the right choice. Just look at what has
       | become of many of the most successful YC companies. Do we really
       | want OpenAI to turn into another Airbnb? It's clear the biggest
       | priority of YC is profit.
       | 
       | They made a deal with Microsoft, who has a long history of
       | exploiting users and customers to make as much money as possible.
       | Just look at the latest version of Windows; Microsoft cares about
       | AI only as much as it enables them to make more and more money
       | through their existing products. They rushed to integrate AI into
       | all of their legacy products to prop them up rather than offer
       | something legitimately new. And they did it not organically but
       | by throwing their money around, attracting the type of people who
       | are primarily motivated by money. Look at how the vibe of AI has
       | changed in the past year --- lots of fake influencers and a mad
       | gold rush around it. And we are hearing crazy stories like comp
       | packages at OpenAI in the millions, turning AI into a rich man's
       | game.
       | 
       | For a company that has "Open" in their name, none of their best
       | and most valuable GPT models are open source. It feels as
       | disingenuous as the "We" in WeWork. Even Meta has them beat here.
       | 
       | Sam Altman, while good at building highly profitable SaaS,
       | consumer, and B2B tech startups and at running a highly
       | successful tech accelerator, didn't have any real background in
       | AI before this point. One can only imagine how he must feel like
       | an outsider.
       | 
       | I think it's a hard decision to fire a CEO, but the company is
       | more than the CEO; it's the people who work there. A lot of the
       | time a company is structured in such a way that the CEO is
       | essentially not replaceable. We should be thankful OpenAI had the
       | right structure in place to avoid having a dictator (even a
       | benevolent one).
        
         | Nidhug wrote:
         | The problem is that it might unfortunately be necessary to have
         | this kind of funding to be able to develop AGI. And funding
         | will not come if there are no incentives for the investors to
         | fund.
         | 
         | What would you propose instead ?
        
       | ThinkBeat wrote:
       | I have spent time thinking about who would become the next CEO
       | and even without mushrooms my brain came up with a totally out of
       | context idea:
       | 
       | Bill Gates.
       | 
       | Microsoft is, after all, invested in OpenAI, and Bill Gates has
       | become "loved by all" (by those who don't remember the evil Gates
       | of yesteryear).
       | 
       | I am not saying it will happen, 99.999% it won't, but he is well
       | known and may be a good face to splash on top of OpenAI.
       | 
       | After all, he is one of the biggest charity guys now, right?
        
         | Nidhug wrote:
         | Is Bill Gates really loved by all? I feel like that was the
         | case before COVID, but then his reputation seemed to go from
         | loved to hated.
        
       | ThinkBeat wrote:
       | Will Sam and Greg now go and create NextStep? (The OpenAI
       | version)
        
       | Michelangelo11 wrote:
       | I've seen some discussion on HN in which people claimed that even
       | really important engineers aren't -too- important and that Ilya
       | is actually replaceable, using Apple's growth after Woz'
       | departure as an example. But I don't think that's the best
       | situation to compare this to. I think a much better one is John
       | Carmack firing Romero from id Software after the release of
       | Quake.
       | 
       | Some background: During a period of about 10 years, Carmack kept
       | making massive graphics advances by pushing cutting-edge
       | technology to the limit in ways nobody else had figured out,
       | starting with smooth horizontal scrolling in Commander Keen,
       | through Doom's pseudo-3D, through Quake's full 3D, to advances in
       | the Quake sequels, Doom 3, etc. It's really no exaggeration to
       | say that every new id game engine from 1991 to 1996 created a new
       | gaming genre, and the engines after that pushed forward the state
       | of the art. I don't think anybody who knows this history could
       | argue that John Carmack was replaceable.
       | 
       | At the time, the rest of id knew this, which gave Carmack a lot
       | of clout and eventually allowed him to fire co-founder John
       | Romero. Romero was the kinda flamboyant, omnipresent public face
       | of id -- he regularly went to cons,
       | worked the press, played deathmatch tournaments, and so on (to be
       | clear, he was a really talented level designer and programmer,
       | among other things, I only want to point out that he was
       | synonymous with id in the public eye). And what happened after
       | the firing? Romero was given a ton of money and absurd publicity
       | for new games ... and a few years later, it all went up in smoke
       | and his new company folded, as he didn't end up making anything
       | nearly as big as Doom or Quake. Meanwhile, id under Carmack kept
       | cranking out hit after hit for years, essentially shrugging off
       | Romero's firing like nothing happened.
       | 
       | The moral of the story to me is that, when your revenue massively
       | grows for every bit of extra performance you extract from
       | bleeding-edge technology, engineer expertise REALLY matters. In
       | the '90s, every minor improvement in PC graphics quality
       | translated to a giant bump in sales, and the same is true of LLM
       | output quality today. So, just like Carmack ultimately turned out
       | to be the absolute key driver behind id's growth, I think there's
       | a pretty good chance it's going to turn out that Ilya plays the
       | same role at OpenAI.
        
         | tasty_freeze wrote:
         | > Quake III's fast inverse square root algorithm
         | 
         | Carmack did not invent that trick; it had been around for more
         | than a decade before he used it. I remember reading a Jim Blinn
         | column about that and other dirty tricks like it in an IEEE
         | magazine years before Carmack "invented" it.
         | 
         | https://en.wikipedia.org/wiki/Fast_inverse_square_root
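         | 
         | For anyone who hasn't seen it, the trick looks roughly like
         | this -- a sketch of the widely circulated Quake III version,
         | modernized to use a union instead of the original's undefined-
         | behavior pointer cast:
         | 
         |     #include <stdint.h>
         | 
         |     float Q_rsqrt(float x) {
         |         /* reinterpret the float's bits as an integer */
         |         union { float f; uint32_t i; } c = { .f = x };
         |         /* the magic constant yields a first guess at 1/sqrt(x) */
         |         c.i = 0x5f3759df - (c.i >> 1);
         |         /* one Newton-Raphson step refines that guess */
         |         c.f *= 1.5f - (x * 0.5f * c.f * c.f);
         |         return c.f;
         |     }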
        
           | Michelangelo11 wrote:
           | Yes, you're right -- I dug around in the Wikipedia article,
           | and it turns out he even confirmed in an email it definitely
           | wasn't him: https://www.beyond3d.com/content/articles/8/
           | 
           | Thanks for the correction, edited the post.
        
         | p1esk wrote:
         | Ilya might be too concerned with AI safety to make significant
         | progress on model quality improvement.
        
         | danenania wrote:
         | A difference in this case is how capital-intensive AI research
         | is at the level at which OpenAI operates. Someone who can keep
         | the capital rolling in (whether through revenue, investors, or
         | partners) and get access to GPUs and proprietary datasets is
         | essential.
         | 
         | Carmack could make graphics advances on his own with just a
         | computer and his brain. Ilya needs a lot more for OpenAI to
         | keep advancing. His giant brain isn't enough by itself.
        
           | Michelangelo11 wrote:
           | That's a really, really good point. Maybe OpenAI, at this
           | level of success, can keep the money coming in though.
        
             | __loam wrote:
             | We don't even know if they're profitable right now, or how
             | much runway they have left.
        
         | selimthegrim wrote:
         | I think the team that became Looking Glass Studios did a lot of
         | the same things in parallel, so it's a little unfair to say no
         | one else had figured it out.
        
           | quadcore wrote:
           | Not at the same level of quality. For example, their game --
           | Ultima Underworld, if my memory doesn't fail me -- didn't
           | have sub-pixel precision for texturing. Their texturing was a
           | lot uglier and less polished compared to Wolf and especially
           | Doom. I remember I checked; they were behind. And their game
           | crashed. Never saw Doom crash, not even once.
        
         | quadcore wrote:
         | _Meanwhile, id under Carmack kept cranking out hit after hit
         | for years, essentially shrugging off Romero's firing like
         | nothing happened._
         | 
         | I believe this is absolutely wrong. Quake 2, 3 and Doom 3 were
         | critical successes, not commercial ones, which led id to be
         | bought.
         | 
         | John and John were like Paul and John from the Beatles; they
         | never made really great games again after their breakup.
         | 
         | And to be clear, that's because the role of Romero in the
         | success of id is often underrated, like here. He invented those
         | games (Doom and Quake and Wolf) as much as Carmack did. For
         | example, Romero was the guy who invented percent-based life. He
         | removed the _score_. This guy invented the modern video game in
         | many ways. Games that weren't based on Atari or Nintendo. He
         | invented the Wolf, Doom and Quake setups, which were
         | considerably more mature than Mario and Bomberman, and it was
         | new at the time. Romero invented the deathmatch and its "frag".
         | And on and on.
        
         | deanCommie wrote:
         | > Meanwhile, id under Carmack kept cranking out hit after hit
         | for years, essentially shrugging off Romero's firing like
         | nothing happened.
         | 
         | Romero was fired in 1996
         | 
         | Until this point, as you mentioned, id had created multiple
         | legendary franchises, each with unique lore, a distinct
         | identity, and groundbreaking tech: Commander Keen, Wolfenstein
         | 3D, Doom, Quake.
         | 
         | After Romero left, id released:
         | https://en.wikipedia.org/wiki/List_of_id_Software_games
         | 
         | * Quake 2
         | 
         | * Quake 3
         | 
         | * Doom 3
         | 
         | * And absolutely nothing else of any value or cultural impact.
         | The only "original" thing was Rage which again had no
         | footprint.
         | 
         | There were a lot of technical achievements, yes, but it turns
         | out that memorable games need more than interesting technology.
         | They were well-reviewed for their graphics at a time when that
         | was the biggest thing people expected from new id games -
         | interesting new advances in graphics. For a while, they were
         | THE ones pushing the industry forward until arguably Crysis.
         | 
         | But the point is for anyone experiencing or interacting with
         | these games today, Quake is Quake. Nobody remembers 1, 2 or 3 -
         | it's just Quake.
         | 
         | Now, was id a successful software company and business? Yes.
         | Would it have become the industry titan and shaped the future
         | of all videogames based on their post-Romero output? Absolutely
         | not.
         | 
         | So, while it is definitely justifiable to claim that Carmack
         | achieved more on his own than Romero did, the truth is at least
         | in the video game domain they needed each other to achieve the
         | real greatness that they will be remembered for.
         | 
         | It remains to be seen what history will say about Altman and
         | Sutskever.
        
           | mlyle wrote:
           | > But the point is for anyone experiencing or interacting
           | with these games today, Quake is Quake. Nobody remembers 1, 2
           | or 3 - it's just Quake.
           | 
           | Quake 3 was unquestionably the pinnacle, the real beginning
           | of esports, and enormously influential on shooter design to
           | this day.
        
         | reissbaker wrote:
         | Three points:
         | 
         | 1. I don't think Ilya is equivalent to Carmack in this case --
         | he's been focused on safety and alignment research, not
         | building GPT-[n]. By most accounts Greg Brockman, who quit in
         | disgust over the move, was more impactful than Ilya in recent
         | years, as were the senior researchers who quit yesterday.
         | 
         | 2. I think you are underselling what happened with id: while
         | they didn't blow up as fantastically as Ion Storm (Romero's
         | subsequent company), they slowly faded in prominence, and while
         | graphically advanced, their games no longer represented the
         | pinnacles of innovation that early Carmack+Romero id games
         | represented. They eventually got bought out by Zenimax. Carmack
         | alone was much better than Romero alone, but seemingly not as
         | good as the two combined.
         | 
         | 3. I don't think Sam Altman is equivalent to John Romero;
         | Romero's biggest issue at Ion Storm was struggling to ship
         | anything instead of endlessly spinning his wheels chasing
         | perfection -- for example, the endless Daikatana delays and
         | rewrites. Ilya's primary issue with Altman was he was shipping
         | _too fast,_ not that he was unable to motivate and push his
         | teams to ship impressive products quickly.
         | 
         | I hope Sam and Greg start a new foundational AI company, and if
         | they do, I am extremely excited to see what they ship. TBH,
         | much more excited than I am currently by OpenAI under the more
         | alignment-and-regulation-focused regime that Ilya and Helen
         | seem to want.
        
           | cthalupa wrote:
           | Sutskever has shifted to safety and alignment research this
           | year. Previously he was directly in charge of the development
           | of GPT, from GPT-1 on.
           | 
           | Brockman did an entirely different type of work than
           | Sutskever. Brockman's primary focus was on the infrastructure
           | side of things - by all accounts the software he wrote to
           | manage the pre-training, training, etc., is all world-class
           | and a large part of why they were able to be as efficient as
           | they are, but that is not the same thing as being the brains
           | behind the ML portion.
        
         | quickthrower2 wrote:
         | You know a lot more than me on this subject, but could it also
         | be that starting a new company, and keeping it alive, is quite
         | hard? Especially in gaming.
        
         | mvdtnz wrote:
         | Who's talking about replacing Ilya? What are you talking about?
        
       | summerlight wrote:
       | My take: in any world-class technology company, tech is above
       | everything. You cannot succeed with tech alone, but you will
       | never succeed without it. Ilya was able to kick Sam out, despite
       | all of Sam's significant work and presence, because Sam is
       | fundamentally a business guy who lacks tech ownership. You don't
       | go against the real tech owner; the binary choice is either to
       | build strong tech ownership yourself or to delegate a significant
       | amount of business control to the tech owner.
        
       | biofunsf wrote:
       | What I'd really like to understand is why the board felt they had
       | to do this as a surprise coup, rather than a slower, more
       | dignified firing.
       | 
       | If they had given Altman one week's notice and let him save face
       | in the media, what would they have lost? Was there a fear Altman
       | would take all the best engineers on the way out?
        
         | throw555chip wrote:
         | As someone else commented on this page, it wasn't a coup.
        
           | biofunsf wrote:
           | This seems a pedantic point. In the "not legal" sense I
           | agree, since illegality seems part of a real coup. But it
           | certainly was a "surprise ousting of the current leadership",
           | which is what I mean when I say coup.
        
       | meroes wrote:
       | Why has no one on HN considered that it has to do with his
       | sister's allegation that he sexually assaulted her when they were
       | young?
       | 
       | https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...
       | 
       | My other main guess is that his push for government regulation
       | was seen by the more science-y side as stifling AI growth, or
       | even as collusion with unaligned actors, and got him ousted by
       | them.
        
       | Madmallard wrote:
       | Social value is king.
       | 
       | ability to do work < ability to manage others to do work <
       | ability to lead managers to success < ability to convince other
       | leaders that your vision is the right one and one they should
       | align with
       | 
       | The necessity of not saying the wrong thing goes up exponentially
       | with each rung. The necessity of saying the right things goes up
       | exponentially with each rung.
        
       | torstenvl wrote:
       | It isn't a coup. A coup is when power is taken, and taken by
       | force -- not when your constituents decide you no longer
       | represent their interests well. That's like describing voting out
       | a politician as a coup.
       | 
       | Calling it a coup falsely implies that OpenAI in some sense
       | _belongs_ to Sam Altman.
       | 
       | If anything is a coup, it's the idea that a founder can
       | incorporate a company and sell parts of it off, and nevertheless
       | still own it. It's the wresting of control from the actual owners
       | in favor of a public-facing executive.
        
         | pdntspa wrote:
         | > voting out a politician as a coup.
         | 
          | That is literally a political coup.
        
         | username332211 wrote:
         | It's not uncommon to describe the fall of a government as a
         | "parliamentary" coup, if the relevant proceedings of a
         | legislative assembly are characterized by haste and intrigue,
         | rather than debate and deliberation.
         | 
         | For example, the French Revolution saw three such events
         | commonly described as coups: the fall of Robespierre on the 9th
         | of Thermidor and the Directory's (technically legal) annulments
         | of elections on the 18th of Fructidor and the 22nd of Floreal.
         | The last one was even somewhat bloodless.
        
         | Barrin92 wrote:
         | Yup. The only correct governance metaphor here is the opposite:
         | it's a defense of OpenAI's constitution. The company,
         | effectively like Mozilla, was deliberately structured as a non-
         | profit in which the for-profit arm exists to raise capital to
         | pursue the mission of the former. It's worth paying attention
         | to what they have to say on their structure:
         | 
         | https://openai.com/our-structure
         | 
         | especially this part:
         | 
         | https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...
        
         | crazygringo wrote:
         | No, you're confusing business with politics. You're right that
         | a literal _coup d'etat_ is the forced takeover of a state with
         | the backing of its own military.
         | 
         | But in the business and wider world, a _coup_ (without the
         | _d'etat_ part) is, by analogy, any takeover of power that is
         | secretly planned and executed as a surprise. (We can similarly
         | talk about a company "declaring war", meaning to compete by
         | mobilizing all resources towards a single purpose, not to fire
         | missiles and kill people.)
         | 
         | This is absolutely a coup. It was an action planned by a subset
         | of board members in secret, taken by a secret board meeting
         | missing two of its members (including the chair), where not
         | even Microsoft had any knowledge or say, despite their 49%
         | investment in the for-profit corporation.
         | 
         | I'm not arguing whether it's right or wrong. But this is one of
         | the great boardroom coups of all time -- one for the history
         | books. There's a reason it's front-page news, not just on HN
         | but in the NYT and WSJ as well.
        
           | torstenvl wrote:
           | Your post is internally inconsistent. Defining a coup as "any
           | takeover of power" is inconsistent with saying that firing
           | Sam Altman is a coup. CEOs _do not_ have, and _should not_
           | have, any power vis-a-vis the board. It's right there in the
           | name.
           | 
           | Executives do not have any right to their position. They are
           | an officer, i.e., an agent of the stakeholders. The idea that
           | the executive is the holder of the power and it's a "coup" if
           | they aren't allowed to remain is disgustingly reminiscent of
           | Trumpian stop-the-steal rhetoric.
        
             | gfodor wrote:
             | All you're saying here is that it's never possible to
             | characterize a board ousting a CEO as a coup. People do,
             | because it's a useful way to distinguish what happened here
             | from the many other ways this can happen that involve far
             | less deception, and so on.
        
             | crazygringo wrote:
             | You're ignoring the rest of the definition I provided. I
             | did not say it was "any takeover of power". Please read the
             | definition I gave in full.
             | 
             | And I am not referring to the CEO status of Altman at all.
             | That's not the coup part.
             | 
             | What I'm referring to is the fact that beyond his firing as
             | CEO, _he and the chairman were removed from their board
             | seats, as a surprise planned and executed in secret_.
             | That's the coup. This is not a board firing a CEO who was
             | bad at their job; this is two _factions_ at the company
             | where one orchestrates a total takeover of the other.
             | That's a coup.
             | 
             | Again, I'm not saying whether this is good or bad. I'm just
             | saying, this is as clear-cut a coup as there can be. This
             | has nothing in common with the normal firing of a CEO
             | accomplished out in the open. This is four board members
             | removing the other two in secret. That's a coup if there
             | ever was one.
        
       | w10-1 wrote:
       | It's hard to believe a Board that can't control itself or its
       | employees could responsibly manage AI. Or that anyone could
       | manage AGI.
       | 
       | There is a long history of governance problems in nonprofits (see
       | the transaction-cost economics literature on point). Their
       | ambiguous goals induce politics. One benefit of profit-driven
       | boards is that the goal requires only well-understood risk trade-
       | offs between growth now or later, and the board members are
       | selected for their actual stake in that actual goal.
       | 
       | This is the problem with religious organizations and ideological
       | governments: they can't be trusted, because they will be captured
       | by their internal politics.
       | 
       | I think it would be much more rational to make AI/AGI an entirely
       | for-profit enterprise, BUT reverse the liability defaults and
       | require that they pay all external costs resulting from their
       | products.
       | 
       | Transaction cost economics shows that, in theory, it doesn't
       | matter where liability is allocated so long as the transaction
       | cost of redistributing liability is near zero (i.e., contracting
       | in advance and tort claims after the fact are cheap), because
       | then parties just work it out. Government or laws are required
       | only to make up for the actual non-zero cost of disputes by
       | establishing settled expectations.
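       | 
       | A toy example of that claim, with made-up numbers: suppose
       | shipping a risky feature earns a developer $100 but imposes $150
       | of harm on users. If the developer is liable, they don't ship,
       | since paying $150 to earn $100 is a loss. If the developer is not
       | liable but bargaining is free, users collectively pay the
       | developer up to $150 not to ship, which beats eating the loss.
       | Either allocation of liability yields the same outcome (no
       | shipping); the allocation only starts to matter once bargaining
       | itself is expensive.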
       | 
       | The internet and software generally has been a domain where
       | consumers have NO redress whatsoever for exported costs. It's
       | grown (and disrupted) fantastically as a result.
       | 
       | So to control AI/AGI, make it for-profit, but flip liability to
       | require all exported costs to be paid by the developer. That
       | would ensure applications are incredibly narrow AND have net-
       | positive social impact.
        
         | dividendpayee wrote:
         | Yeah that's right. There's a blogger in another post on HN that
         | makes the same point at the very end:
         | https://loeber.substack.com/p/a-timeline-of-the-openai-board
        
           | DebtDeflation wrote:
           | From that link:
           | 
           | >I could not find anything in the way of a source on when, or
           | under what circumstances, Tasha McCauley joined the Board.
           | 
           | I would add, "or why she's on the board or why anyone thought
           | she was qualified to be on the board".
           | 
           | At least with Helen Toner the intent was likely just to add a
           | token AI Safety academic to pacify "concerned" Congressmen.
           | 
           | I am kind of curious how Adam D'Angelo voted. If he voted
           | against removing Sam that would make this even more of a
           | farce.
        
         | photochemsyn wrote:
         | The solution is to replace the board members with AGI entities,
         | isn't it? Just have to figure out how to do the real-time
         | incorporation of current data into the model. I bet that's an
         | active thing at OpenAI. Seems to have been a hot discussion
         | topic lately:
         | 
         | https://www.workbyjacob.com/thoughts/from-llm-to-rqm-real-ti...
         | 
         | The real risk is that some government will put the result in
         | charge of their national defense system, aka Skynet, not that
         | kids will ask it how to make illegal drugs. The curious silence
         | on military-industrial applications of LLMs makes me suspect
         | this is part of the OpenAI story... Good plot for a novel, at
         | least.
        
           | ethanbond wrote:
           | > The real risk is that some government will put the result
           | in charge of their national defense system, aka Skynet, not
           | that kids will ask it how to make illegal drugs.
           | 
           | These cannot possibly be the most realistic failure cases you
           | can imagine, can they? Who cares if "kids" "make illegal
           | drugs"? But yeah, if kids can make illegal drugs with this
           | tech, then actual bad actors can make actual dangerous
           | substances with this tech.
           | 
           | The real risk is manifold and totally unforeseeable the same
           | way that a 400 Elo chess player has zero conception of "the
           | risks" that a 2000 Elo player will exploit to beat them.
        
             | photochemsyn wrote:
             | Every bad actor who wants to make dangerous substances can
             | find that information in the scientific literature with
             | little difficulty. An LLM, however, is probably not going
             | to tell you that the most likely outcome of a wannabe
             | chemist trying to cook up something or other from an LLM
             | recipe is that they'll poison themselves.
             | 
             | This generally fits a notion I've heard expressed
             | repeatedly: today's LLMs are most useful to people who
             | already have some domain expertise, it just makes things
             | faster and easier. Tomorrow's LLMs, that's another
             | question, as you imply.
        
         | __loam wrote:
         | I appreciate this argument, but I also think naked profit
         | seeking is the cause of a lot of problems in our economy and
         | there are qualities that are hard to quantify when you
         | structure the organization around it. Blindly following the
         | economic argument can also cause problems, and it's a big
         | reason why American corporate culture moved away from building
         | a good product first towards maximizing shareholder value. The
         | OpenAI board certainly seems capricious and impulsive given
         | this decision though.
        
         | patcon wrote:
         | > Their ambiguous goals induce politics. [...] This is the
         | problem with religious organizations and ideological
         | governments: they can't be trusted, because they will be
         | captured by their internal politics.
         | 
         | Yes, of course. But that's because "doing good" is by
         | definition much more ambiguous than "making money". It's way
         | higher-dimensional, and it has countless definitions.
         | 
         | So nonprofits will by definition involve more politics at the
         | human level. I'd say we must accept that if we want to live
         | amongst the actions of nonprofits rather than just for-profits.
         | 
         | To claim that "politics" is a reason something "can't be
         | trusted" is akin to saying that the involvement of human
         | affairs means something can't be trusted (compared to
         | computers). We must imagine effective politics, or else we
         | cannot imagine effective human affairs -- only the mechanistic
         | affairs of simple optimization systems (like capitalist
         | markets).
        
       | sheepscreek wrote:
       | Here's another theory.
       | 
       | > the ousting was likely orchestrated by Chief Scientist Ilya
       | Sutskever over concerns about the safety and speed of OpenAI's
       | tech deployment.
       | 
       | Who was first to launch a marketplace for GPTs/agents? It wasn't
       | OpenAI, but Poe by Quora. Guess who sits on the OpenAI non-profit
       | board? Quora's CEO. So at least we know where his interest lies
       | with respect to the vote against Altman and Greg.
        
         | svnt wrote:
         | This is a really good point. If a non profit whose board you
         | sit on releases a product that competes with a product from the
         | corporation you manage, how do you manage that conflict of
         | interest? Seems he should have stepped down.
        
           | loeber wrote:
           | Yeah, I just wrote about this as well on my substack. There
           | were two significant conflicts of interest on the OpenAI
           | board. Adam D'Angelo should've resigned once he started Poe.
           | The other conflict was that both Tasha McCauley and Helen
           | Toner were associated with another AI governance
           | organization.
        
           | jacquesm wrote:
           | Yes, he should have.
        
         | atleastoptimal wrote:
         | The current interim CEO also spearheaded ChatGPT's development.
         | It's the biggest product-driven, consumer-market move the
         | company's ever made. I can't imagine it's simply a pure "Sam
         | wanted profits and Ilya/board wanted pure research" line-in-
         | the-sand situation.
        
       | nprateem wrote:
       | I wouldn't be surprised if this is the chief scientist getting
       | annoyed the CEO is taking all the credit for the work and the
       | researchers aren't getting as much time in the limelight. It's
       | probably the classic 'Meatloaf vs the guy who actually wrote the
       | songs' thing.
        
       | speedylight wrote:
       | The real issue is that OpenAI is both a for profit and a non
       | profit organization. This structure creates a very significant
       | conflict of interest where maintaining balance between both of
       | them is very tricky business. The non-profit board shouldn't have
       | been in charge of the for-profit aspect of the company.
        
         | naveen99 wrote:
         | Balance is irrelevant. It's an accounting mechanism for IRS
         | rules.
        
         | cthalupa wrote:
         | The for-profit would not exist if the non-profit was not able
         | to maintain control. The only reason it does exist is because
         | they were able to structure it in such a way that the for-
         | profit is completely beholden to the non-profit. There is no
         | requirement in the charter for the non-profit or the operating
         | agreement of the for-profit to maintain a balance - it
         | explicitly is the opposite of that. The operating agreement
         | that all investors in and employees of the for-profit _must_
         | sign explicitly states that investments should be considered
         | donations, no profits are obligated to be made or returned to
         | anyone, all money might be dumped into AGI R&D, etc., and that
         | the for-profit is specifically beholden to the charter and
         | mission of the non-profit.
         | 
         | https://openai.com/our-structure
        
           | DebtDeflation wrote:
           | >The for-profit would not exist if the non-profit was not
           | able to maintain control.
           | 
           | The non-profit will not exist at all if Microsoft walks away
           | and all the other investors follow Sam and Greg. Neither GPUs
           | nor researchers are free.
        
       | Apocryphon wrote:
       | Thought experiment: what if Mozilla had split between its
       | Corporation and Foundation years ago, when it was at its peak?
        
       | evolve2k wrote:
       | > Szymon Sidor, an open source baselines researcher
       | 
       | What does that title even mean? As we know, OpenAI is ironically
       | not known for doing open source work. I'm left guessing he
       | "researches the open source competition", as it were.
       | 
       | Can anyone shed further light on the role/research?
        
       | chaostheory wrote:
       | I wonder if Altman, Brockman, and company will join Elon or
       | whether they will just start a new company?
        
       ___________________________________________________________________
       (page generated 2023-11-18 23:00 UTC)