[HN Gopher] Mira Murati leaves OpenAI
       ___________________________________________________________________
        
       Mira Murati leaves OpenAI
        
       Author : brianjking
       Score  : 367 points
       Date   : 2024-09-25 19:35 UTC (3 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | extr wrote:
       | The politics of leadership at OpenAI must be absolutely insane.
       | "Leaving to do my own exploration"? Come on. You have Sam making
       | blog posts claiming AI is going to literally be the second coming
       | of Christ and then this a day later.
        
       | Imnimo wrote:
       | It is hard for me to square "This company is a few short years
       | away from building world-changing AGI" and "I'm stepping away to
       | do my own thing". Maybe I'm just bad at putting myself in someone
       | else's shoes, but I feel like if I had spent years working
       | towards a vision of AGI, and thought that success was finally
       | just around the corner, it'd be very difficult to walk away.
        
         | orionsbelt wrote:
         | Maybe she thinks the _world_ is a few short years away from
         | building world-changing AGI, not just limited to OpenAI, and
         | she wants to compete and do her own thing (and easily raise $1B
         | like Ilya).
        
           | xur17 wrote:
           | Which is arguably a good thing (having AGI spread amongst
           | multiple entities rather than one leader).
        
             | tomrod wrote:
             | The show Person of Interest comes to mind.
        
               | tempodox wrote:
               | Samaritan will take us by the hand and lead us safely
               | through this brave new world.
        
             | HeatrayEnjoyer wrote:
              | How is that good? An arms race increases the pressure to
              | go fast and disregard alignment safety; non-proliferation
              | is essential.
        
               | actionfromafar wrote:
               | I think that train left some time ago.
        
           | zooq_ai wrote:
            | I can't imagine investors pouring money on her. She has
            | zero credibility, either as a hardcore STEM type like Ilya
            | or as a visionary like Jobs/Musk.
        
             | phatfish wrote:
             | "Credibility" has nothing to do with how much money rich
             | people are willing to give you.
        
             | KoftaBob wrote:
             | She was the CTO, how does she not have STEM credibility?
        
         | jsheard wrote:
         | > It is hard for me to square "This company is a few short
         | years away from building world-changing AGI"
         | 
          | Altman's quote was that "it's possible that we will have
         | superintelligence in a few thousand days", which sounds a lot
         | more optimistic on the surface than it actually is. A few
         | thousand days could be interpreted as 10 years or more, and by
          | adding the "possible" qualifier he didn't even really commit to
         | that prediction.
         | 
         | It's hype with no substance, but vaguely gesturing that
         | something earth-shattering is coming does serve to convince
         | investors to keep dumping endless $billions into his
         | unprofitable company, without risking the reputational damage
         | of missing a deadline since he never actually gave one. Just
         | keep signing those 9 digit checks and we'll totally build
         | AGI... eventually. Honest.
        
           | z7 wrote:
            | >Altman's quote was that AGI "could be just a few thousand
           | days away" which sounds a lot more optimistic on the surface
           | than it actually is.
           | 
           | I think he was referring to ASI, not AGI.
        
             | umeshunni wrote:
             | Isn't ASI > AGI?
        
               | CaptainFever wrote:
               | Is the S here referring to Sentient or Specialised?
        
               | romanhn wrote:
               | Super, whatever that means
        
               | ben_w wrote:
               | Super(human).
               | 
               | Old-school AI was already specialised. Nobody can agree
               | what "sentient" is, and if sentience includes a capacity
                | to feel emotions/qualia etc., then we'd only willingly
                | choose that over non-sentient for brain uploading, not
                | for "mere" assistants.
        
               | jrflowers wrote:
               | Scottish.
        
               | ben_w wrote:
               | Both are poorly defined.
               | 
               | By all the standards I had growing up, ChatGPT is already
               | AGI. It's almost certainly not as economically
               | transformative as it needs to be to meet OpenAI's stated
               | definition.
               | 
               | OTOH that may be due to limited availability rather than
               | limited quality: if all the 20 USD/month for Plus gets
               | spent on electricity to run the servers, at $0.10/kWh,
               | that's about 274 W average consumption. Scaled up to the
               | world population, that's approximately the entire global
               | electricity supply. Which is kinda why there's also all
               | the stories about AI data centres getting dedicated power
               | plants.
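                | 
                | A minimal sketch of that arithmetic (assuming $20/month
                | all spent on power, $0.10/kWh, ~730 hours in a month,
                | and ~8B people):
                | 
                |   # rough power budget per Plus subscriber
                |   kwh_per_month = 20.0 / 0.10          # 200 kWh
                |   watts = kwh_per_month * 1000 / 730   # ~274 W continuous
                |   total_tw = watts * 8e9 / 1e12        # ~2.2 TW at world scale
                |   print(round(watts), "W,", round(total_tw, 1), "TW")
                |   # global average electricity supply is ~3 TW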
        
               | Spivak wrote:
               | Don't know why you're being downvoted, these models meet
               | the definition of AGI. It just looks different than
               | perhaps we expected.
               | 
               | We made a thing that exhibits the emergent property of
               | intelligence. A level of intelligence that trades blows
               | with humans. The fact that our brains do lots of other
               | things to make us into self-contained autonomous beings
               | is cool and maybe answers some questions about what being
               | sentient means but memory and self-learning aren't the
               | same thing as intelligence.
               | 
               | I think it's cool that we got there before simulating an
               | already existing brain and that intelligence can exist
               | separate from consciousness.
        
             | bottlepalm wrote:
              | ChatGPT is already smarter and faster than humans on many
              | different metrics. Once the other metrics catch up with
              | humans, it will still be better than humans on the
              | existing ones. Therefore there will be no AGI, only ASI.
        
           | ben_w wrote:
            | Between 1 and 10 thousand days, so roughly 3 to 27 years.
           | 
           | A range I'd agree with; for me, "pessimism" is the shortest
           | part of that range, but even then you have to be very
           | confident the specific metaphorical horse you're betting on
           | is going to be both victorious in its own right and not,
           | because there's no suitable existing metaphor, secretly an
            | ICBM wearing a pantomime costume.
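            | 
            | (A one-liner behind those endpoints:
            | 
            |   print(1_000 / 365.25, 10_000 / 365.25)  # ~2.74, ~27.4 years
            | 
            | hence "3 to 27".)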
        
             | zooq_ai wrote:
              | For 1, you use "1".
              | 
              | For 2 (or even 3), you use "a couple".
              | 
              | "A few" is almost always > 3, and one could argue the
              | upper limit is 15.
              | 
              | So: 10 to 50 years.
        
               | ben_w wrote:
               | Personally speaking, above 10 thousand I'd switch to
               | saying "a few tens of thousands".
               | 
                | But the mere fact you say 15 is arguable does indeed
                | broaden the range, just as me saying 1 broadens it in
                | the opposite direction.
        
               | fvv wrote:
                | You imply that he knows exactly when, which imo he
                | doesn't; it could even be next year for all we know.
                | Who knows what papers are yet to be published?
        
           | vasco wrote:
           | OpenAI is a Microsoft play to get into power generation
           | business, specifically nuclear, which is a pet interest of
           | Bill Gates for many years.
           | 
           | There, that's my conspiracy theory quota for 2024 in one
           | comment.
        
           | petre wrote:
           | > it's possible that we will have superintelligence in a few
           | thousand days
           | 
           | Sure, a few thousand days and a few trillion $ away. We'll
            | also have full self-driving next month. This is just like
            | the old joke that fusion is the energy of the future: it's
            | 30 years away and always will be.
        
             | actionfromafar wrote:
             | Now it's 20 years away! It took 50 years for it to go from
             | 30 to 20 years away. So maybe, in another 50 years it will
             | be 10 years away?
        
         | aresant wrote:
         | I think the much more likely scenario than product roadmap
         | concerns is that Murati (and Ilya for that matter) took their
         | shot to remove Sam, lost, and in an effort to collectively
         | retain billion$ of enterprise value have been playing nice, but
         | were never seriously going to work together again after the
         | failed coup.
        
           | amenhotep wrote:
           | Failed coup? Altman managed to usurp the board's power, seems
           | pretty successful to me
        
             | xwowsersx wrote:
             | I think OP means the failed coup in which they attempted to
             | oust Altman?
        
               | jordanb wrote:
               | Yeah the GP's point is the board was acting within its
               | purview by dismissing the CEO. The coup was the
               | successful counter-campaign against the board by Altman
               | and the investors.
        
               | ethbr1 wrote:
               | Let's be honest: in large part by Microsoft.
        
               | jeremyjh wrote:
               | The successful coup was led by Satya Nadella.
        
           | bg24 wrote:
            | This is the likely scenario. Every conflict at exec level
            | comes with a "messaging" aspect, with a comms team and the
            | board managing that part.
        
           | Barrin92 wrote:
           | >but were never seriously going to work together again after
           | the failed coup.
           | 
           | Just to clear one thing up, the designated function of a
           | board of directors is to appoint or replace the executive of
           | an organisation, and openAI in particular is structured such
           | that the non-profit part of the organisation controls the
           | LLC.
           | 
           | The coup was the executive, together with the investors,
           | effectively turning that on its head by force.
        
           | nopromisessir wrote:
           | Highly speculative.
           | 
           | Also highly cynical.
           | 
           | Some folks are professional and mature. In the best
           | organisations, the management team sets the highest possible
           | standard, in terms of tone and culture. If done well, this
           | tends to trickle down to all areas of the organization.
           | 
           | Another speculation would be that she's resigning for
           | complicated reasons which are personal. I've had to do the
            | same in my past. The real pros give the benefit of the
            | doubt.
        
             | dfgtyu65r wrote:
             | This feels naive, especially given what we now know about
             | Open AI.
        
               | nopromisessir wrote:
               | If you care to detail supporting evidence, I'd be keen to
               | see.
               | 
               | Please no speculative pieces, rumor nor hearsay.
        
               | apwell23 wrote:
                | Well, why was Sam Altman fired? It was never revealed.
                | 
                | CEOs get fired all the time, and the company puts out a
                | statement.
                | 
                | I've never seen "we won't tell you why we fired our CEO"
                | anywhere.
                | 
                | Now he is back making totally ridiculous statements like
                | "AI is going to solve all of physics" or "AI is going to
                | clone my brain by 2027".
                | 
                | This is a strange company.
        
               | alephnerd wrote:
               | > This is a strange company.
               | 
               | Because the old guard wanted it to remain a cliquey non-
               | profit filled to the brim with EA, AI Alignment, and
               | OpenPhilanthropy types, but the current OpenAI is now an
               | enterprise company.
               | 
               | This is just Sam Altman cleaning house after the
               | attempted corporate coup a year ago.
        
             | sverhagen wrote:
             | Did you also try to oust the CEO of a multi-billion dollar
             | juggernaut?
        
               | nopromisessir wrote:
               | Sure didn't.
               | 
               | Neither did she though... To my knowledge.
               | 
               | Can you provide any evidence that she tried to do that? I
               | would ask that it be non-speculative in nature please.
        
               | alephnerd wrote:
               | https://www.nytimes.com/2023/11/17/technology/openai-sam-
               | alt...
        
             | itsoktocry wrote:
             | What leads you to believe that OpenAI is one of the best
             | managed organizations?
        
               | nopromisessir wrote:
               | Many hours of interviews.
               | 
               | Organizational performance metrics.
               | 
               | Frequency of scientific breakthroughs.
               | 
               | Frequency and quality of product updates.
               | 
               | History of consistently setting the state of the art in
               | artificial intelligence.
               | 
               | Demonstrated ability to attract world class talent.
               | 
               | Released the fastest growing software product in the
               | history of humanity.
        
               | kranke155 wrote:
               | We have to see if they'll keep executing in a year,
                | considering the losses in staff and the non-technical
               | CEO.
        
           | bookofjoe wrote:
           | "When you strike at a king, you must kill him." -- Emerson
        
             | sllewe wrote:
             | or an alternate - "Come at the king - you best not miss" --
             | Omar Little.
        
               | ionwake wrote:
               | the real OG comment here
        
           | deepGem wrote:
            | Why is it so hard to just accept this and be transparent
            | about motives? It's fair to say "we were not aligned with
            | Sam, we tried an ouster, it didn't pan out, so the best
            | thing for us to do is to leave and let Sam pursue his
            | path", which the entire company has vouched for.
            | 
            | Instead, you get to see grey area after grey area.
        
             | widowlark wrote:
              | I'd imagine that level of honesty could still lead to
              | billions lost in shareholder value - thus the grey area.
              | Market obfuscation is a real thing.
        
             | stagger87 wrote:
              | It's in nobody's best interest to do this, especially
              | when there is so much money at play.
        
               | rvnx wrote:
               | A bit ironic for a non-profit
        
               | mewpmewp2 wrote:
                | As I understand it, they are going to stop being a
                | non-profit soonish now?
        
             | startupsfail wrote:
             | "the entire company has vouched for" is inconsistent with
              | what we see now. Low/mid-ranking employees were obviously
             | tweeting in alignment with their management and by request.
        
             | jjulius wrote:
             | Because, for some weird reason, our culture has
             | collectively decided that, even if most of us are capable
             | of reading between the lines to understand what's _really_
              | being said or is happening, it's often wrong and bad to be
             | honest and transparent, and we should put the most positive
             | spin possible on it. It's everywhere, especially in
             | professional and political environments.
        
               | FactKnower69 wrote:
               | McKinsey MBA brain rot seeping into all levels of culture
        
               | cedws wrote:
               | That's giving too much credit to McKinsey. I'd argue it's
               | systemic brainrot. Never admit mistakes, never express
               | yourself, never be honest. Just make up as much bullshit
                | as possible on the fly, say whatever you have to in
                | order to pacify people. Even just say bullshit 24/7.
               | 
               | Not to dunk on Mira Murati, because this note is pretty
               | cookie cutter, but it exemplifies this perfectly. It says
               | nothing about her motivations for resigning. It bends
               | over backwards to kiss the asses of the people she's
               | leaving behind. It could ultimately be condensed into two
               | words: "I've resigned."
        
             | mewpmewp2 wrote:
             | Because if you are a high level executive and you are
             | transparent on those things, and if it backfires, it will
             | backfire hard for your future opportunities, since all the
              | companies will view you as a potential liability. So it
              | is always the safer and wiser option to not say anything
              | if there is any risk of it backfiring. So you do the
              | polite PR messaging every single time. There's nothing to
              | be gained at the individual level by being transparent,
              | only risk.
        
         | golergka wrote:
         | Among other perfectly reasonable theories mentioned here,
         | people burn out.
        
           | optimalsolver wrote:
           | This isn't a delivery app we're talking about.
           | 
           | "Burn out" doesn't apply when the issue at hand is AGI (and,
           | possibly, superintelligence).
        
             | kylehotchkiss wrote:
             | That isn't fair. People need a break. "AGI" /
             | "superintelligence" is not a cause with so much potential
             | we should just damage a bunch of people on the route to it.
        
             | jcranmer wrote:
             | Why would you think burnout doesn't apply? It should be a
             | possibility in pretty much any pursuit, since it's
              | primarily about investing so much energy into a direction
              | that you can't psychologically bring yourself to invest
              | any more.
        
             | minimaxir wrote:
             | Software is developed by humans, who can burn out for any
             | reason.
        
             | agentcoops wrote:
             | Burnout, which doesn't need scare quotes, very much still
             | applies for the humans involved in building AGI -- in fact,
             | the burnout potential in this case is probably an order of
             | magnitude higher than the already elevated chances when
             | working through the exponential growth phase of a startup
             | at such scale ("delivery apps" etc) since you'd have an
             | additional scientific or societal motivation to ignore
             | bodily limits.
             | 
             | That said, I don't doubt that this particular departure was
             | more the result of company politics, whether a product of
             | the earlier board upheaval, performance related or simply
             | the decision to bring in a new CTO with a different skill
             | set.
        
         | m3kw9 wrote:
         | A few short years is a prediction with lots of ifs and
         | unknowns.
        
         | romanovcode wrote:
         | Maybe she has inside info that it's not "around the corner".
         | Making bigger and bigger models does not make AGI, not to
          | mention the exponential increase in power requirements for
          | these models, which would be basically unfeasible for the
          | mass market.
         | 
         | Maybe, just maybe, we reached diminishing returns with AI, for
         | now at least.
        
           | steinvakt wrote:
           | People have been saying that we reached the limits of AI/LLMs
           | since GPT4. Using o1-preview (which is barely a few weeks
            | old) for coding, which is definitely an improvement,
            | suggests there are still solid improvements going on, don't
            | you think?
        
             | samatman wrote:
             | Continued improvement is returns, making it inherently
             | compatible with a diminishing returns scenario. Which I
             | also suspect we're in now: there's no comparing the jump
             | between GPT3.5 and GPT4 with GPT4 and any of the subsequent
             | releases.
             | 
             | Whether or not we're leveling out, only time will tell.
             | That's definitely what it looks like, but it might just be
             | a plateau.
        
           | xabadut wrote:
           | + there are many untapped sources of data that contain
           | information about our physical world, such as video
           | 
           | the curse of dimensionality though...
        
         | tomrod wrote:
         | My take is that Altman recognizes LLM winter is coming and is
         | trying to entrench.
        
           | chinathrow wrote:
           | Looking at ChatGPT or Claude coding output, it's already
           | here.
        
             | criticalfault wrote:
             | Bad?
             | 
             | I just tried Gemini and it was useless.
        
               | andrewinardeer wrote:
               | Google ought to hang its head in utter disgrace over the
               | putrid swill they have the audacity to peddle under the
               | Gemini label.
               | 
               | Their laughably overzealous nanny-state censorship,
               | paired with a model so appallingly inept it would
               | embarrass a chatbot from the 90s, makes it nothing short
               | of highway robbery that this digital dumpster fire is
               | permitted to masquerade as a product fit for public
               | consumption.
               | 
               | The sheer gall of Google to foist this steaming pile of
               | silicon refuse onto unsuspecting users borders on
               | fraudulent.
        
               | mnk47 wrote:
               | Starting to wonder why this is so common in LLM
               | discussions at HN.
               | 
                | Someone says "X is the model that's really impressive.
                | Y is good too."
               | 
               | Then someone responds "What?! I just used Z and it was
               | terrible!"
               | 
               | I see this at least once in practically every AI thread
        
           | dartos wrote:
           | I don't think we're gonna see a winter. LLMs are here to
           | stay. Natural language interfaces are great. Embeddings are
           | incredibly useful.
           | 
           | They just won't be the hottest thing since smartphones.
        
             | eastbound wrote:
             | It's a glorified grammar corrector?
        
               | CharlieDigital wrote:
               | Not really.
               | 
               | I think actually the best use case for LLMs is
               | "explainer".
               | 
               | When combined with RAG, it's fantastic at taking a
               | complex corpus of information and distilling it down into
               | more digestible summaries.
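                | 
                | Mechanically it's just retrieve-then-prompt. A minimal
                | sketch (toy bag-of-words "embedding" and a made-up
                | two-document corpus; a real system would use an
                | embedding model and a vector store):
                | 
                |   import math
                |   from collections import Counter
                | 
                |   def embed(text):
                |       # stand-in for a real embedding model
                |       return Counter(text.lower().split())
                | 
                |   def cosine(a, b):
                |       dot = sum(a[t] * b[t] for t in a if t in b)
                |       na = math.sqrt(sum(v * v for v in a.values()))
                |       nb = math.sqrt(sum(v * v for v in b.values()))
                |       return dot / ((na * nb) or 1.0)
                | 
                |   docs = ["project x handled problem y by sharding the db",
                |           "billing is reconciled nightly from the ledger"]
                |   q = "how did project x deal with problem y"
                |   best = max(docs, key=lambda d: cosine(embed(d), embed(q)))
                |   # the "explainer" step: hand the retrieved chunk to an
                |   # LLM with an instruction to distill it
                |   prompt = f"Explain simply, using only this:\n{best}\n\nQ: {q}"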
        
               | bot347851834 wrote:
               | Can you share an example of a use case you have in mind
               | of this "explainer + RAG" combo you just described?
               | 
               | I think that RAG and RAG-based tooling around LLMs is
               | gonna be the clear way forward for most companies with a
                | properly constructed knowledge base, but I wonder what
                | you mean by "explainer"?
               | 
               | Are you talking about asking an LLM something like "in
               | which way did the teams working on project X deal with Y
                | problem?" and then having it break it down for you? Or
               | is there something more to it?
        
               | nebula8804 wrote:
               | I'm not the OP but I got some fun ones that I think are
               | what you are asking? I would also love to hear others
               | interesting ideas/findings.
               | 
               | 1. I got this medical provider that has a webapp that
                | downloads GraphQL data (basically JSON) to the frontend
               | and shows _some_ of the data to the template as a result
               | while hiding the rest. Furthermore, I see that they hide
               | even more info after I pay the bill. I download all the
               | data, combine it with other historical data that I have
               | downloaded and dumped it into the LLM. It spits out
               | interesting insights about my health history, ways in
               | which I have been unusually charged by my insurance, and
               | the speed at which the company operates based on all the
               | historical data showing time between appointment and the
               | bill adjusted for the time of year. It then formats
               | everything into an open format that is easy for me to
                | self-host (HTML + JS tables). It's a tiny way to wrestle
               | back control from the company until they wise up.
               | 
               | 2. Companies are increasingly allowing customers to
               | receive a "backup" of all the data they have on
               | them(Thanks EU and California). For example Burger
               | King/Wendys allow this. What do they give you when you
               | request data? A zip file filled with just a bunch of crud
               | from their internal system. No worries: Dump it into the
               | LLM and it tells you everything that the company knows
               | about you in an easy to understand format (Bullet points
               | in this case). You know when the company managed to track
               | you, how much they "remember", how much money they got
               | out of you, your behaviors, etc.
        
               | stocknoob wrote:
               | TIL Math Olympiad problems are simple grammar exercises.
        
               | ben_w wrote:
               | If you consider responding to this:
               | 
               | "oi i need lik a scrip or somfing 2 take pic of me screen
               | evry sec for min, mac"
               | 
               | with an actual (and usually functional) script to be
               | "glorified grammar corrector", then sure.
        
             | ForHackernews wrote:
             | They're useful in some situations, but extremely expensive
             | to operate. It's unclear if they'll be profitable in the
             | near future. OpenAI seems to be claiming they need an extra
             | $XXX billion in investment before they can...?
        
             | xtracto wrote:
             | I just made a (IMHO) cool test with OpenAI/Linux/TCL-TK:
             | 
             | "write a TCL/tk script file that is a "frontend" to the ls
             | command: It should provide checkboxes and dropdowns for the
             | different options available in bash ls and a button "RUN"
             | to run the configured ls command. The output of the ls
             | command should be displayed in a Text box inside the
             | interface. The script must be runnable using tclsh"
             | 
              | It didn't get it right the first time (for some reason it
              | wants to put in a `mainloop` instruction, which is
              | tkinter, not Tcl) but after several corrections I got an
              | ugly but pretty functional UI.
             | 
              | Imagine a Linux distro that uses some kind of
              | LLM-generated interface to make its power more
              | accessible. Maybe even "self-healing".
             | 
             | LLMs don't stop amazing me personally.
        
               | ethbr1 wrote:
               | The issue (and I think what's behind the thinking of AI
               | skeptics) is previous experience with the sharp edge of
               | the Pareto principle.
               | 
                | Current LLMs being 80% of the way to 100% useful
                | doesn't mean there's only 20% of the effort left.
               | 
               | It means we got the lowest-hanging 80% of utility.
               | 
               | Bridging that last 20% is going to take a ton of work.
               | Indeed, maybe 4x the effort that getting this far
               | required.
               | 
               | And people also overestimate the utility of a solution
               | that's randomly wrong. It's exceedingly difficult to
               | build reliable systems when you're stacking a 5% wrong
               | solution on another 5% wrong solution on another 5% wrong
               | solution...
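                | 
                | The compounding is worth making concrete (a quick
                | sketch):
                | 
                |   # end-to-end reliability of stacked 95%-right steps
                |   for n in (1, 3, 5, 10):
                |       print(n, round(0.95 ** n, 2))
                |   # 1 0.95 / 3 0.86 / 5 0.77 / 10 0.6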
        
               | nebula8804 wrote:
               | Thank You! You have explained the exact issue I (and
               | probably many others) are seeing trying to adopt AI for
               | work. It is because of this I don't worry about AI taking
               | our jobs for now. You still need somewhat foundational
               | knowledge in whatever you are trying to do in order to
               | get that remaining 20%. Sometimes this means pushing back
               | against the AI's solution, other times it means reframing
               | the question, and other times its just giving up and
               | doing the work yourself. I keep seeing all these
               | impressive toy demos and my experience (Angular and Flask
                | dev) seems to indicate that it is not going to replace any
               | subject matter expert anytime soon. (And I am referring
               | to all the three major AI players as I regularly and
               | religiously test all their releases).
               | 
               | >And people also overestimate the utility of a solution
               | that's randomly wrong. It's exceedingly difficult to
               | build reliable systems when you're stacking a 5% wrong
               | solution on another 5% wrong solution on another 5% wrong
               | solution...
               | 
                | I call this the merry-go-round of hell mixed with a cruel
               | hall of mirrors. LLM spits out a solution with some
               | errors, you tell it to fix the errors, it produces other
               | errors or totally forgets important context from one
               | prompt ago. You then fix those issues, it then introduces
               | other issues or messes up the original fix. Rinse and
               | repeat. God help you if you don't actually know what you
               | are doing, you'll be trapped in that hall of mirrors for
               | all of eternity slowly losing your sanity.
        
               | therouwboat wrote:
                | Why make a tool when you can just ask the AI to give
                | you the file list or files that you need?
        
             | Yizahi wrote:
              | LLMs as programs are here to stay. The issue is the
              | expenses/revenue ratio all these LLM corpos have.
              | According to a Sequoia analyst (so not some anon on a
              | forum) there is a giant money hole in that industry, and
              | "giant" doesn't even begin to describe it (iirc it was
              | $600B this summer). That whole industry will definitely
              | see winter soon, even if everything Altman says turns out
              | to be true.
        
         | hnthrowaway6543 wrote:
         | It's likely hard for them to look at what their life's work is
         | being used for. Customer-hostile chatbots, an excuse for
         | executives to lay off massive amounts of middle class workers,
         | propaganda and disinformation, regurgitated SEO blogspam that
         | makes Google unusable. The "good" use cases seem to be limited
         | to trivial code generation and writing boilerplate marketing
         | copy that nobody reads anyway. Maybe they realized that if AGI
         | were to be achieved, it would be squandered on stupid garbage
         | regardless.
         | 
         | Now I am become an AI language model, destroyer of the
         | internet.
        
         | f0e4c2f7 wrote:
         | There is one clear answer in my opinion:
         | 
         | There is a secondary market for OpenAI stock.
         | 
         | It's not a public market so nobody knows how much you're making
         | if you sell, but if you look at current valuations it must be a
         | lot.
         | 
         | In that context, it would be quite hard not to leave and sell
         | or stay and sell. What if oai loses the lead? What if open
         | source wins? Keeping the stock seems like the actual hard thing
         | to me and I expect to see many others leave (like early
         | googlers or Facebook employees)
         | 
         | Sure it's worth more if you hang on to it, but many think "how
         | many hundreds of M's do I actually need? Better to derisk and
         | sell"
        
           | chatcode wrote:
           | What would you do if
           | 
           | a) you had more money than you'll ever need in your lifetime
           | 
           | b) you think AI abundance is just around the corner, likely
           | making everything cheaper
           | 
           | c) you realize you still only have a finite time left on this
           | planet
           | 
           | d) you have non-AGI dreams of your own that you'd like to
           | work on
           | 
           | e) you can get funding for anything you want, based on your
           | name alone
           | 
           | Do you keep working at OpenAI?
        
         | Apocryphon wrote:
         | What if she believes AGI is imminent and is relocating to a
         | remote location to build a Faraday-shielded survival bunker.
        
           | wantsanagent wrote:
           | This is now my head-canon.
        
             | tempodox wrote:
             | Laputan machine!
        
           | ben_w wrote:
           | Then she hasn't ((read or watched) and (found plausible)) any
           | of the speculative fiction about how that's not enough to
           | keep you safe.
        
             | Apocryphon wrote:
             | No one knows how deep the bunker goes
        
               | ben_w wrote:
               | We can be reasonably confident of which side of the
               | Mohorovicic discontinuity it may be, as existing tools
               | would be necessary to create it in the first place.
        
         | paxys wrote:
         | Regardless of where AI currently is and where it is going, you
         | don't simply quit as CTO of the company that is leading the
         | space _by far_ in terms of technology, products, funding,
         | revenue, popularity, adoption and just about everything else.
         | She was fired, plain and simple.
        
           | rvnx wrote:
            | You can leave and be happy with 30M+ USD in stock, and with
            | good prospects of finding another job too.
        
         | lacker wrote:
         | It's easy to have missed this part of the story in all the
         | chaos, but from the NYTimes in March:
         | 
         |  _Ms. Murati wrote a private memo to Mr. Altman raising
         | questions about his management and also shared her concerns
         | with the board. That move helped to propel the board's decision
         | to force him out._
         | 
         | https://www.nytimes.com/2024/03/07/technology/openai-executi...
         | 
         | It should be no surprise if Sam Altman wants executives who
         | opposed his leadership, like Mira and Ilya, out of the company.
         | When you're firing a high-level executive in a polite way, it's
         | common to let them announce their own departure and frame it
         | the way they want.
        
           | startupsfail wrote:
           | Greg Brockman, OpenAI President and co-founder is also on
           | extended leave of absence.
           | 
           | And John Schulman, and Peter Deng are out already. Yet the
           | company is still shipping, like no other. Recent multimodal
           | integrations and benchmarks of o1 are outstanding.
        
             | fairity wrote:
             | Quite interesting that this comment is downvoted when the
             | content is factually correct and pertinent.
             | 
             | It's a very relevant fact that Greg Brockman recently left
              | of his own volition.
             | 
             | Greg was aligned with Sam during the coup. So, the fact
             | that Greg left lends more credence to the idea that Murati
              | is leaving of her own volition.
        
               | frakkingcylons wrote:
               | > It's a very relevant fact that Greg Brockman recently
                | left of his own volition.
               | 
               | Except that isn't true. He has not resigned from OpenAI.
               | He's on extended leave until the end of the year.
               | 
               | That could become an official resignation later, and I
               | agree that that seems more likely than not. But stating
               | that he's left for good as of right now is misleading.
        
               | meiraleal wrote:
               | > Quite interesting that this comment is downvoted when
               | the content is factually correct and pertinent.
               | 
               | >> Yet the company is still shipping, like no other.
               | 
               | this is factually wrong. Just today Meta (which I
                | despise) shipped more than OpenAI has in a long time.
        
             | vasco wrote:
             | > Yet the company is still shipping, like no other
             | 
             | If executives / high level architects / researchers are
              | working on this quarter's features, something is very
              | wrong. The higher you get, the further ahead you need to
              | be working; C-level departures should only have an impact
              | about a year down the line, at a company of this size.
        
               | ttcbj wrote:
               | This is a good point. I had not thought of it this way
               | before.
        
             | FactKnower69 wrote:
             | >Recent multimodal integrations and benchmarks of o1 are
             | outstanding.
             | 
             | this is the model that says there are 2 Rs in "strawberry"?
        
               | fjdjshsh wrote:
               | Is that your test suite?
        
               | mckirk wrote:
               | To be fair, that question is one of the suggested
               | questions that OpenAI shows themselves in the UI, for the
               | o1-preview model.
               | 
               | (Together with 'Is a hot dog a sandwich?', which I
               | confess I will have to ask it now.)
        
               | magxnta wrote:
               | If you have a sandwich and cut it in half, do you have
               | one or two sandwiches?
        
             | ac29 wrote:
             | > the company is still shipping, like no other
             | 
             | Meta, Anthropic, Google, and others all are shipping state
             | of the art models.
             | 
             | I'm not trying to be dismissive of OpenAI's work, but they
             | are absolutely not the only company shipping very large
             | foundation models.
        
           | SkyMarshal wrote:
            | _> When you're firing a high-level executive in a polite
           | way, it's common to let them announce their own departure and
           | frame it the way they want._
           | 
           | You also give them some distance in time from the drama so
           | the two appear unconnected under cursory inspection.
        
         | ren_engineer wrote:
          | Most of the people seem to be leaving due to the direction
          | Altman is taking OpenAI. It went from a charity to him
          | seemingly doing everything possible to monetize it for
          | himself, both directly and indirectly, e.g. by trying to
          | raise funds for AI-adjacent, traditionally structured
          | companies he controlled.
          | 
          | It's probably no coincidence that she resigned at almost the
          | same time the rumors about OpenAI completely removing the
          | non-profit board are getting confirmed -
          | https://www.reuters.com/technology/artificial-intelligence/o...
        
           | ethbr1 wrote:
           | Afaik, he's exceedingly driven to do that, because if they
           | run out of money Microsoft gets to pick the carcass clean.
        
         | elAhmo wrote:
          | It would definitely be a difficult thing to walk away from.
          | 
          | This is just one more in a series of massive red flags around
          | this company, from the insanely convoluted governance scheme,
          | to the board drama, to many executives and key people leaving
          | afterwards. It feels like Sam is doing a cleanup, and anyone
          | who opposes him has no place at OpenAI.
          | 
          | This, coming at a time when there are rumors of a possible
          | change to the corporate structure to be more friendly to
          | investors, is interesting timing.
        
         | shmatt wrote:
          | I feel like this is stating the obvious - but I guess not to
          | many - that a probabilistic syllable generator is not
          | intelligence. It does not understand us, it cannot reason; it
          | can only generate the next syllable.
          | 
          | It makes us feel understood in the same way John Edward used
          | to on daytime TV; it's all about how language makes us feel.
          | 
          | True AGI... unfortunately we're not even close.
        
           | CooCooCaCha wrote:
           | I'm not saying you're wrong but you could use this reductive
           | rhetorical strategy to dismiss any AI algorithm. "It's just
           | X" is frankly shallow criticism.
        
             | iLoveOncall wrote:
             | And there's nothing wrong about that: the fact that
             | _artificial intelligence_ will never lead to general
             | intelligence isn't exactly a hot take.
        
               | CooCooCaCha wrote:
               | That's both a very general and very bold claim. I don't
               | think it's unreasonable to say that's too strong of a
               | claim given how we don't know what is possible yet and
               | there's frankly no good reason to completely dismiss the
               | idea of artificial general intelligence.
        
               | dr_dshiv wrote:
               | It's almost trolling at this point, though.
        
             | paxys wrote:
             | > to dismiss any AI algorithm
             | 
             | Or even human intelligence
        
             | timr wrote:
             | And you can dismiss any argument with your response.
             | 
             | "Your argument is just a reductive rhetorical strategy."
        
               | CooCooCaCha wrote:
               | Sure if you ignore context.
               | 
               | "a probabilistic syllable generator is not intelligence,
               | it does not understand us, it cannot reason" is a strong
               | statement and I highly doubt it's backed by any sort of
               | substance other than "feelz".
        
           | HeatrayEnjoyer wrote:
           | This overplayed knee jerk response is so dull.
        
           | svara wrote:
           | I truly think you haven't really thought this through.
           | 
           | There's a huge amount of circuitry between the input and the
           | output of the model. How do you know what it does or doesn't
           | do?
           | 
            | Human brains "just" output the next couple milliseconds of
           | muscle activation, given sensory input and internal state.
           | 
           | Edit: Interestingly, this is getting downvotes even though 1)
           | my last sentence is a precise and accurate statement of the
           | state of the art in neuroscience and 2) it is completely
           | isomorphic to what the parent post presented as an argument
           | against current models being AGI.
           | 
           | To clarify, I don't believe we're very close to AGI, but
           | parent's argument is just confused.
        
           | ttul wrote:
           | While it's true that language models are fundamentally based
           | on statistical patterns in language, characterizing them as
           | mere "probabilistic syllable generators" significantly
           | understates their capabilities and functional intelligence.
           | 
           | These models can engage in multistep logical reasoning, solve
           | complex problems, and generate novel ideas - going far beyond
           | simply predicting the next syllable. They can follow
           | intricate chains of thought and arrive at non-obvious
            | conclusions. And OpenAI has now shown us that fine-tuning a
           | model specifically to plan step by step dramatically improves
           | its ability to solve problems that were previously the domain
           | of human experts.
           | 
           | Although there is no definitive evidence that state-of-the-
           | art language models have a comprehensive "world model" in the
           | way humans do, several studies and observations suggest that
           | large language models (LLMs) may possess some elements or
           | precursors of a world model.
           | 
           | For example, Tegmark and Gurnee [1] found that LLMs learn
           | linear representations of space and time across multiple
           | scales. These representations appear to be robust to
           | prompting variations and unified across different entity
           | types. This suggests that modern LLMs may learn rich
           | spatiotemporal representations of the real world, which could
           | be considered basic ingredients of a world model.
           | 
           | And even if we look at much smaller models like Stable
           | Diffusion XL, it's clear that they encode a rich
           | understanding of optics [2] within just a few billion
           | parameters (3.5 billion to be precise). Generative video
           | models like OpenAI's Sora clearly have a world model as they
           | are able to simulate gravity, collisions between objects, and
           | other concepts necessary to render a coherent scene.
           | 
           | As for AGI, the consensus on Metaculus is that it will arrive
            | around 2033. But consider that before GPT-4 arrived, the
           | consensus was that full AGI was not coming until 2041 [3].
           | The consensus for the arrival date of "weakly general" AGI is
           | 2027 [4] (i.e AGI that doesn't have a robotic physical world
           | component). The best tool for achieving AGI is the
           | transformer and its derivatives; its scaling keeps going with
           | no end in sight.
           | 
           | Citations:
           | 
           | [1] https://paperswithcode.com/paper/language-models-
           | represent-s...
           | 
           | [2] https://www.reddit.com/r/StableDiffusion/comments/15he3f4
           | /el...
           | 
           | [3] https://www.metaculus.com/questions/5121/date-of-
           | artificial-...
           | 
           | [4] https://www.metaculus.com/questions/3479/date-weakly-
           | general...
        
             | iLoveOncall wrote:
             | > Generative video models like OpenAI's Sora clearly have a
             | world model as they are able to simulate gravity,
             | collisions between objects, and other concepts necessary to
             | render a coherent scene.
             | 
             | I won't expand on the rest, but this is simply nonsensical.
             | 
             | The fact that Sora generates output that matches its
             | training data doesn't show that it has a concept of
             | gravity, collision between object, or anything else. It has
             | a "world model" the same way a photocopier has a "document
             | model".
        
               | svara wrote:
               | My suspicion is that you're leaving some important parts
               | in your logic unstated. Such as belief in a magical
               | property within humans of "understanding", which you
               | don't define.
               | 
               | The ability of video models to generate novel video
               | consistent with physical reality shows that they have
               | extracted important invariants - physical law - out of
               | the data.
               | 
               | It's probably better not to muddle the discussion with
               | ill defined terms such as "intelligence" or
               | "understanding".
               | 
               | I have my own beef with the AGI is nigh crowd, but this
               | criticism amounts to word play.
        
           | Erem wrote:
           | The only useful way to define an AGI is based on its
           | capabilities, not its implementation details.
           | 
           | Based on capabilities alone, current LLMs demonstrate many of
           | the capabilities practitioners ten years ago would have
           | tossed into the AGI bucket.
           | 
           | What are some top capabilities (meaning inputs and outputs)
           | you think are missing on the path between what we have now
           | and AGI?
        
         | insane_dreamer wrote:
         | What top executives write in these farewell letters often has
         | little to do with their actual reasons for leaving.
        
         | letitgo12345 wrote:
         | Maybe it is but it's not the only company that is
        
         | iLoveOncall wrote:
         | People still believe that a company that has only delivered
         | GenAI models is anywhere close to AGI?
         | 
          | Success is not around any corner. It's pure insanity to even
         | believe that AGI is possible, let alone close.
        
           | HeatrayEnjoyer wrote:
           | What can you confidently say AI will not be able to do in
           | 2029? What task can you declare, without hesitation, will not
           | be possible for automatic hardware to accomplish?
        
             | iLoveOncall wrote:
              | Easy: doing something that humans don't already do and
              | didn't program it to do.
             | 
             | AI is incapable of any innovation. It accelerates human
             | innovation, just like any other piece of software, but
             | that's it. AI makes protein folding more efficient, but it
             | can't ever come up with the concept of protein folding on
             | its own. It's just software.
             | 
             | You simply cannot have general intelligence without self-
             | driven innovation. Not improvement, innovation.
             | 
             | But if we look at much more simple concepts, 2029 is only 5
             | years (not even) away, so I'm pretty confident that
             | anything that it cannot do right now it won't be able to do
             | in 2029 either.
        
         | goodluckchuck wrote:
         | I could see it being close, but also feeling an urgency to get
         | there first / believing you could do it better.
        
         | yieldcrv wrote:
          | Easy for me to relate to that; my time is more interesting
          | than that.
          | 
          | Being in San Francisco for 6 years, and "success" means
          | getting hauled in front of Congress and the European
          | Parliament?
          | 
          | Can't think of a worse occupational nightmare after having an
          | 8-figure nest egg already.
        
         | apwell23 wrote:
          | Her rise didn't make sense to me. Product manager at Tesla to
          | CTO at OpenAI with no technical background and a deleted
          | profile?
         | 
         | This is a very strange company to say the least.
        
           | alephnerd wrote:
           | A significant portion of the old guard at OpenAI was part of
           | the Effective Altruism, AI Alignment, and Open Philanthropy
           | movement.
           | 
           | Most hiring in the foundational AI/model space is very
           | nepotistic and biased towards people in that clique.
           | 
           | Also, Elon Musk used to be the primary patron for OpenAI
           | before losing interest during the AI Winter in the late
           | 2010s.
        
           | nebula8804 wrote:
            | >Product manager at Tesla to CTO at OpenAI with no
            | technical background and a deleted profile?
           | 
           | Doesn't she have a dual bachelors in Mathematics and
           | Mechanical Engineering?
        
         | jappgar wrote:
         | I'm sure this isn't the actual reason, but one possible
         | interpretation is "I'm stepping away to enjoy my life+money
         | before it's completely altered by the singularity."
        
         | ikari_pl wrote:
         | unless you didn't see it as a success, and want to abandon the
         | ship before it gets torpedoed
        
         | aucisson_masque wrote:
        | It's corporate bullcrap; you're not supposed to believe it.
        | What really matters in these statements is what is not said.
        
         | dyauspitr wrote:
         | I doubt she's leaving to do her own thing, I don't think she
         | could. She probably got pushed out.
        
       | rvz wrote:
       | > "Leaving to do my own exploration"
       | 
        | Let's write this chapter and take some guesses; it's going
        | to be one of:
       | 
       | 1. Anthropic.
       | 
       | 2. SSI Inc.
       | 
       | 3. Own AI Startup.
       | 
        | 4. None of the above.
       | 
       | Only one is correct.
        
         | mikelitoris wrote:
         | The only thing your comment says is she won't be working
         | simultaneously for more than one company in {1,2,3}.
        
           | motoxpro wrote:
            | I know what I am going to say isn't of much value, but the
            | GP's post is the most Twitter comment ever and it made me
            | chuckle.
        
         | Apocryphon wrote:
         | Premium wallpaper app.
        
       | VeejayRampay wrote:
        | That's a lot of core people leaving, especially since they're
        | apparently so close to a "revolution in AGI".
        | 
        | I feel like either they're not close at all and these people
        | know it's all lies, or they're seeing some shady stuff and
        | want nothing to do with it.
        
         | paxys wrote:
         | A simpler explanation is that SamA is consolidating power at
         | the company and pushing out everyone who hasn't been loyal to
         | him from the start.
        
           | rvz wrote:
           | And it also explains what Mira (and everyone else who left)
           | saw; the true cost of a failed coup and what Sam Altman is
           | really doing since he is consolidating power at OpenAI (and
           | getting equity)
        
             | steinvakt wrote:
             | So "What did Ilya see" might just be "Ilya actually saw
             | Sam"
        
       | aresant wrote:
        | It is unsurprising that Murati is leaving; she was reported to
        | be one of the principal advocates for pushing Sam out (1).
       | 
       | Of course everybody was quick to play nice once OpenAI insiders
       | got the reality check from Satya that he'd just crush them by
       | building an internal competing group, cut funding, and instantly
       | destroy lots of paper millionaires.
       | 
        | I'd imagine that Mira and others had 6-12 month agreements in
        | place to let the dust settle and finish their latest round of
        | funding without further drama.
       | 
       | The OpenAI soap opera is going to be a great book or movie
       | someday
       | 
       | (1) https://www.nytimes.com/2024/03/07/technology/openai-
       | executi...?
        
         | mcast wrote:
         | Trent Reznor and David Fincher need to team up again to make a
         | movie about this.
        
           | fb03 wrote:
           | I'd not complain if William Gibson got into the project as
           | well.
        
       | throwaway314155 wrote:
        | I've forgotten - did she play a role in the attempted Sam
        | Altman ouster?
        
         | blackeyeblitzar wrote:
          | She wasn't on the board, right? So if she did play a role,
          | it wasn't through a vote, I'd guess.
        
         | paxys wrote:
         | She was picked by the board to replace Sam in the interim after
         | his ouster, so we can draw some conclusions from that.
        
       | blackeyeblitzar wrote:
        | It doesn't make sense to me that someone in such a position at
        | a place like OpenAI would leave. So I assume that means she
        | was forced out, maybe due to underperformance, or the failed
        | coup, or something else. Does anyone know the story on her
        | background, how she got into that position, and what she
        | contributed? I've heard interesting stories, some positive and
        | some negative, but can't tell what's true. It seems like there
        | is generally just a lot of controversy around this
        | "nonprofit".
        
         | mewse-hn wrote:
          | There are some good articles that explain what happened with
          | the coup; that's the main thing to read up on. As for the
          | reason she's leaving: you don't take a shot at the leader of
          | the organization, miss, and then expect to be able to remain
          | at the organization. She's probably been on house leave
          | since it happened, for the sake of optics at OpenAI.
        
       | muglug wrote:
       | It's Sam's Club now.
        
         | paxys wrote:
         | Always has been
        
           | grey-area wrote:
           | Altman was not there at the start. He came in later, as he
           | did with YC.
        
             | paxys wrote:
             | He became CEO later, but was always part of the founding
             | team at OpenAI.
        
         | TMWNN wrote:
         | Murati and Sutskever discovered the high Costco of challenging
         | Altman.
        
         | romanovcode wrote:
          | It's been the CIA's club since 2024.
        
       | Jayakumark wrote:
        | At this point, no one from the founding team except Sam is
        | still at the company.
        
         | bansheeps wrote:
         | Mira wasn't a part of the founding team.
         | 
          | Wojciech Zaremba and Jakub are still at the company.
        
       | alexmolas wrote:
        | They can't spend more than 6 months without some drama...
        
       | Reimersholme wrote:
        | ...and Sam Altman once again posts a response with uppercase
        | letters, similar to when Ilya left. It's like he wants to let
        | everyone know that he didn't actually care enough to write it
        | himself, but just asked ChatGPT to write something for him.
        
         | pshc wrote:
          | I think it's just code-switching. Serious announcements
          | warrant a more serious tone.
        
       | layer8 wrote:
       | Plain-text version for those who can't read images:
       | 
       | " _Hi all,
       | 
       | I have something to share with you. After much reflection, I have
       | made the difficult decision to leave OpenAI.
       | 
       | My six-and-a-half years with the OpenAI team have been an
       | extraordinary privilege. While I'll express my gratitude to many
       | individuals in the coming days, I want to start by thanking Sam
       | and Greg for their trust in me to lead the technical organization
       | and for their support throughout the years.
       | 
       | There's never an ideal time to step away from a place one
       | cherishes, yet this moment feels right. Our recent releases of
       | speech-to-speech and OpenAI o1 mark the beginning of a new era in
       | interaction and intelligence - achievements made possible by your
       | ingenuity and craftsmanship. We didn't merely build smarter
       | models, we fundamentally changed how AI systems learn and reason
       | through complex problems. We brought safety research from the
       | theoretical realm into practical applications, creating models
       | that are more robust, aligned, and steerable than ever before.
       | Our work has made cutting-edge AI research intuitive and
       | accessible, developing technology that adapts and evolves based
       | on everyone's input. This success is a testament to our
       | outstanding teamwork, and it is because of your brilliance, your
       | dedication, and your commitment that OpenAI stands at the
       | pinnacle of AI innovation.
       | 
       | I'm stepping away because I want to create the time and space to
       | do my own exploration. For now, my primary focus is doing
       | everything in my power to ensure a smooth transition, maintaining
       | the momentum we've built.
       | 
       | I will forever be grateful for the opportunity to build and work
       | alongside this remarkable team. Together, we've pushed the
       | boundaries of scientific understanding in our quest to improve
       | human well-being.
       | 
       | While I may no longer be in the trenches with you, I will still
       | be rooting for you all. With deep gratitude for the friendships
       | forged, the triumphs achieved, and most importantly, the
       | challenges overcome together.
       | 
       | Mira_"
        
         | squigz wrote:
         | I appreciate this, thank you.
        
       | m3kw9 wrote:
       | Not a big deal if you don't look too closely
        
       | seydor wrote:
        | They will all be replaced by ASIs soon, so it doesn't matter
        | who's coming and going.
        
       | codingwagie wrote:
        | My bet is all of these people can raise $20-100M for their own
        | startups. And they are already rich enough to retire. OpenAI
        | is going corporate.
        
         | keeptrying wrote:
          | If you keep working past a $10M net worth (as all these
          | people undoubtedly are), it's almost always for legacy.
          | 
          | I actually think Sam's vision probably scares them.
        
           | hiddencost wrote:
           | $10M doesn't go as far as you'd think in the Bay Area or NYC.
        
             | ForHackernews wrote:
             | ...the only two places on Earth.
        
             | _se wrote:
              | $10M is never-work-again money literally anywhere in the
              | world. Don't kid yourself. Buy a $3.5M house outright
              | and then collect $250k per year, risk-free, after taxes.
              | You're doing whatever you want and still saving money.
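              | A back-of-envelope sketch of that arithmetic in Python,
              | assuming a ~5% risk-free yield and a ~23% effective tax
              | rate (both are assumptions; neither figure is stated
              | above):
              | 
              |   nest_egg = 10_000_000
              |   house = 3_500_000
              |   yield_rate = 0.05  # assumed risk-free yield
              |   tax = 0.23         # assumed effective tax rate
              | 
              |   invested = nest_egg - house   # $6.5M left invested
              |   net = invested * yield_rate * (1 - tax)
              |   print(f"${net:,.0f}/yr")      # -> $250,250/yr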
        
               | mewpmewp2 wrote:
                | The problem is, if you are the type of person able to
                | get to $10M, you'll probably want more, since the
                | motivation that got you there in the first place will
                | keep you unsatisfied with anything less. You'll
                | constantly crave more, in terms of magnitudes.
        
             | BoorishBears wrote:
             | Maybe it doesn't if you think you're just going to live off
             | $10M in your checking account... but that's generally not
             | how that works.
        
               | fldskfjdslkfj wrote:
                | At a 5% rate, that's a cushy $500k a year.
        
             | talldayo wrote:
             | Which is why smart retirees don't fucking live there.
        
             | FactKnower69 wrote:
              | Hilarious logical endpoint of all those idiotic articles
              | about $600k dual-income households in the Bay living
              | "paycheck to paycheck".
        
           | brigadier132 wrote:
            | > it's almost always for legacy
            | 
            | Legacy is the dumbest reason to work, and it does not
            | explain the motivation of the vast majority of people who
            | are wealthy.
            | 
            | edit: The vast majority of people with more than $10
            | million are completely unknown, so the idea that they care
            | about legacy is stupid.
        
             | squigz wrote:
             | What do you think their motivations might be?
        
               | mr90210 wrote:
                | Speaking for myself, I'd keep working even if I had
                | $100M. As long as I am healthy, I plan to continue
                | being productive towards something I find interesting.
        
               | mewpmewp2 wrote:
                | There's also addiction to success. If you don't keep
                | getting successes of the magnitude you did before,
                | you will get bored and depressed, so you have to keep
                | going, since your brain is wired to seek that. Your
                | brain and emotions are calibrated to what you got
                | before; it's kind of like drugs.
                | 
                | If you don't have the $10M you won't understand; you
                | would think "oh, if only I had the $10M I would just
                | chill", but it never works like that. Human appetite
                | is infinite.
                | 
                | The more highs you get from success, the more you
                | expect from future achievements to get that same
                | feeling, and if you don't get any, you will feel
                | terrible. That's it.
        
       | keeptrying wrote:
        | If OpenAI is the frontrunner in solving AGI - possibly the
        | biggest invention of mankind - it's a little weird that
        | everyone's dropping out.
        | 
        | Does it not look like no one wants to work with Sam in the
        | long run?
        
         | paxys wrote:
         | Or is it Sam who doesn't want to work with them?
        
         | ilrwbwrkhv wrote:
          | OpenAI fired her. She didn't drop out.
        
         | lionkor wrote:
          | Maybe it's marketing, and LLMs are the peak of what they are
          | capable of.
        
         | uhtred wrote:
          | Artificial General Intelligence requires a bit more than
          | parsing and predicting text, I reckon.
        
           | stathibus wrote:
            | At the very least you could say "parsing and predicting
            | text, images, and audio". And you would be correct -
            | physical embodiment and spatial reasoning are missing.
        
             | ben_w wrote:
              | Just spatial reasoning; people have already demonstrated
              | it controlling robots.
        
           | ben_w wrote:
           | Yes, and transformer models can do more than text.
           | 
            | There are almost certainly better options out there, given
            | that it looks like we don't need so many examples to learn
            | from, though I'm not at all clear whether we need those
            | better ways or whether we can get by without them, due to
            | the abundance of training data.
        
         | onlyrealcuzzo wrote:
         | Doesn't (dyst)OpenAI have a clause that you can't say anything
         | bad about the company after leaving?
         | 
         | I'm not convinced these board members are able to say what they
         | want when leaving.
        
       | imjonse wrote:
       | I am glad most people do not talk in real life using the same
       | style this message was written in.
        
         | antoineMoPa wrote:
          | To me, this looks like something ChatGPT would write.
        
           | squigz wrote:
           | Or, like, any PR person from the past... forever.
        
       | redbell wrote:
       | Sutskever [1], Karpathy [2], Schulman [3], and Murati today!
       | Who's next? _Altman_?!
       | 
       | _________________
       | 
       | 1. https://news.ycombinator.com/item?id=40361128
       | 
       | 2. https://news.ycombinator.com/item?id=39365935
       | 
       | 3. https://news.ycombinator.com/item?id=41168904
        
         | ren_engineer wrote:
          | You've also got Brockman taking a sabbatical; who knows if
          | he comes back at the end of it.
        
       | LarsDu88 wrote:
       | She'll pop up working with Ilya
        
       | fairity wrote:
        | Everyone postulating that this was Sam's bidding is forgetting
        | that Greg also left this year, clearly of his own volition.
       | 
       | That makes it much more probable that these execs have simply
       | lost faith in OpenAI.
        
         | blackeyeblitzar wrote:
         | Or that they are losing a power struggle against Sam
        
       | jordanb wrote:
        | People are saying this is coup-related, but it could also be
        | due to this horrible response to a question about what they
        | used to train their Sora model:
       | 
       | https://youtu.be/mAUpxN-EIgU?feature=shared&t=263
        
       | ruddct wrote:
       | Related (possibly): OpenAI to remove non-profit control and give
       | Sam Altman equity
       | 
       | https://news.ycombinator.com/item?id=41651548
        
         | Recursing wrote:
         | Interesting that gwern predicted this as well yesterday
         | 
         | > Translation for the rest of us: "we need to fully privatize
         | the OA subsidiary and turn it into a B-corp which can raise a
         | lot more capital over the next decade, in order to achieve the
         | goals of the nonprofit, because the chief threat is not
         | anything like existential risk from autonomous agents in the
         | next few years or arms races, but inadequate commercialization
         | due to fundraising constraints".
         | 
         | > It's about laying the groundwork for the privatization and
         | establishing rhetorical grounds for how the privatization of OA
         | is consistent with the OA nonprofit's legally-required mission
         | and fiduciary duties. Altman is not writing to anyone here, he
         | is, among others, writing to the OA nonprofit board and to the
         | judge next year.
         | 
         | https://news.ycombinator.com/item?id=41629493
        
           | reducesuffering wrote:
           | With multiple correct predictions, do you think the rest of
           | HN will start to listen to Gwern's beliefs about OpenAI / AGI
           | problems?
           | 
           | Probably not.
        
         | johnneville wrote:
         | maybe they offered her little to no equity
        
         | teamonkey wrote:
         | That post seems to be in free-fall for some reason
        
       | textlapse wrote:
        | Maybe OpenAI is trying to enter a new enterprise phase, past
        | its startup era?
        | 
        | They have hired CTO-like figures from ex-MSFT and so on ...
        | which would mean a natural exit for the startup-era folks, as
        | we have seen recently?
        | 
        | Every company initially wants to sell itself as some grandiose
        | savior - 'organize the world's information and make it
        | universally accessible', 'solve AGI' - but I guess the
        | investors and the top-level people are in reality motivated by
        | dollar signs and ads and enterprise and so on.
        | 
        | Not that that's a bad thing, but really it's a Potemkin
        | village...
        
       | abecedarius wrote:
       | Gwern predicting this in March:
       | 
       | > Sam Altman has won. [...] Ilya Sutskever and Mira Murati will
       | leave OA or otherwise take on some sort of clearly diminished
       | role by year-end (90%, 75%; cf. Murati's desperate-sounding
       | internal note)
       | 
       | https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-fac...
        
         | ilrwbwrkhv wrote:
         | Yup. Poor Mira. Fired from OpenAI.
        
       | OutOfHere wrote:
       | I mean no disrespect, but to me, she always felt like an interim
       | hire for her current role, like someone filling a position
       | because there wasn't anyone else.
        
         | elAhmo wrote:
          | Yes, for the CEO role, but she has been with the company for
          | more than six years, two and a half of them as CTO.
        
       | aprilthird2021 wrote:
        | Most comments ask: if OpenAI is so close to AGI, why leave and
        | miss that payoff?
        | 
        | It's possible that the competitors to OpenAI have rendered
        | future improvements (yes, even the fabled AGI) less and less
        | profitable, to the point that the more profitable thing to do
        | is to capitalize on your current fame and raise capital.
        | 
        | That's how I'm reading this. If the competition can be just as
        | usable as OpenAI's SOTA models, and free or close to it, the
        | profit starts vanishing in most predictions.
        
         | hall0ween wrote:
         | I appreciate your insightful thoughts here :)
        
       | user90131313 wrote:
        | How many big names are still working at OpenAI at this point?
        | They lost all their edge this year. That drama from last year
        | literally broke the whole core team.
        
       | isodev wrote:
        | Can someone share a non-Twitter link, for those of us who
        | can't access it?
        
         | hoherd wrote:
          | I actually had the same thought, because I DNS-block xitter.
         | 
         | Somebody else archived it before me: https://archive.li/0Mea1
        
       | simbas wrote:
       | https://x.com/miramurati/status/1726542556203483392
        
       | nopromisessir wrote:
       | She might just be stressed out. Happens all the time. She's in a
       | very demanding position.
       | 
       | She's a pro. Lots to learn from watching how she operates.
        
       | moralestapia wrote:
        | The right way to think about this is that every person on that
        | team has a billion-dollar-size blank check from VCs in front
        | of them.
        | 
        | OpenAI made them good money, yes; but if at some point there's
        | a new endeavor on the horizon with _another_ guaranteed
        | billion-dollar payout, they'll just take it. Exhibit A: Ilya.
        | 
        | New razor: never attribute to AGI that which is adequately
        | explained by greed.
        
       | neom wrote:
        | Lots of speculation in the comments. Who knows, but if it were
        | me, I wouldn't be keeping all my eggs in the OpenAI basket.
        | Six years in and well vested, with a long list of AI companies
        | you could go to? I'd start buying a few more lottery tickets
        | personally (especially at 35).
        
         | joshdavham wrote:
          | That was actually my first thought as well. If you've got
          | your vesting and don't want to work in a large company
          | setting anymore, why not go do something else?
        
       | carimura wrote:
       | Once someone is independently wealthy, personal priorities
       | change. I guarantee she'll crop up again as founder CEO/CTO where
       | she calls the shots and gets the chance (even if slim) to turn
       | millions into billions.
        
       | paxys wrote:
        | I will never understand why people still take statements like
        | these at face value. These aren't her personal thoughts and
        | feelings. The letter was carefully crafted by OpenAI's PR team
        | under strict direction from Sam and the board. Whatever the
        | real story is, it's sitting under many layers of NDAs and
        | threats of clawing back/diluting her shares, and we will not
        | know it for a long time. What I can say for certain is that no
        | executive in her position ever willingly resigns to pursue
        | different passions/spend more time with their family/enjoy
        | retirement or whatever else.
        
         | tasuki wrote:
          | > What I can say for certain is that no executive in her
          | position ever willingly resigns to pursue different
          | passions/spend more time with their family/enjoy retirement
          | or whatever else.
         | 
         | Do you think that's because executives are so exceedingly
         | ambitious, or because pursuing different passions is for some
         | reason less attractive?
        
           | paulcole wrote:
            | It's because they can't imagine themselves doing it, so
            | they assume that everyone else must be like them. It's
            | part hubris and part lack of creativity/empathy.
           | 
           | Think about if you've ever known someone you've been envious
           | of for whatever reason who did something that just perplexed
           | you. "They dumped their gorgeous partner, how could they do
           | that?" "They quit a dream job, how could they do that?" "They
           | moved out of that awesome apartment, how could they do that?"
           | "They dropped out of that elite school, how could they do
           | that?"
           | 
           | Very easily actually.
           | 
           | You're seeing only part of the picture. Beautiful people are
           | just as annoying as everybody else. Every dream job has a
           | part that sucks.
           | 
           | If you can't imagine that, you're not trying hard enough.
           | 
           | You can see this in action in a lot of ways. One good one is
           | the Ultimatum Game:
           | 
           | https://www.core-econ.org/the-
           | economy/microeconomics/04-stra...
           | 
            | Most people will end up thinking that they have an
            | ironclad logical strategy, but if you ask them about it,
            | it turns out their strategy treats the other player as a
            | carbon copy of themselves.
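            | A minimal sketch of that game in Python (the payoff rule
            | is the standard one; the 30-unit rejection threshold is
            | an illustrative assumption, not data from the article):
            | 
            |   # Proposer offers a split of the pot; responder
            |   # accepts (split stands) or rejects (both get zero).
            |   def ultimatum(pot, offer, accepts):
            |       if accepts(offer):
            |           return pot - offer, offer  # (proposer, responder)
            |       return 0, 0
            | 
            |   # "Ironclad logic": accept any positive offer...
            |   rational = lambda offer: offer > 0
            |   # ...but real responders often reject unfair offers.
            |   fair_minded = lambda offer: offer >= 30  # assumed
            | 
            |   print(ultimatum(100, 1, rational))     # (99, 1)
            |   print(ultimatum(100, 1, fair_minded))  # (0, 0)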
        
           | mewpmewp2 wrote:
            | I would say that reaching this type of position requires
            | an exceeding amount of ambition, drive, and craving in the
            | first place, and any and all steps in the process of
            | getting there solidify that, by giving the dopamine hits
            | that make you addicted to such success. So it is not a
            | case where you can just stop and decide "I'll chill now".
        
         | davesque wrote:
         | > no executive in her position ever willingly resigns to pursue
         | different passions/spend more time with their family/enjoy
         | retirement or whatever else
         | 
         | Especially when they enjoy a position like hers at the most
         | important technology company in a generation.
        
       | hshshshsvsv wrote:
        | One possible explanation could be that OpenAI has no clue how
        | to invent AGI. And since she now has fuck-you money, she might
        | as well live it up instead of wasting away working for OpenAI.
        
       | nojvek wrote:
        | Prediction: OpenAI will implode by 2030 and become a smaller
        | shell of its current self as it runs out of money by spending
        | too much.
        | 
        | Prediction 2: Russia will implode by 2035, also by spending
        | too much money.
        
         | selimthegrim wrote:
          | Where is the magic lamp that summons thriftwy, who will tell
          | us which countries or companies Russia/OpenAI will absorb?
        
       | davesque wrote:
       | Maybe I'm just a rotten person, but I always find these overly
       | gracious exit letters by higher-ups to be pretty nauseating.
        
       | meow_catrix wrote:
        | Yada yada, dump at ATH.
        
       | charlie0 wrote:
        | Will probably start her own company and raise a billy like her
        | old pal Ilya. I wouldn't blame her; there have been so many
        | articles saying technical people should just start their own
        | company instead of being CTO.
        
       | reducesuffering wrote:
       | Former OpenAI interim CEO Emmett Shear on _this_ departure:
       | 
       | "You should, as a matter of course, read absolutely nothing into
       | departure announcements. They are fully glommerized as a default,
       | due to the incentive structure of the iterated game, and contain
       | ~zero information beyond the fact of the departure itself."
       | 
       | https://x.com/eshear/status/1839050283953041769
        
       | ford wrote:
       | How bad of a sign is it that so many people have left over the
       | last 12 months? Can anyone speak to how different things are?
        
       | archiepeach wrote:
        | When multiple senior people resign in protest, it's indicative
        | that they're unhappy with someone among their own ranks whom
        | they vehemently disagree with. John Schulman and Greg left in
        | the same week. Greg, opting for a sabbatical, may have chosen
        | that over full-on resigning, which would align with how he
        | acted during the board ousting - standing by Sam till the end.
        | 
        | If multiple key people were drastically unhappy with her, it
        | would have shaken confidence in herself and in everyone
        | working with her. What else to do but let her go?
        
       | w10-1 wrote:
       | The disparity between size of the promise and the ambiguity of
       | the business model creates both necessity and advantage for
       | executives to leverage external forces to shape company
       | direction. Everyone in the C-suite would be seeking a foothold,
       | but it's unlikely any CTO or technologist would be the real nexus
       | for partner and now investor relations. So while there might be
       | circumstances, history, and personalities involved, OpenAI's
       | current situation basically dictates this.
       | 
        | With luck, Mr. Altman's overtures to bring in Middle East
        | investors will get locals on board; either way, it's fair to
        | say he'll own whatever OpenAI becomes, whether he's an owner
        | or not.
       | And if he loses control in the current scrum, I suspect his
       | replacement would be much worse (giving him yet another
       | advantage).
       | 
       | Best wishes to all.
        
       | ein0p wrote:
       | It was only a matter of time - IIRC she did try to stab Altman in
       | the back when he was pushed out, and that likely sealed her fate.
        
       ___________________________________________________________________
       (page generated 2024-09-25 23:00 UTC)