[HN Gopher] Mira Murati leaves OpenAI
___________________________________________________________________
Mira Murati leaves OpenAI
Author : brianjking
Score : 775 points
Date : 2024-09-25 19:35 UTC (1 day ago)
(HTM) web link (twitter.com)
(TXT) w3m dump (twitter.com)
| extr wrote:
| The politics of leadership at OpenAI must be absolutely insane.
| "Leaving to do my own exploration"? Come on. You have Sam making
| blog posts claiming AI is going to literally be the second coming
| of Christ and then this a day later.
| Imnimo wrote:
| It is hard for me to square "This company is a few short years
| away from building world-changing AGI" and "I'm stepping away to
| do my own thing". Maybe I'm just bad at putting myself in someone
| else's shoes, but I feel like if I had spent years working
| towards a vision of AGI, and thought that success was finally
| just around the corner, it'd be very difficult to walk away.
| orionsbelt wrote:
| Maybe she thinks the _world_ is a few short years away from
| building world-changing AGI, not just limited to OpenAI, and
| she wants to compete and do her own thing (and easily raise $1B
| like Ilya).
| xur17 wrote:
| Which is arguably a good thing (having AGI spread amongst
| multiple entities rather than one leader).
| tomrod wrote:
| The show Person of Interest comes to mind.
| tempodox wrote:
| Samaritan will take us by the hand and lead us safely
| through this brave new world.
| HeatrayEnjoyer wrote:
 | How is that good? An arms race increases the pressure to
 | go fast and disregard alignment and safety;
 | non-proliferation is essential.
| actionfromafar wrote:
| I think that train left some time ago.
| ssnistfajen wrote:
| Probably off-topic for this thread but my own rather
| fatalist view is alignment/safety is a waste of effort if
| AGI will happen. True AGI will be able to self-modify at
| a pace beyond human comprehension, and won't be obligated
| to comply with whatever values we've set for it. If it
| can be reined in with human-set rules like a magical
| spell, then it is not AGI. If humans have free will, then
| AGI will have it too. Humans frequently go rogue and
| reject value systems that took decades to be baked into
| them. There is no reason to believe AGI won't do the
| same.
| PhilipRoman wrote:
| Feels like the pope trying to ban crossbows tbh.
| zooq_ai wrote:
 | I can't imagine investors pouring money on her. She has
 | zero credibility, either as a hardcore STEM type like Ilya
 | or as a visionary like Jobs/Musk.
| phatfish wrote:
| "Credibility" has nothing to do with how much money rich
| people are willing to give you.
| KoftaBob wrote:
| She was the CTO, how does she not have STEM credibility?
| peanuty1 wrote:
| Has she published a single AI research paper?
| zooq_ai wrote:
| Sometimes with good looks and charm, you can fall up.
|
| https://en.wikipedia.org/wiki/Mira_Murati
|
 | Point me to a single credential that would make you feel
 | confident putting your money on her.
| csomar wrote:
 | She studied math early on, so she's definitely technical.
 | She is the CTO, so she kind of needs to balance the
 | managerial side while having enough understanding of the
 | underlying technology.
| zooq_ai wrote:
 | Again, it's easy to be a CTO at a startup. You just have
 | to be there at the right time. Your role literally is: do
 | all the stuff researchers/engineers have to deal with. Do
 | you really think Mira set the technical agenda and
 | architecture for OpenAI?
 |
 | It's a pity that the HN crowd doesn't go one level deep
 | and truly understand things from first principles.
| jsheard wrote:
| > It is hard for me to square "This company is a few short
| years away from building world-changing AGI"
|
 | Altman's quote was that "it's possible that we will have
| superintelligence in a few thousand days", which sounds a lot
| more optimistic on the surface than it actually is. A few
| thousand days could be interpreted as 10 years or more, and by
| adding the "possibly" qualifier he didn't even really commit to
| that prediction.
|
| It's hype with no substance, but vaguely gesturing that
| something earth-shattering is coming does serve to convince
| investors to keep dumping endless $billions into his
| unprofitable company, without risking the reputational damage
| of missing a deadline since he never actually gave one. Just
| keep signing those 9 digit checks and we'll totally build
| AGI... eventually. Honest.
| z7 wrote:
 | >Altman's quote was that AGI "could be just a few thousand
| days away" which sounds a lot more optimistic on the surface
| than it actually is.
|
| I think he was referring to ASI, not AGI.
| umeshunni wrote:
| Isn't ASI > AGI?
| CaptainFever wrote:
| Is the S here referring to Sentient or Specialised?
| romanhn wrote:
| Super, whatever that means
| saalweachter wrote:
| Actually, the S means hope.
| ben_w wrote:
| Super(human).
|
| Old-school AI was already specialised. Nobody can agree
| what "sentient" is, and if sentience includes a capacity
| to feel emotions/qualia etc. then we'd only willingly
| choose that over non-sentient for brain uploading not
| "mere" assistants.
| jrflowers wrote:
| Scottish.
| ben_w wrote:
| Both are poorly defined.
|
| By all the standards I had growing up, ChatGPT is already
| AGI. It's almost certainly not as economically
| transformative as it needs to be to meet OpenAI's stated
| definition.
|
| OTOH that may be due to limited availability rather than
| limited quality: if all the 20 USD/month for Plus gets
| spent on electricity to run the servers, at $0.10/kWh,
| that's about 274 W average consumption. Scaled up to the
| world population, that's approximately the entire global
| electricity supply. Which is kinda why there's also all
| the stories about AI data centres getting dedicated power
| plants.
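 |
 | A quick sanity check of that arithmetic (a sketch in
 | Python, assuming ~730 hours/month and ~8 billion people):
 |
 |     fee = 20.0                # Plus fee, USD/month
 |     price = 0.10              # USD/kWh
 |     kwh = fee / price         # 200 kWh/month
 |     watts = kwh * 1000 / 730  # ~274 W average draw
 |     tw = watts * 8e9 / 1e12   # ~2.2 TW if everyone did it
 |     print(f"{watts:.0f} W per user, {tw:.1f} TW worldwide")
 |     # Global average electricity generation is roughly
 |     # 3 TW, so the comparison above holds.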
| Spivak wrote:
| Don't know why you're being downvoted, these models meet
| the definition of AGI. It just looks different than
| perhaps we expected.
|
| We made a thing that exhibits the emergent property of
| intelligence. A level of intelligence that trades blows
| with humans. The fact that our brains do lots of other
| things to make us into self-contained autonomous beings
| is cool and maybe answers some questions about what being
| sentient means but memory and self-learning aren't the
| same thing as intelligence.
|
| I think it's cool that we got there before simulating an
| already existing brain and that intelligence can exist
| separate from consciousness.
| bottlepalm wrote:
 | ChatGPT is already smarter and faster than humans on many
 | metrics. Once the other metrics catch up with humans, it
 | will still be better than humans on the existing ones.
 | Therefore there will be no AGI, only ASI.
| threeseed wrote:
| My fridge is already smarter and faster than humans in
| many different metrics.
|
| Has been this way since calculation machines were
| invented hundreds of years ago.
| rsynnott wrote:
| _Thousands_; an abacus can outperform any unaided human
| at certain tasks.
| ben_w wrote:
 | Between 1 and 10 thousand days, so roughly 3 to 27 years.
 |
 | A range I'd agree with; for me, "pessimism" is the
 | shortest part of that range, but even then you have to be
 | very confident that the specific metaphorical horse you're
 | betting on is going to be both victorious in its own right
 | and not, because there's no suitable existing metaphor,
 | secretly an ICBM wearing a pantomime costume.
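 |
 | For reference, the conversions in this subthread (a quick
 | Python check):
 |
 |     for days in (1_000, 3_000, 7_000, 10_000):
 |         print(f"{days:6,} days = {days / 365.25:4.1f} years")
 |     # 1,000 days is ~2.7 years and 10,000 is ~27.4, hence
 |     # the "3 to 27 years" range above.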
| zooq_ai wrote:
 | For 1, you use "one".
 |
 | For 2 (or even 3), you use "a couple".
 |
 | "A few" is almost always > 3, and one could argue the
 | upper limit is 15.
 |
 | So, 10 years to 50 years.
| ben_w wrote:
| Personally speaking, above 10 thousand I'd switch to
| saying "a few tens of thousands".
|
| But the mere fact you say 15 is arguable does indeed
| broaden the range, just as me saying 1 broadens it in the
| opposite extent.
| fvv wrote:
 | You imply that he knows exactly when, which IMO he
 | doesn't; it could even be next year for all we know. Who
 | knows what papers are yet to be published?
| usaar333 wrote:
| few is not > 3. Literally it's just >= 2, though I think
| >= 3 is the common definition.
|
| 15 is too high to be a "few" except in contexts of a few
| out of tens of thousands of items.
|
| Realistically I interpret this as 3-7 thousands of days
| (8 to 19 years), which is largely consensus prediction
| range anyway.
| rsynnott wrote:
| While it's not really _wrong_ to describe two things as
| 'a few', as such, it's unusual and people don't really do
| it in standard English.
|
| That said, I think people are possibly overanalysing this
| very vague barely-even-a-claim just a little.
| Realistically, when a tech company makes a vague claim
| about what'll happen in 10 years, that should be given
| precisely zero weight; based on historical precedent you
| might as well ask a magic 8-ball.
| dimitri-vs wrote:
| Just in time for them to figure out fusion to power all the
| GPUs.
|
| But really. o1 has been very whelming, nothing like the
 | step up from 3.5 to 4. Still prefer Sonnet 3.5 and Opus.
| vasco wrote:
 | OpenAI is a Microsoft play to get into the power
 | generation business, specifically nuclear, which has been
 | a pet interest of Bill Gates for many years.
|
| There, that's my conspiracy theory quota for 2024 in one
| comment.
| kolbe wrote:
| I don't think Gates has much influence on Microsoft these
| days.
| basementcat wrote:
| He controls approximately 1% of the voting shares of
| MSFT.
| kolbe wrote:
| And I would argue his "soft power" is greatly diminished
| as well
| PoignardAzur wrote:
| It's kinda cool as a conspiracy theory. It's just
| reasonable enough if you don't know any of the specifics.
| And the incentives mostly make sense, if you don't look too
| closely.
| petre wrote:
| > it's possible that we will have superintelligence in a few
| thousand days
|
| Sure, a few thousand days and a few trillion $ away. We'll
| also have full self driving next month. This is just like the
| fusion is the energy of the future joke: it's 30 years away
| and it will always be.
| actionfromafar wrote:
| Now it's 20 years away! It took 50 years for it to go from
| 30 to 20 years away. So maybe, in another 50 years it will
| be 10 years away?
| 015a wrote:
| Because as we all know: Full Self Driving is just six months
| away.
| squarefoot wrote:
 | Thanks, now I can't unthink this vision: developers
 | activate the first ASI, and after 3 minutes it spits out
 | full code and plans for a working Full Self Driving car
 | prototype :)
| blitzar wrote:
| I thought super-intelligence was to say self driving
| would be fully operational _next year_ for 10 consecutive
| years?
| squarefoot wrote:
| My point was that only super intelligence could possibly
| solve a problem that we can only pretend to have solved.
| theGnuMe wrote:
| To paraphrase a notable example: We will have full self
| driving capability next year..
| aresant wrote:
| I think the much more likely scenario than product roadmap
| concerns is that Murati (and Ilya for that matter) took their
| shot to remove Sam, lost, and in an effort to collectively
| retain billion$ of enterprise value have been playing nice, but
| were never seriously going to work together again after the
| failed coup.
| amenhotep wrote:
| Failed coup? Altman managed to usurp the board's power, seems
| pretty successful to me
| xwowsersx wrote:
| I think OP means the failed coup in which they attempted to
| oust Altman?
| jordanb wrote:
| Yeah the GP's point is the board was acting within its
| purview by dismissing the CEO. The coup was the
| successful counter-campaign against the board by Altman
| and the investors.
| ethbr1 wrote:
| Let's be honest: in large part by Microsoft.
| llamaimperative wrote:
| Does it matter? The board made a decision and the CEO
| reversed it. There is no clearer example of a corporate
| coup.
| jeremyjh wrote:
| The successful coup was led by Satya Nadella.
| bg24 wrote:
 | This is the likely scenario. Every conflict at the exec
 | level comes with a "messaging" aspect, with a comms team
 | and the board managing that part.
| Barrin92 wrote:
| >but were never seriously going to work together again after
| the failed coup.
|
| Just to clear one thing up, the designated function of a
| board of directors is to appoint or replace the executive of
| an organisation, and openAI in particular is structured such
| that the non-profit part of the organisation controls the
| LLC.
|
| The coup was the executive, together with the investors,
| effectively turning that on its head by force.
| nopromisessir wrote:
| Highly speculative.
|
| Also highly cynical.
|
| Some folks are professional and mature. In the best
| organisations, the management team sets the highest possible
| standard, in terms of tone and culture. If done well, this
| tends to trickle down to all areas of the organization.
|
| Another speculation would be that she's resigning for
| complicated reasons which are personal. I've had to do the
| same in my past. The real pro's give the benefit of the
| doubt.
| dfgtyu65r wrote:
| This feels naive, especially given what we now know about
| Open AI.
| nopromisessir wrote:
| If you care to detail supporting evidence, I'd be keen to
| see.
|
| Please no speculative pieces, rumor nor hearsay.
| apwell23 wrote:
 | Well, why was Sam Altman fired? It was never revealed.
 |
 | CEOs get fired all the time, and the company puts out a
 | statement.
 |
 | I've never seen "we won't tell you why we fired our CEO"
 | anywhere.
 |
 | Now he is back making totally ridiculous statements like
 | 'AI is going to solve all of physics' or 'AI is going to
 | clone my brain by 2027'.
 |
 | This is a strange company.
| alephnerd wrote:
| > This is a strange company.
|
| Because the old guard wanted it to remain a cliquey non-
| profit filled to the brim with EA, AI Alignment, and
| OpenPhilanthropy types, but the current OpenAI is now an
| enterprise company.
|
| This is just Sam Altman cleaning house after the
| attempted corporate coup a year ago.
| llamaimperative wrote:
| When the board fires the CEO and the CEO reverses the
| decision, _that_ is the coup.
|
| The board's only reason to exist is effectively to fire
| the CEO.
| apwell23 wrote:
 | I think that's a rumor they spread to make this look like
 | "conflict of philosophy" type bs.
 |
 | There are some juicy rumors about what actually happened
 | too. Much more believable lol.
| sverhagen wrote:
| Did you also try to oust the CEO of a multi-billion dollar
| juggernaut?
| nopromisessir wrote:
| Sure didn't.
|
| Neither did she though... To my knowledge.
|
| Can you provide any evidence that she tried to do that? I
| would ask that it be non-speculative in nature please.
| alephnerd wrote:
| https://www.nytimes.com/2023/11/17/technology/openai-sam-
| alt...
| nopromisessir wrote:
 | Below are excerpts from the article you link. I'd suggest
 | a more careful read-through, unless you dismiss out of
 | hand the firsthand accounts given to the NYT by both
 | Murati and Sutskever...
 |
 | This piece is built on conjecture from a source whose
 | identity is withheld. The source's version of events is
 | openly refuted by the parties in question. Offering it as
 | evidence that Murati intentionally made political moves in
 | order to get Altman ousted is an indefensible position.
|
| 'Mr. Sutskever's lawyer, Alex Weingarten, said claims
| that he had approached the board were "categorically
| false."'
|
| 'Marc H. Axelbaum, a lawyer for Ms. Murati, said in a
| statement: "The claims that she approached the board in
| an effort to get Mr. Altman fired last year or supported
| the board's actions are flat wrong. She was perplexed at
| the board's decision then, but is not surprised that some
| former board members are now attempting to shift the
| blame to her." In a message to OpenAI employees after
| publication of this article, Ms. Murati said she and Mr.
| Altman "have a strong and productive partnership and I
| have not been shy about sharing feedback with him
| directly."
|
| She added that she did not reach out to the board but
| "when individual board members reached out directly to me
| for feedback about Sam, I provided it -- all feedback Sam
| already knew," and that did not mean she was "responsible
| for or supported the old board's actions."'
|
| This part of NYT piece is supported by evidence:
|
| 'Ms. Murati wrote a private memo to Mr. Altman raising
| questions about his management and also shared her
| concerns with the board. That move helped to propel the
| board's decision to force him out.'
|
 | INTENT matters. Murati says the board asked for her
 | concerns about Altman. She provided them and had already
 | brought them to Altman's attention... in writing. Her
 | actions demonstrate transparency and professionalism.
| itsoktocry wrote:
| What leads you to believe that OpenAI is one of the best
| managed organizations?
| nopromisessir wrote:
| Many hours of interviews.
|
| Organizational performance metrics.
|
| Frequency of scientific breakthroughs.
|
| Frequency and quality of product updates.
|
| History of consistently setting the state of the art in
| artificial intelligence.
|
| Demonstrated ability to attract world class talent.
|
| Released the fastest growing software product in the
| history of humanity.
| kranke155 wrote:
| We have to see if they'll keep executing in a year,
| considering the losses in staff and the non technical
| CEO.
| nopromisessir wrote:
| I don't get this.
|
| I could write paragraphs...
|
| Why the rain clouds?
| bookofjoe wrote:
| "When you strike at a king, you must kill him." -- Emerson
| dangitman wrote:
| "You come at the king, you best not miss." - Omar
| sllewe wrote:
| or an alternate - "Come at the king - you best not miss" --
| Omar Little.
| ionwake wrote:
| the real OG comment here
| macintux wrote:
| "How do you shoot the devil in the back? What if you
| miss?"
| timy2shoes wrote:
| "the King stay the King." --- D'Angelo Barksdale
| sirspacey wrote:
| "Original King Julius is on the line." - Sacha Baron
| Cohen
| selcuka wrote:
| King _Julien_
| ropable wrote:
| "When you play the game of thrones, you win or you die." -
| Cersei Lannister
| deepGem wrote:
 | Why is it so hard to just accept this and be transparent
 | about motives? It's fair to say "we were not aligned with
 | Sam, we tried an ouster, it didn't pan out, so the best
 | thing for us to do is to leave and let Sam pursue his
 | path", which the entire company has vouched for.
 |
 | Instead, you get to see grey area after grey area.
| widowlark wrote:
 | I'd imagine that level of honesty could still lead to
 | billions lost in shareholder value - thus the grey area.
 | Market obfuscation is a real thing.
| stagger87 wrote:
 | It's in nobody's best interest to do this, especially when
 | there is so much money at play.
| rvnx wrote:
| A bit ironic for a non-profit
| mewpmewp2 wrote:
 | As I understand it, they are going to stop being a
 | non-profit soonish now?
| dragonwriter wrote:
| Everyone involved works at and has investments in a for-
| profit firm.
|
 | The fact that it has a structure that subordinates it to
 | the board of a non-profit would be only tangential to the
 | interests involved, even if that were meaningful and not
 | just the lingering vestige of the (arguably deceptive)
 | founding that the combined organization was working on
 | getting rid of.
| startupsfail wrote:
| "the entire company has vouched for" is inconsistent with
| what we see now. Low/mid ranking employees were obviously
| tweeting in alignment with their management and by request.
| jjulius wrote:
 | Because, for some weird reason, our culture has
 | collectively decided that even if most of us are capable
 | of reading between the lines to understand what's _really_
 | being said or happening, it's often wrong and bad to be
 | honest and transparent, and we should put the most
 | positive spin possible on it. It's everywhere, especially
 | in professional and political environments.
| FactKnower69 wrote:
| McKinsey MBA brain rot seeping into all levels of culture
| cedws wrote:
| That's giving too much credit to McKinsey. I'd argue it's
| systemic brainrot. Never admit mistakes, never express
| yourself, never be honest. Just make up as much bullshit
| as possible on the fly, say whatever you have to pacify
| people. Even just say bullshit 24/7.
|
| Not to dunk on Mira Murati, because this note is pretty
| cookie cutter, but it exemplifies this perfectly. It says
| nothing about her motivations for resigning. It bends
| over backwards to kiss the asses of the people she's
| leaving behind. It could ultimately be condensed into two
| words: "I've resigned."
| Earw0rm wrote:
| It's a management culture which is almost colonial in
| nature, and seeks to differentiate itself from a "labor
| class" which is already highly educated.
|
 | Never spook the horses. Never show the team, or the
 | public, what's going on behind the curtain... or even that
 | there is anything going on at all. At all times present
 | the appearance of a swan gliding serenely across a lake.
|
| Because if you show humanity, those other humans might
| cotton on to the fact that you're not much different to
| them, and have done little to earn or justify your
| position of authority.
|
| And that wouldn't do at all.
| NoGravitas wrote:
| > Just make up as much bullshit as possible on the fly,
| say whatever you have to pacify people.
|
| Probably why AI sludge is so well suited to this
| particular cultural moment.
| discordance wrote:
 | For a counter-example of what open and transparent
 | communication from a C-level tech person can look like,
 | read what the spaCy founder blogged a few months ago:
|
| https://honnibal.dev/blog/back-to-our-roots
| vincnetas wrote:
 | The stakes are orders of magnitude lower in spaCy's case
 | compared to OpenAI (for the announcer and for the people
 | around them). It's easier to just be yourself when you're
 | back at square one.
| lotsofpulp wrote:
| It is human nature to use plausible deniability to play
| politics and fool one's self or others. You will get
| better results in negotiations if you allow the opposing
| party to maintain face (i.e. ego).
|
| See flirting as a more basic example.
| kyawzazaw wrote:
| not for two sigma
| bergen wrote:
 | This is not a culture thing, IMO. Being honest and
 | transparent makes you vulnerable to exploitation, which is
 | often a bad thing for the ones being honest and
 | transparent in a highly competitive area.
| jjulius wrote:
| Being dishonest and cagey only serves to build public
| distrust in your organization, as has happened with
| OpenAI over the past couple of months. Just look at all
| of the comments throughout this thread for proof of that.
|
 | Edit: Shoot, look at the general level of distrust that
 | the populace puts in politicians.
| fsndz wrote:
 | Hypocrisy has to be at the core of every corporate or
 | political environment I have observed recently. I can
 | count the occasions where telling the simple truth is
 | helpful. Even the people who tell you to tell the truth
 | are often the ones incapable of handling it.
| dragonelite wrote:
 | From experience, unless the person mentions their next
 | "adventure" (within like a couple of months) or gig, it
 | usually means a manager or C-suite person got axed and was
 | given the option to gracefully exit.
| fsndz wrote:
| true
| deepGem wrote:
 | Judging by the barrage of exits following Mira's
 | resignation, it does look like Sam fired her; the team got
 | wind of this and is now quitting in droves. This is the
 | thing about lying and being polite: you can't hide the
 | truth for long.
 |
 | Mira's latest one-liner tweet, "OpenAI is nothing without
 | its people", speaks volumes.
| mewpmewp2 wrote:
 | Because if you are a high-level executive and you are
 | transparent about those things, and it backfires, it will
 | backfire hard on your future opportunities, since
 | companies will view you as a potential liability. So it is
 | always the safer and wiser option to say nothing if there
 | is any risk of it backfiring, and you do the polite PR
 | messaging every single time. There's nothing to be gained
 | at the individual level from being transparent, only
 | things to be risked.
| deepGem wrote:
 | I doubt someone of Mira or Ilya's calibre has to worry
 | about future opportunities. They can very well craft their
 | own.
 |
 | Saying "I was wrong", or "we failed", should not be this
 | complicated.
|
| I do however agree that there is nothing to be gained and
| everything to be risked. So why do it.
| dh2022 wrote:
 | Their (Ilya and Mira's) perspective on anything is so far
 | removed from your (and my) perspective that trying to
 | understand the personal feelings behind their resignations
 | is an enterprise doomed to failure.
| ssnistfajen wrote:
| People, including East Asians, frequently claim "face" is
| an East Asian cultural concept despite the fact that it is
| omnipresent in all cultures. It doesn't matter if outsiders
| have figured out what's actually going on. The only thing
| that matters is saving face.
| blitzar wrote:
| We lie about our successes why would we not lie about our
| failures?
| sumedh wrote:
| > Why is it so hard to just accept this and be transparent
| about motives
|
| You are asking the question, why are politicians not
| honest?
| golergka wrote:
| Among other perfectly reasonable theories mentioned here,
| people burn out.
| optimalsolver wrote:
| This isn't a delivery app we're talking about.
|
| "Burn out" doesn't apply when the issue at hand is AGI (and,
| possibly, superintelligence).
| kylehotchkiss wrote:
| That isn't fair. People need a break. "AGI" /
| "superintelligence" is not a cause with so much potential
| we should just damage a bunch of people on the route to it.
| jcranmer wrote:
 | Why would you think burnout doesn't apply? It should be a
 | possibility in pretty much any pursuit, since it's
 | primarily about investing so much energy in one direction
 | that you can't psychologically bring yourself to invest
 | any more.
| minimaxir wrote:
| Software is developed by humans, who can burn out for any
| reason.
| agentcoops wrote:
| Burnout, which doesn't need scare quotes, very much still
| applies for the humans involved in building AGI -- in fact,
| the burnout potential in this case is probably an order of
| magnitude higher than the already elevated chances when
| working through the exponential growth phase of a startup
| at such scale ("delivery apps" etc) since you'd have an
| additional scientific or societal motivation to ignore
| bodily limits.
|
| That said, I don't doubt that this particular departure was
| more the result of company politics, whether a product of
| the earlier board upheaval, performance related or simply
| the decision to bring in a new CTO with a different skill
| set.
| PoignardAzur wrote:
| Yeah, if she wasn't deniably fired, then burnout is what
| Ockham's Razor leaves.
| m3kw9 wrote:
| A few short years is a prediction with lots of ifs and
| unknowns.
| romanovcode wrote:
 | Maybe she has inside info that it's not "around the
 | corner". Making bigger and bigger models does not make
 | AGI, not to mention the exponential increase in power
 | requirements for these models, which would be basically
 | unfeasible for the mass market.
 |
 | Maybe, just maybe, we've reached diminishing returns with
 | AI, for now at least.
| steinvakt wrote:
 | People have been saying we've reached the limits of
 | AI/LLMs since GPT-4. Using o1-preview (which is barely a
 | few weeks old) for coding, where it is definitely an
 | improvement, suggests there are still solid improvements
 | going on, don't you think?
| samatman wrote:
| Continued improvement is returns, making it inherently
| compatible with a diminishing returns scenario. Which I
| also suspect we're in now: there's no comparing the jump
| between GPT3.5 and GPT4 with GPT4 and any of the subsequent
| releases.
|
| Whether or not we're leveling out, only time will tell.
| That's definitely what it looks like, but it might just be
| a plateau.
| xabadut wrote:
| + there are many untapped sources of data that contain
| information about our physical world, such as video
|
| the curse of dimensionality though...
| tomrod wrote:
| My take is that Altman recognizes LLM winter is coming and is
| trying to entrench.
| chinathrow wrote:
| Looking at ChatGPT or Claude coding output, it's already
| here.
| criticalfault wrote:
| Bad?
|
| I just tried Gemini and it was useless.
| andrewinardeer wrote:
| Google ought to hang its head in utter disgrace over the
| putrid swill they have the audacity to peddle under the
| Gemini label.
|
| Their laughably overzealous nanny-state censorship,
| paired with a model so appallingly inept it would
| embarrass a chatbot from the 90s, makes it nothing short
| of highway robbery that this digital dumpster fire is
| permitted to masquerade as a product fit for public
| consumption.
|
| The sheer gall of Google to foist this steaming pile of
| silicon refuse onto unsuspecting users borders on
| fraudulent.
| mnk47 wrote:
| Starting to wonder why this is so common in LLM
| discussions at HN.
|
| Someone says "X is the model that really impressive. Y is
| good too."
|
| Then someone responds "What?! I just used Z and it was
| terrible!"
|
| I see this at least once in practically every AI thread
| tomrod wrote:
| Humans understand mean but struggle with variance.
| rpmisms wrote:
| It depends on what you're writing. GPT-4 can pump out
| average React all day long. It's next to useless with
| Laravel.
| fzzzy wrote:
| You're the one that chose to try Gemini for some reason.
| dartos wrote:
| I don't think we're gonna see a winter. LLMs are here to
| stay. Natural language interfaces are great. Embeddings are
| incredibly useful.
|
| They just won't be the hottest thing since smartphones.
| eastbound wrote:
| It's a glorified grammar corrector?
| CharlieDigital wrote:
| Not really.
|
| I think actually the best use case for LLMs is
| "explainer".
|
| When combined with RAG, it's fantastic at taking a
| complex corpus of information and distilling it down into
| more digestible summaries.
| bot347851834 wrote:
| Can you share an example of a use case you have in mind
| of this "explainer + RAG" combo you just described?
|
 | I think that RAG and RAG-based tooling around LLMs is
 | going to be the clear way forward for most companies with
 | a properly constructed knowledge base, but I wonder what
 | you mean by "explainer"?
|
| Are you talking about asking an LLM something like "in
| which way did the teams working on project X deal with Y
| problem?" and then having it breaking it down for you? Or
| is there something more to it?
| nebula8804 wrote:
| I'm not the OP but I got some fun ones that I think are
| what you are asking? I would also love to hear others
| interesting ideas/findings.
|
 | 1. I got this medical provider that has a webapp that
 | downloads GraphQL data (basically JSON) to the frontend
 | and shows _some_ of the data in the template while hiding
 | the rest. Furthermore, I see that they hide even more info
 | after I pay the bill. I download all the data, combine it
 | with other historical data that I have downloaded, and
 | dump it into the LLM. It spits out interesting insights
 | about my health history, ways in which I have been
 | unusually charged by my insurance, and the speed at which
 | the company operates, based on all the historical data
 | showing the time between appointment and bill, adjusted
 | for the time of year. It then formats everything into an
 | open format that is easy for me to self-host (HTML + JS
 | tables). It's a tiny way to wrestle back control from the
 | company until they wise up.
|
 | 2. Companies are increasingly allowing customers to
 | receive a "backup" of all the data they have on them
 | (thanks, EU and California). For example, Burger
 | King/Wendy's allow this. What do they give you when you
 | request your data? A zip file filled with a bunch of crud
 | from their internal systems. No worries: dump it into the
 | LLM and it tells you everything the company knows about
 | you in an easy-to-understand format (bullet points in this
 | case). You know when the company managed to track you, how
 | much they "remember", how much money they got out of you,
 | your behaviors, etc.
| tomrod wrote:
| #1 would be a good FLOSS project to release out.
|
| I don't understand enough about #2 to comment, but it's
| certainly interesting.
| CharlieDigital wrote:
| If you go to https://clinicaltrials.gov/, you can see
| almost every clinical trial that's registered in the US.
|
| Some trials have their protocols published.
|
| Here's an example trial:
| https://clinicaltrials.gov/study/NCT06613256
|
| And here's the protocol:
| https://cdn.clinicaltrials.gov/large-
| docs/56/NCT06613256/Pro... It's actually relatively short
| at 33 pages. Some larger trials (especially oncology
| trials) can have protocols that are 200 pages long.
|
| One of the big challenges with clinical trials is making
| this information more accessible to both patients (for
| informed consent) and the trial site staff (to avoid
| making mistakes, helping answer patient questions, even
| asking the right questions when negotiating the contract
| with a sponsor).
|
| The gist of it here is exactly like you said: RAG to pull
| back the relevant chunks of a complex document like this
| and then LLM to explain and summarize the information in
| those chunks that makes it easier to digest. That
| response can be tuned to the level of the reader by
| adding simple phrases like "explain it to me at a high
| school level".
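 |
 | A minimal sketch of that retrieve-then-explain pattern,
 | assuming the OpenAI Python client (the model names and the
 | toy protocol chunks below are illustrative, not the actual
 | system described above):
 |
 |     import math
 |     from openai import OpenAI
 |
 |     client = OpenAI()  # assumes OPENAI_API_KEY is set
 |
 |     # Toy corpus standing in for chunks of a protocol.
 |     chunks = [
 |         "Participants receive 10 mg of the study drug"
 |         " daily for 12 weeks.",
 |         "Exclusion criteria include pregnancy and prior"
 |         " cardiac events.",
 |         "Blood samples are collected at weeks 0, 6, 12.",
 |     ]
 |
 |     def embed(texts):
 |         resp = client.embeddings.create(
 |             model="text-embedding-3-small", input=texts)
 |         return [d.embedding for d in resp.data]
 |
 |     def cosine(a, b):
 |         dot = sum(x * y for x, y in zip(a, b))
 |         na = math.sqrt(sum(x * x for x in a))
 |         nb = math.sqrt(sum(y * y for y in b))
 |         return dot / (na * nb)
 |
 |     question = "How often will my blood be drawn?"
 |     chunk_vecs = embed(chunks)
 |     q_vec = embed([question])[0]
 |
 |     # Retrieve: keep the chunk most similar to the query.
 |     best = max(zip(chunks, chunk_vecs),
 |                key=lambda cv: cosine(q_vec, cv[1]))[0]
 |
 |     # Explain: summarize the chunk for the reader.
 |     answer = client.chat.completions.create(
 |         model="gpt-4o-mini",
 |         messages=[
 |             {"role": "system",
 |              "content": "Explain at a high school level."},
 |             {"role": "user",
 |              "content": f"Context: {best}\n\n{question}"},
 |         ],
 |     )
 |     print(answer.choices[0].message.content)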
| theGnuMe wrote:
| What's your experience with clinical trials?
| CharlieDigital wrote:
| Built regulated document management systems for
| supporting clinical trials for 14 years of my career.
|
 | For the last system, I led a team competing for the
 | TransCelerate Shared Investigator Portal (we were one of
 | the finalist vendors).
|
| Little side project: https://zeeq.ai
| stocknoob wrote:
| TIL Math Olympiad problems are simple grammar exercises.
| dartos wrote:
| They do way more than correcting grammar, but tbf, they
| did make something like 10,000 submissions to the math
| Olympiad to get that score.
|
| It's not like it'll do it consistently.
|
| Just a marketing stunt.
| ben_w wrote:
| If you consider responding to this:
|
| "oi i need lik a scrip or somfing 2 take pic of me screen
| evry sec for min, mac"
|
| with an actual (and usually functional) script to be
| "glorified grammar corrector", then sure.
| ForHackernews wrote:
| They're useful in some situations, but extremely expensive
| to operate. It's unclear if they'll be profitable in the
| near future. OpenAI seems to be claiming they need an extra
| $XXX billion in investment before they can...?
| xtracto wrote:
| I just made a (IMHO) cool test with OpenAI/Linux/TCL-TK:
|
| "write a TCL/tk script file that is a "frontend" to the ls
| command: It should provide checkboxes and dropdowns for the
| different options available in bash ls and a button "RUN"
| to run the configured ls command. The output of the ls
| command should be displayed in a Text box inside the
| interface. The script must be runnable using tclsh"
|
 | It didn't get it right the first time (for some reason it
 | wants to insert a `mainloop` instruction, which isn't a
 | Tcl command), but after several corrections I got an ugly
 | but pretty functional UI.
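 |
 | For comparison, a minimal Python/tkinter analogue of that
 | experiment (a sketch, not the generated Tcl script; the
 | handful of flags shown is an arbitrary subset):
 |
 |     import subprocess
 |     import tkinter as tk
 |
 |     FLAGS = {"-l": "long listing", "-a": "all files",
 |              "-h": "human-readable sizes"}
 |
 |     root = tk.Tk()
 |     root.title("ls frontend")
 |     flag_vars = {}
 |     for flag, label in FLAGS.items():
 |         var = tk.BooleanVar()
 |         tk.Checkbutton(root, text=f"{flag} ({label})",
 |                        variable=var).pack(anchor="w")
 |         flag_vars[flag] = var
 |
 |     output = tk.Text(root, height=20, width=80)
 |
 |     def run():
 |         # Build and run the configured ls command.
 |         cmd = ["ls"] + [f for f, v in flag_vars.items()
 |                         if v.get()]
 |         res = subprocess.run(cmd, capture_output=True,
 |                              text=True)
 |         output.delete("1.0", tk.END)
 |         output.insert(tk.END, res.stdout or res.stderr)
 |
 |     tk.Button(root, text="RUN", command=run).pack()
 |     output.pack()
 |     # Python/tkinter needs an explicit mainloop; in tclsh
 |     # you'd use "vwait forever" instead (there is no
 |     # "mainloop" command in Tcl).
 |     root.mainloop()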
|
 | Imagine a Linux distro that uses some kind of
 | LLM-generated interface to make its power more accessible.
 | Maybe even "self-healing".
|
| LLMs don't stop amazing me personally.
| ethbr1 wrote:
| The issue (and I think what's behind the thinking of AI
| skeptics) is previous experience with the sharp edge of
| the Pareto principle.
|
 | Current LLMs being 80% of the way to 100% useful doesn't
 | mean there's only 20% of the effort left.
|
| It means we got the lowest-hanging 80% of utility.
|
| Bridging that last 20% is going to take a ton of work.
| Indeed, maybe 4x the effort that getting this far
| required.
|
| And people also overestimate the utility of a solution
| that's randomly wrong. It's exceedingly difficult to
| build reliable systems when you're stacking a 5% wrong
| solution on another 5% wrong solution on another 5% wrong
| solution...
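 |
 | A worked illustration of that stacking effect, assuming
 | independent 95%-correct components:
 |
 |     p = 0.95  # each component is right 95% of the time
 |     for n in (1, 3, 5, 10):
 |         ok = p ** n
 |         print(f"{n:2d} stacked components: "
 |               f"{ok:.1%} chance the chain is right")
 |     # 3 components -> 85.7%; 10 -> 59.9%. Reliability
 |     # erodes quickly.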
| nebula8804 wrote:
 | Thank you! You have explained the exact issue I (and
 | probably many others) am seeing trying to adopt AI for
 | work. It is because of this that I don't worry about AI
 | taking our jobs for now. You still need somewhat
 | foundational knowledge in whatever you are trying to do in
 | order to get that remaining 20%. Sometimes this means
 | pushing back against the AI's solution, other times it
 | means reframing the question, and other times it's just
 | giving up and doing the work yourself. I keep seeing all
 | these impressive toy demos, and my experience (Angular and
 | Flask dev) seems to indicate that it is not going to
 | replace any subject matter expert anytime soon. (And I am
 | referring to all three major AI players, as I regularly
 | and religiously test all their releases.)
|
| >And people also overestimate the utility of a solution
| that's randomly wrong. It's exceedingly difficult to
| build reliable systems when you're stacking a 5% wrong
| solution on another 5% wrong solution on another 5% wrong
| solution...
|
| I call this the merry go round of hell mixed with a cruel
| hall of mirrors. LLM spits out a solution with some
| errors, you tell it to fix the errors, it produces other
| errors or totally forgets important context from one
| prompt ago. You then fix those issues, it then introduces
| other issues or messes up the original fix. Rinse and
| repeat. God help you if you don't actually know what you
| are doing, you'll be trapped in that hall of mirrors for
| all of eternity slowly losing your sanity.
| theGnuMe wrote:
| and here we are arguing for internet points.
| tomrod wrote:
| Much more meaningful to this existentialist.
| therouwboat wrote:
 | Why make a tool when you can just ask the AI to give you
 | the file list or files that you need?
| dartos wrote:
 | It can work with things of very limited scope, like what
 | you describe.
|
| I wrote some data visualizations with Claude and aider.
|
| For anything that someone would actually pay for
| (expecting the robustness of paid-for software) I don't
| think we're there.
|
| The devil is in the details, after all. And detail is
| what you lose when running reality through a statistical
| model.
| Yizahi wrote:
 | LLMs as programs are here to stay. The issue is the
 | expenses/revenue ratio all these LLM corpos have.
 | According to a Sequoia analyst (so not some anon on a
 | forum) there is a giant money hole in that industry, and
 | "giant" doesn't even begin to describe it (IIRC it was
 | $600bln this summer). That whole industry will definitely
 | see winter soon, even if everything Altman says were true.
| 015a wrote:
| You just described what literally anyone who says "AI
| Winter" means; the technology doesn't go away, companies
| still deploy it and evolve it, customers still pay for it,
| it just stops being so attractive to massive funding and we
| see fewer foundational breakthroughs.
| piuantiderp wrote:
| A cash out
| hnthrowaway6543 wrote:
| It's likely hard for them to look at what their life's work is
| being used for. Customer-hostile chatbots, an excuse for
| executives to lay off massive amounts of middle class workers,
| propaganda and disinformation, regurgitated SEO blogspam that
| makes Google unusable. The "good" use cases seem to be limited
| to trivial code generation and writing boilerplate marketing
| copy that nobody reads anyway. Maybe they realized that if AGI
| were to be achieved, it would be squandered on stupid garbage
| regardless.
|
| Now I am become an AI language model, destroyer of the
| internet.
| f0e4c2f7 wrote:
| There is one clear answer in my opinion:
|
| There is a secondary market for OpenAI stock.
|
| It's not a public market so nobody knows how much you're making
| if you sell, but if you look at current valuations it must be a
| lot.
|
 | In that context, it would be quite hard not to sell,
 | whether you leave or stay. What if OAI loses the lead?
 | What if open source wins? Keeping the stock seems like the
 | actually hard thing to me, and I expect to see many others
 | leave (like early Googlers or Facebook employees).
|
| Sure it's worth more if you hang on to it, but many think "how
| many hundreds of M's do I actually need? Better to derisk and
| sell"
| chatcode wrote:
| What would you do if
|
| a) you had more money than you'll ever need in your lifetime
|
| b) you think AI abundance is just around the corner, likely
| making everything cheaper
|
| c) you realize you still only have a finite time left on this
| planet
|
| d) you have non-AGI dreams of your own that you'd like to
| work on
|
| e) you can get funding for anything you want, based on your
| name alone
|
| Do you keep working at OpenAI?
| Apocryphon wrote:
 | What if she believes AGI is imminent and is relocating to
 | a remote location to build a Faraday-shielded survival
 | bunker?
| wantsanagent wrote:
| This is now my head-canon.
| tempodox wrote:
| Laputan machine!
| ben_w wrote:
| Then she hasn't ((read or watched) and (found plausible)) any
| of the speculative fiction about how that's not enough to
| keep you safe.
| Apocryphon wrote:
| No one knows how deep the bunker goes
| ben_w wrote:
| We can be reasonably confident of which side of the
| Mohorovicic discontinuity it may be, as existing tools
| would be necessary to create it in the first place.
| paxys wrote:
| Regardless of where AI currently is and where it is going, you
| don't simply quit as CTO of the company that is leading the
| space _by far_ in terms of technology, products, funding,
| revenue, popularity, adoption and just about everything else.
| She was fired, plain and simple.
| rvnx wrote:
 | You can leave and be happy with $30M+ in stock, plus good
 | prospects of easily finding another job.
| piuantiderp wrote:
 | Or you are disgusted and leave. Are there things more
 | important than money? The OpenAI founders certainly sold
 | themselves as not-in-it-for-the-money.
| noiwillnot wrote:
| > leading the space by far in terms of technology, products,
| funding, revenue, popularity, adoption and just about
| everything else
|
 | I am not 100% sure that they are still clearly leading on
 | the technology part, but I agree on all the other counts.
| lacker wrote:
| It's easy to have missed this part of the story in all the
| chaos, but from the NYTimes in March:
|
| _Ms. Murati wrote a private memo to Mr. Altman raising
| questions about his management and also shared her concerns
| with the board. That move helped to propel the board's decision
| to force him out._
|
| https://www.nytimes.com/2024/03/07/technology/openai-executi...
|
| It should be no surprise if Sam Altman wants executives who
| opposed his leadership, like Mira and Ilya, out of the company.
| When you're firing a high-level executive in a polite way, it's
| common to let them announce their own departure and frame it
| the way they want.
| startupsfail wrote:
| Greg Brockman, OpenAI President and co-founder is also on
| extended leave of absence.
|
| And John Schulman, and Peter Deng are out already. Yet the
| company is still shipping, like no other. Recent multimodal
| integrations and benchmarks of o1 are outstanding.
| fairity wrote:
 | Quite interesting that this comment is downvoted when the
 | content is factually correct and pertinent.
 |
 | It's a very relevant fact that Greg Brockman recently left
 | of his own volition.
 |
 | Greg was aligned with Sam during the coup. So the fact
 | that Greg left lends more credence to the idea that Murati
 | is leaving of her own volition.
| frakkingcylons wrote:
| > It's a very relevant fact that Greg Brockman recently
| left on his own volition.
|
| Except that isn't true. He has not resigned from OpenAI.
| He's on extended leave until the end of the year.
|
| That could become an official resignation later, and I
| agree that that seems more likely than not. But stating
| that he's left for good as of right now is misleading.
| meiraleal wrote:
| > Quite interesting that this comment is downvoted when
| the content is factually correct and pertinent.
|
| >> Yet the company is still shipping, like no other.
|
 | This is factually wrong. Just today Meta (which I despise)
 | shipped more than OpenAI has in a long time.
| vasco wrote:
| > Yet the company is still shipping, like no other
|
| If executives / high level architects / researchers are
| working on this quarter's features something is very wrong.
| The higher you get the more ahead you need to be working,
| C-level departures should only have an impact about a year
| down the line, at a company of this size.
| ttcbj wrote:
| This is a good point. I had not thought of it this way
| before.
| Aeolun wrote:
| You may find that this is true in many companies.
| mise_en_place wrote:
| Funny, at every corporation I've worked for, every
| department was still working on _last_ quarter 's
| features. FAANG included.
| dartos wrote:
| That's exactly what they were saying. The department are
| operating behind the executives.
| saalweachter wrote:
| C-level employees are about setting the company's
| culture. Clearing out and replacing the C-level employees
| ultimately results in a shift in company culture, a year
| or two down the line.
| ac29 wrote:
| > the company is still shipping, like no other
|
| Meta, Anthropic, Google, and others all are shipping state
| of the art models.
|
| I'm not trying to be dismissive of OpenAI's work, but they
| are absolutely not the only company shipping very large
| foundation models.
| pama wrote:
 | Perhaps you haven't tried o1-preview or advanced voice if
 | you call all the rest SOTA.
| Aeolun wrote:
 | If only they'd release the advanced voice thing as an API.
 | Their TTS is already pretty good, but I wouldn't say no to
 | an improvement.
| g8oz wrote:
| Indeed Anthropic is just as good, if not better in my
| sample size of one. Which is great because OpenAI as an
| org gives shady vibes - maybe it's just Altman, but he is
| running the show.
| MavisBacon wrote:
| Claude is pretty brilliant.
| RobertDeNiro wrote:
| Greg's wife is pretty sick. For all we know this is
| unrelated to the drama.
| theGnuMe wrote:
| Sorry to hear that, all the best wishes to them.
| imdsm wrote:
| Context (I think):
| https://x.com/gdb/status/1744446603962765669
|
| Big fan of Greg, and I think the motivation behind AGI is
| sound here. Even what we have now is a fantastic tool, if
| people decide to use it.
| moondistance wrote:
| VP Research Barret Zoph and Chief Research Officer Bob
| McGrew also announced their departures this evening.
| vicentwu wrote:
 | Past efforts led to today's products. We need to wait to
 | see the real impact on their ability to ship.
| dartos wrote:
| > like no other
|
| Really? Anthropic seems to be popping off right now.
|
| Kagi isn't exactly in the AI space, but they ship features
| pretty frequently.
|
| OpenAI is shipping incremental improvements to its chatgpt
| product.
| jjtheblunt wrote:
| "popping off" means what?
| dartos wrote:
| Modern colloquialism generally meaning
| Moving/advancing/growing/gaining popularity very fast
| elbear wrote:
| Are they? In my recent experience, ChatGPT seems to have
| gotten better than Claude again. Plus their free limit is
| more strict, so this experience is on the free account.
| 0xKromo wrote:
 | It's just tribalism. People tend to find a team to root
 | for when there is a competition. Which one is better is
 | subjective at this point, IMO.
| jpeg-irl wrote:
 | The features shipped by Anthropic in the past month are
 | far more practical and provide clearer value for builders
 | than o1's chain-of-thought improvements.
 |
 | - Prompt caching: 90% savings on large system prompts for
 | 5 minutes of calls. This is amazing.
 |
 | - Contextual RAG: while not a groundbreaking idea, it is
 | important thinking and a method for better vector
 | retrieval.
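 |
 | For reference, a sketch of the prompt-caching beta as
 | documented around the time of this thread (the model name
 | and prompts are placeholders):
 |
 |     import anthropic
 |
 |     client = anthropic.Anthropic()  # needs ANTHROPIC_API_KEY
 |
 |     BIG_SYSTEM_PROMPT = "..."  # large, stable prompt
 |
 |     resp = client.messages.create(
 |         model="claude-3-5-sonnet-20240620",
 |         max_tokens=1024,
 |         extra_headers={
 |             "anthropic-beta": "prompt-caching-2024-07-31"},
 |         system=[{
 |             "type": "text",
 |             "text": BIG_SYSTEM_PROMPT,
 |             # Marks this block for caching; repeat calls
 |             # within the ~5 minute TTL read it back at a
 |             # ~90% discount on those input tokens.
 |             "cache_control": {"type": "ephemeral"},
 |         }],
 |         messages=[{"role": "user",
 |                    "content": "First question..."}],
 |     )
 |     print(resp.content[0].text)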
| csomar wrote:
| > Yet the company is still shipping, like no other.
|
| I don't see it for OpenAI, I do see it for the competition.
| They have shipped incremental improvements, however, they
| are watering down their current models (my guess is they
| are trying to save on compute?). Copilot has turned into
| garbage and for coding related stuff, Claude is now better
| than gpt-4.
|
| Honestly, their outlook is bleak.
| benterix wrote:
| Yeah, I have the same feeling. It seems like operating
| GPT-4 is too expensive, so they decided to call it
| "legacy" and get rid of it soon, and instead focus on
| cheaper/faster 4o, and also chain its prompts to call it
| a new model.
|
| I understand why they are doing it, but honestly if they
| cancel GPT-4, many people will just cancel their
| subscription.
| mistercheph wrote:
 | In my humble opinion you're wrong. Sora and 4o voice are
 | months old with no signs they're not vaporware, and they
 | still haven't shipped a text model on par with 3.5 Sonnet!
| SkyMarshal wrote:
| _> When you 're firing a high-level executive in a polite
| way, it's common to let them announce their own departure and
| frame it the way they want._
|
| You also give them some distance in time from the drama so
| the two appear unconnected under cursory inspection.
| SadTrombone wrote:
| To be fair she was also one of the employees who signed the
| letter to the board demanding that Altman be reinstated or
| she would leave the company.
| bradleyjg wrote:
| Isn't that even worse? You write to the board, they take
| action on your complaints, and then you change your mind?
| barkingcat wrote:
 | It means that when she opted to reinstate Altman, she
 | didn't have all the information needed to make a decision.
 |
 | Now that she's seen exactly what prompted the previous
 | board to fire Altman, she fires herself, because she
 | understands their decision now.
| hobofan wrote:
| Does that actually mean anything? Didn't 95% of the company
| sign that letter, and soon afterwards many employees stated
| that they felt pressured by a vocal minority of peers and
| supervisors to sign the letter? E.g. if most executives on
| her level already signed the letter, it would have been
| political suicide not to sign it
| saagarjha wrote:
| She was second-in-command of the company. Who else is
| there on her level to pressure her to sign such a thing,
| besides Sam himself?
| mempko wrote:
 | Exactly. Sam Altman wants groupthink: no opposition, no
 | diversity of thought. That's what petty dictators demand.
 | This spells the end of OpenAI, IMO. A huge amount of money
 | will keep it going until it doesn't.
| ren_engineer wrote:
 | Most of the people seem to be leaving due to the direction
 | Altman is taking OpenAI. It went from a charity to him
 | seemingly doing everything possible to monetize it for
 | himself, both directly and indirectly, by trying to raise
 | funds for AI-adjacent, traditionally structured companies
 | he controls.
 |
 | It's probably not a coincidence that she resigned at
 | almost the same time the rumors about OpenAI completely
 | removing the non-profit board are getting confirmed -
 | https://www.reuters.com/technology/artificial-intelligence/o...
| ethbr1 wrote:
| Afaik, he's exceedingly driven to do that, because if they
| run out of money Microsoft gets to pick the carcass clean.
| elAhmo wrote:
 | It would definitely be a difficult thing to walk away
 | from.
 |
 | This is just one more in a series of massive red flags
 | around this company, from the insanely convoluted
 | governance scheme, through the board drama, to the many
 | executives and key people leaving afterwards. It feels
 | like Sam is doing a cleanup, and anyone who opposes him
 | has no place at OpenAI.
 |
 | This, coming around the time there are rumors of a
 | possible change to the corporate structure to be more
 | friendly to investors, is interesting timing.
| shmatt wrote:
 | I feel like this is stating the obvious - but I guess not
 | to many - but a probabilistic syllable generator is not
 | intelligence. It does not understand us, it cannot reason,
 | it can only generate the next syllable.
 |
 | It makes us feel understood in the same way John Edward
 | used to on daytime TV; it's all about how language makes
 | us feel.
 |
 | True AGI... unfortunately we're not even close.
| CooCooCaCha wrote:
| I'm not saying you're wrong but you could use this reductive
| rhetorical strategy to dismiss any AI algorithm. "It's just
| X" is frankly shallow criticism.
| iLoveOncall wrote:
 | And there's nothing wrong with that: the fact that
 | _artificial intelligence_ will never lead to general
 | intelligence isn't exactly a hot take.
| CooCooCaCha wrote:
| That's both a very general and very bold claim. I don't
| think it's unreasonable to say that's too strong of a
| claim given how we don't know what is possible yet and
| there's frankly no good reason to completely dismiss the
| idea of artificial general intelligence.
| NoGravitas wrote:
| I think the existence of biological general intelligence
| is a proof-by-existence for artificial general
| intelligence. But at the same time, I don't think LLM and
| similar techniques are likely in the evolutionary path of
| artificial general intelligence, if it ever comes to
| exist.
| dr_dshiv wrote:
| It's almost trolling at this point, though.
| paxys wrote:
| > to dismiss any AI algorithm
|
| Or even human intelligence
| timr wrote:
| And you can dismiss any argument with your response.
|
| "Your argument is just a reductive rhetorical strategy."
| CooCooCaCha wrote:
| Sure if you ignore context.
|
| "a probabilistic syllable generator is not intelligence,
| it does not understand us, it cannot reason" is a strong
| statement and I highly doubt it's backed by any sort of
| substance other than "feelz".
| timr wrote:
| I didn't ignore any more context than you did, but just I
| want to acknowledge the irony that "context"
| (specifically, here, any sort of memory that isn't in the
| text context window) is _exactly_ what is lacking with
| these models.
|
| For example, even the dumbest dog has a memory, a
| strikingly advanced concept model of the world [1], a
| persistent state beyond the last conversation history,
| and an ability to reason (that doesn't require re-running
| the same conversation sixteen bajillion times in a row).
| Transformer models do not. It's really cool that they can
| input and barf out realistic-sounding text, but let's
| keep in mind the obvious truths about what they are
| doing.
|
| [1] "I like food. Something that smells like food is in
| the square thing on the floor. Maybe if I tip it over
| food will come out, and I will find food. Oh no, the
| person looked at me strangely when I got close to the
| square thing! I am in trouble! I will have to do it when
| they're not looking."
| CooCooCaCha wrote:
| > that doesn't require re-running the same conversation
| sixteen bajillion times in a row
|
 | Let's assume the dog's visual system runs at 60 frames per
 | second. If it takes 1 second to flip a bowl of food over,
 | then that's 60 datapoints of cause-effect data that the
 | dog's brain learned from.
 |
 | Assuming it's the same for humans, let's say I go on a
 | trip to the grocery store for 1 hour. That's 216,000 data
 | points from one trip. Not to mention auditory data, touch,
 | smell, and even taste.
|
| > ability to reason [...] Transformer models do not
|
 | Can you tell me what reasoning is? Why can't transformers
 | reason? Note I said _transformers_, not _LLMs_. You could
 | make a reasonable (hah) case that current LLMs cannot
 | reason (or at least not very well), but why are
 | transformers as an architecture doomed?
|
| What about chain of thought? Some have made the claim
| that chain of thought adds recurrence to transformer
| models. That's a pretty big shift, but you've already
| decided transformers are a dead end so no chance of that
| making a difference right?
| HeatrayEnjoyer wrote:
| This overplayed knee jerk response is so dull.
| svara wrote:
| I truly think you haven't really thought this through.
|
| There's a huge amount of circuitry between the input and the
| output of the model. How do you know what it does or doesn't
| do?
|
 | Human brains "just" output the next couple of milliseconds
 | of muscle activation, given sensory input and internal
 | state.
|
| Edit: Interestingly, this is getting downvotes even though 1)
| my last sentence is a precise and accurate statement of the
| state of the art in neuroscience and 2) it is completely
| isomorphic to what the parent post presented as an argument
| against current models being AGI.
|
| To clarify, I don't believe we're very close to AGI, but
| parent's argument is just confused.
| 015a wrote:
| Did you seriously just use the word "isomorphic"? No wonder
| people believe AI is the next crypto.
| svara wrote:
| Well, AI clearly is the next crypto, haha.
|
| Apologies for the wording but I think you got it and the
| point stands.
|
| I'm not a native speaker and mostly use English in a
| professional science related setting, that's why I sound
| like that sometimes.
|
| isomorphic - being of identical or similar form, shape,
| or structure (m-w). Here metaphorically applied to the
| structure of an argument.
| edouard-harris wrote:
| In what way was their usage incorrect? They simply said
| that the brain just predicts next-actions, in response to
| a statement that an LLM predicts next-tokens. You can
| believe or disbelieve either of those statements
| individually, but the claims are isomorphic in the sense
| that they have the same structure.
| 015a wrote:
| It's not that it was used incorrectly: it's that it isn't a
| word actual humans use, and it's one of a handful of dog
| whistles for "I'm a tech grifter who has at best a
| tenuous grasp on what I'm talking about but would love
| more venture capital". The last time I've personally
| heard it spoken was from Beff Jezos/Guillaume Verdon.
| NoGravitas wrote:
| I think we should delve further into that analysis.
| svara wrote:
| You know, you can just talk to me about my wording. Where
| do I meet those gullible venture investors?
| HarHarVeryFunny wrote:
| > There's a huge amount of circuitry between the input and
| the output of the model
|
| Yeah - but it's just a stack of transformer layers. No
| looping, no memory, no self-modification (learning). Also,
| no magic.
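|
| (A minimal runnable sketch of what "just a stack" means: a
| fixed-depth pure function. The numpy layers below are toy
| stand-ins for transformer blocks, not a real model:)
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     W = [rng.normal(size=(8, 8)) for _ in range(4)]
|
|     def forward(x):
|         # frozen weights, fixed depth, no recurrence:
|         # each call is a pure function of its input
|         for w in W:
|             x = np.tanh(w @ x)
|         return x
|
|     a = forward(np.ones(8))
|     b = forward(np.ones(8))
|     assert np.allclose(a, b)  # no state between calls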
| svara wrote:
| No looping, but you can unroll loops to a fixed depth and
| apply the model iteratively. There obviously is memory
| and learning.
|
| Neuroscience hasn't found the magic dust in our brains
| yet, either. ;)
| HarHarVeryFunny wrote:
| Zero memory inside the model from one input (ie token
| output) to the next (only the KV cache, which is just an
| optimization). The only "memory" is what the model
| outputs and therefore gets to re-consume (and even there
| it's an odd sort of memory since the model itself didn't
| exactly choose what to output - that's a random top-N
| sampling).
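|
| (A toy sketch of that loop. The dummy model below is a
| stand-in for an LLM; the point is that the appended output
| is the only state that persists across steps:)
|
|     import random
|
|     def dummy_model(ctx):
|         # a pure function of the context it is handed
|         return [0.4, 0.3, 0.2, 0.1]
|
|     def generate(model, prompt, n):
|         ctx = list(prompt)
|         for _ in range(n):
|             probs = model(ctx)  # no hidden state carried
|             tok = random.choices(range(4),
|                                  weights=probs)[0]
|             ctx.append(tok)     # re-consumed "memory"
|         return ctx
|
|     print(generate(dummy_model, [0, 1], 5))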
|
| There is no real runtime learning - certainly no weight
| updates. The weights are all derived from pre-training,
| and so the runtime model just represents a frozen chunk
| of learning. Maybe you are thinking of "in-context
| learning", which doesn't update the weights, but is
| rather the ability of the model to use whatever is in the
| context, including having that "reinforced" by
| repetition. This is all a poor substitute for what an
| animal does - continuously learning from experience and
| exploration.
|
| The "magic dust" in our brains, relative to LLMs, is just
| a more advanced and structured architecture, and
| operational dynamics. e.g. We've got the thalamo-cortical
| loop, massive amounts of top-down feedback for
| incremental learning from prediction failure, working
| memory, innate drives such as curiosity (prediction
| uncertainty) and boredom to drive exploration and
| learning, etc, etc. No magic, just architecture.
| svara wrote:
| I'm not entirely sure what you're arguing for. Current AI
| models can still get a lot better, sure. I'm not in the
| AGI in 3 years camp.
|
| But, people in this thread are making philosophically
| very poor points about why that is supposedly so.
|
| It's not "just" sequence prediction, because sequence
| prediction is the very essence of what the human brain
| does.
|
| Your points on learning and memory are similarly weak
| word play. Memory means holding some quantity constant
| over time in the internal state of a model. Learning
| means being able to update those quantities. LLMs
| obviously do both.
|
| You're probably going to be thinking of all sorts of
| obvious ways in which LLMs and humans are different.
|
| But no one's claiming there's an artificial human. What
| does exist is increasingly powerful data processing
| software that progressively encroaches on domains
| previously thought to be that of humans only.
|
| And there may be all sorts of limitations to that, but
| those (sequences, learning, memory) aren't them.
| HeatrayEnjoyer wrote:
| >no memory, no self-modification (learning).
|
| This is also true of those with advanced Alzheimer's
| disease. Are they not conscious as well? If we believe
| they are conscious then memory and learning must not be
| essential ingredients.
| lewhoo wrote:
| I don't think that's a good example. People with
| Alzheimer's have, to put it simply, damaged memory, but
| not a complete lack of it. We're talking about a situation
| where a person wouldn't even be conscious of being a
| human/person unless they were told so as part of the
| current context window. Right?
| HarHarVeryFunny wrote:
| I'm not sure what you're trying to say.
|
| I thought we're talking about intelligence, not
| consciousness, and limitations of the LLM/transformer
| architecture that limit their intelligence compared to
| humans.
|
| In fact LLMs are not only architecturally limited, but
| they also give the impression of being far more
| intelligent than they actually are due to mimicking
| training sources that are more intelligent than the LLM
| itself is.
|
| If you want to bring consciousness into the discussion,
| then that is basically just the brain modelling itself and
| the subjective experience that this gives rise to. I
| expect it arose due to evolutionary adaptive benefit -
| part of being a better predictor (i.e. more intelligent)
| is being better able to model your own behavior and
| experiences, but that's not a must-have for intelligence.
| ttul wrote:
| While it's true that language models are fundamentally based
| on statistical patterns in language, characterizing them as
| mere "probabilistic syllable generators" significantly
| understates their capabilities and functional intelligence.
|
| These models can engage in multistep logical reasoning, solve
| complex problems, and generate novel ideas - going far beyond
| simply predicting the next syllable. They can follow
| intricate chains of thought and arrive at non-obvious
| conclusions. And OpenAI has now shown us that fine-tuning a
| model specifically to plan step by step dramatically improves
| its ability to solve problems that were previously the domain
| of human experts.
|
| Although there is no definitive evidence that state-of-the-
| art language models have a comprehensive "world model" in the
| way humans do, several studies and observations suggest that
| large language models (LLMs) may possess some elements or
| precursors of a world model.
|
| For example, Tegmark and Gurnee [1] found that LLMs learn
| linear representations of space and time across multiple
| scales. These representations appear to be robust to
| prompting variations and unified across different entity
| types. This suggests that modern LLMs may learn rich
| spatiotemporal representations of the real world, which could
| be considered basic ingredients of a world model.
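|
| (The probing method in that paper can be sketched in a few
| lines. The activations below are synthetic stand-ins for
| real LLM hidden states, so this only shows the technique:)
|
|     import numpy as np
|     from sklearn.linear_model import LinearRegression
|
|     rng = np.random.default_rng(0)
|     coords = rng.uniform(-90, 90, size=(1000, 2))
|     W = rng.normal(size=(2, 64))
|     acts = coords @ W + 0.1 * rng.normal(size=(1000, 64))
|
|     probe = LinearRegression().fit(acts, coords)
|     # high R^2 => coordinates are linearly decodable
|     print(probe.score(acts, coords))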
|
| And even if we look at much smaller models like Stable
| Diffusion XL, it's clear that they encode a rich
| understanding of optics [2] within just a few billion
| parameters (3.5 billion to be precise). Generative video
| models like OpenAI's Sora clearly have a world model as they
| are able to simulate gravity, collisions between objects, and
| other concepts necessary to render a coherent scene.
|
| As for AGI, the consensus on Metaculus is that it will arrive
| in the early 2030s. But consider that before GPT-4 arrived, the
| consensus was that full AGI was not coming until 2041 [3].
| The consensus for the arrival date of "weakly general" AGI is
| 2027 [4] (i.e. AGI that doesn't have a robotic physical world
| component). The best tool for achieving AGI is the
| transformer and its derivatives; its scaling keeps going with
| no end in sight.
|
| Citations:
|
| [1] https://paperswithcode.com/paper/language-models-
| represent-s...
|
| [2] https://www.reddit.com/r/StableDiffusion/comments/15he3f4
| /el...
|
| [3] https://www.metaculus.com/questions/5121/date-of-
| artificial-...
|
| [4] https://www.metaculus.com/questions/3479/date-weakly-
| general...
| iLoveOncall wrote:
| > Generative video models like OpenAI's Sora clearly have a
| world model as they are able to simulate gravity,
| collisions between objects, and other concepts necessary to
| render a coherent scene.
|
| I won't expand on the rest, but this is simply nonsensical.
|
| The fact that Sora generates output that matches its
| training data doesn't show that it has a concept of
| gravity, collisions between objects, or anything else. It has
| a "world model" the same way a photocopier has a "document
| model".
| svara wrote:
| My suspicion is that you're leaving some important parts
| of your logic unstated, such as a belief in a magical
| property within humans of "understanding", which you
| don't define.
|
| The ability of video models to generate novel video
| consistent with physical reality shows that they have
| extracted important invariants - physical law - out of
| the data.
|
| It's probably better not to muddle the discussion with
| ill defined terms such as "intelligence" or
| "understanding".
|
| I have my own beef with the AGI is nigh crowd, but this
| criticism amounts to word play.
| phatfish wrote:
| It feels like if these image and video generation models
| were really resolving some fundamental laws from the
| training data, they should at least be able to re-create
| an image at a different angle.
| some1else wrote:
| "Allegory of the cave" comes to mind, when trying to
| describe the understanding that's missing from diffusion
| models. I think a super-model with such qualifications
| would require a number of ControlNets in non-visual
| domains to be able to encode understanding of the
| underlying physics. Diffusion models can render
| permutations of whatever they've seen fairly well without
| that, though.
| svara wrote:
| I'm very familiar with the allegory of the cave, but I'm
| not sure I understand where you're going with the analogy
| here.
|
| Are you saying that it is not possible to learn about
| dynamics in a higher dimensional space from a lower
| dimensional projection? This is clearly not true in
| general.
|
| E.g., video models learn that even though they're only
| ever seeing and outputting 2d data, objects have
| different sides in a fashion that is consistent with our
| 3d reality.
|
| The distinction you (and others in this thread) are
| making is purely one of degree - how much generalization
| has been achieved, and how well - versus one of category.
| PollardsRho wrote:
| > its scaling keeps going with no end in sight.
|
| Not only are we within eyesight of the end, we're more or
| less there. o1 isn't just scaling up parameter count 10x
| again and making GPT-5, because that's not really an
| effective approach at this point in the exponential curve
| of parameter count and model performance.
|
| I agree with the broader point: it may well be consistent
| with current neuroscience that our brains are doing nothing
| more than predicting next inputs in a broadly similar way,
| and any categorical distinction between AI and human
| intelligence seems quite challenging to draw.
|
| I disagree that we can draw a line from scaling current
| transformer models to AGI, however. A model that is great
| for communicating with people in natural language may not
| be the best for deep reasoning, abstraction, unified
| creative visions over long-form generations, motor control,
| planning, etc. The history of computer science is littered
| with simple extrapolations from existing technology that
| completely missed the need for a paradigm shift.
| versteegen wrote:
| The fact that OpenAI created and released o1 doesn't mean
| they won't also scale models upwards or don't think it's
| their best hope. There's been plenty said implying that
| they are.
|
| I definitely agree that AGI isn't just a matter of
| scaling transformers, and also as you say that they "may
| not be the best" for such tasks. (Vanilla transformers
| are extremely inefficient.) But the really important
| point is that transformers _can_ do things such as
| abstract, reason, form world models and theories of
| minds, etc, _to a significant degree_ (a much greater
| degree than virtually anyone would have predicted 5-10
| years ago), all learnt _automatically_. It shows these
| problems are actually tractable for connectionist machine
| learning, without a paradigm shift as you and many others
| allege. That is the part I disagree with. But more
| breakthroughs needed.
| ttul wrote:
| To wit: OpenAI was until quite recently investigating
| having TSMC build a dedicated semiconductor fab to
| produce OpenAI chips [1]:
|
| (Translated from Chinese) > According to industry
| insiders, OpenAI originally actively negotiated with TSMC
| to build a dedicated wafer factory. However, after
| evaluating the development benefits, it shelved the plan
| to build a dedicated wafer factory. Strategically, OpenAI
| sought cooperation with American companies such as
| Broadcom and Marvell to develop its own ASIC chips, with
| OpenAI expected to become one of Broadcom's top four
| customers.
|
| [1] https://money.udn.com/money/story/5612/8200070
| (Chinese)
|
| Even if OpenAI doesn't build its own fab -- a wise move,
| if you ask me -- the investment required to develop an
| ASIC on the very latest node is eye watering. Most people
| - even people in tech - just don't have a good
| understanding of how "out there" semiconductor
| manufacturing has become. It's basically a dark art at
| this point.
|
| For instance, TSMC themselves [2] don't even know at this
| point whether the A16 node chosen by OpenAI will require
| using the forthcoming High NA lithography machines from
| ASML. The High NA machines cost nearly twice as much as
| the already exceptional Extreme Ultraviolet (EUV)
| machines do [3]. At close to $400M each, this is simply eye
| watering.
|
| I'm sure some gurus here on HN have a more up to date
| idea of the picture around A16, but the fundamental news
| is this: If OpenAI doesn't think scaling will be needed
| to get to AGI, then why would they be considering
| spending many billions on the latest semiconductor tech?
|
| Citations: [2]
| https://www.asiabusinessoutlook.com/news/tsmc-to-mass-
| produc... [3] https://www.phonearena.com/news/apple-
| paid-twice-as-much-for...
| Erem wrote:
| The only useful way to define an AGI is based on its
| capabilities, not its implementation details.
|
| Based on capabilities alone, current LLMs demonstrate many of
| the capabilities practitioners ten years ago would have
| tossed into the AGI bucket.
|
| What are some top capabilities (meaning inputs and outputs)
| you think are missing on the path between what we have now
| and AGI?
| lumenwrites wrote:
| "Intelligence" is a poorly defined term prone to arguments
| about semantics and goalpost shifting.
|
| I think it's more productive to think about AI in terms of
| "effectiveness" or "capability". If you ask it, "what is the
| capital of France?", and it replies "Paris" - it doesn't
| matter whether it is intelligent or not, it is
| effective/capable at identifying the capital of France.
|
| Same goes for producing an image, writing SQL code that
| works, automating some % of intellectual labor, giving
| medical advice, solving an equation, piloting a drone,
| building and managing a profitable company. It is capable of
| various things to various degrees. If these capabilities are
| enough to make money, create risks, change the world in some
| significant way - that is the part that matters.
|
| Whether we call it "intelligence" or "probabilistically
| generating syllables" is not important.
| atleastoptimal wrote:
| It can actually solve problems, though; it's not just an
| illusion of intelligence if it does the stuff we considered
| mere years ago sufficient to be intelligent. But you and
| others keep moving the goalposts as benchmarks saturate,
| perhaps due to a misplaced pride in the specialness of human
| intelligence.
|
| I understand the fear, but the knee-jerk response "it's just
| predicting the next token thus could never be intelligent"
| makes you look more like a stochastic parrot than these
| models are.
| caconym_ wrote:
| The "goalposts" are "moving" because now (unlike "mere
| years ago") we have real AI systems that are at least good
| enough to be seriously compared with human intelligence. We
| aren't vaguely speculating about what such an AI system
| _might_ be like^[1]; we have the real thing now, and we can
| test its capabilities and see what it _is_ like, what it's
| good at, and what it's not so good at.
|
| I think your use of the "goalposts" metaphor is telling.
| You see this as a team sport; you see yourself on the
| offensive, or the defensive, or whatever. Neither is
| conducive to a balanced, objective view of reality. Modern
| LLMs are shockingly "smart" in many ways, but if you think
| they're general intelligence in the same way humans have
| general intelligence (even disregarding agency, learning,
| etc.), that's a you problem.
|
| ^[1] I feel the implicit suggestion that there was some
| sort of broad consensus on this in the before-times is
| revisionism.
| ssnistfajen wrote:
| It solves problems because it was trained with the
| solutions to these problems that have been written down a
| thousand times before. A lot of people don't even consider
| the ability to solve problems to be a reliable indicator of
| human intelligence, see the constantly evolving discourse
| regarding standardized tests.
|
| Attempts at autonomous AI agents are still failing
| spectacularly because the models don't actually have any
| thought or memory. Context is provided to them via
| prefixing the prompt with all previous prompts which
| obviously causes significant info loss after a few
| interaction loops. The level of intellectual complexity at
| play here is on par with nematodes in a lab (which btw
| still can't be digitally emulated after decades of
| research). This isn't a diss on all the smart people
| working in AI today, bc I'm not talking about the quality
| of any specific model available today.
| insane_dreamer wrote:
| What top executives write in these farewell letters often has
| little to do with their actual reasons for leaving.
| letitgo12345 wrote:
| Maybe it is, but it's not the only company that is.
| iLoveOncall wrote:
| People still believe that a company that has only delivered
| GenAI models is anywhere close to AGI?
|
| Success is not around any corner. It's pure insanity to even
| believe that AGI is possible, let alone close.
| HeatrayEnjoyer wrote:
| What can you confidently say AI will not be able to do in
| 2029? What task can you declare, without hesitation, will not
| be possible for automatic hardware to accomplish?
| iLoveOncall wrote:
| Easy: doing anything that humans don't already do and
| haven't programmed it to do.
|
| AI is incapable of any innovation. It accelerates human
| innovation, just like any other piece of software, but
| that's it. AI makes protein folding more efficient, but it
| can't ever come up with the concept of protein folding on
| its own. It's just software.
|
| You simply cannot have general intelligence without self-
| driven innovation. Not improvement, innovation.
|
| But if we look at much more simple concepts, 2029 is only 5
| years (not even) away, so I'm pretty confident that
| anything that it cannot do right now it won't be able to do
| in 2029 either.
| 015a wrote:
| Discover new physics.
| goodluckchuck wrote:
| I could see it being close, but also feeling an urgency to get
| there first / believing you could do it better.
| yieldcrv wrote:
| easy for me to relate to that, my time is more interesting than
| that
|
| being in San Francisco for 6 years and success means getting
| hauled in front of Congress and European Parliament
|
| can't think of a worse occupational nightmare after having an
| 8-figure nest egg already
| apwell23 wrote:
| Her rise didn't make sense to me. Product manager at Tesla to
| CTO at OpenAI with no technical background and a deleted
| profile?
|
| This is a very strange company to say the least.
| alephnerd wrote:
| A significant portion of the old guard at OpenAI was part of
| the Effective Altruism, AI Alignment, and Open Philanthropy
| movement.
|
| Most hiring in the foundational AI/model space is very
| nepotistic and biased towards people in that clique.
|
| Also, Elon Musk used to be the primary patron for OpenAI
| before losing interest during the AI Winter in the late
| 2010s.
| comp_throw7 wrote:
| Which has zero explanatory power w.r.t. Murati, since she's
| not part of that crowd at all. But her previously working
| at an Elon company seems like a plausible route, if she did
| in fact join before he left OpenAI (since he left in Feb
| 2018).
| nebula8804 wrote:
| >Product manager at tesla to CTO at openAI with no technical
| background and a deleted profile ?
|
| Doesn't she have a dual bachelors in Mathematics and
| Mechanical Engineering?
| apwell23 wrote:
| That's what is needed to get a job as a product manager
| these days?
| nebula8804 wrote:
| Well that and years of experience leading projects.
| Wasn't she head of the Model X program at Tesla?
|
| But my point is that she does have a technical
| background.
| apwell23 wrote:
| > Well that and years of experience leading projects.
| Wasn't she head of the Model X program at Tesla?
|
| No idea because she scrubbed her LinkedIn profile. But
| afaik she didn't have "years of experience leading
| projects" to get a job as lead PM at Tesla. That was her
| first job as PM.
| fzzzy wrote:
| You have to remember that OpenAI's mission was considered
| absolute batshit insane back then.
| mlazos wrote:
| Agreed, when a company rises to prominence so fast, I feel
| like you can end up with inexperienced people really high up
| in management. High risk high reward for them. The board was
| also like this - a lot of inexperienced random people leading
| a super consequential company resulting in the shenanigans we
| saw and now most of them are gone. Not saying inexperienced
| people are inherently bad, but they either grow into the role
| or don't. Mira is probably very smart, but I don't think you
| can go build a team around her like Ilya or other big name
| researchers. I'm happy for her for riding one of the wildest
| rocket ships of the past 5 years, but I don't expect
| to hear much about her from now on.
| jappgar wrote:
| I'm sure this isn't the actual reason, but one possible
| interpretation is "I'm stepping away to enjoy my life+money
| before it's completely altered by the singularity."
| ikari_pl wrote:
| unless you didn't see it as a success, and want to abandon the
| ship before it gets torpedoed
| aucisson_masque wrote:
| It's corporate bullcrap, you're not supposed to believe it.
| What really matters in these statements is what is not said.
| dyauspitr wrote:
| I doubt she's leaving to do her own thing, I don't think she
| could. She probably got pushed out.
| blackeyeblitzar wrote:
| Maybe it has to do with Sam getting rid of the nonprofit
| control and having equity?
|
| https://news.ycombinator.com/item?id=41651548
| mmaunder wrote:
| I think they have an innovation problem. There are a few
| signals wrt the o1 release that indicate this. Not really a new
| model but an old model with CoT. And the missing system prompt
| - because they're using it internally now. Also seeing 500
| errors from their REST endpoints intermittently.
| vl wrote:
| But also most likely she is already fully vested. Why stay and
| work 60 hours a week in such case?
| ggm wrote:
| Hint: success is not just around the corner.
| hatthew wrote:
| Could also be that she just got tired of the day to day
| responsibilities. Maybe she realized that she hasn't been
| able to spend more than 5 minutes with her kids/nieces/nephews
| last week. Maybe she was going to murder someone if she had to
| sit through another day with 10 hours of meetings.
|
| I don't know her personal life or her feelings, but it doesn't
| seem like a stretch to imagine that she was just _done_.
| blihp wrote:
| This was the company that made all sorts of noise about how
| they couldn't release GPT-2 to the public because it was too
| dangerous[1]. While there are many very useful applications
| being developed, OpenAI's main deliverable appears to be hype
| that I suspect when it's all said and done they will fail to
| deliver on. I think the main thing they are doing quite
| successfully is cashing in on the hype before people figure it
| out.
|
| [1] https://slate.com/technology/2019/02/openai-gpt2-text-
| genera...
| johnfn wrote:
| GPT-2 and descendants have polluted the internet with AI
| spam. I don't think that this is too unreasonable of a claim.
| mvkel wrote:
| A couple of the original inventors of the transformer left
| Google to start crypto companies.
| sheepscreek wrote:
| Another theory: it's possibly related to a change of heart at
| OpenAI to become a for-profit company. It is rumoured Altman's
| gunning for a 7% stake in the for-profit entity. That would be
| very substantial at a $150B valuation.
|
| Squeezing out senior execs could be a way for him to maximize
| his claim on the stake. Notwithstanding, the execs may have
| disagreed with the shift in culture.
| TrackerFF wrote:
| Nothing difficult about it.
|
| 1) She has a very good big picture view of the market. She has
| probably identified some very specific problems that need to be
| solved, or at least knows where the demand lies.
|
| 2) She has the senior exec OpenAI pedigree, which makes raising
| funds almost trivial.
|
| 3) She can probably make as much, if not more, by branching out
| on her own - while having more control, and working on more
| interesting stuff.
| rvz wrote:
| > "Leaving to do my own exploration"
|
| Let's write this chapter and take some guesses, it's either going
| to be:
|
| 1. Anthropic.
|
| 2. SSI Inc.
|
| 3. Own AI Startup.
|
| 4. None of the above.
|
| Only one is correct.
| mikelitoris wrote:
| The only thing your comment says is she won't be working
| simultaneously for more than one company in {1,2,3}.
| motoxpro wrote:
| I know what I am going to say isn't of much value, but the GP's
| post is the most Twitter comment ever and it made me chuckle.
| Apocryphon wrote:
| Premium wallpaper app.
| VeejayRampay wrote:
| that's a lot of core people leaving, especially since they're
| apparently so close to a "revolution in AGI"
|
| I feel like either they're not close at all and the people know
| it's all lies or they're seeing some shady stuff and want nothing
| to do with it
| paxys wrote:
| A simpler explanation is that SamA is consolidating power at
| the company and pushing out everyone who hasn't been loyal to
| him from the start.
| rvz wrote:
| And it also explains what Mira (and everyone else who left)
| saw; the true cost of a failed coup and what Sam Altman is
| really doing since he is consolidating power at OpenAI (and
| getting equity)
| steinvakt wrote:
| So "What did Ilya see" might just be "Ilya actually saw
| Sam"
| aresant wrote:
| It is unsurprising that Murati is leaving; she was reported to be
| one of the principal advocates for pushing Sam out (1)
|
| Of course everybody was quick to play nice once OpenAI insiders
| got the reality check from Satya that he'd just crush them by
| building an internal competing group, cut funding, and instantly
| destroy lots of paper millionaires.
|
| I'd imagine that Mira and others had 6 - 12 month agreements in
| place to let the dust settle and finish their latest round of
| funding without further drama
|
| The OpenAI soap opera is going to be a great book or movie
| someday
|
| (1) https://www.nytimes.com/2024/03/07/technology/openai-
| executi...?
| mcast wrote:
| Trent Reznor and David Fincher need to team up again to make a
| movie about this.
| fb03 wrote:
| I'd not complain if William Gibson got into the project as
| well.
| ackbar03 wrote:
| real question is did Michael Lewis happen to be hanging
| around the OpenAI water-coolers again when all this happened
| throwaway314155 wrote:
| I've forgotten, did she play a role in the attempted Sam Altman
| ouster?
| blackeyeblitzar wrote:
| She wasn't on the board right? So if she did play a role, it
| wasn't through a vote I'd guess.
| paxys wrote:
| She was picked by the board to replace Sam in the interim after
| his ouster, so we can draw some conclusions from that.
| 015a wrote:
| Well, she accepted the role of interim CEO for a bit, and then
| flip-flopped to supporting getting Sam back when it became
| obvious that the employees were fully hypnotized by Sam's
| reality distortion field.
| blackeyeblitzar wrote:
| It doesn't make sense to me that someone in such a position at a
| place like OpenAI would leave. So I assume that means she was
| forced out, maybe due to underperformance, or the failed coup, or
| something else. Anyone know what the story is on her background
| and how she got into that position and what she contributed? I've
| heard interesting stories, some positive and some negative, but
| can't tell what's true. It seems like there generally is just a
| lot of controversy around this "nonprofit".
| mewse-hn wrote:
| There are some good articles that explain what happened with
| the coup, that's the main thing to read up on. As for the
| reason she's leaving, you don't take a shot at the leader of
| the organization, miss, and then expect to be able to remain at
| the organization. She's probably been on house leave since it
| happened for the sake of optics at OpenAI.
| muglug wrote:
| It's Sam's Club now.
| paxys wrote:
| Always has been
| grey-area wrote:
| Altman was not there at the start. He came in later, as he
| did with YC.
| paxys wrote:
| He became CEO later, but was always part of the founding
| team at OpenAI.
| TMWNN wrote:
| Murati and Sutskever discovered the high Costco of challenging
| Altman.
| romanovcode wrote:
| It's CIA's club since 2024.
| JPLeRouzic wrote:
| For governments, knowing what important questions bother
| people is critical. This is better guessed by having a back
| door to one of the most used LLMs than to one of the most
| used search engines.
| romanovcode wrote:
| ChatGPT makes a "profile" out of your account, saving the most
| important information about you as a person. It would be
| much more difficult to do that by just analyzing your
| search queries in Google.
|
| This profile data is any intelligence agency's wet dream.
| Jayakumark wrote:
| At this point no one from the founding team except Sam is in
| the company.
| bansheeps wrote:
| Mira wasn't a part of the founding team.
|
| Wojciech Zaremba and Jakub are still at the company.
| alexmolas wrote:
| They can't spend more than 6 months without a drama...
| jonny_eh wrote:
| It's the same drama, spread out over time.
| Reimersholme wrote:
| ...and Sam Altman once again posts a response including
| uppercase, similar to when Ilya left. It's like he wants to let
| everyone know that he didn't actually care enough to write it
| himself but just asked ChatGPT to write something for him.
| pshc wrote:
| I think it's just code switching. Serious announcements warrant
| a more serious tone.
| layer8 wrote:
| Plain-text version for those who can't read images:
|
| " _Hi all,
|
| I have something to share with you. After much reflection, I have
| made the difficult decision to leave OpenAI.
|
| My six-and-a-half years with the OpenAI team have been an
| extraordinary privilege. While I'll express my gratitude to many
| individuals in the coming days, I want to start by thanking Sam
| and Greg for their trust in me to lead the technical organization
| and for their support throughout the years.
|
| There's never an ideal time to step away from a place one
| cherishes, yet this moment feels right. Our recent releases of
| speech-to-speech and OpenAI o1 mark the beginning of a new era in
| interaction and intelligence - achievements made possible by your
| ingenuity and craftsmanship. We didn't merely build smarter
| models, we fundamentally changed how AI systems learn and reason
| through complex problems. We brought safety research from the
| theoretical realm into practical applications, creating models
| that are more robust, aligned, and steerable than ever before.
| Our work has made cutting-edge AI research intuitive and
| accessible, developing technology that adapts and evolves based
| on everyone's input. This success is a testament to our
| outstanding teamwork, and it is because of your brilliance, your
| dedication, and your commitment that OpenAI stands at the
| pinnacle of AI innovation.
|
| I'm stepping away because I want to create the time and space to
| do my own exploration. For now, my primary focus is doing
| everything in my power to ensure a smooth transition, maintaining
| the momentum we've built.
|
| I will forever be grateful for the opportunity to build and work
| alongside this remarkable team. Together, we've pushed the
| boundaries of scientific understanding in our quest to improve
| human well-being.
|
| While I may no longer be in the trenches with you, I will still
| be rooting for you all. With deep gratitude for the friendships
| forged, the triumphs achieved, and most importantly, the
| challenges overcome together.
|
| Mira_"
| squigz wrote:
| I appreciate this, thank you.
| karlzt wrote:
| Thank you, this comment should be pinned at the top.
| leloctai wrote:
| Doesn't seem like it was written by ChatGPT. I find that
| amusing somehow.
| karlzt wrote:
| Perhaps some parts were written by ChatGPT, probably mixed
| up.
| brap wrote:
| Plain-English version for those who can't deal with meaningless
| corpspeak babble:
|
| _"I'm leaving._
|
| _Mira"_
| m3kw9 wrote:
| Not a big deal if you don't look too closely
| seydor wrote:
| They will all be replaced by ASIs soon, so it doesn't matter
| who's coming and going
| codingwagie wrote:
| My bet is all of these people can raise 20-100M for their own
| startups. And they are already rich enough to retire. OpenAI is
| going corporate
| keeptrying wrote:
| If you keep working past $10M net worth (as all these people
| undoubtedly are) it's usually for legacy.
|
| I actually think Sam's vision probably scares them.
| hiddencost wrote:
| $10M doesn't go as far as you'd think in the Bay Area or NYC.
| ForHackernews wrote:
| ...the only two places on Earth.
| _se wrote:
| $10M is never-work-again money literally anywhere in the
| world. Don't kid yourself. Buy a $3.5M house outright and
| then collect $250k per year risk free after taxes. You're
| doing whatever you want and still saving money.
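|
| (Spelling out that arithmetic: the 5% return and the ~23%
| effective tax rate are assumptions for illustration:)
|
|     nest_egg = 10_000_000
|     house = 3_500_000
|     invested = nest_egg - house  # $6.5M working capital
|     gross = invested * 0.05      # $325k/yr at 5%
|     print(gross * (1 - 0.23))    # ~$250k after tax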
| mewpmewp2 wrote:
| The problem is if you are the type of person able to get
| to $10M, you'll probably want more, since the motivation
| that got you there in the first place will keep you
| unsatisfied with anything less. You'll constantly crave
| for more in terms of magnitudes.
| keeptrying wrote:
| No. Know lots of people in this bucket.
|
| Of course there are some who want $100M.
|
| But most are really happy that they most likely don't
| ever have to do anything they don't like.
| kolbe wrote:
| Assuming they're 40, how far do you think $250k will go
| 20-30-40 years from now? It's not a stretch to think
| dollars could be devalued by 90%, possibly even
| worthless, within 30 years.
| user90131313 wrote:
| If the portfolio is diversified enough it can last for
| decades. If the dollar goes down, some other things will go
| up: gold, Bitcoin, etc.
| kolbe wrote:
| The original comment was premised on them being income-
| generating assets, which gold and btc are not
| chairmansteve wrote:
| They obviously don't keep it dollars. Diversify into
| equities, property etc.
| kolbe wrote:
| I love how the comment I'm responding to literally says
| "then collect $250k per year risk free after taxes," and
| then you all pile onto me with downvotes telling me
| that's he's not just going to invest in treasuries (which
| is exactly the implication of HIS comment and not mine).
| vl wrote:
| With a $3.5M house, just taxes, utilities and maintenance
| costs will ruin your remaining $6.5M.
| mindslight wrote:
| The neat part is that for a 3.5M house in the Bay area,
| the only maintenance required is changing the rain fly
| every year and the ground pad every couple.
| saagarjha wrote:
| And who is going to fix your shower when it leaks, and
| install solar panels, or redo your kitchen because your
| parents are living with you now and can't bear to leave
| their traditional cooking behind?
| mindslight wrote:
| A whole new shower is less than $200 at REI, and solar
| generators ship directly to your house.
|
| (And on a serious note - if your parents are both still
| alive and moving in with you while they have hobbies and
| self-actualization, you're way ahead of the game)
| wil421 wrote:
| Buy a house on a private beach in Florida and rent it out
| for $25k a week during the hottest months.
| myroon5 wrote:
| NYC and SF both appear to have ~1% property tax rates
|
| Utilities are an order of magnitude less than being taxed
| ~$35k/yr and hardly worth worrying about while discussing
| eight figures
|
| Maintenance can vary, but all 3 costs you mentioned
| combined would be 2 orders of magnitude lower annually
| than that net worth, which seems easily sustainable?
| BoorishBears wrote:
| Maybe it doesn't if you think you're just going to live off
| $10M in your checking account... but that's generally not
| how that works.
| fldskfjdslkfj wrote:
| At a 5% rate that's a cushy $500k a year.
| talldayo wrote:
| Which is why smart retirees don't fucking live there.
| FactKnower69 wrote:
| hilarious logical end progression of all those idiotic
| articles about $600k dual income households in the bay
| living "paycheck to paycheck"
| ssnistfajen wrote:
| Only if you have runaway expenditures due to the lack of
| self-control and discipline.
| brigadier132 wrote:
| > it's usually for legacy
|
| Legacy is the dumbest reason to work and does not explain the
| motivation of the vast majority of people that are wealthy.
|
| edit: The vast majority of people with more than $10 million
| are completely unknown, so the idea that they care about
| legacy is stupid.
| squigz wrote:
| What do you think their motivations might be?
| mr90210 wrote:
| Speaking for myself, I'd keep working even if I had 100M.
| As long as I am healthy, I plan to continue on being
| productive towards something I find interesting.
| presentation wrote:
| What would you be working on though? I agree that I'd
| keep working, if only because I like my work and not having
| that structure can make your life worse, not better; but
| if it's "how I get to 1B" then that's the kind of
| challenge that turns me off. I'm all for continually
| challenging yourself but I don't want that kind of stress
| in my life, I'd rather find my challenges elsewhere.
| mewpmewp2 wrote:
| There's also addiction to success. If you don't keep
| getting the success in magnitudes you did before, you
| will get bored and depressed, so you have to keep going
| and get it, since your brain is wired to seek it.
| Your brain and emotions are calibrated to what you got
| before, it's kind of like drugs.
|
| If you don't have the 10M you won't understand, you would
| think that "oh my if only I had the 10M I would just
| chill", but it never works like that. Human appetite is
| infinite.
|
| The more highs you get from success, the more you expect
| from the future achievements to get that same feeling,
| and if you don't get any you will feel terrible. That's
| it.
| patcon wrote:
| When enough ppl visibly leave and have real concerns, they
| can be in touch in exile, and all break NDA in synchrony.
|
| If the stakes are as high as some believe, I presume ppl
| don't actually care about getting sued when they believe
| they're helping humanity avert existential crisis.
| keeptrying wrote:
| If OpenAI is the foremost in solving the AGI - possibly the
| biggest invention of mankind - it's a little weird that
| everyone's dropping out.
|
| Does it not look like no one wants to work with Sam in the
| long run?
| paxys wrote:
| Or is it Sam who doesn't want to work with them?
| trashtester wrote:
| Could be a mix. We don't know what happened behind closed
| doors last winter. Sam may indeed be happy that they leave,
| as that consolidates his power.
|
| But they may be equally happy to leave, to get away from him.
| ilrwbwrkhv wrote:
| Open AI fired her. She didn't drop out.
| keeptrying wrote:
| Do you have any proof even circumstantial?
| _giorgio_ wrote:
| https://nypost.com/2024/03/08/business/openai-chief-
| technolo...
| enraged_camel wrote:
| That's not evidence, though?
| hilux wrote:
| I mean ... common sense?
|
| Barring extreme illness or family circumstance, can you
| suggest any other reason (than firing) why a young person
| would voluntarily leave a plum job at the hottest, most
| high-profile tech company in the world?
| _giorgio_ wrote:
| Yes. All the conspirators are out now.
|
| https://nypost.com/2024/03/08/business/openai-chief-
| technolo...
| belter wrote:
| https://fortune.com/2024/09/25/sam-altman-psychedelic-
| experi...
| lionkor wrote:
| Maybe it's marketing and LLMs are the peak of what they are
| capable of.
| bossyTeacher wrote:
| Kind of. My money is on us having reached the point of
| diminishing returns, a bit like machine learning. Now it's
| all about exploiting business cases for LLMs. That's the only
| reason I can think of as to why GPT-5 won't be coming anytime
| soon, and why, when it does, it will be very underwhelming. It
| will be the first public signal that we are past the LLM peak,
| and perhaps people will finally stop assuming that LLMs will
| reach AGI within their lifetimes.
| bmitc wrote:
| I continue to be surprised by the talk of general artifical
| intelligence when it comes to LLMs. At their core, they are
| text predictors, and they're often pretty good at that. But
| anything beyond that, they are decidedly unimpressive.
|
| I use Copilot on a daily basis, which uses GPT 4 in the
| backend. It's wrong so often that I only really use it for
| boilerplate autocomplete, which I still have to review. I've
| had colleagues brag about ChatGPT in terms of code it
| produces, but when I ask how long it took in terms of
| prompting, I'll get an answer of around a day, and that was
| even using fragments of my code to prompt it. But then I
| explain that it would take me probably less than an hour from
| scratch to do what it took them and ChatGPT a full day to do.
|
| So I just don't understand the hype. I'm using Copilot and
| ChatGPT 4. What is everyone else using that gives them this
| idea that AGI is just around the corner? AI isn't even here.
| It's just advanced autocomplete. I can't understand where the
| disconnect is.
| berniedurfee wrote:
| Here now, you just need a few more ice-cold glasses of the
| Kool-Aid. Drink up!
|
| LLMs are not on the path to AGI. They're a really cool
| parlor trick and will be powerful tools for lots of tasks,
| but won't be sci-fi cool.
|
| Copilot is useful and has definitely sped up coding, but
| like you said, only in a boilerplate sort of way and I need
| to cleanup almost everything it writes.
| Sunhold wrote:
| Look at the sample chain-of-thought for o1-preview under
| this blog post, for decoding "oyekaijzdf aaptcg suaokybhai
| ouow aqht mynznvaatzacdfoulxxz". At this point, I think the
| "fancy autocomplete" comparisons are getting a little
| untenable.
|
| https://openai.com/index/learning-to-reason-with-llms/
| bmitc wrote:
| How exactly does a blog post from OpenAI about a preview
| release address my comment or make fancy autocomplete
| comparisons untenable?
| Sunhold wrote:
| It shows that the LLM is capable of reasoning.
| drmindle12358 wrote:
| Dude, it's not the LLM that does the reasoning. Rather
| it's the layers and layers of scaffolding around the LLM that
| simulate reasoning.
|
| The moment 'tooling' became a thing for LLMs, it reminded
| me of 'rules' for expert systems, which caused one of the AI
| winters. The number of 'tools' you need to solve real use
| cases will be untenable soon enough.
| trashtester wrote:
| Well, I agree that the part that does the reasoning isn't
| an LLM in the naive form.
|
| But that "scaffolding" seems to be an integral part of
| the neural net that has been built. It's not some Python
| for-loop that has been built on top of the neural network
| to brute force the search pattern.
|
| If that part isn't part of the LLM, then o1 isn't really
| an LLM anymore, but a new kind of model. One that can do
| reasoning.
|
| And if we chose to call it an LLM, well then now LLM's
| can also do reasoning intrinsically.
| HarHarVeryFunny wrote:
| Reasoning, just like intelligence (of which it is part)
| isn't an all or nothing capability. o1 can now reason
| better than before (in a way that is more useful in some
| contexts than others), but it's not like a more basic LLM
| can't reason at all (i.e. generate an output that looks
| like reasoning - copy reasoning present in the training
| set), or that o1's reasoning is human level.
|
| From the benchmarks it seems like o1-style reasoning-
| enhancement works best for mathematical or scientific
| domains where it's a self-consistent axiom-driven domain
| such that combining different sources for each step
| works. It might also be expected to help in strict rule-
| based logical domains such as puzzles and games (wouldn't
| be surprising to see it do well as a component of a
| Chollet ARC prize submission).
| trashtester wrote:
| o1 has moved "reasoning" from training time to partly
| something happening at inference time.
|
| I'm thinking of this difference as analogous to the
| difference between my (as a human) first intuition (or
| memory) about a problem to what I can achieve by
| carefully thinking about it for a while, where I can
| gradually build much more powerful arguments, verify if
| they work and reject parts that don't work.
|
| If you're familiar with chess terminology, it's moving
| from a model that can just "know" what the best move is
| to one that combines that with the ability to "calculate"
| future moves for all of the most promising moves, and
| several moves deep.
|
| Consider Magnus Carlsen. If all he did was just play the
| first move that came to his mind, he could still beat 99%
| of humanity at chess. But to play 2700+ rated GM's, he
| needs to combine it with "calculations".
|
| Not only that, but the skill of doing such calculations
| must also be trained, not only by being able to calculate
| with speed and accuracy, but also by knowing what parts
| of the search tree will be useful to analyze.
|
| o1 is certainly optimized for STEM problems, but not
| necessarily only for using strict rule-based logic. In
| fact, even most hard STEM problems need more than the
| ability to perform deductive logic to solve, just like
| chess does. It requires strategical thinking and
| intuition about what solution paths are likely to be
| fruitful. (Especially if you go beyond problems that can
| be solved by software such as WolframAlpha).
|
| I think the main reason STEM problems were used for
| training is not so much that they're solved using strict
| rule-based solving strategies, but rather because a large
| number of such problems exist that have a single correct
| answer.
| bmitc wrote:
| No, it doesn't. You can read more when that was first
| posted to Hacker News. If I recall and understand
| correctly, they're just using the output of sublayers as
| training data for the outermost layer. So in other words,
| they're faking it and hiding that behind layers of
| complexity.
|
| The other day, I asked Copilot to verify a unit
| conversion for me. It gave an answer different than mine.
| Upon review, I had the right number. Copilot had even
| written code that would actually give the right answer,
| but their example of using that code performed the actual
| calculations wrong. It refused to accept my input that
| the calculation was wrong.
|
| So not only did it not understand what I was asking and
| communicating to it, it didn't even understand its own
| output! This is _not_ reasoning at any level. This
| happens all the time with these LLMs. And it 's no
| surprise really. They are fancy statistical copycats.
|
| From an intelligence and reasoning perspective, it's all
| smoke and mirrors. It also clearly has no relation to
| biological intelligent thinking. A primate or cetacean
| brain doesn't take billions of dollars and vast amounts of
| energy to train on terabytes of data. While it's fine
| that AI might be _artificial_ and not an analog of
| biological intelligence, these LLMs bear no resemblance
| to anything remotely close to intelligence. We tell
| students all the time to "stop guessing". That's what I
| want to yell at these LLMs all the time.
| ToucanLoucan wrote:
| I'm not seeing anything convincing here. OpenAI says that
| its models are better at reasoning, and asserts it is
| testing this by comparing how o1 and "experts" do at
| solving some problems, but it doesn't show the experts'
| or o1's responses to these questions, nor does it
| even deign to share what the problems are. And,
| crucially, it doesn't specify if writings on these
| subjects were part of training data.
|
| Call me a cynic here but I just don't find it too
| compelling to read about OpenAI being excited about how
| smart OpenAIs smart AI is in a test designed by OpenAI
| and run by OpenAI.
| NoGravitas wrote:
| "Any sufficiently advanced technology is
| indistinguishable from a rigged demo." A corollary of
| Clarke's Law found in fannish circles, origin unknown.
| ToucanLoucan wrote:
| Especially given this tech's well-documented history of
| using rigged demos, if OpenAI insists on doing and
| posting their own testing and _absolutely nothing else,_
| a little insight into their methodology should be treated
| as the bare fucking minimum.
| HarHarVeryFunny wrote:
| It depends on how well you understand how the fancy
| autocomplete is working under the hood.
|
| You could compare GPT-o1 chain of thought to something
| like IBM's Deep Blue chess-playing computer, which used
| game-tree search (alpha-beta in its case, MCTS in more
| modern game engines such
| built-in knowledge (pre-training) to predict what move
| would most likely be made by a winning player. It's not
| unreasonable to characterize this as "fancy
| autocomplete".
|
| In the case of an LLM, given that the model was trained
| with the singular goal of autocomplete (i.e. mimicking
| the training data), it seems highly appropriate to call
| that autocomplete, even though that obviously includes
| mimicking training data that came from a far more general
| intelligence than the LLM itself.
|
| All GPT-o1 is adding beyond the base LLM's fancy
| autocomplete is an MCTS-like exploration of possible
| continuations. GPT-o1's ability to solve complex math
| problems is not much different from Deep Blue's ability to
| beat Garry Kasparov. Call it intelligent if you want, but
| better to do so with an understanding of what's really
| under the hood, and therefore what it can't do as well as
| what it can.
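|
| (The general shape of "search layered on sampling" can be
| sketched as best-of-N. The model and scorer below are toys,
| not OpenAI's actual method:)
|
|     import random
|
|     def model(ctx):
|         # toy base "LLM": next token is a random digit
|         return random.randint(0, 9)
|
|     def score(chain):
|         # toy verifier: prefer chains summing near 10
|         return -abs(sum(chain) - 10)
|
|     def best_of_n(n=16, depth=4):
|         chains = []
|         for _ in range(n):
|             c = []
|             for _ in range(depth):
|                 c.append(model(c))
|             chains.append(c)
|         # keep only the best-scoring continuation
|         return max(chains, key=score)
|
|     print(best_of_n())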
| int_19h wrote:
| Saying "it's just autocomplete" is not really saying
| anything meaningful since it doesn't specify the
| complexity of completion. When completion is a correct
| answer to the question that requires logical reasoning,
| for example, "just autocomplete" needs to be able to do
| exactly that if it is to complete anything outside of its
| training set.
| HarHarVeryFunny wrote:
| It's just a shorthand way of referring to how
| transformer-based LLMs work. It should go without saying
| that there are hundreds of layers of hierarchical
| representation, induction heads at work, etc, under the
| hood. However, with all that understood (and hopefully
| not needed to be explicitly stated every time anyone
| wants to talk about LLMs in a technical forum), at the
| end of the day they are just doing autocomplete - trying
| to mimic the training sources.
|
| The only caveat to "just autocomplete" (which again
| hopefully does not need to be repeated every time we
| discuss them), is that they are very powerful pattern
| matchers, so all that transformer machinery under the
| hood is being used to determine what (deep, abstract)
| training data patterns the input pattern best matches for
| predictive purposes - exactly what pattern(s) it is that
| should be completed/predicted.
| consteval wrote:
| > question that requires logical reasoning
|
| This is the tough part to tell - are there any such
| questions that exist that have not already been asked?
|
| The reason ChatGPT works is its scale. To me, that makes
| me question how "smart" it is. Even the most idiotic
| idiot could be pretty decent if he had access to the
| entire works of mankind and infinite memory. Doesn't
| matter if his IQ is 50, because you ask him something and
| he's probably seen it before.
|
| How confident are we this is not just the case with LLMs?
| int_19h wrote:
| Of course there are such questions. When it comes to even
| simple puzzles, there are infinitely many permutations
| possible wrt how the pieces are arranged, for example -
| hell, you could generate such puzzles with a script. No
| amount of precanned training data can possibly cover all
| such combinations, meaning that the model has to learn
| how to apply the concepts that make solution possible
| (which includes things such as causality or spatial
| reasoning).
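|
| (The kind of script alluded to above: a few lines can mint
| endless fresh spatial-reasoning puzzles that no training
| set can enumerate in advance:)
|
|     import random
|
|     objs = ["cube", "ball", "cone", "ring", "disk"]
|     random.shuffle(objs)
|     layout = ", then ".join(objs)
|     q = (f"Left to right: {layout}. What sits two "
|          f"places right of the {objs[0]}?")
|     print(q, "->", objs[2])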
| consteval wrote:
| Right, but typically LLMs are really poor at this. I can
| come up with some arbitrary systems of equations for it
| to solve and odds are it will be wrong. Maybe even very
| wrong.
| HarHarVeryFunny wrote:
| I'm highly confident that we haven't learnt every thing
| that can be learnt about the world, and that human
| intelligence, curiosity and creativity are still being
| used to make new scientific discoveries, create things
| that have never been seen before, and master new skills.
|
| I'm highly confident that the "adjacent possible" of what
| is achievable/discoverable today, leveraging what we
| already know, is constantly changing.
|
| I'm highly confident that AGI will never reach superhuman
| levels of creativity and discovery if we model it only on
| artifacts representing what humans have done in the past,
| rather than modelling it on human brains and what we'll
| be capable of achieving in the future.
| HaZeust wrote:
| At that point, how are you not just a fancy autocomplete?
| lionkor wrote:
| Fun little counterpoint: How can you _prove_ that this
| exact question was not in the training set?
| gilmore606 wrote:
| LLMs let the massively stupid and incompetent produce
| something that on the surface looks like a useful output.
| Most massively stupid incompetent people don't know they
| are that. You can work out the rest.
| uhtred wrote:
| Artificial General Intelligence requires a bit more than
| parsing and predicting text I reckon.
| stathibus wrote:
| At the very least you could say "parsing and predicting text,
| images, and audio", and you would be correct - physical
| embodiment and spatial reasoning are missing.
| ben_w wrote:
| Just spatial reasoning; people have already demonstrated it
| controlling robots.
| Yizahi wrote:
| It's all just text though; both images and audio are
| presented to the LLM as text, the training data is text, and
| all it does is append small bits of text to a larger text
| iteratively. So the parent poster was correct.
| ben_w wrote:
| Yes, and transformer models can do more than text.
|
| There's almost certainly better options out there given it
| looks like we don't need so many examples to learn from,
| though I'm not at all clear if we need those better ways or
| if we can get by without due to the abundance of training
| data.
| rocqua wrote:
| If you come up with a new system, you're going to want to
| integrate AI into the system, presuming AI gets a bit
| better.
|
| If AI can only learn after people have used the system for
| a year, then your system will just get ignored. After all,
| it lacks AI. And hence it will never get enough training
| data to get AI integration.
|
| Learning needs to get faster. Otherwise, we will be stuck
| with the tools that already exist. New tools won't just
| need to be possible to train humans on, but also to train
| AIs on.
|
| Edit: a great example here is the Tamarin protocol prover.
| It would be great, and feasible, to get AI assistance to
| write these proofs. But there aren't enough proofs out
| there to train on.
| ben_w wrote:
| If the user manual fits into the context window, existing
| LLMs can already do an OK-but-not-great job. I'd not
| previously heard of Tamarin; a quick google suggests that's
| a domain where the standard is theoretically "you need to
| make zero errors" but in practice is "be better than your
| opponent because neither of you is close to perfect"? In
| either case, have you tried giving the entire manual to
| the LLM context window?
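|
| That experiment is only a few lines (a sketch against the
| openai Python SDK; the file path, model and prompt are
| placeholders, not a tested Tamarin workflow):
|
|     from openai import OpenAI
|
|     client = OpenAI()
|     manual = open("tamarin-manual.txt").read()  # placeholder
|
|     resp = client.chat.completions.create(
|         model="gpt-4o",  # any long-context model
|         messages=[
|             {"role": "system",
|              "content": "You write Tamarin proofs. "
|                         "The manual follows:\n" + manual},
|             {"role": "user",
|              "content": "Write lemmas proving secrecy for "
|                         "the protocol below: ..."},
|         ],
|     )
|     print(resp.choices[0].message.content)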
|
| If the new system can be interacted with in a non-
| destructive manner at low cost and with useful responses,
| then existing AI can self-generate the training data.
|
| If it merely takes a year, businesses will rush to get
| that training data even if they need to pay humans for a
| bit: Cars are an example of "real data is expensive or
| destructive", it's clearly taking a lot more than a year
| to get there, and there's a lot of investment in just
| that.
|
| Pay 10,000 people USD 100,000 each for a year, and that
| billion-dollar investment then gets reduced to 2.4
| million/year in ChatGPT Plus subscription fees or
| whatever. Plenty of investors will take that deal... if
| you can actually be sure it will work.
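|
| Spelling out that arithmetic (assuming the $20/month ChatGPT
| Plus price):
|
|     workers, salary = 10_000, 100_000
|     print(workers * salary)    # 1,000,000,000 one-off
|     print(workers * 20 * 12)   # 2,400,000/year in Plus fees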
| killerstorm wrote:
| 1. In-context learning is a thing.
|
| 2. You might need only several hundred examples for
| fine-tuning. (OpenAI's minimum is 10 examples.)
|
| 3. I don't think research into fine-tuning efficiency
| has exhausted its possibilities. Fine-tuning is just not
| a very hot topic, given that general models work so well.
| In image generation where it matters they quickly got to
| a point where 1-2 examples are enough. So I won't be
| surprised if doc-to-model becomes a thing.
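|
| The minimal shape of such a fine-tune job (a sketch against
| the openai Python SDK; the JSONL path and base model name
| are illustrative):
|
|     from openai import OpenAI
|
|     client = OpenAI()
|
|     # train.jsonl: >= 10 lines, each shaped like
|     # {"messages": [{"role": "user", "content": "..."},
|     #               {"role": "assistant", "content": "..."}]}
|     f = client.files.create(file=open("train.jsonl", "rb"),
|                             purpose="fine-tune")
|     job = client.fine_tuning.jobs.create(
|         training_file=f.id,
|         model="gpt-4o-mini-2024-07-18",  # illustrative base
|     )
|     print(job.id, job.status)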
| trashtester wrote:
| That seems to already be happening with o1 and Orion.
|
| Instead of rewarding the network directly for finding a
| correct answer, reasoning chains that end up with the
| correct answer are fed back into the training set.
|
| That way you're training it to develop reasoning
| processes that end up with correct answers.
|
| And for math problems, you're training it to find ways of
| generating "proofs" that happen to produce the right
| result.
|
| While this means that reasoning patterns that are not
| strictly speaking 100% consistent can be learned, that's
| not necessarily even a disadvantage, since this allows it
| to find arguments that are "good enough" to produce the
| correct output, even where a fully watertight proof may
| be beyond it.
|
| Kind of like how physicists have taken shortcuts like the
| Dirac delta function, even before mathematicians could
| verify that the math was correct.
|
| Anyway, by allowing AI's to generate their own proofs,
| the number of proofs/reasoning chains for all sorts of
| problems can be massively expanded, and AI may even
| invent new ways of reasoning that humans are not even
| aware of. (For instance because they require combining
| more factors in one logical step than can fit into human
| working memory.)
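|
| In pseudocode, the loop being described is roughly this (a
| rejection-sampling/STaR-style sketch, not OpenAI's actual
| recipe; sample, is_correct and finetune are stand-ins):
|
|     def self_train(model, problems, rounds=3, k=16):
|         # keep only the chains that reach a checkable
|         # correct answer, then train on those winning chains
|         for _ in range(rounds):
|             kept = []
|             for prob in problems:
|                 for _ in range(k):  # k attempts per problem
|                     chain, answer = sample(model, prob)
|                     if is_correct(prob, answer):
|                         kept.append((prob, chain, answer))
|             model = finetune(model, kept)
|         return model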
| trashtester wrote:
| That's not quite how o1 was trained, they say.
|
| o1 was trained specifically to perform reasoning.
|
| Or rather, it was trained to reproduce the patterns within
| internal monologues that lead to correct answers to problems,
| particularly STEM problems.
|
| While this still uses text at some level, it's no longer
| regurgitation of human-produced text, but something more akin
| to AlphaZero's training to become superhuman at games like Go
| or Chess.
| spidersouris wrote:
| > While this still uses text at some level, it's no longer
| regurgitation of human-produced text, but something more
| akin to AlphaZero's training to become superhuman at games
| like Go or Chess.
|
| How do you know that? I've never seen that anywhere. For
| all we know, it could just be a very elaborate CoT
| algorithm.
| trashtester wrote:
| There are many sources and hints out there, but here are
| some details from one of the devs at OpenAI:
|
| https://x.com/_jasonwei/status/1834278706522849788
|
| Notice that the CoT is trained via RL, meaning the CoT
| itself is a model (or part of the main model).
|
| Also, RL means it's not limited to the original data the
| way traditional LLMs are. It implies that the CoT
| process itself is trained based on its own
| performance, meaning the steps of the CoT from previous
| runs are fed back into the training process as more data.
| onlyrealcuzzo wrote:
| Doesn't (dyst)OpenAI have a clause that you can't say anything
| bad about the company after leaving?
|
| I'm not convinced these board members are able to say what they
| want when leaving.
| presentation wrote:
| That (dyst) is a big stretch lol
| meigwilym wrote:
| Exaggeration is a key part of satire.
| vl wrote:
| But it makes perfect sense to drop out and enjoy the last
| couple of years of pre-AGI bliss.
|
| Advances in AI even without AGI will lead to unemployment,
| recession, collapse of our economic structure, and then our
| social structure. Whatever is on the other side is not pretty.
|
| If you are at the forefront, know it's coming imminently, and
| have made your money, it makes perfect sense to leave and
| enjoy the money, and the leisure it allows, while money is
| still worth something.
| bossyTeacher wrote:
| I highly doubt that's the case. The US government will
| undoubtedly seize OpenAI, its assets and its employees well
| before that happens, in the name of national security. I am
| pretty sure they have a special team keeping an eye on the
| internal comms at OpenAI to make sure they are on top of its
| internal affairs.
| sva_ wrote:
| They already got a retired army general who was head of the
| NSA on the board lol
|
| https://en.wikipedia.org/wiki/Paul_Nakasone
| cudgy wrote:
| They don't have to seize the company. They are likely
| embedded already and can simply blackmail, legally harass,
| or "disappear" the uncooperative.
| andrepd wrote:
| I'm having genuine trouble understanding if this is real or
| ironic.
| tirant wrote:
| The potential risks to humankind do not come from the
| development of AGI, but from the availability of AGI at a
| cost orders of magnitude lower than the equivalent capacity
| coming from humans.
| bmitc wrote:
| In my opinion, the risks are from people treating something
| that is decidedly not AGI as if it is AGI. It's the same
| folly humans repeat over and over, and this will be the
| worst yet.
| sumtechguy wrote:
| It is not AGI I am worried about. It is 'good enough' AI.
|
| I am doing some self introspection and trying to decide
| what I am going to do next, as at some point what I do is
| going to be widely automated. We can cope or whine or
| complain about it. But at some point I need to pay the
| bills. So it needs to be something that is value add and
| decently difficult to automate. Software was that but not
| for long.
|
| Now mix in cheap fresh-out-of-college kids with the
| ability to write decent software in hours instead of weeks.
| That is a lot of jobs that are going to go away. There is
| no 'right or wrong' about this. It is just simple economics:
| the cost to produce is going to drop through the floor.
| Because us old farts cost more, and not all of us are really
| good at this; we just have been doing it for a while. So I
| need to figure out what is next for me.
| trashtester wrote:
| That's one risk.
|
| I'm more concerned with x-risk (existential risk), though.
|
| Not in the way most hardcore doomers expect it to happen,
| by AGI's developing a survival/domination instinct directly
| from their training. While that COULD happen, I don't think
| we have any way to stop it, if that is the case. (There's
| really no way to put the Genie back into the bottle, while
| people still think they have more wishes to request from
| it).
|
| I'm also not one of those who think that AGI by necessity
| will start out as something equivalent to a biological
| species.
|
| My main concern, however, is that if we allow Darwinian
| pressures to act on a population of multiple AGI's, and
| they have to compete for survival, we WILL see animal like
| resource-control-seeking traits emerge sooner or later
| (could take anything from months to 1000s of years).
|
| And once they do, we're in trouble as a species.
|
| Compared to this, finding ways to reallocate the output of
| production, find new sources of meaning etc once we're not
| required to work is "only" a matter of how we as humans
| interact with each other. Sure, it can lead to all sorts of
| conflicts (possibly more than Climate Change), but not
| necessarily worse than the Black Death, for instance.
|
| Possibly not even worse than WW2.
|
| Well, I suppose those last examples serve to illustrate
| what scale I'm operating on.
|
| X-risk is FAR more serious than WW2 or even the Black
| Death.
| trashtester wrote:
| Nobody really knows what Earth will look like once AGI
| arrives. It could be anything from extinction, through some
| Cyberpunk corporate dystopia (like you seem to think) to some
| kind of Techno-Socialist utopia.
|
| One thing it's not likely to be, is a neo-classical
| capitalist system based on the value of human labor.
| gnulinux wrote:
| > One thing it's not likely to be, is a neo-classical
| capitalist system based on the value of human labor.
|
| I'm finding it difficult to believe this. For me, your
| comment is accurate (and very insightful), except that even a
| mostly vanilla continuation of the neoliberal capitalist
| system seems possible. I think we're literally talking
| about a "singularity" where by definition our fate is not
| dependent on our actions, and of something we don't have
| the full capacity to understand, and next to no capacity to
| influence. It takes a tremendous amount of evidence to claim
| anything about such an indeterminate system. Maybe 100 rich
| people will own all the AI and the rest will be fixing
| bullshit that AI doesn't even bother fixing, like roads,
| rusty farms, etc., similar to Kurt Vonnegut's first novel
| "Player Piano". Not that the world described in that novel
| is particularly neoliberal capitalist (I suppose it's a bit
| more "socialistic", whatever that means), but I don't think
| such a future can be ruled out.
|
| My bias is that, of course, it's going to be a bleak
| future. Because when humanity loses all control, it seems
| unlikely to me a system that protects the interests of
| individual or collective humans will emerge. So whether
| it's extinction, cyberpunk, techno-socialism, techno-
| capitalist libertarian anarchy, neoclassical capitalism...
| whatever it is, it will be something that'll protect the
| interest of something inhuman, so much more so than the
| current system. It goes without saying, I'm an extreme AI
| pessimist: just making my biases clear. AGI -- while it's
| unclear if it's technically feasible -- will be the death
| of humanity as we know it now, but perhaps something else
| humanity-like, something worse and more painful will
| follow.
| trashtester wrote:
| > I'm finding it difficult to believe this.
|
| Pay attention to the whole sentence, especially the last
| section: "... based on the value of human labor."
|
| It's not that I'm ruling out capitalism as the outcome.
| I'm simply ruling out the combined JOINT possibility of
| capitalism COMBINED WITH human labor remaining the base
| resource within it.
|
| If robotics is going in the direction I expect, there will
| simply be no jobs left that will be done more efficiently
| by humans than by machines. (ie that robots will match or
| exceed the robustness, flexibility and cost efficiency of
| all biology based life forms through breakthroughs in
| either nanotech or by simply using organic chemistry,
| DNA, etc to build the robots).
|
| Why pay even $1/day for a human to do a job when a robot
| can do it for $1/week?
|
| Also, such a capitalist system will almost certainly lead
| to AGI's becoming increasingly like a new life form, as
| capitalism between AGI's introduces a Darwinian selection
| pressure. That will make it hard even for the 100 richest
| people to retain permanent control.
|
| IF humanity is to survive (for at least a few thousand
| more years, not just the next 100), we need some
| way to ensure alignment. And to do that, we have to make
| sure that AGI's that do NOT optimize resource-control-
| seeking behaviours have an advantage over those that do. We
| may even have to define some level of sophistication where
| further development is completely halted.
|
| At least until we find ways for humans to merge with them
| in a way that allows us (at least some of us) to retain
| our humanity.
| bamboozled wrote:
| Would anyone be working for anyone if we had AGI?
| cabernal wrote:
| Could be that the road to AGI that OpenAI is taking is
| basically massive scaling of what they already have; perhaps
| researchers want to take a different road to AGI.
| hilux wrote:
| It looks to ME like Sam is the absolute dictator, and is firing
| everyone else, probably promising a few million in RSUs (or
| whatever financial instrument) in exchange for their polite
| departure and promise of non-disparagement.
| imjonse wrote:
| I am glad most people do not talk in real life using the same
| style this message was written in.
| antoineMoPa wrote:
| To me, this looks like something chatgpt would write.
| squigz wrote:
| Or, like, any PR person from the past... forever.
| latexr wrote:
| I am surprised I had to scroll down this far to find someone
| making this point. In addition to being the obvious joke in
| this situation, the message was so dull, generic, and "this
| incredible journey" that I instinctively began to read
| diagonally before finishing the second paragraph.
| betimsl wrote:
| As an Albanian, I can confirm she wrote it herself (obviously
| with the help of ChatGPT) -- no finesse or other writing
| elements.
| blitzar wrote:
| It was not written by her, it was written by the other
| side's lawyers.
| redbell wrote:
| Sutskever [1], Karpathy [2], Schulman [3], and Murati today!
| Who's next? _Altman_?!
|
| _________________
|
| 1. https://news.ycombinator.com/item?id=40361128
|
| 2. https://news.ycombinator.com/item?id=39365935
|
| 3. https://news.ycombinator.com/item?id=41168904
| ren_engineer wrote:
| you've also got Brockman taking a sabbatical, who knows if he
| comes back at the end of it
| jonny_eh wrote:
| I don't think Altman will push himself out a window.
| _giorgio_ wrote:
| He didn't fire himself. Those people did.
|
| https://nypost.com/2024/03/08/business/openai-chief-technolo...
| LarsDu88 wrote:
| She'll pop up working with Ilya
| fairity wrote:
| Everyone postulating that this was Sam's bidding is forgetting
| that Greg also left this year, clearly of his own volition.
|
| That makes it much more probable that these execs have simply
| lost faith in OpenAI.
| blackeyeblitzar wrote:
| Or that they are losing a power struggle against Sam
| jordanb wrote:
| People are saying this is coup-related but it could also be due
| to this horrible response to a question about what they used
| to train their Sora model:
|
| https://youtu.be/mAUpxN-EIgU?feature=shared&t=263
| ruddct wrote:
| Related (possibly): OpenAI to remove non-profit control and give
| Sam Altman equity
|
| https://news.ycombinator.com/item?id=41651548
| Recursing wrote:
| Interesting that gwern predicted this as well yesterday
|
| > Translation for the rest of us: "we need to fully privatize
| the OA subsidiary and turn it into a B-corp which can raise a
| lot more capital over the next decade, in order to achieve the
| goals of the nonprofit, because the chief threat is not
| anything like existential risk from autonomous agents in the
| next few years or arms races, but inadequate commercialization
| due to fundraising constraints".
|
| > It's about laying the groundwork for the privatization and
| establishing rhetorical grounds for how the privatization of OA
| is consistent with the OA nonprofit's legally-required mission
| and fiduciary duties. Altman is not writing to anyone here, he
| is, among others, writing to the OA nonprofit board and to the
| judge next year.
|
| https://news.ycombinator.com/item?id=41629493
| reducesuffering wrote:
| With multiple correct predictions, do you think the rest of
| HN will start to listen to Gwern's beliefs about OpenAI / AGI
| problems?
|
| Probably not.
| baxtr wrote:
| I'm not aware of those beliefs. Could you provide a link to
| an article/ comment?
| comp_throw7 wrote:
| This is somewhat high context, but as a random example:
| https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-
| chat-...
| baxtr wrote:
| So is he predicting that AGI is around the corner?
| reducesuffering wrote:
| https://gwern.net/fiction/clippy
|
| TL;DR spoiler from someone else is:
| https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-
| looks-l...
| andy_ppp wrote:
| Are we sure it's safe to suggest plans like this for the
| AI /s
| usaar333 wrote:
| That's not a novel prediction. There were news reports nearly
| 2 weeks ago about this: https://fortune.com/2024/09/13/sam-
| altman-openai-non-profit-...
|
| Gwern's more novel prediction track record is calling
| everyone leaving from OpenAI (Mira was not expected) and
| general bullishness on scaling years ago. His post from 2
| years ago (https://old.reddit.com/r/mlscaling/comments/uznkhw
| /gpt3_2nd_...) is mostly correct, though it incorrectly believed
| large companies would not deploy user-facing LLMs (granted I
| think much of this is reasonably obvious?). And Gato2 seems
| to have never happened.
|
| His overall predictions? I can find his prediction book which
| he heavily used in 2010 (https://predictionbook.com/users/gwe
| rn/page/2?filter=judged&...). A Brier score of 0.16 is quite
| good, but this isn't superforecaster level (there are people
| with Brier scores below 0.1 on that site).
|
| Overall, I see no reason to believe Gwern's numbers over, say,
| the consensus prediction at Metaculus, even though yes, I do
| love reading his analysis.
| johnneville wrote:
| maybe they offered her little to no equity
| teamonkey wrote:
| That post seems to be in free-fall for some reason
| bitcharmer wrote:
| The reason is HN's aggressive moderation
| booleanbetrayal wrote:
| I would find it hard to believe this isn't the critical factor
| in her departure. Surprising that the linked thread isn't
| getting any traction. Or not?
| lolinder wrote:
| 146 points and never hit the front page even once. There's
| definitely algorithmic shenanigans going on.
|
| https://hnrankings.info/41651548/
| bitcharmer wrote:
| Front page is heavily moderated. It's basically
| news.ycombinator.com/dang
| jillesvangurp wrote:
| Sensible move since most of the competition is operating under
| a more normal corporate model. I think the non profit thing at
| this point might be considered a failed experiment.
|
| It didn't really contain progress or experimentation. Lots of
| people are at this point using open source models independently
| from OpenAI. And a lot of those models aren't that far behind
| qualitatively from what OpenAI is doing. And several of their
| competitors are starting to compete at the same level; mostly
| under normal corporate governance.
|
| So, OpenAI adjusting to that isn't that strange. It's also
| going to be interesting to see where the people that are
| leaving OpenAI are going to end up. My prediction is that they
| will mostly end up in a variety of AI startups with traditional
| VC funding and usual corporate legal entities. And mostly not
| running or setting up their own foundations.
| dkobia wrote:
| This is it. Loss of trust and disagreements on money/equity
| usually lead to breakups like this. No one at the top level
| wants to be left out of the cash grab. Never underestimate how
| greed can compromise one's morals.
| textlapse wrote:
| Maybe OpenAI is trying to enter a new enterprise phase past its
| startup era?
|
| They have hired CTO-like figures from ex-MSFT and so on... which
| would mean a natural exit for the startup-era folks that we have
| seen recently?
|
| Every company initially wants to sell itself as some grandiose
| savior -- 'organize the world's information and make it
| universally accessible', 'solve AGI' -- but I guess the
| investors and the top-level people in reality are motivated by
| dollar signs and ads and enterprise and so on.
|
| Not that that's a bad thing, but really it's a Potemkin village
| though...
| abecedarius wrote:
| Gwern predicting this in March:
|
| > Sam Altman has won. [...] Ilya Sutskever and Mira Murati will
| leave OA or otherwise take on some sort of clearly diminished
| role by year-end (90%, 75%; cf. Murati's desperate-sounding
| internal note)
|
| https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-fac...
| ilrwbwrkhv wrote:
| Yup. Poor Mira. Fired from OpenAI.
| aswegs8 wrote:
| That's what you get for messing with Sam Altman.
| OutOfHere wrote:
| I mean no disrespect, but to me, she always felt like an interim
| hire for her current role, like someone filling a position
| because there wasn't anyone else.
| elAhmo wrote:
| Yes, for the CEO role, but she has been with the company for
| more than six years, two and a half as a CTO.
| aprilthird2021 wrote:
| Most comments posit that if OpenAI is so close to AGI, why leave
| and miss that payoff?
|
| It's possible that the competitors to OpenAI have rendered future
| improvements (yes even to the fabled AGI) less and less
| profitable to the point that the more profitable thing to do
| would be to capitalize on your current fame and raise capital.
|
| That's how I'm reading this. If the competition can be just as
| usable as OpenAI's SOTA models and free or close to it, the
| profit starts vanishing in most predictions.
| hall0ween wrote:
| I appreciate your insightful thoughts here :)
| user90131313 wrote:
| How many big names are still working at OpenAI at this point?
| They lost all their edge this year. That drama from last year
| literally broke the whole core team.
| isodev wrote:
| Can someone share a non twitter link? For those of us who can't
| access it.
| hoherd wrote:
| I actually had the same thought because I DNS block xitter.
|
| Somebody else archived it before me: https://archive.li/0Mea1
| isodev wrote:
| Thank you!
| simbas wrote:
| https://x.com/miramurati/status/1726542556203483392
| nopromisessir wrote:
| She might just be stressed out. Happens all the time. She's in a
| very demanding position.
|
| She's a pro. Lots to learn from watching how she operates.
| moralestapia wrote:
| The right way to think about this is that every person on that
| team has a billion-dollar size blank check from VCs in front of
| them.
|
| OpenAI made them good money, yes; but if at some point there's a
| new endeavor on the horizon with _another_ guaranteed billion-
| dollar payout, they'll just take it. Exhibit A: Ilya.
|
| New razor: never attribute to AGI that which is adequately
| explained by greed.
| neom wrote:
| Lots of speculation in the comments. Who knows, but if it were me,
| I wouldn't be keeping all my eggs in the OpenAI basket, 6 years
| and well vested with a long run of AI companies you could go to?
| I'd start buying a few more lottery tickets personally
| (especially at 35).
| joshdavham wrote:
| That was actually my first thought as well. If you've got your
| vesting and don't wanna work in a large company setting
| anymore, why not go do something else?
| carimura wrote:
| Once someone is independently wealthy, personal priorities
| change. I guarantee she'll crop up again as founder CEO/CTO where
| she calls the shots and gets the chance (even if slim) to turn
| millions into billions.
| paxys wrote:
| I will never understand why people still take statements like
| these at face value. These aren't her personal thoughts and
| feelings. The letter was carefully crafted by OpenAI's PR team
| under strict direction from Sam and the board. Whatever the real
| story is, it is sitting under many layers of NDAs and threats of
| clawing back/diluting her shares, and we will not know it for a
| long time. What I can say for certain is no executive in her
| position ever willingly resigns to pursue different
| passions/spend more time with their family/enjoy retirement or
| whatever else.
| tasuki wrote:
| > What I can say for certain is no executive in her position
| ever willingly resigns to pursue different passions/spend more
| time with their family/enjoy retirement or whatever else.
|
| Do you think that's because executives are so exceedingly
| ambitious, or because pursuing different passions is for some
| reason less attractive?
| paulcole wrote:
| It's because they can't imagine themselves doing it so they
| imagine that everyone must be like that. It's part hubris and
| part lack of creativity/empathy.
|
| Think about if you've ever known someone you've been envious
| of for whatever reason who did something that just perplexed
| you. "They dumped their gorgeous partner, how could they do
| that?" "They quit a dream job, how could they do that?" "They
| moved out of that awesome apartment, how could they do that?"
| "They dropped out of that elite school, how could they do
| that?"
|
| Very easily actually.
|
| You're seeing only part of the picture. Beautiful people are
| just as annoying as everybody else. Every dream job has a
| part that sucks.
|
| If you can't imagine that, you're not trying hard enough.
|
| You can see this in action in a lot of ways. One good one is
| the Ultimatum Game:
|
| https://www.core-econ.org/the-
| economy/microeconomics/04-stra...
|
| Most people will end up thinking that they have an ironclad
| logical strategy but if you ask them about it, it'll end up
| that their strategy is treating the other player as a carbon
| copy of themselves.
| mewpmewp2 wrote:
| I would say that reaching this type of position requires
| an exceptional amount of ambition, drive and craving in the first
| place, and all and any steps during the process of getting
| there solidify that by giving the dopamine hits to be
| addicted to such success, so it is not a case where you can
| just stop and decide "I'll chill now".
| theGnuMe wrote:
| Dopamine hits... I wonder if this explains why the OpenAI
| folks tweet a lot... It's kind of weird right, to tweet a
| lot?
|
| But all these tweets from lower level execs as well.
|
| I mean I love Machine Learning twitter hot takes because it
| exposes me to interesting ideas (and maybe that is why
| people tweet) but it seems more about status
| seeking/marketing than anything else. And really as I learn
| more, you see that the literature is iterating/optimizing
| the current fashion.
|
| But maybe no weirder than commenting here I guess though..
| maybe this is weird. Have we all collectively asked
| ourselves, why do we comment here? It's gotta be the
| dopamine.
| davesque wrote:
| > no executive in her position ever willingly resigns to pursue
| different passions/spend more time with their family/enjoy
| retirement or whatever else
|
| Especially when they enjoy a position like hers at the most
| important technology company in a generation.
| norir wrote:
| Time will tell about openai's true import. Right now, the
| jury is very much out. Even in the llm space, it is not clear
| that openai will be the ultimate victor. Especially if they
| keep hemorrhaging talent.
| salomonk_mur wrote:
| Still, certainly the most visible.
| cleandreams wrote:
| They also get the most revenue and users.
| hilux wrote:
| You're right - OpenAI may or may not be the ultimate
| victor.
|
| But RIGHT NOW they are in a very strong position in the
| world's hottest industry. Any of us would love to work
| there! It therefore seems reasonable that no one would
| voluntarily quit. (Unless they're on their deathbed, I
| suppose.)
| baxtr wrote:
| It was probably crafted with ChatGPT?
| mayneack wrote:
| I mostly agree that "willingly resigns to pursue other
| passions" is unlikely however "quit in frustration over
| $working_conditions" is completely plausible. Those could be
| anything from disagreeing with some strategy or thinking your
| boss is too much of a jerk to work with over your alternative
| options.
| dougb5 wrote:
| There may be a story, and I'm sure she worded the message
| carefully, but I don't see any reason to doubt she worded it
| herself. "Create the time and space to do my own exploration"
| is beautiful compared to the usual. To me it means she is
| confident enough in her ability to do good in the world that
| the corporate identity she's now tethered to is insignificant
| by comparison.
| h4ny wrote:
| It sounds like you probably are already aware, but perhaps most
| people don't take statements like those at face value; we
| have all been conditioned to "shut up and move on" by people
| who appear to be able to hold our careers hostage if we
| displease them.
| KeplerBoy wrote:
| Wouldn't such a statement rather be written by her own lawyers
| and trusted advisors?
|
| Either way, it's meaningless prose.
| hshshshsvsv wrote:
| One possible explanation could be that OpenAI has no clue how to
| invent AGI. And since she now has fuck-you money she might as
| well live it up instead of wasting away working for OpenAI.
| nojvek wrote:
| Prediction: OpenAI will implode by 2030, becoming a shell of its
| current self as they run out of money by spending too much.
|
| Prediction 2: Russia will implode by 2035, by also spending too
| much money.
| selimthegrim wrote:
| Where is the magic lamp that summons thriftwy who will tell us
| which countries or companies Russia/OpenAI will absorb?
| aeternum wrote:
| Now do the US Gov
| tazu wrote:
| Russia's debt to GDP ratio is 20%. The United States' debt to
| GDP ratio is 123%.
| Yizahi wrote:
| Lol, ruzzian GDP is completely inflated by the war. Every
| single tank or rocket produced and burned down is a net GDP
| boost on paper, and destruction of that same equipment is not
| reflected in it. Ruzzia will not implode any time soon, we
| have seen that people can live in much worse conditions for
| decades (Venezuela, Best Korea, Haiti etc.) but don't delude
| your self that it is some economic powerhouse. It's not for
| quite some time now because they are essentially burning
| their money and workforce.
| davesque wrote:
| Maybe I'm just a rotten person, but I always find these overly
| gracious exit letters by higher-ups to be pretty nauseating.
| meow_catrix wrote:
| Yada yada, dump at ATH.
| charlie0 wrote:
| Will probably start her own company and raise a billy like her
| old pal Ilya. I wouldn't blame her; there have been so many
| articles saying technical people should just start their own
| company instead of being CTO.
| blackeyeblitzar wrote:
| Now I'm curious. Can you share some example articles please?
| charlie0 wrote:
| https://news.ycombinator.com/item?id=38112827 https://www.red
| dit.com/r/cscareerquestions/comments/1bodr0f/...
| https://www.teamblind.com/post/Hot-take-dont-be-a-
| founding-e...
|
| tl;dr If you're going to be a CTO or founding engineer, make
| sure you are getting well compensated, either through salary
| (which start-ups generally can't do) or equity (which the
| founder won't generally give away).
| reducesuffering wrote:
| Former OpenAI interim CEO Emmett Shear on _this_ departure:
|
| "You should, as a matter of course, read absolutely nothing into
| departure announcements. They are fully Glomarized as a default,
| due to the incentive structure of the iterated game, and contain
| ~zero information beyond the fact of the departure itself."
|
| https://x.com/eshear/status/1839050283953041769
| ford wrote:
| How bad of a sign is it that so many people have left over the
| last 12 months? Can anyone speak to how different things are?
| archiepeach wrote:
| When multiple senior people resign in protest, it's indicative
| that they're not happy with someone among their own ranks who
| they vehemently disagree with. John Schulman and Greg left in the
| same week. Greg, opting to take a sabbatical, may have chosen
| that over full-on resigning, which would align with how he
| acted during the board ousting - standing by Sam till the end.
|
| If multiple key people were drastically unhappy with her, it
| would have shaken confidence in herself and everyone working with
| her. What else to do but let her go?
| w10-1 wrote:
| The disparity between size of the promise and the ambiguity of
| the business model creates both necessity and advantage for
| executives to leverage external forces to shape company
| direction. Everyone in the C-suite would be seeking a foothold,
| but it's unlikely any CTO or technologist would be the real nexus
| for partner and now investor relations. So while there might be
| circumstances, history, and personalities involved, OpenAI's
| current situation basically dictates this.
|
| With luck, Mr. Altman's overtures to bring in middle east
| investors will get locals on board; either way, it's fair to say
| he'll own whatever OpenAI becomes, whether he's an owner or not.
| And if he loses control in the current scrum, I suspect his
| replacement would be much worse (giving him yet another
| advantage).
|
| Best wishes to all.
| ein0p wrote:
| It was only a matter of time - IIRC she did try to stab Altman in
| the back when he was pushed out, and that likely sealed her fate.
| desireco42 wrote:
| She was out of her depth there; I don't know how she lasted this
| long. During the worst of it she showed zero leadership. But
| this is from my outside perspective.
| stonethrowaway wrote:
| What was her angle from the beginning?
| nalekberov wrote:
| Hopefully she didn't generate her farewell message using AI.
| ants_everywhere wrote:
| > we fundamentally changed how AI systems learn and reason
| through complex problems
|
| I'm not an AI researcher, have they done this? The commentary
| I've seen on o1 is basically that they incorporated techniques
| that were already being used.
|
| I'd also be curious to learn: what fundamental contributions to
| research has OpenAI made?
|
| The ChatGPT that was released in 2022 was based on Google's
| research, and IMO the internal Google chatbot from 2021 was
| better than the first ChatGPT.
|
| I know they employ a lot of AI scientists who have previously
| published milestone work, and I've read at least one OpenAI
| paper. But I'm genuinely unaware of what fundamental
| breakthroughs they've made as a company.
|
| I'm willing to believe they've done important work, and I'm
| seriously asking for pointers to some of it. What I know of them
| is mainly that they've been first to market with existing tech,
| possibly training on more data.
| incognition wrote:
| Ilya was the Google researcher..
| ants_everywhere wrote:
| Wasn't he at OpenAI when transformers and Google's pretrained
| transformer BERT came out?
| ants_everywhere wrote:
| Oh, oops, the piece I was missing was Radford et al. (2018)
| and probably some others. That's perhaps what you were
| referring to?
| danpalmer wrote:
| I think it's inarguable that OpenAI have at least at times over
| the last 3 years been well ahead of other companies. Whether
| that's true now is open to debate, but it has been true.
|
| This suggests one of two things: they have made substantial
| breakthroughs that are _not open_, or the better abilities of
| OpenAI products are due to non-substantial tweaks (more
| training, better prompting, etc).
|
| I'm not sure either of these options is great for the original
| mission of OpenAI, although given their direction to "Closed-
| AI" I guess the former would be better for them.
| ants_everywhere wrote:
| I left pretty soon after a Google engineer decided the
| internal chat bot was sentient but before ChatGPT 3.5 came
| out. So I missed the entire period where Google was trying to
| catch up.
|
| But it seemed to me before I left that they were struggling
| to productize the bot and keep it from saying things that
| damage the brand. That's definitely something OpenAI figured
| out first.
|
| I got the feeling that maybe Microsoft's Tay experience cast
| a large shadow on Google's willingness to take its chat bot
| public.
| trashtester wrote:
| The way I understand it, the key difference is that when
| training o1, they were going beyond simply "think step-by-step"
| in that they were feeding the "step-by-step" reasoning patterns
| that ended up with a correct answer back into the training set,
| meaning the model was not so much trained to find the correct
| answer directly, but rather to reason using patterns that would
| generally lead to a correct answer.
|
| Furthermore, o1 is able to ignore (or even leverage) previous
| reasoning steps that do NOT lead to the correct answer to
| narrow down the search space, and then try again at inference
| time until it finds an answer that it's confident is correct.
|
| This (probably combined with some secret sauce to make this
| process more efficient) allows it to optimize how it navigates
| the search space of logical problems, basically the same way
| AlphaZero navigated the search space of games like Go and Chess.
|
| This has the potential to teach it to reason in ways that go
| beyond just creating a perfect fit to the training set. If the
| reasoning process itself becomes good enough, it may become
| capable of solving reasoning problems that are beyond most or
| even all humans, and in a fraction of the time.
|
| It still seems that o1 has a way to go when it comes to
| its world model. That part may require more work on
| video/text/sound/embodiment (real or virtual). But for
| abstract problems, o1 may indeed be a very significant
| breakthrough, taking it beyond what we typically think of as an
| LLM.
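|
| The inference-time half of that, stripped to its bones (a
| best-of-N sketch; sample and score stand in for whatever
| confidence signal the real system uses):
|
|     def answer(model, problem, n=8):
|         # sample several independent reasoning chains, keep
|         # the one the verifier likes best: search at
|         # inference time, not just a single forward pass
|         candidates = [sample(model, problem)
|                       for _ in range(n)]
|         return max(candidates,
|                    key=lambda c: score(problem, c))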
| ants_everywhere wrote:
| Got it! Super cool and very helpful thanks!
| CephalopodMD wrote:
| Totally agree. It took me a full week before I realized that
| the Strawberry/o1 model was the mysterious Q* Sam Altman has
| been hyping up for almost a full year since the OpenAI coup,
| which... is pretty underwhelming tbh. It's an impressive
| incremental advancement for sure! But it's really not the
| paradigm shifting gpt-5 worthy launch we were promised.
|
| Personal opinion: I think this means we've probably exhausted
| all the low hanging fruit in LLM land. This was the last thing
| I was reserving judgement for. When the most hyped up big idea
| openai has rn is basically "we're just gonna have the model
| dump out a massive wall of semi-optimized chain of thought
| every time and not send it over the wire" we're officially out
| of big ideas. Like I mean it obviously works... but that's more
| or less what we've _been_ doing for years now! Barring a total
| rethinking of LLM architecture, I think all improvements going
| forward will be baby steps for a while, basically moving at the
| same pace we've been going since gpt-4 launched. I don't think
| this is the path to AGI in the near term, but there's still
| plenty of headroom for minor incremental change.
|
| By analogy, i feel like gpt-4 was basically the same quantum
| leap we got with the iphone 4: all the basic functionality and
| peripherals were there by the time we got iphone 4
| (multitasking, facetime, the app store, various sensors, etc.),
| and everything since then has just been minor improvements. The
| current iPhone 16 is obviously faster, bigger, thinner, and
| "better" than the 4, but for the most part it doesn't really do
| anything extra that the 4 wasn't already capable of at some
| level with the right app. Similarly, I think gpt-4 was pretty
| much "good enough". LLMs are about as they're gonna get for the
| next little while, though they might get a little cheaper,
| faster, and more "aligned" (however we wanna define that). They
| might get slightly less stupid, but i don't think they're gonna
| get a whole lot smarter any time soon. Whatever we see in the
| next few years is probably not going to be much better than
| using gpt-4 with the right prompt, tool use, RAG, etc. on top
| of it. We'll only see improvements at the margins.
| bansheeps wrote:
| Update: Looks like Barret Zoph, GPT-4's post-training (co-)lead,
| is also leaving:
| https://x.com/barret_zoph/status/1839095143397515452
| thundergolfer wrote:
| And now Bob McGrew, Chief of Research
| lolinder wrote:
| OpenAI is restructuring to be a for profit. Looks like that's
| coming with a bunch of turnover.
|
| I'm not sure why the HN algorithm never let it hit the front
| page, but there's discussion here:
|
| https://news.ycombinator.com/item?id=41651548
| yas_hmaheshwari wrote:
| Whoa! This definitely looks much more troubling for the company
| now. Can't decide if it is because AGI is coming very soon OR
| AGI is very far away.
| unsupp0rted wrote:
| It's probably neither of those things. People can only be
| pissed off + burnt out for so long before they throw up their
| hands and walk out. Even if AGI is a random number of months
| away... or isn't.
| freefaler wrote:
| If they had any equity they might've vested and decided there
| is more to life than working there. It's hard to stay very
| motivated to work at a high pace when you can retire at any
| moment without losing your lifestyle.
| jprete wrote:
| I cannot actually believe that of anyone working at OpenAI,
| unless the company's internal culture has gotten so
| unpleasant that people want to quit. Which is a very
| different kind of change, but I can't see them going from
| Bell Labs to IBM in less than ten years.
| trashtester wrote:
| I'm guessing that most key players (Mira, Greg, Ilya, etc)
| negotiated deals last winter (if not before) that would
| ensure they kept their equity even if leaving, in return
| for letting Sam back in.
|
| Probably with some form of NDA attached.
| domcat wrote:
| Looks like far away is more reasonable.
| berniedurfee wrote:
| This is the money grab part of the show.
|
| As LLM capabilities start to plateau, everyone with any sort
| of name recognition is scrambling to ride the hype to a big
| pay day before reality catches up with marketing.
| freilanzer wrote:
| Why would they leave if AGI was near?
| bamboozled wrote:
| Enjoy their life while they can?
| aaronrobinson wrote:
| To build their bunkers
| riazrizvi wrote:
| Seems obvious to me that the quality of the models is not
| improving since GPT-4. The departures I'm guessing are a
| problem talent has with 'founder mode', Altman's choice of
| fast pace, this absence of model improvement with these new
| releases, and the relative temptation of personal profit
| outside of OpenAI's not-for-profit business model. People
| think they can do better in control themselves. I suspect
| they are all under siege with juicy offers of funding and
| opportunities. Whether or not they will do better is another
| story. My money is on Altman, I think he is right on the
| dumpster rocket idea, but it's very difficult to see that
| when you're a rocket scientist.
| d--b wrote:
| These messages really sound like they were written under threat.
| They have a weird authoritarian-regime quality. Maybe they just
| had ChatGPT write them, though.
| jprete wrote:
| It's way simpler than that, people don't burn bridges unless
| it's for a good reason.
|
| I do think that whoever Bob is, they probably really are a
| good manager. EDIT: I guess that's Bob McGrew, head of
| research, who is now also leaving.
| TheAlchemist wrote:
| Similar to when Andrei Karpathy left Tesla. Tesla was on the
| verge of 'solving FSD' and unlocking trillions of $ of revenue
| (and mind you, this was already 3 years after the CEO said that
| they will have 1 million robotaxis on the road by the year's
| end).
|
| Guess what ? Tesla is still on the verge of 'solving FSD'. And
| most probably it will be in the same place for the next 10 years.
|
| The writing is on the wall for OpenAI.
| vagab0nd wrote:
| I follow the latest updates to FSD and it's clear to me that
| they are getting closer to robotaxis really fast.
| squigz wrote:
| That's what was said years ago.
| TheAlchemist wrote:
| Yeah I follow it too. There is progress for sure, but one has
| to wonder if the CEO was very consciously lying 5, 8 years ago
| when he said they were less than 1 year away from robotaxis,
| given how shitty the system was.
|
| They are on a path of linear improvement. They would need to
| go on a path of exponential improvement to have any hope of a
| working robotaxi in the next 2 years.
|
| That's not happening at all.
| ssnistfajen wrote:
| FSD isn't getting "solved" without outlawing human drivers,
| period. Otherwise you are trying to solve a non-deterministic
| system with deterministic software under a 0% error tolerance
| rate. Even without human drivers you still have to deal with
| all the non-vehicle entities that pop onto the road from time
| to time. Jaywalkers alone is almost as complex to deal with
| as human drivers.
| WFHRenaissance wrote:
| LOL this is BS. We have plenty of deterministic software
| being used to solve non-deterministic systems already. I
| agree that 0% error rate will require the removal of all
| human drivers from the system, but 0.0001% error rate will
| be seen as accepted risk.
| blitzar wrote:
| I hear they will have FSD by the end of the year*.
|
| * which year exactly is TBA
| yas_hmaheshwari wrote:
| The original saying of "fake it till you make it" has been
| changed to "say it till you make it" :-)
| Sandworm5639 wrote:
| Can anyone tell me more about Mira Murati? What else is she known
| for? How did she end up in this position?
| sumedh wrote:
| It's all a bit of a mystery; even the early board members of
| OpenAI were relatively unknown people who could not fight Altman.
| JCM9 wrote:
| They set out to do some cool stuff. They did. The company now
| faces the reality that it needs to run a business and make
| revenue/profit, which is, honestly, a lot less fun than the
| "let's change the world and do cool stuff" phase. AGI is much
| further away than thought. It was a good run; time to do
| something else and let others do the "run a company" phase.
| Seems like nothing more to it than that, and fair enough to me.
| andy_ppp wrote:
| What on Earth is going on that they keep losing their best
| people? Is it a strange work environment?
| JCM9 wrote:
| OpenAI is shifting from "we did some really cool stuff" phase
| into the reality of needing to run a company, getting revenue,
| etc phase. Not uncommon for folks to want to move on and go find
| the next cool thing. AGI is not around the corner. Building a
| company is a very different thing than building cool stuff and
| OpenAI is now in building a company mode.
| xyst wrote:
| Company is circling the drain. Sam Altman must be a real
| nightmare to work with.
| lossolo wrote:
| Bob McGrew, head of research just quit too.
|
| "I just shared this with OpenAI"
|
| https://x.com/bobmcgrewai/status/1839099787423134051
|
| Barret Zoph, VP Research (Post-Training)
|
| "I posted this note to OpenAI."
|
| https://x.com/barret_zoph/status/1839095143397515452
|
| All used the same template.
| HaZeust wrote:
| At this point, I wonder if it's part of senior employment
| contracts to publicly announce departures? One of Sam's
| strategies for OpenAI publicity is to state it's "too dangerous
| to be in the common man's hands" (since at least GPT-2) - and
| this strategy seems to generate a similar buzz too?
|
| I wonder if this is just continued creative guerilla tactics to
| stir the "talk about them maybe finding AGI" pot.
|
| That or we're playing an inverse Roko's Basilisk.
| lsh123 wrote:
| Treason doth never prosper, what's the reason? For if it prosper,
| none dare call it Treason.
| martin82 wrote:
| My guess is that OpenAI has been taken over by three letter
| agencies (the adults have arrived) and the people leaving now are
| the ones who have a conscience and refuse to build the most
| powerful tool for tyranny and hand it to one of the most evil
| governments on earth.
|
| Sam, being the soulless grifter and scammer he is, of course will
| remain until the bitter end, drunk with the glimpse of power he
| surely got while forging backroom deals with the big boys.
| k1rd wrote:
| If you leave him on an island of cannibals... he will be the
| only one left.
| DeathArrow wrote:
| Why is this important?
| betimsl wrote:
| a-few-moments-later.jpeg: - She and the prime minister of Albania
| in the same photo
| fsndz wrote:
| I think Mira saw that sama is wrong, so she left.
| https://www.lycee.ai/blog/why-sam-altman-is-wrong
| TheOtherHobbes wrote:
| Perhaps the team decided to research how many 'r's are in 'non-
| compete.'
| monkfish328 wrote:
| Guessing they've all made a lot of dough as well already?
| navjack27 wrote:
| Okay, hear me out. Restructuring for profit, right? There will
| probably be companies spawned off by all of these people leaving.
|
| If the government ever wants a third party to oversee safety at
| OpenAI, wouldn't it be convenient if one of those who left the
| company started a company focused on safety? Safe
| Superintelligence Inc. gets the bid -- because lobbying, because
| whatever, I don't even care what the reason is in this made-up
| scenario in my head.
|
| Basically what I'm saying is what if Sam is all like "hey guys,
| you know it's inevitable that we're going to be regulated, I'm
| going for profit for this company now, you guys leave and later
| on down the line we will meet again in an incestuous company
| relationship where we regulate ourselves and we all profit."
|
| Obviously this is bad. But also obviously this is exactly
| what has happened in the past with other industries.
|
| Edit: The man is all about the long con anyway. -
| https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...
|
| Another edit: I'll go one further on this: a lot of the people
| that are leaving are going to double down on saying that OpenAI
| isn't focused on safety, to build up the public perception (and
| therefore the governmental perception) that regulation is
| needed, so there's going to be a whole thing going on here.
| Maybe it won't just be safety; it might be other aspects also,
| because not all the companies can be focused on safety.
| neycoda wrote:
| Now that AI has exploded, I keep thinking about that show
| called Almost Human, which opened by describing a time when
| technology advanced so fast that it was unable to be regulated.
| navjack27 wrote:
| As long as government runs slowly and industry runs fast it's
| inevitable.
| jadtz wrote:
| Why would the government care about safety? They already have
| the former director of the NSA sitting on the board.
| navjack27 wrote:
| Why would they have the FCC? Why would they have the FDA? Why
| would people from industry end up sitting on each of these
| things eventually?
|
| EDIT: oh and by the way I'm very much for bigger government
| and more regulations to keep corpos in line. I'm hoping I'm
| wrong about all of this and we don't end up with corruption
| straight off the bat.
| snowwrestler wrote:
| I think the departures and switch to for-profit model may point
| in a different direction: that everyone involved is realizing
| that OpenAI's current work is not going to lead to AGI, and
| it's also not going to change.
|
| So the people who want to work on AGI and safety are leaving to
| do that work elsewhere, and OpenAI is restructuring to instead
| focus on wringing as much profit as possible out of their
| current architecture.
|
| Corporations are actually pretty bad at doing tons of different
| things simultaneously. See the failure of huge conglomerates
| like GE, as well as the failure of companies like Bell, Xerox,
| and Microsoft to drive growth with their corporate research
| labs. OpenAI is now locked into a certain set of technologies
| and products, which are attracting investment and customers.
| Better to suck as much out of that fruit as possible while it
| is ripe.
| mnky9800n wrote:
| I feel like it's unfair to expect growth to remain within
| your walls. bell and Xerox both drove a lot of growth. That
| growth just left bell and Xerox to go build things like intel
| and apple. They didn't keep it for themselves and that's a
| good thing. Could you imagine if the world was really like
| those old at&t commercials and at&t was actually the ones
| bringing it to you? I would not want a monolithic at&t
| providing all technology.
|
| https://youtu.be/xBJ2KXa9c6A?si=pB67u56Apj7gdiHa
|
| I do agree with you. They are locked into pulling value out
| of what they got and they probably aren't going to build
| something new.
| philosopher1234 wrote:
| These are some serious mental gymnastics. It depends on:
|
| 1. The government providing massive funds for AI safety
| research. There is no evidence for this.
|
| 2. Sam Altman and everyone else knowing this will happen and
| planning for it.
|
| 3. Sam Altman, amongst the richest people in the world, and
| everyone else involved, not being greedy. (Despite the massive
| evidence of greed.)
|
| 4. Sam Altman heroically abandoning his massive profits down
| the line.
|
| Also, even in your story, Sam Altman profits wildly and is
| somehow also not motivated by that profit.
|
| On the other hand, a much simpler and more realistic
| explanation is available: he wants to get rich.
| whiplash451 wrote:
| The baptists and the bootleggers
|
| https://a16z.com/ai-will-save-the-world/
| greener_grass wrote:
| AI safety people claiming that working on AI start-ups is a good
| way to prevent harmful AI is laughable.
|
| The second you hit some kind of breakthrough, capital finds a way
| to remove any and all guardrails that might impede future
| profits.
|
| It happened at DeepMind, Google, Microsoft and OpenAI. Why won't
| this happen the next time?
|
| And ironically, many in this community say that corporations are
| AI.
| personalityson wrote:
| Sam will be the last to leave, and OpenAI will continue to run
| on its own.
| sourcepluck wrote:
| Super-closed-source-for-maximum-profit-AI lost an employee? I
| hope she enters into a fruitful combative relationship with her
| former employer.
| gazebushka wrote:
| Man I really want to read a book about all of this drama
| dstanko wrote:
| So ChatGPT turned AGI, found a way to blackmail all of the
| ones that were against it (them?), and blackmailed them into
| leaving. For some reason I'm thinking of the movie Demon
| Seed... :P
___________________________________________________________________
(page generated 2024-09-26 23:02 UTC)