[HN Gopher] Elon Musk wanted an OpenAI for-profit
       ___________________________________________________________________
        
       Elon Musk wanted an OpenAI for-profit
        
       Author : arvindh-manian
       Score  : 245 points
       Date   : 2024-12-13 19:36 UTC (3 hours ago)
        
 (HTM) web link (openai.com)
 (TXT) w3m dump (openai.com)
        
       | ronsor wrote:
       | I like this practice of publicly airing out dirty laundry.
        
         | motorest wrote:
         | This is not airing out dirty laundry. This is calling bullshit
         | on a bullshitter making bullshit claims.
        
         | vineyardmike wrote:
         | It's certainly in response to Elon getting more involved in the
         | government.
         | 
         | Recently the CFO basically challenged him to try and use his
         | influence against competition "I trust that Elon would act
         | completely appropriately... and not un-American [by abusing his
         | influence for personal gain]".
         | 
         | The best thing they can do is shine as much light on his
         | behavior in the hope that he backs down to avoid further
         | scrutiny. Now that Elon is competing, and involved with the
         | government, he'll be under a lot more scrutiny.
        
           | hipadev23 wrote:
           | > Now that Elon is competing, and involved with the
           | government, he'll be under a lot more scrutiny.
           | 
           | That's the cutest fucking thing I've heard this year. In what
           | world is anyone going to scrutinize Elon Musk? He's the
           | right-hand man of the most powerful person in the world. The
           | time for scrutiny was 8 years ago.
        
             | nativeit wrote:
             | Scrutiny: _noun_ critical observation or examination
        
               | snozolli wrote:
               | He'll be under scrutiny, just not by anyone with any
               | power whatsoever to stop or even meaningfully influence
               | his behaviors.
        
               | zie wrote:
               | Well Congress could meaningfully influence behaviors, but
               | it seems very unlikely unless something drastic happens.
        
               | nyc_data_geek1 wrote:
               | Scrutiny without the ability to exert oversight, control
               | or any modicum of restraint is... utterly useless to
               | everyone except the historians.
        
             | whoitwas wrote:
             | I didn't even realize he was American until this year. I
             | thought he was African.
        
           | hn1986 wrote:
            | It's naive for anyone to think that Elon _won't_ use his
           | influence in government to empower his companies and weaken
           | his competitors.
        
             | threeseed wrote:
             | He spent $250m+ on helping getting Trump elected.
             | 
              | Of course he is going to try and recoup some of that
              | money.
        
         | themanmaran wrote:
          | Agreed! (assuming that wasn't sarcasm)
         | 
         | Has a nice "nailing my grievances to the town square bulletin
         | board" feel. Doesn't result in any real legal evidence either
         | way, but it's fun to read along.
        
       | leonmusk wrote:
        | Well, it seems Elon is playing to win at all costs. Win what,
        | though? "The Culture" from that Iain Banks book?
        
         | vidarh wrote:
         | Whole series of books. Banks would have detested where Musk has
         | ended up, though - he was a committed socialist.
        
       | mediumsmart wrote:
       | How could he! _thank god that did not happen._
        
         | MassiveQuasar wrote:
         | From TFA:
         | 
         | > Summer 2017: We and Elon agreed that a for-profit was the
         | next step for OpenAI to advance the mission
         | 
         | So basically Elon had the same idea as Sam Altman.
        
           | mediumsmart wrote:
           | Sam too?!! how could he! thank god that did not happen
           | either.
        
       | m463 wrote:
       | I wonder who wrote this missive?
        
         | jtokoph wrote:
         | GPT-4 :)
        
       | moralestapia wrote:
       | Yes ... and?
       | 
        | The part that raises eyebrows is how a non-profit suddenly became
       | a for-profit, from a legal standpoint.
        
         | ascorbic wrote:
         | He is suing to try to stop them becoming a for-profit. This
         | post is showing that he originally supported the idea.
        
           | moralestapia wrote:
           | >This post is showing that he originally supported the idea.
           | 
           | Yes ... and?
           | 
           | That still wouldn't make it _legal_. The lawsuit will be
            | decided based on jurisprudence, not based on "what Elon
           | thinks is right".
           | 
           | This is a very amateurish move, tbh. Did they hire Matt
           | Mullenweg's legal team?
        
         | claudiulodro wrote:
         | I think it's supposed to be juxtaposed to this tweet of his:
         | 
         | > I'm still confused as to how a non-profit to which I donated
         | ~$100M somehow became a $30B market cap for-profit. If this is
         | legal, why doesn't everyone do it?
        
           | Vespasian wrote:
            | Well, apparently that's a very bald-faced lie if the timeline
            | claimed by Sam Altman is even close to being plausible.
            | 
            | Elon Musk's image is, for some reason beyond me, one of a
            | "common man" who doesn't know all that much about business
            | (and he's trying to pander to Twitter fans with that).
            | 
            | Even more fascinating is that people are apparently buying
            | into it. As if you can just stumble into that much business
            | success (besides inheriting) without having a very firm grasp
            | on how the corporate structures behind it work.
        
         | topspin wrote:
         | A non-profit with a for-profit subsidiary...
         | 
         | Legal, despite the stench.
        
           | moralestapia wrote:
           | (Could still be) illegal.
           | 
           | Private inurement is a thing. There are laws written
           | explicitly to prevent this.
        
             | topspin wrote:
             | > Wrong
             | 
             | ?
             | 
             | OpenAI's non-profit+for-profit structure is detailed at
             | their own site: https://openai.com/our-structure/
             | 
             | It's a non-profit with a for-profit subsidiary. Please
             | elaborate on "wrong."
        
               | aipatselarom wrote:
               | (I'm @moralestapia, but on my phone)
               | 
               | Non-profits have a mission that has to be aligned with a
               | public/social benefit.
               | 
               | No amount of structuring would help you if it turns out
               | that your activities benefit a private
               | individual/organization instead of whatever public
               | benefit you wrote when setting up the non-profit.
               | 
                | All it takes is a judge ruling that [1] is happening, and
               | then it's over for you and all your derived entities,
               | subsidiaries, whatever-you-set-up-thinking-you-would-
               | outsmart-the-irs. Judges can see through your bullshit,
               | they do this 100s of times, year after year.
               | 
               | And also, "oh, but we wanted to do this since the
               | beginning" only digs you a deeper hole, lmao. Do they not
               | have common sense?
               | 
               | I'm surprised that @sama, whose major talent is
               | manipulat... sorry "social engineering", greenlit this
               | approach.
               | 
               | 1. https://www.irs.gov/charities-non-profits/charitable-
               | organiz...
        
       | akira2501 wrote:
       | [flagged]
        
         | adamrezich wrote:
         | That's the funniest part about all of this--the continued
         | posturing that AGI is _totally_ a thing that is _just_ beyond
         | our grasp.
        
           | blibble wrote:
           | just give us another $100 billion to spend, it's all we need
           | _wink_
        
             | kachapopopow wrote:
             | 23 trillion.
        
         | colechristensen wrote:
         | One of these days if AGI ever does actually come about, we
         | might very well have to go to war with them.
         | 
         | There might be a day where billionaires employ zero humans and
         | themselves merge with the AGI in a way that makes them not
         | quite human any more.
         | 
         | The amount of data being collected about everyone and what
         | machine learning can already do with it is frightening.
         | 
         | I'm afraid the reaction to AI when it actually becomes a threat
         | is going to look like more of a peasant revolt than a skynet
         | situation.
        
           | talldayo wrote:
           | > One of these days if AGI ever does actually come about, we
           | might very well have to go to war with them.
           | 
           | And the same conditions of material wealth that dictate
           | traditional warfare will not be changed by the _ChatGPT for
           | Warlords_ subscription. This entire conversation is silly and
           | predicated on beliefs that cannot be substantiated or
           | delineated logically. You (and the rest of the AI preppers)
           | are no different than the pious wasting their lives in fear
           | awaiting the promised rapture.
           | 
           | Life goes on. One day I might have to fistfight that super-
              | dolphin from _Johnny Mnemonic_ but I don't spend much time
           | worrying about it in a relative sense.
        
             | bobthebuilders wrote:
              | The rules of traditional warfare will still exist; they
              | will just be fought by advanced, hyper-intelligent AIs
              | instead of humans. Hunter-killer humanoids like Optimus
              | and drones like Anduril's will replace humans in war.
             | 
             | War will be the same, but the rich are preparing to unleash
             | a "new arsenal of democracy" against us in an AI takeover.
             | We must be prepared.
        
               | talldayo wrote:
                | > Hunter-killer humanoids like Optimus and drones like
                | Anduril's will replace humans in war.
               | 
               | You do not understand how war is fought if you sincerely
               | believe this. Battles aren't won with price tags and
               | marketing videos, they're won with strategic planning and
               | tactical effect. The reason why the US is such a powerful
               | military is not because we field so much materiel, but
                | because each piece of materiel is so effective. Many
                | standoff-range weapons are automated and precise within
                | feet or even inches of the target; success rates are
                | higher than 98% in most cases. These are weapons that
                | won't get replaced by drones, and it's why Anduril _also_
                | produces cruise missiles and glide bombs in recognition
                | that their drones aren't enough.
               | 
                | Serious analysts aren't taking drones seriously; it's a
                | consensus among everyone that isn't Elon Musk. Drones in
               | Ukraine are used in extreme short-range combat (often
               | less than 5km in range from each other), and often
               | require expending several units before landing a good
                | hit. These are improvised munitions of _last resort_,
               | not a serious replacement for antitank guided weaponry.
               | It's a fallacy on the level of comparing an IED to a
               | shaped-charge landmine.
               | 
               | > but the rich are preparing to unleash a "new arsenal of
               | democracy" against us in an AI takeover
               | 
               | The rich have already taken over with the IMF. You don't
               | need AI to rule the world if you can get them addicted to
               | a dollar standard and then make them indebted to your
               | infinite private capital. China does it, Russia does
               | it... the playbook hasn't changed. Even if you make a
               | super-AI as powerful as a nuke, you suffer from the same
               | problem that capitalism is a more devastating weapon.
        
               | colechristensen wrote:
               | >These are weapons that won't get replaced by drones
               | 
                | Those weapons _are_ drones. They're just rockets instead
               | of quadcopters. They're also several orders of magnitude
               | more expensive, but they really could get driven by the
               | same off-the-shelf kind of technology if someone bothered
               | to make it.
               | 
               | And they _will_ get replaced. Location based targeting is
               | in many cases less interesting than targeting something
               | which can move and could be recognized by the weapon in
                | flight. Load up a profile of a tank, a license plate,
               | images of a person, etc. to be recognized and targeted
               | independently in flight.
               | 
               | >Battles aren't won with price tags and marketing videos,
               | they're won with strategic planning and tactical effect.
               | 
               | Big wars tend to get won by resources more than tactics.
               | Japan and Germany couldn't keep up with US industrial
               | output. Germany couldn't keep up with USSR manpower.
               | 
               | Replacing soldiers with drones means it's more of a
               | contest of output than strategy.
        
               | bobthebuilders wrote:
               | I am not talking about drones like DJI quadcopters with
               | grenades duct taped to them or even large fixed wing
               | aircraft, I am talking about small personal humanoid
               | drones.
               | 
               | Civilization is going through a birth rate collapse. The
               | labor shortage will become more endemic in the coming
               | years, first in lower skill and wage jobs, and then
               | everywhere else.
               | 
               | Humanoid robots change the economics of war. No longer
               | does the military or the police need humans. Morale will
               | no longer be an issue. The infantry will become materiel.
        
             | colechristensen wrote:
              | Robot soldiers are a today problem. You want a gun or a
              | bomb on a drone with facial recognition that could roam
              | the skies until it finds you and destroys its target?
             | 
             | That's a weekend project for a lot of people around here.
             | 
             | You don't need AGI for a lot of these things.
             | 
             | We are not far away from an entire AI corporation.
        
           | guerrilla wrote:
           | Yes, when one group of people, a small minority of the
           | population, controls the ability to produce food and
           | violence, then we have a serious problem.
        
           | Terr_ wrote:
           | > One of these days if AGI ever does actually come about, we
           | might very well have to go to war with them.
           | 
           | They arguably already exist in the form of very large
           | corporations. Their lingering dependency on low-level human
           | logic units is an implementation detail.
        
           | dboreham wrote:
            | Like Johnny Depp in that movie.
        
         | Terr_ wrote:
         | > Neither of you have anything even _approaching_ AGI.
         | 
         | On that note, is there a term for, er... Negative hype? Inverse
         | hype? I'm talking about where folks clutch their pearls and
         | say: "Oh no, our product/industry might be _too awesome_ and
         | _doom mankind_ with its strength and guaranteed growth and
          | profitability!"
         | 
         | These days it's hard to tell what portion is cynical marketing
         | ploy versus falling for their own propaganda.
        
         | rvz wrote:
         | > Neither of you have anything even _approaching_ AGI.
         | 
         | There are so many conflicting definitions of what "AGI" means.
          | Not even OpenAI or Microsoft knows what it means.
         | 
         | "AGI" is a scam.
        
           | xiphias2 wrote:
            | Of course it's clear: AGI is achieved when a machine is
            | completely capable of simulating a human being.
           | 
           | At that point remote / office work is 100% over.
        
         | dang wrote:
         | Ok, but please don't fulminate on HN*. Comments like this
         | degrade the community for everyone.
         | 
         | You may not owe people who you feel are spoiled rich kids
         | better, but you owe this community better if you're
         | participating in it.
         | 
         | * this is in the site guidelines:
         | https://news.ycombinator.com/newsguidelines.html
        
           | akira2501 wrote:
           | Ok, but I think it's clear that what I wrote does not convey
           | violence or vehemence but simple disrespect. A disrespect not
           | born out of their early life and personal history but out of
           | their actions here _today_.
           | 
           | Which I think I'm entitled to convey as these are two CEOs
           | attempting to try a case in public while playing fast and
           | loose with the truth to bolster their cases. You may feel
            | that I, as a simple anonymous commenter, "owe this community
           | better," but do you spare none of these same sentiments for
           | the author himself?
        
       | unglaublich wrote:
       | If even OpenAI, who could benefit greatly from his money, warns
       | you about this person, you might want to take it seriously.
        
       | tuyguntn wrote:
        | So much public drama around an AI company. Just curious how it
        | will impact their brand and relationships with enterprise
        | customers, who usually seek stability when it comes to their
        | service providers.
        
       | WhatsName wrote:
       | I wonder if there are PR people out there, who watched the
       | Wordpress vs. WPEngine disaster from the sideline and took notes.
       | 
       | For me this rhymes with recent history...
        
         | baobabKoodaa wrote:
         | What, you can't feel the Christmas spirit?
        
       | Havoc wrote:
       | Not 100% close to the facts, but from a cursory read this seems
       | deeply dishonest on OpenAI's part to me.
       | 
        | Musk appears to be objecting to a structure having profit-driven
        | players ("YC stock along with a salary") directly in the
        | nonprofit... and is suggesting moving it to a parallel structure.
        | 
        | That seems like a perfectly valid and frankly
        | ethically/governance-sound point to raise. The fact that he
        | mentions incentives specifically suggests to me that he was going
        | down the line of reasoning outlined above.
       | 
       | Framing that as "Elon Musk wanted an OpenAI for-
        | profit"... idk... maybe I'm missing something here, but "dishonest
        | framing" is definitely the phrase that comes to mind.
        
       | freedomben wrote:
       | > _You can't sue your way to AGI. We have great respect for
       | Elon's accomplishments and gratitude for his early contributions
       | to OpenAI, but he should be competing in the marketplace rather
       | than the courtroom._
       | 
       | Isn't that exactly what he's doing with x.ai? Grok and all that?
       | IIRC Elon has the biggest GPU compute cluster in the world right
       | now, and is currently training the next major version of his
       | "competing in the marketplace" product. It will be interesting to
       | see how this blog post ages.
       | 
       | I'm not dismissing the rest of the post (and indeed I think they
       | make a good case on Elon's hypocrisy!) but the above seems at
       | best like a pretty massive blindspot which (if I were invested in
       | OpenAI) would cause me some concern.
        
         | hangonhn wrote:
         | > biggest GPU compute cluster in the world right now
         | 
         | Really? I'm really surprised by that. I thought Meta was the
         | one who got the jump on everyone by hoarding H100s. Or did you
         | mean strictly GPUs and not any of the AI specific chips?
        
           | freedomben wrote:
           | Good point, I don't know if it's strictly GPUs or also
           | includes some other AI specific chips.
           | 
           | Nvidia wrote about it:
           | https://nvidianews.nvidia.com/news/spectrum-x-ethernet-
           | netwo...
        
             | hangonhn wrote:
             | oh wow. I think your original assertion is correct. Wow.
             | What a crazy arms race.
        
         | kiernanmcgowan wrote:
         | > IIRC Elon has the biggest CPU compute cluster in the world
         | right now
         | 
         | Do you have a source for this? I don't buy this when compared
         | to Google, Amazon, Lawrence Livermore National Lab...
        
           | freedomben wrote:
           | I first heard it on the All-In podcast, but I do see many
           | articles/blogs about it as well. Quick note though, I
           | mistyped CPU (and rapidly caught and fixed, but not fast
           | enough!) when I meant GPU.
           | 
           | [1]: https://www.yahoo.com/tech/worlds-fastest-supercomputer-
           | plea...
        
           | mrshu wrote:
            | The claim seems to be coming mostly from NVIDIA marketing [0].
           | 
           | [0] https://nvidianews.nvidia.com/news/spectrum-x-ethernet-
           | netwo...
        
         | codemac wrote:
         | > biggest GPU compute cluster in the world right now
         | 
         | This is wildly untrue, and most in industry know that.
         | Unfortunately you won't have a source just like I won't, but
         | just wanted to voice that you're way off here.
        
           | freedomben wrote:
           | > _This is wildly untrue, and most in industry know that.
            | Unfortunately you won't have a source just like I won't, but
           | just wanted to voice that you're way off here._
           | 
           | Sure, we probably can't know for sure who has the biggest as
           | they try to keep that under wraps for competition purposes,
           | but it's definitely not "wildly untrue." A simple search will
           | show that they have if not the biggest, damn near one of the
           | biggest. Just a quick sample:
           | 
           | https://nvidianews.nvidia.com/news/spectrum-x-ethernet-
           | netwo...
           | 
           | https://www.yahoo.com/tech/worlds-fastest-supercomputer-
           | plea...
           | 
           | https://www.tomshardware.com/pc-components/gpus/elon-musk-
           | to...
           | 
           | https://www.capacitymedia.com/article/musks-xais-colossus-
           | cl...
        
             | codemac wrote:
             | i've physically visited a larger one, it is not even a well
             | kept secret. we all see each other at the same airports and
             | hotels.
        
             | threeseed wrote:
              | Technically, it may be the world's biggest _single_ AI
             | supercomputer.
             | 
             | But it ignores Amazon, Google and Microsoft/OpenAI being
             | able to run training workloads across their entire clouds.
        
           | rvz wrote:
           | It is true. [0]
           | 
           | [0] https://nvidianews.nvidia.com/news/spectrum-x-ethernet-
           | netwo...
        
             | sigh_again wrote:
             | Even just Meta dwarfs Twitter's cluster, with an estimated
             | 350k H100s by now.
        
               | BrickFingers wrote:
                | 2 months ago Jensen Huang did an interview where he said
                | xAI built the fastest cluster with 100k GPUs. He said
                | "what they achieved is singular, never been done before":
                | https://youtu.be/bUrCR4jQQg8?si=i0MpcIawMVHmHS2e
               | 
                | Meta said they would expand their infrastructure to
                | include 350k GPUs by the end of this year. But my guess
                | is they meant a collection of AI clusters, not a single
                | large cluster. In the post where they mentioned this,
                | they shared details on 2 clusters with 24k GPUs each:
                | https://engineering.fb.com/2024/03/12/data-center-
                | engineerin...
        
             | verdverm wrote:
             | Really? Meta looks to be running larger clusters of Nvidia
             | GPUs already
             | 
             | https://engineering.fb.com/2024/03/12/data-center-
             | engineerin...
             | 
              | This doesn't account for in-house silicon like Google's,
              | where the comparison becomes less direct (different
              | devices, multiple subgroups like DeepMind).
        
           | boringg wrote:
            | I don't think you've been paying attention to the industry,
            | even though you're posturing like an insider.
        
         | axus wrote:
         | Maybe Elon is doing both, competing in the marketplace and in
         | the courtroom. And in advising the president to regulate non-
          | profit AI.
        
           | freedomben wrote:
           | Agree, he is doing both. But if he's competing in the
           | marketplace, it seems pretty off base for Open AI to tell him
           | he should be competing in the marketplace. So I think my
           | criticism stands.
        
             | nativeit wrote:
             | I don't think their suggestion ever implies that he isn't.
        
               | freedomben wrote:
                | If they believed he was already competing in the
               | marketplace, then what would be the point of saying "he
               | should be competing in the marketplace rather than the
               | courtroom." ? I'm genuinely trying to understand what I'm
               | missing here because it seems illogical to tell someone
               | they should do something they are already doing.
        
         | Etheryte wrote:
         | Surely Meta has the biggest compute in that category, no? I
         | wouldn't be surprised if Elon went around saying that to raise
         | money though.
        
         | llm_nerd wrote:
         | >Isn't that exactly what he's doing with x.ai? Grok and all
         | that?
         | 
         | They aren't saying he isn't. But he _is_ trying to handicap
         | OpenAI, while his own offering at this point is farcical.
         | 
         | >It will be interesting to see how this blog post ages.
         | 
         | Whether Elon's "dump billions to try to get attention for The
         | Latest Thing" attempt succeeds or not -- the guy has an
         | outrageous appetite to be the center of attention, and sadly
         | people play along -- has zero bearing on the aging of this blog
         | post. Elon could simply be fighting them in the marketplace,
         | instead he's waging a public and legal campaign that honestly
         | makes him look like a pathetic bitch. And that's regardless of
         | my negative feelings regarding OpenAI's bait and switch.
        
           | elif wrote:
            | Eh, Grok is bad, but I wouldn't call it farcical. It's
            | terrible at multimodal, but in terms of up-to-date cultural
            | knowledge, sentiments, etc. it's much better than the stale
            | GPT models (even with search added).
        
         | boringg wrote:
         | It's rich coming from Sam Altman -- the guy who famously tried
         | to use regulatory capture to block everyone else.
        
       | factorialboy wrote:
        | For-profit isn't the problem. Lying about being a non-profit to
        | raise funds, and _then_ becoming for-profit: that's the
        | underlying concern.
        
         | ozim wrote:
         | That should be top comment on all OpenAI threads.
         | 
         | Just like "open source and forever free" - until of course it
         | starts to make sense charging money.
        
         | retinaros wrote:
          | Yes, but it's easy to hate Musk while thinking you're siding
          | with the good guys...
        
           | boringg wrote:
            | There are no good guys at the executive level of the AI
            | world; it's money and power dressed in saving-humanity
            | language.
        
         | catigula wrote:
          | Yes, which means this post is an attempt to smear the
          | credibility of Musk, not make a legal defense.
         | 
         | If this were a legal defense, this would be heard in court.
        
         | lesuorac wrote:
         | Too bad not disclosing that you always intended to convert the
         | non-profit into a for-profit during your testimony while
         | numerous senators congratulate you about your non-profit values
         | isn't problematic.
         | 
         | https://www.techpolicy.press/transcript-senate-judiciary-sub...
        
         | boringg wrote:
          | It's true. Also, it looks like Musk's original statement was
          | correct that they should have gone with a C-corp instead of an
          | NFP.
        
         | crowcroft wrote:
         | Some might characterize OpenAI leadership as not 'consistently
         | candid'.
        
       | InfiniteVortex wrote:
       | regardless of what you think of it, the drama is at least
       | entertaining!
        
       | hermannj314 wrote:
       | A few weeks ago my OpenAI credits expired and I was billed to
        | replace them. I had no idea this was the business model; fine,
        | you got me with your auto-renew scam because you decided my
        | tokens were spoiled.
       | 
        | At some point, OpenAI became who they said they weren't. You
        | can't even get ChatGPT to do anything fun anymore, as the lawyers
        | have hobbled its creative expression.
       | 
       | And now they want to fight with Elon over what they supposedly
       | used to believe about money back in 2017.
       | 
       | Who actually deserves our support going forward?
        
         | pc86 wrote:
         | Expiring tokens is a pretty standard business model. Are they
         | lying and saying the tokens never expire?
        
           | hermannj314 wrote:
            | Nope, they aren't lying, but they lost me as a customer. My
            | Arby's gift card lasted longer than my OpenAI credits. It's a
            | horrible business model and I wish them luck, but I won't be
            | a part of supporting bad business models.
        
             | selimthegrim wrote:
             | Don't give them any ideas. Next time Starbucks will be
             | popping up in the ChatGPT output, offering to convert your
             | unused tokens to rewards there.
        
         | sadeshmukh wrote:
         | The free credit or the tokens? Because that's a very different
         | story.
        
           | hermannj314 wrote:
            | One year ago I gave OpenAI $100 to have credits for small
            | hobby projects that use their API.
            | 
            | They expired and disappeared, and then OpenAI charged me
            | $100 to reload my lost money.
            | 
            | I am sure this is what I agreed to, but I personally thought
            | I was hacked or something because I didn't expect this
            | behavior from them.
            | 
            | They lost me as an API user and monthly ChatGPT Plus user.
            | I hope it was worth it. They want enterprise, business, and
            | pro users anyway, and not my small potatoes.
        
       | AlanYx wrote:
       | Ignoring all the drama, this part is interesting:
       | 
       | "On one call, Elon told us he didn't care about equity personally
       | but just needed to accumulate $80B for a city on Mars."
        
         | pram wrote:
         | Measured another way, just a bit under 2 Twitters.
        
           | talldayo wrote:
           | The fact that owning Twitter was worth half a Mars colony to
           | him should give you an idea of how seriously he's taking this
           | whole thing. It's up there next to "Full Self Driving" and
           | "$25,000 EV" in the Big Jar of Promises Used To Raise Capital
           | and Nothing Else.
        
             | adabyron wrote:
             | He bought Twitter for $40B.
             | 
              | His Tesla stock was at 0% YTD until the election.
              | 
              | Post-election it is up roughly 70% YTD & has paid for
              | Twitter & the Mars colony multiple times over.
             | 
             | Hard to say if that happens without him owning Twitter.
        
               | kingofheroes wrote:
               | This. In hindsight, buying Twitter at a loss was well
               | worth the long (though, more like medium) term results.
               | As much as it disgusts me, I'm impressed.
        
               | mhh__ wrote:
                | Honestly, I'm not convinced it was about that. I'm sure
                | he likes the stock going up, but I think he is somewhat
                | earnest about why he bought Twitter, or at least about
                | what he wanted to do with it once he got stuck with it.
                | 
                | As per usual, he bet the house and won. The vibes are
                | shifting, Trump won, etc.
        
               | TheGRS wrote:
               | Are we really giving Twitter that much credit these days?
               | I feel like we gave it less credit when it was actually
               | popular. I would give Elon jumping around on stage more
               | credit than what Twitter/X did for this election.
        
               | SV_BubbleTime wrote:
               | >I feel like we gave it less credit when it was actually
               | popular.
               | 
                | I'm not sure I've ever seen a more self-referential
                | comment here. When it was popular _with you_. I know it
                | is hard to imagine, but half of United States voters see
                | it as the last free-speech area of the internet. Maybe
                | they're all wrong, and you are right, and Bluesky is
                | going to magically gain the network effects that Meta
                | couldn't drum up.
        
               | Arkhaine_kupo wrote:
               | > Hard to say if that happens without him owning Twitter.
               | 
                | It's fairly easy.
                | 
                | In almost every Western country, the incumbent
                | administration has been punished by voters over
                | inflation; this has been the case in the UK, Germany,
                | Romania, France, Mexico... the list goes on. So Trump
                | could have won without Elon buying Twitter.
                | 
                | Similarly, he could have donated to Trump without buying
                | Twitter, and been on stage, and spent all day on Twitter
                | saying nonsense without purchasing it. So being close to
                | Trump was possible without buying Twitter.
                | 
                | The market would have reacted the same way, because the
                | market is reacting to the fact that Trump is a corrupt
                | leader, and being close to him means the market will be
                | skewed to benefit his cronies (in this case Elon). If
                | I'm not wrong, Trump has already mentioned creating
                | "self-driving car initiatives", which probably means
                | boosting Tesla's dangerous self-driving mode, and they
                | have also alluded to investigating rival companies to
                | Tesla and SpaceX, or at least "reviewing their
                | government contracts". Other industries without social
                | media owners, like private prisons, also skyrocketed
                | after Trump won; those paid Trump as much as Elon did
                | but were not on social media. The market would have
                | reacted to Trump being corrupt regardless of Elon buying
                | Twitter.
                | 
                | So it's easy to say that his stock would be up 70%
                | without buying Twitter, as long as he kept the $250
                | million investment in the Trump campaign and the market
                | then assessed the Trump admin as affecting market
                | fairness, both of which would have happened without his
                | purchase.
        
             | elif wrote:
              | "Full Self Driving" is a real thing, and it's resilient
              | and nearly feature complete.
              | 
              | v13 is way better and safer than a great Uber driver, let
              | alone an average one.
             | 
             | Check it out
        
               | ben_w wrote:
               | I've been hearing "nearly feature complete" for over a
               | decade now: https://en.wikipedia.org/wiki/List_of_Predict
               | ions_for_Autono...
        
               | threeseed wrote:
               | > v13 is way better and safer than a great Uber driver,
               | let alone average.
               | 
               | Will need to see a source for this.
               | 
               | Especially since NHTSA crash reporting data is not made
               | public.
        
               | bee_rider wrote:
               | I'm sure soon enough we won't have to worry about NHTSA
               | keeping that sort of data private (because that agency
               | will simply be found inefficient and eliminated).
        
           | bravetraveler wrote:
           | Town squares go a lot further than I imagined
        
           | ShakataGaNai wrote:
           | When he bought it, sure? Probably more like 20 X's
        
             | SV_BubbleTime wrote:
             | The ideologue denialism is fascinating. You need to tell
             | yourself he's bad and the products he associates with are
             | bad - because he decided to be a conservative.
             | 
             | It's an amazing process watching from the outside.
        
         | NetOpWibby wrote:
         | This made me laugh but also made me think, "...that's it?
         | That's all it takes?"
        
           | marcosdumay wrote:
           | Land on Mars is cheap...
           | 
           | But no, I really doubt that's all it takes. Unless you
           | discount all of the R&D costs as SpaceX operational expenses.
        
             | kulahan wrote:
             | I imagine that's what he's doing. He's willing to put a lot
             | of company money into getting the city on Mars started,
             | because if he's first there, he's gonna set himself (or his
             | dynasty?) up to make hundreds of billions of dollars. Being
             | effectively in control of your own planet? Neat. Scary too.
        
               | linotype wrote:
               | Doing what exactly? What industry could Mars possibly
               | support profitably?
        
               | VectorLock wrote:
               | Yeah and Mars is a shitty place to live. And will always
               | be a shitty place to live. No amount of fantastical
               | "terraforming" is going to create a magnetosphere.
        
               | lyu07282 wrote:
                | perhaps the "not dying on Earth industry" after climate
                | catastrophe hits
        
               | bee_rider wrote:
               | While I'm convinced we're going to screw this planet up,
               | the gap between "as bad as we can make Earth" and "as
               | good as we can make Mars" is pretty huge, right? And not
               | in a way that is optimistic for Mars.
        
               | lyu07282 wrote:
                | True, it's probably easier to survive on Earth in some
                | luxury bunker than on Mars, no matter how much we
                | destroy Earth. Alternative theories: billionaire space
                | tourism; it was never really about Mars but about
                | asteroid mining; it was never about Mars, he just wants
                | the government subsidies.
        
               | SaberTail wrote:
               | There's no climate scenario in which Mars is more
               | habitable than Earth. Even if a Texas-sized asteroid
               | crashed into Earth, Earth would still be more habitable
               | than Mars.
        
               | sangnoir wrote:
               | > What industry could Mars possibly support profitably?
               | 
                | Government contracts, probably, but those don't need to
                | be "profitable" in the conventional sense. All that's
                | needed is to convince Congress that it's in America's
                | interest to establish a public-private project for a
                | moon base and a Mars base, against a background of an
                | ascendant Chinese space program (soon to be the only
                | nation with a space station). NASA can grow soybeans,
                | do bone density studies or other science shit, but the
                | main point will just be being there.
        
         | beepbooptheory wrote:
          | I will always love Kim Stanley Robinson, but I don't care:
          | please, Musk, go to Mars, you can have it.
        
           | gattr wrote:
           | I wonder if Elon Musk is of the Red or the Green mindset.
        
       | tuyguntn wrote:
       | > each board member has a deep understanding of technology, at
       | least a basic understanding of AI and strong & sensible morals.
       | 
       | > Put increasing effort into the safety/control problem
       | 
        | ... and we are working to get defense contracts, which are
        | used to kill human beings in other countries, or to fund
        | organizations who kill humans
        
       | jcrash wrote:
       | https://www.youtube.com/watch?v=FBoUHay7XBI&t=345s
       | 
       | one of the few youtube links on this page that is still up
        
       | lsy wrote:
       | I guess it's not news but it is pretty wild to see the level of
       | millenarianism espoused by all of these guys.
       | 
       | The board of OpenAI is supposedly going to "determine the fate of
       | the world", robotics to be "completely solved" by 2020, the goal
       | of OpenAI is to "avoid an AGI dictatorship".
       | 
       | Is nobody in these very rich guys' spheres pushing back on their
       | thought process? So far we are multiple years in with much
       | investment and little return, and no obvious large-scale product-
       | market fit, much less a superintelligence.
       | 
       | As a bonus, they lay out the OpenAI business model:
       | 
       | > Our fundraising conversations show that:
       | 
       | > * Ilya and I are able to convince reputable people that AGI can
       | really happen in the next <=10 years
       | 
       | > * There's appetite for donations from those people
       | 
       | > * There's very large appetite for investments from those people
        
         | EA-3167 wrote:
         | If you push back on them, you get pushed out. If you suck up to
         | them and their delusional belief systems, you might get paid.
         | It's a very self-selecting, self-reinforcing process.
         | 
         | In other words, it's also an affective death spiral.
        
         | hengheng wrote:
         | These guys didn't get to where they are now by admitting
         | mistakes and making themselves accountable. In power play
         | terms, that would be weak.
         | 
         | And once you are way up there and you have definitely left
         | earth, there is no right or wrong anymore, just strong and
         | weak.
        
         | ThrowawayTestr wrote:
         | It's been clear for a while now Elon has no one in his life
         | that's willing to push back on his inane ideas
        
         | pinewurst wrote:
         | Nobody seems to remember how the Segway was going to change our
         | world, backed by many of the VC power figures at the time +
         | Steve Jobs.
        
           | klik99 wrote:
            | In Segway's defense, that self-balancing tech has made and
            | will continue to make an impact, just not a world-changing
            | amount (at least not yet), and not through their particular
            | company but through the companies they influenced. The same
            | may end up true of OpenAI.
        
           | hanspeter wrote:
           | I think we all remember, and if we forget, we're reminded
           | every time we see them at airports or doing city tours.
        
             | Calavar wrote:
             | I don't think I've seen a Segway in close to ten years.
             | Also I suspect most people under 25 have never even heard
             | of Segway.
        
               | elif wrote:
                | Onewheels (Segway's evolutionary grandchildren) are
                | almost as popular as electric skateboards.
        
               | 9dev wrote:
               | So... not much? If anything, people drive electric
               | scooters around here. Those seem to hit the sweet spot.
        
           | stolenmerch wrote:
            | I remember serious discussions about how we'd probably need
            | to repave all of our sidewalks in the US to accommodate the
            | Segway.
        
           | elif wrote:
           | You mean Steve Wozniak.
           | 
           | Close but no LSD
        
             | favorited wrote:
             | Steve Jobs said that Segways had the potential to be "as
             | big a deal as the PC."
        
             | pantalaimon wrote:
             | What makes you so sure about the LSD?
        
           | dghlsakjg wrote:
           | The Segway was a bit early, and too expensive, but I would
           | defend it... sort of.
           | 
            | Electric micromobility is a pretty huge driver of how people
            | negotiate the modern city. Self-balancing Segways,
            | e-skateboards, e-bikes and scooters are all pretty big
            | changes that we are seeing in many modern cityscapes.
           | 
           | Hell, a shared electric bike was just used as a getaway
           | vehicle for an assassination in NYC.
        
             | jpalawaga wrote:
              | E-bikes and e-scooters are big changes to city
              | navigation.
              | 
              | E-skateboards and Segways are non-factors. And that's the
              | difference between a good product (the e-bike, or even
              | just plain old bikeshare) and a bad one (the Segway).
        
           | arthurcolle wrote:
           | It reappeared as e-bikes and e-scooters - Lime, Bird, etc.
        
             | anothertroll123 wrote:
             | No it didn't
        
             | ldbooth wrote:
             | Reappeared as Electric unicycles, which look hilarious,
             | dangerous, and like a lot of fun.
        
             | n144q wrote:
             | None of which is doing great.
        
               | tim333 wrote:
               | There are about 70 lime bikes that commute to the square
               | by my flat. There's definitely some ebike stuff ongoing.
        
               | danenania wrote:
               | E-bikes are everywhere.
        
             | bee_rider wrote:
             | Apparently the first e-bike was invented in 1895. So I
             | don't think it is accurate to give Segway too much credit
             | in their creation. Anyway the innovation of Segway was the
             | balance system, which e-bikes don't need.
             | 
             | (I'm not familiar with the site in general, but I think
             | there's no reason for them to lie about the date, and
             | electric vehicles always show up surprisingly early).
             | 
             | https://reallygoodebikes.com/blogs/electric-bike-
             | blog/histor...
        
           | mbreese wrote:
            | The hype cycle for the Segway was insane. Ginger (its code
            | name) wasn't just going to change the world, it was going
            | to make us rethink how cities were laid out and designed.
            | No one would get around in the same way.
            | 
            | The engineering behind it was really quite nice, but the
            | hype set it up to fail. If it hadn't been talked up so much
            | in the media, the launch wouldn't have felt so flat. There
            | was no way for them to live up to the hype.
        
             | pinewurst wrote:
             | The "Code Name: Ginger" book, by a writer embedded with the
             | team, is excellent btw.
        
             | tim333 wrote:
             | I guess it depends on what media you follow. As a Brit my
             | recollection was hearing it was a novelty gadget that about
             | a dozen American eccentrics were using, and then there was
             | the story that a guy called Jimi Heselden bought the
             | company and killed himself by driving one off a cliff and
             | then that was about it. Not the same category as AI at all.
        
         | Flomolok wrote:
          | I tried Claude.
          | 
          | If hardware continues its evolution in speed over the next 10
          | years, I could have Claude but local and running constantly,
          | and yeah, that would change certain things fundamentally.
        
           | Scene_Cast2 wrote:
           | When ChatGPT was down a few days back, I locally booted up
           | Codestral. It was decent and usable.
        
           | jasonjmcghee wrote:
            | Try Llama 3.3 70B, on Groq or something. It runs on a 64GB
            | MacBook (4-bit quantized, which seems to not impact quality
            | much). Things have come a long way. Compare it to Llama 2
            | 70B. It's wild.
        
             | Terretta wrote:
              | Llama 3.3 70B 8-bit MLX runs on a 128GB MacBook at 7+
              | tokens per second while running a full suite of other
              | tools, even at the 130k token context size, and behaves
              | with surprising coherence. Reminded me of this time last
              | year, first trying Mixtral 8x22B -- which still offers a
              | distinctive _je ne sais quoi_!
        
         | slibhb wrote:
         | > So far we are multiple years in with much investment and
         | little return
         | 
          | Modern (2024) LLMs are "little return"? Seriously? For me,
          | they've mostly replaced Google. I have no idea how well the
          | scaling will continue, and I'm generally unimpressed with
          | AI-doomer narratives, but this technology is real.
        
           | __loam wrote:
           | They're not talking about your anecdotes, they're talking
           | about the capex and returns. Which are overwhelmingly
           | negative.
        
             | zachthewf wrote:
                | I mean, inference costs have decreased something like
                | 1000x in a few years. OpenAI is the fastest-growing
                | startup by revenue, ever.
                | 
                | How foolish do you have to be to be worrying about ROI
                | right now? The companies that are building out the
                | datacenters produce billions per year in free cash flow.
                | Maybe OP would prefer a dividend?
        
               | ben_w wrote:
               | Given how close the tech is to running on consumer
               | hardware -- by which I mean normal consumers not top-end
               | MacBooks -- there's a real chance that the direct ROI is
               | going to be exactly zero within 5 years.
               | 
               | I say direct, because Chrome is free to users, yet
               | clearly has a benefit to Google worth spending on both
               | development and advertising the browser to users, and
               | analogous profit sources may be coming to LLMs for
                | similar reasons -- you can use it locally so long as you
                | don't mind every third paragraph being a political
                | message sponsored by the Turquoise Party of Tunbridge
                | Wells.
        
               | billyhoffman wrote:
               | > OpenAI is the fastest growing startup by revenue, ever.
               | 
               | No it's not. Facebook hit $2B in Revenue in late 2010 -
               | early 2011, ~5 years after its founding.
               | 
               | https://dazeinfo.com/2018/11/14/facebook-revenue-and-net-
               | inc...
        
               | kulahan wrote:
                | Finding _one_ example of him being wrong still _kinda_
                | supports his point, don't you think?
        
               | dghlsakjg wrote:
               | Especially when that example is Facebook!
        
               | shawabawa3 wrote:
               | Coinbase was founded in 2013 and hit $1B in revenue in
               | 2019 iirc
        
             | elif wrote:
             | What do you think the capex returns were for DARPA NET?
             | 
             | These reflexive humanist anti-AI takes are going to age
             | like peeled bananas.
        
             | pembrook wrote:
             | Yea, I mean, if it can't show a return in the very short
             | term, is it even worth doing? How could something possibly
             | develop into being profitable after years of not being so?
             | 
             | All you have to do is point to bankrupt bookseller Amazon,
             | whom that dumb Jeff Bezos ran into the ground with his
             | money-losing strategy for years. Clearly it doesn't work!
        
         | roughly wrote:
         | Regarding the Silicon Valley Mindset, Douglass Rushkoff wrote a
         | quite good book on the topic:
         | https://bookshop.org/p/books/survival-of-the-richest-escape-...
        
         | ben_w wrote:
         | > Is nobody in these very rich guys' spheres pushing back on
         | their thought process?
         | 
         | Yes, frequently and loudly.
         | 
         | When Altman was collecting the award at Cambridge the other
         | year, protesters dropped in on the after-award public talk/Q&A
         | session, _and he actively empathised with the protestors_.
         | 
         | > So far we are multiple years in with much investment and
         | little return, and no obvious large-scale product-market fit,
         | much less a superintelligence.
         | 
          | I just got back from an Indian restaurant in the middle of
          | Berlin, and at the table next to me I overheard a daughter
          | talking to her mother about ChatGPT and KI (Künstliche
          | Intelligenz, the German for AI).
         | 
         | The product market fit is fantastic. This isn't the first time
         | I've heard random strangers discussing it in public.
         | 
         | What's not obvious is how to monetise it. Old meme parroted
         | around was "has no moat", which IMO is like saying Microsoft
         | has no moat for spreadsheets: sure, anyone can make the core
         | tech, and sure we don't know who is Microsoft vs StarOffice vs
         | ClarisWorks vs Google Docs, but there's more than zero moat.
         | From what I've seen, if OpenAI didn't develop new products,
         | they'd be making enough to be profitable, but it's a Red Queen
         | race to remain worth paying for.
         | 
         | As for "much less a superintelligence": even the current models
         | meet every definition of "very smart" I had while growing up,
         | despite their errors. As an adult, I'd still call them book-
         | smart if not abstractly smart. Students or recent graduates,
         | but not wise enough to know their limits and be cautious.
         | 
         | For current standards of what intelligence means, we'd better
         | hope we don't get ASI in the next decade or two, because if and
         | when that happens then "humans need not apply" -- and by
         | extension, foundational assumptions of economics may just stop
         | holding true.
        
           | this_user wrote:
           | > When Altman was collecting the award at Cambridge the other
           | year, protesters dropped in on the after-award public
           | talk/Q&A session, and he actively empathised with the
           | protestors.
           | 
           | He always does that to give himself cover, but he has clearly
           | shown that his words mean very little in this regard. He
           | always dodges criticism. He used to talk about the importance
           | of him being accountable to the OpenAI board and them being
           | able to fire him if necessary when people were questioning
           | the dangers of having one person have this much control over
           | something as big as bleeding edge AI. He also used to mention
           | how he had no direct financial interests in the company since
           | he had no equity.
           | 
           | Then the board did fire him. What happened next? He came
           | back, the board is gone, he now openly has complete control
           | over OpenAI, and they have given him a potentially huge
           | equity package. I really don't think Sam Altman is
           | particularly trustworthy. He will say whatever he needs to
           | say to get what he wants.
        
             | kulahan wrote:
              | Wasn't he fired for questionable reasons? I thought
              | everyone _wanted_ him back, and that's why he was able to
              | return. It was, as I remember, _just_ the board that
              | wanted him out.
             | 
             | I imagine if he was doing something truly nefarious,
             | opinions might have been different, but I have no idea what
             | kind of cult of personality he has at that company, so I
             | might be wrong here.
        
               | samvher wrote:
               | Define everyone. I was delighted when they fired him. I
               | don't believe he has humanity's best interest at heart.
        
               | ben_w wrote:
               | 745 of 700 employees responded on a weekend to call for
               | his return, threatened mass resignations if the board did
               | not resign; among the signatories was board member
               | Sutskever, who defected from the board and publicly
               | apologized for his participation in the board's previous
               | actions.
        
               | fwip wrote:
               | 745 of 700?
        
               | samvher wrote:
               | I followed the drama. The point I was (somewhat
               | unsuccessfully) trying to make was that while, sure,
               | there were groups who wanted him back (mainly the groups
               | with vested financial interests and associated leverage),
               | my sense was that the way it played out was not
               | necessarily in line with wider humanity's best interest,
               | i.e. as would have been hoped based on OpenAI's publicly
               | stated goals.
        
               | LeafItAlone wrote:
               | >745 of 700 employees responded on a weekend to call for
               | his return, threatened mass resignations if the board did
               | not resign
               | 
                | I would think the final count doesn't really matter.
                | Self-serving cowards, like me, would sign it once they
                | saw the way the wind was blowing. How many signed it
                | before Satya at Microsoft indicated support for Altman?
        
               | KerryJones wrote:
               | * 747/770
               | https://www.nasdaq.com/articles/747-of-770-openai-
               | employees-...
        
               | tasuki wrote:
               | > I thought everyone wanted him back, and that's why he
               | was able to return.
               | 
               | Everyone _working at OpenAI_ wanted him back. Which only
               | includes people who have a significant motivation to see
               | OpenAI succeed financially.
               | 
               | Also, there are rumours he can be vindictive. For all I
               | know, that might be a smear campaign. But _if_ that were
               | the case, and half the people at OpenAI wanted him back,
               | the other half would have a motivation to follow so as
               | not to get whatever punishment from Sam.
        
               | Arkhaine_kupo wrote:
               | > I thought everyone wanted him back,
               | 
                | Ilya Sutskever, who was the chief scientist of the
                | company and honestly irreplaceable in terms of AI
                | knowledge, left after Altman returned.
        
             | yard2010 wrote:
             | This guy is outright scary. He gives me the chills.
        
           | 23B1 wrote:
           | > and he actively empathised with the protestors.
           | 
           | https://www.simplypsychology.org/narcissistic-mirroring.html
        
             | ben_w wrote:
             | Even if so, that's still showing awareness of what his
             | critics are critical of.
             | 
             | Now, can you make a falsifiable prediction: what would a
             | narcissist do that a normal person would not, such that you
                | can tell if he's engaging in a narcissist process rather
                | than what your own link also says is a perfectly normal
                | and healthy behaviour?
        
               | tivert wrote:
               | > Even if so, that's still showing awareness of what his
               | critics are critical of.
               | 
                | Mere awareness isn't really that meaningful.
               | 
               | >>> and he actively empathised with the protestors.
               | 
               | >> https://www.simplypsychology.org/narcissistic-
               | mirroring.html
               | 
               | > Now, can you make a falsifiable prediction: what would
               | a narcissist do that a normal person would not, such that
               | you can tell if he's engaging in a narcissist process
                | rather than what your own link also says is a perfectly
               | normal and healthy behaviour?
               | 
               | He doesn't have to, because I think the point was to
               | raise doubts about your interpretation.
               | 
               | But if you're looking for more evidence, there are all
               | the stories (from many different people) about Altman
               | being dishonest and manipulative _and being very
                | effective at it_. That's the context that a lot of
               | people are going use to interpret your claim that Altman
               | "actively empathised with the protestors."
        
           | manquer wrote:
           | > The product market fit is fantastic. This isn't the first
           | time I've heard random strangers discussing it in public.
           | 
            | Hardly evidence of PMF. There is always something new in
            | the zeitgeist that everyone is talking about, some more so
            | than others.
            | 
            | 2 years before it was VR, a few years before that NFTs and
            | blockchain everything, before that self-driving cars, and
            | before that personal voice assistants like Siri, and so on.
           | 
            | - Self-driving has not transformed us into Minority
            | Report, and despite how far it has come it cannot be
            | ubiquitous in the next 30 years: even if the magic L5 tech
            | existed today in every new car sold, it would take 15
            | years for the current fleet to cycle out.
           | 
            | - Crypto has not replaced fiat currency; even in the most
            | generous reading you can see it as a store of value like
            | gold or whatever useless baubles people assign arbitrary
            | value to, but it has no traction for 3 of the other 4 key
            | functions of money.
           | 
            | - VR is not transformative to everyday life and is 5
            | fundamental breakthroughs away.
           | 
            | - Voice assistants are useless beyond setting alarms and
            | selecting music, 10 years in.
           | 
            | There has been meaningful and measurable progress in each
            | of these fields, but none of them have met the high bar of
            | world-transforming.
           | 
            | AI is aiming for the much higher bar of singularity and
            | consciousness. Just as in every hype cycle, we are at the
            | peak of inflated expectations; we will reach a plateau of
            | productivity where it will be useful in specific areas (as
            | it already is) and people will move on to the next fad.
        
             | bee_rider wrote:
             | Consciousness is a stupid and unreasonable goal, it is
             | basically impossible to confirm that a machine isn't just
             | faking it really well.
             | 
             | Singularity is at least definable...Although I think it is
             | not really the bar required to be really impactful. If we
             | get an AI system that can do the work of like 60% of
             | hardcore knowledge workers, 80% of office workers, and 95%
              | of CEOs/politicians and other pablumists, it could
              | really change how the economy works without actually being
             | a singularity.
        
             | ben_w wrote:
              | > 2 years before it was VR, a few years before that NFTs
              | and blockchain everything, before that self-driving
              | cars, and before that personal voice assistants like
              | Siri, and so on.
             | 
             | I never saw people talking about VR in public, nor NFTs,
             | and the closest I got to seeing blockchain in public were
             | adverts, not hearing random people around me chatting about
             | it. The only people I ever saw in real life talking about
             | self-driving cars were the ones I was talking to myself,
             | and everyone else was dismissive of them. Voice assistants
              | were mainly mocked from day 1, with the Alexa advert being
             | re-dubbed as a dystopian nightmare.
             | 
              | > AI is aiming for the much higher bar of singularity
              | and consciousness.
             | 
             | No, it's aiming to be _economically useful_.
             | 
             | "The singularity" is what a lot of people think is an
             | automatic consequence of being able to solve tasks related
             | to AI; me, I think that's how we sustained Moore's Law so
             | long (computers designing computers, you can't place a
             | billion transistors by hand, but even if you could the
             | scale is now well into the zone where quantum tunnelling
             | has to be accounted for in the design), and that
             | "singularities" are a sign something is wrong with the
             | model.
             | 
             | "Consciousness" has 40 definitions, and is therefore not
             | even a meaningful target.
             | 
              | > Just as in every hype cycle, we are at the peak of
              | inflated expectations; we will reach a plateau of
              | productivity where it will be useful in specific areas
              | (as it already is) and people will move on to the next
              | fad.
             | 
             | In that at least, we agree.
        
             | eastbound wrote:
              | Come on, VR, NFTs and blockchain were always abysses of
              | void looking for a use case. Self-driving cars maybe,
              | but development has been stalling for 15 years.
        
             | tim333 wrote:
             | >people will move on to the next fad
             | 
              | AI isn't really a fad. It's going to be something more
              | like electricity, say.
        
           | naming_the_user wrote:
           | > For current standards of what intelligence means, we'd
           | better hope we don't get ASI in the next decade or two,
           | because if and when that happens then "humans need not apply"
           | -- and by extension, foundational assumptions of economics
           | may just stop holding true.
           | 
           | I'm not sure that we need superintelligence for that to be
           | the case - it may depend on whether you include physical
           | ability in the definition of intelligence.
           | 
           | At the point that we have an AI that's capable of every task
           | that say a 110 IQ human is, including manipulating objects in
           | the physical world, then basically everyone is unemployed
           | unless they're cheaper than the AI.
        
             | ben_w wrote:
             | While I would certainly expect a radical change to
             | economics even from a middling IQ AI -- or indeed a low IQ,
             | as I have previously used the example of IQ 85 because
             | that's 15.9% of the population that would become
             | permanently unable to be economically useful -- I don't
             | think it's quite as you say.
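The 15.9% figure above follows from IQ's conventional norming: scores are modeled as normally distributed with mean 100 and standard deviation 15, putting IQ 85 exactly one SD below the mean. A quick sketch to check the arithmetic (the norming convention is the only assumption):

```python
# Fraction of the population at or below IQ 85, assuming the usual
# norming: normal distribution with mean 100 and standard deviation 15.
import math

def normal_cdf(x: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Share of the population scoring at or below x."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

share_below_85 = normal_cdf(85.0)  # one SD below the mean, ~0.159
print(f"{share_below_85:.1%}")
```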
             | 
             | Increasing IQ scores _seem_ to allow increasingly difficult
             | tasks to be performed competently -- not just the same
             | tasks faster, and also not just  "increasingly difficult"
             | in the big-O-notation sense, but it seems like below
             | certain IQ thresholds (or above them but with certain
             | pathologies), some thoughts just aren't "thinkable" even
             | with unbounded time.
             | 
             | While this might simply be an illusion that breaks with
             | computers because silicon outpaces synapses by _literally_
             | the degree to which jogging outpaces continental drift, I
              | don't see strong evidence at this time for the idea that
             | this is an illusion. We may get that evidence in a very
             | short window, but I don't see it yet.
             | 
             | Therefore, in the absence of full brain uploads, I suspect
             | that higher IQ people may well be able to perform useful
             | work even as lower IQ people are outclassed by AI.
             | 
              | If we do get full brain uploads, then it's the other way
              | around: a few super-geniuses will get their brains
              | scanned, but say it takes a billion dollars a year to
              | run the sim in real time; Moore's and Koomey's laws will
              | then take n years to lower that to $10 million a year,
              | 2n years to lower it to $100k a year, and 3n years to
              | lower it to $1k/year.
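The n / 2n / 3n schedule above is just a fixed factor-of-100 cost reduction per period; a throwaway sketch of that arithmetic (the starting cost and the factor are the comment's illustrative assumptions, not measured values):

```python
# Fixed factor-of-100 reduction every n years, per the comment's
# assumption: $1B/yr -> $10M/yr -> $100k/yr -> $1k/yr at n, 2n, 3n.

def cost_after_periods(initial_cost: float, periods: int,
                       factor: float = 100.0) -> list[float]:
    """Running cost after each n-year period, starting at period 0."""
    return [initial_cost / factor**k for k in range(periods + 1)]

costs = cost_after_periods(1e9, 3)
# i.e. $1B, then $10M, $100k, and $1k per year
```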
        
               | naming_the_user wrote:
               | I completely agree with everything that you're saying
               | here - my use of the term "basically everyone" was lazy
               | in that I'm implying that at the 110 IQ level the
               | majority (approx 70%) of people are economically obsolete
               | outside of niche areas (e.g. care/butler style work in
               | which people desire the "human touch" for emotional
               | reasons).
               | 
               | I think that far below the 70% level we've already broken
               | economics. I can't see a society functioning in which
               | most people actually _know_ that they can't break out of
               | whatever box they're currently in - I think that things
               | like UBI are a distraction in that they don't account for
               | things like status jockeying that I think we're pretty
               | hardwired to do.
        
               | danenania wrote:
               | I think this trend of using IQ as a primary measuring
               | stick is flawed.
               | 
               | Human minds and AI minds have radically different
               | architectures, and therefore have different strengths and
               | weaknesses. IQ is only one component of what allows a
               | typical worker to do their job.
               | 
               | Even just comparing humans, the fact that one person
               | with, say, a 120 IQ can do a particular job--say they are
               | an excellent doctor--obviously does _not_ mean that any
               | other human with an equal or greater IQ can also do that
               | job effectively.
        
           | jkaptur wrote:
           | Can you expand on your spreadsheet analogy?
           | 
           | I think Joel Spolsky explained the main Office moat well
           | here: https://www.joelonsoftware.com/2008/02/19/why-are-the-
           | micros...
           | 
           | > ... it might take you weeks to change your page layout
           | algorithm to accommodate it. If you don't, customers will
           | open their Word files in your clone and all the pages will be
           | messed up.
           | 
           | Basically, people who use Office have extremely specific
           | expectations. (I've seen people try a single keyboard
           | shortcut, see that it doesn't work in a web-based
           | application, and declare that whole thing "doesn't work".)
           | Reimplementing all that stuff is really time consuming.
           | There's also a strong network effect - if your company uses
           | Office, you'll probably use it too.
           | 
           | On the other hand, people don't have extremely specific
           | expectations for LLMs because 1) they're fairly new and 2)
           | they're almost always nondeterministic anyway. They don't
           | care so much about using the same one as everyone they know
           | or work with, because there's no network aspect of the
           | product.
           | 
           | I don't think the moats are similar.
        
             | pj_mukh wrote:
             | "Basically, people who use Office have extremely specific
             | expectations."
             | 
              | Interesting point, but to OP's point: this wasn't true
              | when Office was first introduced, and Office still built
              | a dominant market share. In fact, I'd argue these moat-
              | by-idiosyncrasy features are a result of that market
              | share. There is nothing stopping OpenAI from developing
              | their own over time.
        
               | bee_rider wrote:
               | Does office actually have a moat? I thought the kids
               | liked Google docs nowadays. (No opinion as to which is
               | actually better, the actual thing people should do is
               | write LaTeX in vim. You can even share documents! Just
               | have everybody attach to the same tmux session and take
               | turns).
        
               | jkaptur wrote:
               | Sure, there's nothing stopping any business from
               | developing a moat. The Excel example doesn't make the
               | case of OpenAI any clearer to me.
        
           | lyu07282 wrote:
           | > foundational assumptions of economics may just stop holding
           | true
           | 
           | Those assumptions are already failing billions of people,
           | some people might still be benefiting from those "assumptions
           | of economics" so they don't see the magnitude of the problem.
           | But just as the billions who suffer now have no power, so
           | will you have no power once those assumptions fail for you
           | too.
        
           | sangnoir wrote:
           | > ...like saying Microsoft has no moat for spreadsheets
           | 
           | Which would be very inaccurate as network-effects are Excel's
           | (and Word's) moat. Excel being bundled with Office and
            | Windows helped, but it beat Lotus 1-2-3 by being a superior
            | product at a time the computing landscape was changing.
            | OpenAI has no such advantage yet: a text-based API is about
            | as commoditized as a technology can get, and OpenAI is
            | furiously launching interfaces with lower interoperability
            | (where one can't replace GPT-4o with Claude 3.5 via a
            | drop-down).
        
           | jrflowers wrote:
           | > protesters dropped in on the after-award public talk
           | 
           | I'm going to guess that GP does not consider random
           | protestors to be in Sam Altman's 'sphere'
           | 
           | > The product market fit is fantastic.
           | 
            | This is true insofar as you define "product market fit" as
           | "somebody mentioning it in an Indian restaurant in Berlin"
           | 
           | > every definition of "very smart"
           | 
           |  _Every definition_ you say?
           | 
           | https://bsky.app/profile/edzitron.com/post/3lclo77koj22y
        
         | TZubiri wrote:
          | I remember seeing OpenAI like 10 years ago in GiveWell's list
          | of charities along with water.org, Deworm the World, 80,000
          | Hours, and that kind of thing.
         | 
         | It's a wild take to say that they have gotten nowhere and that
         | they haven't found product-market fit.
        
         | elif wrote:
         | >So far we are multiple years in with much investment and
         | little return, and no obvious large-scale product-market fit
         | 
         | Literally every market has been disrupted and some are being
         | optimized into nonexistence.
         | 
          | You don't know anyone who's been laid off by a giant
          | corporation that's now using AI for a process that people
          | did 3 years ago?
        
           | munk-a wrote:
           | I know companies that have had layoffs - but those would have
           | happened anyways - regular layoffs are practically demanded
           | by the market at this point.
           | 
           | I know companies that have (or rather are in the process of)
           | adopting AI into business workflows. The only companies I
           | know of that aren't using more labor to correct their AI
           | tools are the ones that used it pre-ChatGPT/AI Bubble. Plenty
           | of companies have rolled out "talk to our AI" chat bubbles on
           | their websites and users either exploit and jailbreak them to
           | run prompts on the company's dime or generally detest them.
           | 
           | AI is an extremely useful tool that has been improving our
           | lives for a long time - but we're in the middle of an
           | absolutely bonkers level bubble that is devouring millions of
           | dollars for projects that often lack a clear monetization
           | plan. Even code gen seems pretty underwhelming to most of the
           | developers I've heard from that have used it - it may very
           | well be extremely impactful to the next generation of
           | developers - but most current developers have already honed
           | their skills to out-compete code gen in the low complexity
           | problems it can competently perform.
           | 
           | Lots of money is entering markets - but I haven't seen real
           | disruption.
        
         | Dig1t wrote:
         | I'm going to favorite this thread and come back with a comment
         | in 10 years. I think it will be fun to revisit this
         | conversation.
         | 
         | If you really don't think that this line of research and
         | development is leading to AGI then I think you are being
         | hopelessly myopic.
         | 
         | >robotics to be "completely solved" by 2020
         | 
         | There are some incredible advances happening _right now_ in
         | robotics largely due to advances in AI. Obviously 2020 was not
         | exactly correct, but also we had COVID which kind of messed up
         | everything in the business world. And arguing that something
          | didn't happen in 2020 but instead happened in 2025 or 2030
          | is sort of being pedantic, isn't it?
         | 
         | Being a pessimist makes you sound smart and world-weary, but
         | you are just so wrong.
        
           | 9dev wrote:
           | Being an optimist makes you sound naive and a dreamer. There
           | is no scientific agreement that LLMs are going to lead to AGI
           | in the slightest--we cannot even define _what consciousness
            | is_, so even if the technology would lead to actual
           | intelligence, we lack the tools to prove that.
           | 
           | In terms of robotics, the progress sure is neat, but for the
           | foreseeable time, a human bricklayer will outcompete any
           | robot; if not on the performance dimension, then on cost or
           | flexibility. We're just not there yet, not by a long stretch.
           | And that won't change just by deceiving yourself.
        
           | yodsanklai wrote:
           | > line of research and development is leading to AGI
           | 
           | What do you mean by AGI exactly? if you want to come back in
           | 10 years to see who's right, at least you should provide some
           | objective criteria so we can decide if the goal has been
           | attained.
        
             | Dig1t wrote:
             | I'm talking about this:
             | 
             | https://en.wikipedia.org/wiki/Artificial_general_intelligen
             | c...
             | 
             | >Artificial general intelligence (AGI) is a type of
             | artificial intelligence (AI) that matches or surpasses
             | human cognitive capabilities across a wide range of
             | cognitive tasks.
             | 
             | Most (if not all) of the tests listed on the wikipedia page
             | will be passed:
             | 
             | >Several tests meant to confirm human-level AGI have been
             | considered, including:
             | 
             | >The Turing Test (Turing)
             | 
             | This test is of course already passed by all the existing
             | models.
             | 
             | >The Robot College Student Test (Goertzel)
             | 
             | >The Employment Test (Nilsson)
             | 
             | >The Ikea test (Marcus)
             | 
             | >The Coffee Test (Wozniak)
             | 
             | >The Modern Turing Test (Suleyman)
        
         | andy_ppp wrote:
         | Echo chambers are very effective at drowning out dissenting
         | voices.
        
         | madrox wrote:
         | I think an assumption that a lot of people make about people
         | with power is that they say what they actually believe. In my
         | experience, they do not. Public speech is a means to an end,
         | and they will say whatever is the strongest possible argument
         | that will lead them to what they actually want.
         | 
          | In this case, OpenAI wants to look like they're going to
          | save the
         | world and do it in a noble way. It's Google's "don't be evil"
         | all over again.
        
         | pembrook wrote:
         | > _no obvious large-scale product-market fit_
         | 
         | I mean, I know pessimistically "ackshually"-ing yourself into
         | the wrong side of history is kind of Hackernews's thing (eg.
         | that famous dropbox comment).
         | 
         | But if you don't think OpenAI found product-market-fit with
         | ChatGPT, then I don't think you understand what product-market-
         | fit is...
        
           | snowwrestler wrote:
           | It's popular but not making money. So maybe product-audience
           | fit so far.
        
         | dbreunig wrote:
         | Altman appears to say AGI is far away when he shouldn't be
         | regulated, right around the corner when he's raising funds, or
         | is going to happen tomorrow and be mundane when he's trying
          | to break a Microsoft contract.
        
         | worik wrote:
         | > and no obvious large-scale product-market fit,
         | 
         | Really?
         | 
         | I use, and pay for, OpenAI every day
        
         | peutetre wrote:
         | > _Is nobody in these very rich guys ' spheres pushing back on
         | their thought process?_
         | 
          | It's simpler than that. They've found that the grander the
          | vision and the bigger the lie, the more people will believe
          | it. So they lie.
         | 
          | Take Tesla's supposed full self-driving as an example. Tesla
          | doesn't have full self-driving. Musk has been lying about it
         | for a decade. Musk tells the same lie year after year, like
         | clockwork.
         | 
         | And yet there are still plenty of true believers who ardently
         | defend Tesla's lies and buy more Tesla stock.
         | 
         | The lies work.
        
         | wnevets wrote:
         | > Is nobody in these very rich guys' spheres pushing back on
         | their thought process?
         | 
          | The moment someone does that, they're no longer in the very
          | rich guy's sphere.
        
         | delusional wrote:
         | I agree with your analysis. The nice thing I've realized is
         | that this means I can just stop paying attention to it. The
         | product sucks, and is useless. The people are all idiots with
          | god complexes. All the money is fake funny money they get from
         | their rich friends in silicon valley.
         | 
         | It will literally have no impact on anything. It will be like
         | NFT's however long ago that was. Everybody will talk about how
          | important it is, then they won't. Life will go on as it always
          | has, with people and work, and the slow march of progress. In
         | 30 years nobody is going to remember who "sam altman" was.
        
         | benatkin wrote:
         | > I guess it's not news but it is pretty wild to see the level
         | of millenarianism espoused by all of these guys.
         | 
         | Unprecedented change has already happened with LLMs. So this is
         | expected.
         | 
         | > So far we are multiple years in with much investment and
         | little return
         | 
         | ...because it's expensive to build what they're building.
        
         | sebzim4500 wrote:
         | >and no obvious large-scale product-market fit
         | 
          | I'm afraid you are in as much of an echo chamber as anyone.
          | 200 million+ weekly active users is large-scale PMF.
        
       | catigula wrote:
       | Is it even possible for Sam Altman to stop being dishonest? This
       | isn't a method to redress concerns, it's a smear that has nothing
       | to do with the lawsuit.
        
       | milleramp wrote:
       | Is this one of the 12 days of OpenAI?
        
         | heavyarms wrote:
         | If this is a GPT-generated joke, I'd say they cracked AGI.
        
       | meagher wrote:
        | People change their minds all the time. What someone wanted in
        | 2017 could well be different in 2024.
        
         | gkoberger wrote:
         | Sure, but the nuance is Elon only wants what benefits him most
         | at the time. There was no philosophical change, other than now
         | he's competing.
         | 
         | He's allowed these opinions, we're allowed to ignore them and
         | lawyers are allowed to use this against him.
        
           | ganeshkrishnan wrote:
           | > but the nuance is Elon only wants what benefits him most at
           | the time.
           | 
            | Isn't that almost everyone? The people who left OpenAI could
           | have joined forces but everyone went ahead and created their
           | own company "for AGI"
           | 
           | It's like the wild west where everyone dreams of digging up
           | gold.
        
       | kelseyfrog wrote:
       | Maybe a weird question, but how does a capitalist economy work
       | where AGI-performed-labor is categorically less expensive than
       | human-performed labor? Do most people who labor for wages just go
       | unemployed, and then bankrupt, and then die?
        
         | next_xibalba wrote:
         | No one has a strong answer for this. This question was the
         | origin of a lot of universal basic income chatter and research
         | beginning in maybe 2014. The idea being, whoever creates
          | superintelligence will capture substantially all future
         | profits. If a non-profit (or capped profit) company does it,
          | they could fund UBI for society. But the research into UBI has
         | not been very decisive. Do people flourish or flounder when
         | receiving UBI? And what does that, in aggregate, do to a
         | society? We still don't really know. Oh and also, OAI is no
         | longer a non-profit and has eliminated the profit cap.
        
         | slibhb wrote:
         | The question shows a total disconnect from reality. People who
         | are unemployed with no money today don't die. Even street
         | people don't die.
         | 
         | If AGI labor is cheaper and as effective as human labor, prices
         | will drop to the point where the necessities cost very little.
         | People will be able to support themselves working part-time
         | jobs on whatever hasn't been automated. The tax system will
         | become more redistributive and maybe we'll adopt a negative
         | income tax. People will still whine and bitch about capitalism
         | even as it negates the need for them to lift a single finger.
        
           | kelseyfrog wrote:
           | Until now, I didn't think that poverty impacting mortality
           | was a controversial statement. Poverty is the fourth greatest
           | cause of US deaths[1].
           | 
           | Why would I hire a part time worker when I could hire an AI
           | for cheaper? When I said categorically, I meant it.
           | 
           | 1. https://medicalxpress.com/news/2023-04-poverty-fourth-
           | greate...
        
             | slibhb wrote:
             | Quite a goal post move from "just go unemployed, and then
             | bankrupt, and then die" to "poverty impacts health"
        
               | kelseyfrog wrote:
               | I didn't say health; you did. I said mortality which is
               | synonymous with death.
               | 
               | Misquoting someone to prove a point is intellectually
               | bankrupt. Take a breath and calm down. Why don't you try
                | steelmanning instead?
        
         | Barrin92 wrote:
          | By artificially recreating scarcity in virtual space and
          | reproducing the same kind of economic dynamic there, what
          | Antonio Negri called 'symbolic production'. Think fashion
          | brands, video game currency, IP, crypto, Clubhouse:
          | effectively the world we're already in. There's an older
          | essay by Zizek somewhere where he points out that this has
          | already played out. Marx was convinced 'general intellect',
          | what we now call the 'knowledge economy', would render
          | traditional economics obsolete, but we just privatized
          | knowledge and reimposed the same logic on top of it.
        
           | wmf wrote:
           | Most people have nothing to contribute in the
           | virtual/knowledge realm. Certainly not enough to live on.
        
             | pixelsort wrote:
             | Careful, we are not supposed to discuss how many human jobs
             | will remain. The script clearly states the approved lines
             | are "there will still be human jobs" and "focus on being
             | adaptable". When it seems normal that Fortune 500 execs are
             | living at sea on yachts, that's when you'll know OpenAI
             | realized their vision for humanity.
        
               | selimthegrim wrote:
               | I would think certain recent events in New York will
               | probably be a bigger impetus for Fortune 500 execs living
               | at sea on yachts
        
         | natbennett wrote:
         | Comparative advantage. Even if AGI is better at absolutely all
         | work the amount of work it is capable of doing is finite. So
         | AGI will end up mostly being used for the highest value, most
         | differentiated work it's capable of, and it will "trade" with
         | humans for work where humans have a lower opportunity cost,
         | even if it would technically be better at those tasks too.
         | 
         | Basically the same dynamic that you see in e.g. customer
         | support. A company's founder is going to be better than your
         | typical tier 1 support person at every tier 1 support task, but
         | the founder's time is limited and the opportunity cost of
         | spending their time on first-contact support is high.
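The trade natbennett describes is standard Ricardian comparative advantage, and the arithmetic is easy to check. A minimal sketch, with all rates and quantities invented for illustration: the AGI is absolutely better at both tasks, yet if the economy needs a fixed amount of task B, letting the human cover B frees the AGI for A and raises total output.

```python
# Toy comparative-advantage arithmetic (all numbers hypothetical).
# The AGI out-produces the human at BOTH tasks, but the human's
# relative disadvantage is smallest at B, so the human should do B.

RATES = {
    "agi":   {"A": 10, "B": 8},  # units per hour; AGI beats the human at both
    "human": {"A": 1,  "B": 4},
}
HOURS = 10          # each worker has 10 hours available
B_NEEDED = 40       # the economy must produce 40 units of B

def a_output(b_covered_by: str) -> int:
    """Units of A produced when `b_covered_by` ('agi' or 'human') makes B_NEEDED."""
    b_hours = B_NEEDED / RATES[b_covered_by]["B"]     # hours spent on B
    total_a = 0
    for worker in ("agi", "human"):
        a_hours = HOURS - (b_hours if worker == b_covered_by else 0)
        total_a += a_hours * RATES[worker]["A"]
    return int(total_a)

# Human covers B (10h * 4 = 40 units), AGI spends all 10 hours on A:
print(a_output("human"))  # 100 units of A
# AGI covers B itself (5h), leaving it only 5h for A; human does A all day:
print(a_output("agi"))    # 60 units of A
```

Either way the economy gets its 40 units of B, but specializing by opportunity cost yields 100 units of A instead of 60, even though the AGI is faster at B in absolute terms.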
        
         | judah wrote:
         | I remember watching an episode of the 1960s police drama,
         | Dragnet.
         | 
         | In one of the episodes, Detective Joe Friday spoke with some
         | computer technicians in a building full of computers (giant, at
         | the time). Friday asked the computer technician,
         | 
         | > "One more thing. Do you think that computers will take all
         | our jobs one day?"
         | 
         | > "No. There will always be jobs for humans. Those jobs will
         | change, maybe include working on and maintaining computers, but
         | there will still be important jobs for humans."
         | 
         | That bit of TV stuck with me. Here we are 60 years later and
         | that has proven true. I suspect it will still be true in 60
         | years, regardless of how well AI advances.
         | 
         | Dario Amodei, former VP of research at OpenAI and current CEO
         | of Anthropic, notes[0] a similar sentiment:
         | 
         | > "First of all, in the short term I agree with arguments that
         | comparative advantage will continue to keep humans relevant and
         | in fact increase their productivity, and may even in some ways
         | level the playing field between humans. As long as AI is only
         | better at 90% of a given job, the other 10% will cause humans
         | to become highly leveraged, increasing compensation and in fact
         | creating a bunch of new human jobs complementing and amplifying
         | what AI is good at, such that the "10%" expands to continue to
         | employ almost everyone. In fact, even if AI can do 100% of
         | things better than humans, but it remains inefficient or
         | expensive at some tasks, or if the resource inputs to humans
         | and AI's are meaningfully different, then the logic of
         | comparative advantage continues to apply. One area humans are
         | likely to maintain a relative (or even absolute) advantage for
         | a significant time is the physical world. Thus, I think that
         | the human economy may continue to make sense even a little past
         | the point where we reach "a country of geniuses in a
         | datacenter".
         | 
         | Amodei does think that we may eventually need to organize
         | economic structures with powerful AI in mind, but this need not
         | imply humans will never have jobs.
         | 
         | [0]: https://darioamodei.com/machines-of-loving-grace
        
         | jdminhbg wrote:
         | The way comparative advantage works, even if an AGI is better
         | than me at Task A and Task B, if it is better at Task A than
         | Task B, it'll do Task A and I'll do Task B.
         | 
         | I think a lot of people confuse AGI with infinite AGI.
        
           | llamaimperative wrote:
           | ... what
           | 
           | No, _you're_ confusing finite AGI for an AGI that can only
           | compete with a single person at a time.
           | 
           | In a world of AGI better-than-humans at both A and B, it's a
           | whole lot cheaper to replicate AGI that's better at A and B
           | and do both A and B than it is to _employ human beings_ who
           | do B badly.
        
         | elif wrote:
         | There are essentially two answers to this question, and neither
         | is in the interests of capital, so there will be no progress
         | toward either until it's forced.
         | 
         | The first is the Bill Gates "income tax for robots" idea, which
         | hand-waves over how to count what qualifies as a robot, how the
         | government will distribute the taxes, etc. That one is a mess:
         | impossible to get support for and nearly impossible to
         | transition to.
         | 
         | The other idea, put forth by the Democracy in Europe Movement
         | 2025 (DiEM25), is called a universal basic dividend. It
         | essentially says that to make AI owned by humanity, the utility
         | of automation should be calculated and a dividend paid out
         | (just like to any other stockholder) as a percentage of each
         | company's profit derived from automation. It becomes part of
         | the corporate structure rather than a government structure, so
         | this one I think has some merit on paper, but likewise zero
         | motivation to implement until it's virtually too late.
        
       | vwkd wrote:
       | Sounds like Elon's "fourth attempt [..] to reframe his claims"
       | might actually be close to the target.
       | 
       | Otherwise, why would they engage in a publicity battle to sway
       | public sentiment precisely now, if their legal case wasn't weak?
        
         | hatsix wrote:
         | I'm generally in the camp of "I wouldn't miss anyone or
         | anything involved in this story if they suddenly stopped
         | existing", but I don't understand how engaging in a publicity
         | battle is considered proof of anything. If their case is weak,
         | what good does getting the public on "their side" do once they
         | lose? If their case is strong, why wouldn't they want the
         | public on their side?
         | 
         | I hope they all spend all of their money in court and go
         | bankrupt.
        
       | dogboat wrote:
       | Emails from before the transformers paper. Fun to read.
        
       | 23B1 wrote:
       | Neither they nor Elon can or should be trusted to tell the truth.
       | The only utility this statement should have is to illustrate
       | whatever public narrative OAI wishes to affect.
        
         | I_am_tiberius wrote:
         | True. But after Elon's Twitter lies and the world-domination
         | ambitions he has shown during the past 5 years, I just can't
         | support his narrative.
        
       | dtquad wrote:
       | Important context: Elon Musk's close friend from the PayPal days
       | and fellow libertarian tech billionaire David Sacks has been
       | selected as the Trump admin's czar for AI and crypto.
       | 
       | This is why OpenAI and Sam Altman are _understandably_ concerned.
        
         | I_am_tiberius wrote:
         | For additional context: These PayPal guys are very much contra
         | Google and YC (Sam/pg).
        
       | dgrin91 wrote:
       | dumb question - in one of the emails they mention ICO. What is
       | that?
       | 
       | > I have considered the ICO approach and will not support it.
       | 
       | ...
       | 
       | > I respect your decision on the ICO idea
       | 
       | Pretty sure they aren't talking about Initial Coin Offerings. Any
       | clue what they mean?
        
         | wmf wrote:
         | Altman created Worldcoin so maybe he did mean Initial Coin
         | Offering.
        
       | boringg wrote:
       | If I read anything from this, it's that OpenAI is looking weak
       | and worried if they're trying to use this to garner support or,
       | at least, generate negative publicity for x.AI / Musk.
       | 
       | Altman has been the regulatory-capture man, muscling out
       | competitors by pushing the White House and Washington to move on
       | safety, plus the whole board debacle and the conversion from
       | not-for-profit to for-profit.
       | 
       | I don't think anyone sees Musk's efforts as altruistic.
        
       | averageRoyalty wrote:
       | It's an aside, but these sorts of timelines are very
       | American-centric.
       | 
       | I don't know when your autumn ("fall") or summer falls relative
       | to September. Don't mix schemes here: use months or quarters,
       | not a mix of terms, some of which are relative to a specific
       | hemisphere.
        
         | ronsor wrote:
         | OpenAI is an American company
        
           | averageRoyalty wrote:
           | And they intend to "determine the fate of the world". As
           | such, communication shouldn't be American-centric.
        
       | exprofmaddy wrote:
       | It seems the humans pursuing AGI lack sufficient natural
       | intelligence. I'm sad that humans with such narrow and misguided
       | perspectives have so much power, money, and influence. I worry
       | this won't end well.
        
       | agnosticmantis wrote:
       | From Musk's email:
       | 
       | "Frankly, what surprises me is that the AI community is taking
       | this long to figure out concepts. It doesn't sound super hard.
       | High-level linking of a large number of deep nets sounds like the
       | right approach or at least a key part of the right approach."
       | 
       | Genuine question I've always had is, are these charlatans
       | conscious of how full of shit they are, or are they really high
       | on their own stuff?
       | 
       | Also it grinds my gears when they pull probabilities out of
       | their asses:
       | 
       | "The probability of DeepMind creating a deep mind increases every
       | year. Maybe it doesn't get past 50% in 2 to 3 years, but it
       | likely moves past 10%. That doesn't sound crazy to me, given
       | their resources."
        
         | hn1986 wrote:
         | You should read what he says about software engineering. He's
         | clearly clueless.
        
       | Biologist123 wrote:
       | I'm neither an Elon fanboy nor a hater. But I do wonder both what
       | he possesses and is possessed by to have created not just a
       | successful company, but era-defining companies plural.
        
       | tonygiorgio wrote:
       | Just about the only open part about OpenAI is how their dirty
       | laundry is constantly out in the open.
        
       | linotype wrote:
       | Google/Gemini have none of this baggage.
        
         | dtquad wrote:
         | Google/Gemini are also the only ones not entirely dependent
         | on Nvidia. They are now several generations into their
         | in-house-designed, TSMC-manufactured TPUs.
        
       | andrewinardeer wrote:
       | Okay. Is OpenAI now deflecting? Deflecting and reframing.
        
       | VectorLock wrote:
       | I found all the discussions about Dota pretty amusing. I had no
       | idea it was such a big thing for them early on.
        
         | bee_rider wrote:
         | If I'm replaced by a DOTA bot I'm going to be pissed. At least
         | it could be a bot for a good game, like StarCraft or something.
        
       ___________________________________________________________________
       (page generated 2024-12-13 23:00 UTC)