[HN Gopher] Microsoft and OpenAI's close partnership shows signs...
       ___________________________________________________________________
        
       Microsoft and OpenAI's close partnership shows signs of fraying
        
       Author : jhunter1016
       Score  : 218 points
       Date   : 2024-10-18 11:11 UTC (11 hours ago)
        
 (HTM) web link (www.nytimes.com)
 (TXT) w3m dump (www.nytimes.com)
        
       | cbarrick wrote:
       | https://archive.ph/Bas23
        
       | Roark66 wrote:
        | >OpenAI plans to lose $5 billion this year
       | 
        | Let that sink in for anyone who has incorporated ChatGPT into
        | their work routines to the point where their normal skills start
        | to atrophy. Imagine in 2 years' time OpenAI goes bust and MS gets
        | all the IP. Now you can't really do your work without ChatGPT,
        | but its price has been raised to reflect how much it really costs
        | to run. Maybe $2k per month per person? And you get about 1h of
        | use per day for the money too...
       | 
        | I've been saying for ages that being a Luddite and abstaining
        | from using AI is not the answer (no one is tilling the fields
        | with oxen anymore either). But it is crucial to retain, locally,
        | at the very least 50% of the capability that hosted models like
        | ChatGPT offer.
        
         | switch007 wrote:
         | $2k is way way cheaper than a junior developer which, if I had
         | to guess their thinking, is who the Thought Leaders think it'll
         | replace.
         | 
         | Our Thought Leaders think like that at least. They also pretty
         | much told us to use AI or get fired
        
           | ilrwbwrkhv wrote:
           | Which thought leader is telling you to use AI or get fired?
        
             | switch007 wrote:
             | My CTO (C level is automatically a Thought Leader)
        
           | CamperBob2 wrote:
           | It's premature to think you can replace a junior developer
           | with current technology, but it seems fairly obvious that
           | it'll be possible within 5-10 years at most. We're well past
           | the proof-of-concept stage IMO, based on extensive (and
           | growing) personal experience with ML-authored code. Anyone
           | who argues that the traditional junior-developer role isn't
           | about to change drastically is whistling past the graveyard.
           | 
           | Your C-suite execs are paid to skate where that particular
           | puck is going. If they didn't, people would complain about
           | their unhealthy fixation on the next quarter's revenue.
           | 
           | Of course, if the junior-developer role is on the chopping
           | block, then more experienced developers will be next.
           | Finally, the so-called "thought leaders" will find themselves
           | outcompeted by AI. The ability to process very large amounts
           | of data in real time, leveraging it to draw useful
           | conclusions and make profitable predictions based on
           | ridiculously-large historical models, is, again, already past
           | the proof-of-concept stage.
        
             | actsasbuffoon wrote:
             | Unless I've missed some major development then I have to
             | strenuously disagree. AI is primarily good at writing
             | isolated scripts that are no more than a few pages long.
             | 
             | 99% of the work I do happens in a large codebase, far
             | bigger than anything that you can feed into an AI. Tickets
             | come in that say something like, "Users should be able to
             | select multiple receipts to associate with their reports so
             | long as they have the management role."
             | 
             | That ticket will involve digging through a whole bunch of
             | files to figure out what needs to be done. The resolution
             | will ultimately involve changes to multiple models, the
             | database schema, a few controllers, a bunch of React
             | components, and even a few changes in a micro service
             | that's not inside this repo. Then the AI is going to fail
             | over and over again because it's not familiar with the APIs
             | for our internal libraries and tools, etc.
             | 
             | AI is useful, but I don't feel like we're any closer to
             | replacing software developers now than we were a few years
             | ago. All of the same showstoppers remain.
        
               | luckydata wrote:
               | Google's LLM can ingest humongous contexts. Check it out.
        
               | CamperBob2 wrote:
               | All of the code you mention implements business logic,
               | and you're right, it's probably not going to be practical
               | to delegate maintenance of existing code to an ML model.
               | What will happen, probably sooner than you think, is that
               | that code will go away and be replaced by script(s) that
               | describe the business logic in something close to
               | declarative English. The AI model will then generate the
               | code that implements the business logic, along with the
               | necessary tests.
               | 
               | So when maintenance is required, it will be done by
               | adding phrases like "Users should be able to select
               | multiple receipts" to the existing script, and re-running
               | it to regenerate the code from scratch.
               | 
               | Don't confuse the practical limitations of current models
               | with conceptual ones. The latter exist, certainly, but
               | they will either be overcome or worked around. People are
               | just not as good at writing code as machines are, just as
               | they are not as good at playing strategy games. The
               | models will continue to improve, but we will not.
        
               | prewett wrote:
               | The problem is, the feature is never actually "users
               | should be able to select multiple receipts". It's "users
               | should be able to select multiple receipts, but not
               | receipts for which they only have read access and not
               | write access, and not when editing a receipt, and should
               | persist when navigating between the paginated data but
               | not persist if the user goes to a different 'page' within
               | the webapp. The selection should be a thick border around
               | the receipt, using the webapp selection color and the
               | selection border thickness, except when using the low-
               | bandwidth interface, in which case it should be a
               | checkbox on the left (or on the right if the user is
               | using a RTL language). Selection should adhere to
               | standard semantics: shift selects all items from the last
               | selection, ctrl/cmd toggles selection of that item, and
               | clicking creates a new, one-receipt selection. ..." By
               | the time you get all that, it's clearer in code.
               | 
               | I will observe that there have been at least three
               | natural-language attempts in the past, none of which
               | succeeded in being "just write it down". COBOL is just as
               | code-y as any other programming language. SQL is similar,
               | although I know a fair amount of non-programmers who can
                | write SQL (but then, back in the day my Mom taught me
                | about autoexec.bat, and she couldn't care less about
                | programming). Anyway, SQL is definitely not just adding
               | phrases and it just works. Finally, Donald Knuth's WEB is
               | a mixture, more like a software blog entry, where you put
                | the pieces of the software in amongst the explanatory
               | writeup. It has caught on even less, unless you count
               | software blogs.
        
               | CamperBob2 wrote:
               | _I will observe that there have been at least three
               | natural-language attempts in the past, none of which
               | succeeded in being "just write it down". COBOL..._
               | 
               | I think we're done here.
        
               | Kiro wrote:
               | Cursor has no problem making complicated PRs spanning
               | multiple files and modules in my legacy spaghetti code. I
               | wouldn't be surprised if it could replace most
               | programmers already.
        
             | l33t7332273 wrote:
             | You would think thought leaders would be the first to be
             | replaced by AI.
             | 
             | > The ability to process very large amounts of data in real
             | time, leveraging it to draw useful conclusions and make
             | profitable predictions based on ridiculously-large
             | historical models, is, again, already past the proof-of-
             | concept stage.
             | 
             | [citation needed]
        
               | CamperBob2 wrote:
                | If you can drag a 9-dan grandmaster up and down the
                | goban, you can write a computer program or run a company.
        
           | srockets wrote:
           | I found those tools to resemble an intern: they can do some
           | tasks pretty well, when explained just right, but others
           | you'd spend more time guiding than it would have taken you to
           | do it yourself.
           | 
            | And rarely can you or the model/intern tell ahead of time
            | which tasks fall into each of those categories.
           | 
           | The difference is, interns grow and become useful in months:
           | the current rate of improvements in those tools isn't even
           | close to that of most interns.
        
             | luckydata wrote:
             | I have a slightly different view. IMHO LLMs are excellent
             | rubber ducks or pair programmers. The rate at which I can
             | try ideas and get them back is much higher than what I
             | would be doing by myself. It gets me unstuck in places
             | where I might have spent the best part of a day in the
             | past.
        
               | srockets wrote:
               | My experience differs: if at all, they get me unstuck by
               | trying to shove bad ideas, which allows me to realize
               | "oh, that's bad, let's not do that". But it's also
               | extremely frustrating, because a stream of bad ideas from
               | a human has some hope they'll learn, but here I know I'll
               | get the same BS, only with an annoying and inhumane
               | apology boilerplate.
        
               | Kiro wrote:
               | Not my experience at all. What kind of code are you using
               | it for?
        
           | kergonath wrote:
           | > Our Thought Leaders think like that at least. They also
           | pretty much told us to use AI or get fired
           | 
           | Ours told us _not_ to use LLMs because they are worried about
           | leaking IP and confidential data.
        
         | hggigg wrote:
         | I think this is the wrong way to think about it.
         | 
         | It's more important to find a problem and see if this is a fit
         | for the solution, not throw the technology at everything and
         | see if it sticks.
         | 
         | I have had no needs where it's an appropriate solution myself.
         | In some areas it represents a net risk.
        
         | hmottestad wrote:
         | Cost tends to go down with time as compute becomes cheaper. And
         | as long as there is competition in the AI space it's likely
         | that other companies would step in and fill the void created by
         | OpenAI going belly up.
        
           | infecto wrote:
           | I tend to think along the same lines. If they were the only
           | player in town it would be different. I am also not convinced
            | $5 billion is that big of a deal for them. It would be
            | interesting to see their modeling, but it would be a lot more
           | suspect if they were raising money and increasing the price
           | of the product. Also curious how much of that spend is R&D
           | compared to running the system.
        
           | ToucanLoucan wrote:
           | > Cost tends to go down with time as compute becomes cheaper.
           | 
           | This is generally true but seems to be, if anything, inverted
           | for AI. These models cost billions to train in compute, and
           | OpenAI thus far has needed to put out a brand new one roughly
            | annually in order to stay relevant. This would be akin to
            | Apple putting out a new iPhone that cost billions to
            | engineer year over year, while giving the things away for
            | free on the corner and only asking for money for the
            | versions with more storage and what have you.
           | 
           | The vast majority of AI adjacent companies too are just
           | repackaging OpenAI's LLMs, the exceptions being ones like
           | Meta, which certainly has a more solid basis what with being
           | tied to an incredibly profitable product in Facebook, but
           | also... it's Meta and I'm sure as shit not using their AI for
           | anything, because it's Meta.
           | 
            | I did some back-of-napkin math in a comment a ways back and
            | landed on the conclusion that, in order to break even
            | _merely on training costs,_ not including the rest of the
            | expenditure of the company, they would need to charge all of
            | their current subscribers $150 per month, up from... I think
            | the most expensive right now is about $20? So nearly an
            | 8-fold price increase, with no attrition, to again _break
            | even._ And I'm guessing all these investors they've had are
            | not interested in a zero-sum outcome.
        
             | Mistletoe wrote:
             | The closest analog seems to be bitcoin mining, which
             | continually increases difficulty. And if you've ever
             | researched how many bitcoin miners go under...
        
               | lukeschlather wrote:
               | It's nothing like bitcoin mining. Bitcoin mining is
               | intentionally designed so that it gets harder as people
               | use it more, no matter what.
               | 
               | With LLMs, if you have a use case which can run on an
               | H100 or whatever and costs $4/hour, and the LLM has
               | acceptable performance, it's going to be cheaper in a
               | couple years.
               | 
               | Now, all these companies are improving their models but
               | they're doing that in search of magical new applications
               | the $4/hour model I'm using today can't do. If the
               | $4/hour model works today, you don't have to worry about
               | the cost going up. It will work at the same price or
               | cheaper in the future.
        
               | Mistletoe wrote:
               | But OpenAI has to keep releasing new ever-increasing
               | models to justify it all. There is a reason they are
               | talking about nuclear reactors and Sam needing 7 trillion
               | dollars.
               | 
               | One other difference from Bitcoin is that the price of
               | Bitcoin rises to make it all worth it, but we have the
               | opposite expectation with AI where users will eventually
               | need to pay much more than now to use it, but people only
               | use it now because it is free or heavily subsidized. I
               | agree that current models are pretty good and the price
               | of those may go down with time but that should be even
               | more concerning to OpenAI.
        
               | kergonath wrote:
               | > But OpenAI has to keep releasing new ever-increasing
               | models to justify it all.
               | 
               | There seems to be some renewed interest for smaller,
               | possibly better-designed LLMs. I don't know if this
               | really lowers training costs, but it makes inference
               | cheaper. I suspect at some point we'll have clusters of
               | smaller models, possibly activated when needed like in
               | MoE LLMs, rather than ever-increasing humongous models
               | with 3T parameters.
        
             | authorfly wrote:
             | This reasoning about the subscription price etc is
             | undermined by the actual prices OpenAI are charging -
             | 
             | The price of a model capable of 4o mini level performance
             | used to be 100x higher.
             | 
             | Yes, literally 100x. The original "davinci model" (and I
              | paid 5 figures for using it throughout 2021-2022) cost
             | $0.06/1k tokens.
             | 
              | So the trend is not inverting for running costs (which are
              | the thing that will kill a company). Struggling with
              | training costs (which is where you correctly identify
              | OpenAI is spending) may stop growth, but it won't kill you
              | if you have to pull the plug.
             | 
             | I suspect subscription prices are based on market capture
             | and perceived customer value, plus plans for training, not
             | running costs.
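              | 
              | A quick sanity check on that 100x claim, as a Python
              | sketch. The 4o mini price of roughly $0.60 per 1M output
              | tokens is my assumption, not a figure from this thread:
              | 
              |     # old davinci pricing, as stated above: $0.06 per 1k tokens
              |     davinci_per_1m = 0.06 * 1000   # = $60 per 1M tokens
              |     # assumed 4o mini list price per 1M output tokens
              |     gpt4o_mini_per_1m = 0.60
              |     print(davinci_per_1m / gpt4o_mini_per_1m)  # -> 100.0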
        
         | jdmoreira wrote:
         | Skills that will atrophy? People learnt those skills the hard
         | way the first time around, do you really think they can't be
         | sharpened again?
         | 
         | This perspective makes zero sense.
         | 
         | What makes sense is to extract as much value as possible as
         | soon as possible and for as long as possible.
        
         | chrsw wrote:
          | What if your competition is willing to give up autonomy to
          | companies like Microsoft/OpenAI as a bet to race ahead of you,
          | and it comes off?
        
           | achierius wrote:
           | It's a devil's bargain, and not just in terms of the
           | _individual_ payoffs that OpenAI employees/executives might
           | receive. There's a reason why Google/Microsoft/Amazon/...
           | ultimately failed to take the lead in GenAI, despite every
           | conceivable advantage (researchers, infrastructure, compute,
           | established vendor relationships, ...). The "autonomy" of a
           | startup is what allows it to be nimble; the more Microsoft is
           | able to tell OpenAI what to do, the more I expect them to act
           | like DeepMind, a research group set apart from their parent
           | company but still beholden to it.
        
         | sebzim4500 wrote:
          | The marginal cost of inference per token is lower than what
          | OpenAI charges you (IIRC about 2x cheaper); they make a loss
          | because of the enormous costs of R&D and training new models.
        
           | diggan wrote:
           | Did OpenAI publish concrete numbers regarding this, or where
           | are you getting this data from?
        
             | lukeschlather wrote:
             | https://news.ycombinator.com/item?id=41833287
             | 
             | This says 506 tokens/second for Llama 405B on a machine
              | with 8x H200s, which you can rent for about $4/GPU/hour, so
              | probably $40/hour for a server with enough GPUs. And so it
              | can do ~1.8M tokens per hour. OpenAI charges $10/1M output
              | tokens for GPT-4o (input tokens and cached tokens are
              | cheaper, but these are just ballpark estimates). So if it
              | were 405B it might cost $20/1M output tokens.
             | 
             | Now, OpenAI is a little vague, but they have implied that
             | GPT4o is actually only 60B-80B parameters. So they're
             | probably selling it with a reasonable profit margin
             | assuming it can do $5/1M output tokens at approximately
             | 100B parameters.
             | 
             | And even if they were selling it at cost, I wouldn't be
             | worried because a couple years from now Nvidia will release
             | H300s that are at least 30% more efficient and that will
             | cause a profit margin to materialize without raising
             | prices. So if I have a use case that works with today's
             | models, I will be able to rent the same thing a year or two
             | from now for roughly the same price.
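              | 
              | A back-of-envelope version of that arithmetic as a Python
              | sketch; every number here comes from the estimates above
              | (rental price, throughput, parameter-count guess), none of
              | it is measured:
              | 
              |     tokens_per_hour = 506 * 3600    # ~1.82M tok/hr on 8x H200
              |     server_per_hour = 40.0          # ~$4/GPU/hr x 8, rounded up
              |     cost_405b = server_per_hour / (tokens_per_hour / 1e6)
              |     # ~$22 per 1M output tokens for a 405B-class model
              |     cost_100b = cost_405b * 100 / 405   # naive scaling to ~100B
              |     # ~$5 per 1M tokens, vs the $10/1M OpenAI charges for GPT-4o
              |     print(round(cost_405b, 2), round(cost_100b, 2))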
        
           | ignoramous wrote:
           | > _The marginal cost of inference per token is lower than
           | what OpenAI charges you_
           | 
            | Unlike most Gen AI shops, OpenAI also incurs a heavy cost for
            | training _base_ models gunning for SoTA, which involves
            | drawing power from a literal nuclear reactor inside data
            | centers.
        
             | fransje26 wrote:
             | > from a literal nuclear reactor inside data centers.
             | 
             | No.
        
               | Tostino wrote:
               | Their username is fitting though.
        
               | ignoramous wrote:
               | Bully.
               | 
               | I wrote "inside" to mean that those mini reactors
               | (300MW+) are meant to be used solely for the DCs.
               | 
               | (noun:
               | https://www.collinsdictionary.com/dictionary/english-
               | thesaur... / https://en.wikipedia.org/wiki/Heterosemy)
               | 
                | Replace it with _nearby_ if that makes you feel better
                | about anyone's username.
        
               | Tostino wrote:
               | You are right, that wasn't a charitable reading of your
               | comment. Should have kept it to myself.
               | 
               | Sorry for being rude.
        
             | candiddevmike wrote:
             | > literal nuclear reactor inside data centers
             | 
             | This is fascinating to think about. Wonder what kind of
             | shielding/environmental controls/all other kinds of changes
              | you'd need for this to actually work. Would a rack-sized
              | SMR be contained enough not to impact anything? Would
             | datacenter operators/workers need to follow NRC guidance?
        
               | talldayo wrote:
               | I think the simple answer is that it doesn't make sense.
               | Nuclear power plants generate a byproduct that inherently
                | limits the performance of computers: heat. Having either
               | a cooling system, reactor or turbine located inside a
               | datacenter is immediately rendered pointless because you
               | end up managing two competing thermal systems at once.
               | There is no reason to localize a reactor inside a
               | datacenter when you could locate it elsewhere and pipe
               | the generated electricity into it via preexisting high
               | voltage lines.
        
               | kergonath wrote:
               | > Nuclear power plants generate a byproduct that
                | inherently limits the performance of computers: heat.
               | 
               | The reactor does not need to be _in_ the datacenter. It
                | can be a couple hundred meters away; bog-standard cables
                | would be perfectly able to move the electrons. The cables
               | being 20m or 200m long does not matter much.
               | 
               | You're right though, putting them in the same building as
               | a datacenter still makes no sense.
        
               | kergonath wrote:
               | It makes zero sense to build them _in_ datacenters and I
               | don't know of any safety authority that would allow
               | deploying reactors without serious protection measures
               | that would at the very least impose a different,
               | dedicated building.
               | 
               | At some point it does make sense to have a small reactor
               | powering a local datacenter or two, however. Licensing
               | would still be not trivial.
        
           | tempusalaria wrote:
           | It's not clear this is true because reported numbers don't
           | disaggregate paid subscription revenue (certainly massively
           | GP positive) vs free usage (certainly negative) vs API
           | revenue (probably GP negative).
           | 
           | Most of their revenue is the subscription stuff, which makes
            | it highly likely they lose money per token on the API (not
            | surprising as they are in a price war with Google et al.)
           | 
            | If you have an enterprise ChatGPT sub, you have to consume
            | around 5 million tokens a month to match the cost of using
            | the API on GPT-4o. At 100 words per minute, that's 35 days of
            | continuous typing, which shows how ridiculous the costs of
            | API vs subscription are.
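            | 
            | For anyone checking the "35 days" figure, here's the implied
            | arithmetic as a Python sketch; the one-word-per-token
            | conversion is an assumption on my part:
            | 
            |     tokens_per_month = 5_000_000   # break-even point quoted above
            |     words_per_token = 1.0          # rough assumption
            |     typing_wpm = 100               # typing speed from the comment
            |     minutes = tokens_per_month * words_per_token / typing_wpm
            |     print(minutes / 60 / 24)       # -> ~34.7 days of nonstop typing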
        
             | seizethecheese wrote:
             | In summary, the original point of this thread is wrong.
             | There's essentially no future where these tools disappear
             | or become unavailable at reasonable cost for consumers.
              | Much more likely is that they get way better.
        
               | jazzyjackson wrote:
                | I mean, it used to be that I could get an Uber across
                | Manhattan for $5.
               | 
               | From my view chatbots are still in the "selling dollars
               | for 90 cents" category of product, of course it sells
               | like discounted hotcakes...
        
               | seizethecheese wrote:
               | ... this is conflating two things, marginal and average
               | cost/revenue. They are very very different.
        
         | bbarnett wrote:
          | The cost of _current_ compute for current versions of ChatGPT
          | will have dropped through the floor in 2 years, due to
          | processing improvements and on-die improvements to silicon.
         | 
         | Power requirements will drop too.
         | 
          | As well, as people adopt, the outlay on training will be
          | amortized over an ever-increasing market of licensing sales.
         | 
         | Looking at the cost today, and sales today in a massively,
         | rapidly expanding market, is not how to assess costs tomorrow.
         | 
          | I will say one thing: those that need GPT to code will be the
          | first to go. Becoming a click-click, just passing on ChatGPT
          | output, will relegate those people to minimum wage.
         | 
         | We already have some of this sort, those that cannot write a
         | loop in their primary coding language without stackoverflow, or
         | those that need an IDE to fill in correct function usage.
         | 
         | Those who code in vi, while reading manpages need not worry.
        
           | ben_w wrote:
           | > We already have some of this sort, those that cannot write
           | a loop in their primary coding language without
           | stackoverflow, or those that need an IDE to fill in correct
           | function usage.
           | 
           | > Those who code in vi, while reading manpages need not worry
           | 
           | I think that's the wrong dichotomy: LLMs are fine at turning
           | man pages into working code. In huge codebases, LLMs do
           | indeed lose track and make stuff up... but that's also where
           | IDEs giving correct function usage is really useful for
           | humans.
           | 
           | The way I think we're going to change, is that "LGTM" will no
           | longer be sufficient depth of code review: LLMs can attend to
           | _more_ than we can, but they can 't attend as _well_ as we
           | can.
           | 
           | And, of course, we will be getting a lot of LLM-generated
           | code, and having to make sure that it _really_ does what we
           | want, without surprise side-effects.
        
           | nuancebydefault wrote:
           | > Those who code in vi, while reading manpages need not
           | worry.
           | 
            | That sounds silly at first read, but there are indeed people
            | who are so stubborn as to still use numbered zip files on a
            | USB flash drive instead of source control systems, or prefer
            | to use their own scheduler over an RTOS.
           | 
           | They will survive, they fill a niche, but I would not say
            | they can do full stack development or even be easy to
            | collaborate with.
        
           | whoisthemachine wrote:
           | You had me until vi.
        
         | singularity2001 wrote:
         | people kept whining about Amazon losing money and called me
         | stupid for buying their stock...
        
           | bmitc wrote:
           | Why does everyone always like to compare every company to
           | Amazon? Those companies are never like Amazon, which is one
           | of the most entrenched companies ever.
        
             | ben_w wrote:
             | While I agree the comparison is not going to provide useful
              | insights, in fairness to them Amazon _wasn't_ entrenched
              | at the time they were making huge losses each year.
        
           | ben_w wrote:
           | As I recall, while Amazon was doing this, there was no
           | comparable competition from other vendors that properly
           | understood the internet as a marketplace? Closest was eBay?
           | 
           | There is real competition now that plenty of big box stores'
           | websites also list things you won't see in the stores
            | themselves*, but then again Amazon is also making a profit
            | now.
           | 
           | I think the current situation with LLMs is a dollar auction,
           | where everyone is incentivised to pay increasing costs to
           | outbid the others, even though this has gone from "maximise
           | reward" to "minimise losses":
           | https://en.wikipedia.org/wiki/Dollar_auction
           | 
           | * One of my local supermarkets in Germany sells 4-room
           | "garden sheds" that are substantially larger than the
           | apartment I own in the UK:
           | https://www.kaufland.de/product/396861369/
        
           | bigstrat2003 wrote:
           | And for every Amazon, there are a hundred other companies
           | that went out of business because they never could figure out
           | how to turn a profit. You made a bet which paid off and
           | that's cool, but that doesn't mean the people telling you it
           | was a bad bet were wrong.
        
           | insane_dreamer wrote:
           | Amazon was losing money because it was building the moat
           | 
           | It's not clear that OpenAI has any moat to build
        
           | empath75 wrote:
           | Depending on when you bought it, it was a pretty risky play
           | until AWS came out and got traction. Their retail business
           | _still_ doesn't make money.
        
         | bmitc wrote:
         | Fine with me. I've even considered turning off Copilot
         | completely because I use it less and less.
        
         | InkCanon wrote:
         | I would just switch to Claude of Mistral like I already do. I
         | really feel little difference between them
        
           | mprev wrote:
           | I like how your typo makes it sound like a medieval sage.
        
             | card_zero wrote:
             | Let me consult my tellingbone.
        
         | whywhywhywhy wrote:
         | I used to be concerned with this back when GPT4 originally came
         | out and was way more impressive than the current version and
         | OpenAI was the only game in town.
         | 
          | But nowadays GPT has been quantized and cost-optimized to the
          | point that it's no longer as useful as it was, and with Claude
          | or Gemini or whatever around it's no longer noticeably better
          | than any of them, so it doesn't really matter what happens
          | with their pricing.
        
           | edg5000 wrote:
           | Are you saying they reduced the quality of the model in order
           | to save compute? Would it make sense for them to offer a
            | premium version of the model at a very high price? At
           | least offer it to those willing to pay?
           | 
            | It would not make sense to reduce output quality only to
            | save on compute at inference, so why not offer a premium
            | (and perhaps slower) tier?
           | 
           | Unless the cost is at training time, maybe it would not be
           | cost-effective for them to keep a model like that up to date.
           | 
           | As you can tell I am a bit uninformed on the topic.
        
             | bt1a wrote:
             | Yeah, as someone who had access to gpt-4 early in 2023, the
             | endpoint used to take over a minute to respond and the
             | quality of the responses was mindblowing. Simply too
             | expensive to serve at scale, not to mention the silicon
             | constraints that are even more prohibitive when the
             | organization needs to lock up a lot of their compute for
              | training The Next Big Model. That's a lot of compute that
              | can't be on standby for serving inference.
        
         | marcosdumay wrote:
         | > being a luditite and abstaining from using AI is not the
         | answer
         | 
          | Hum... The jury is still out on that one, but the evidence is
          | piling up on the "yes, not using it is what works best" side.
         | Personally, my experience is strongly negative, and I've seen
         | other people get very negative results from it too.
         | 
         | Maybe it will improve so much that at some point people
         | actually get positive value from it. My best guess is that we
         | are not there yet.
        
           | bigstrat2003 wrote:
           | Yeah, I agree. It's not "being a Luddite" to take a look and
           | conclude that the tool doesn't actually deliver the value it
           | claims to. When AI can actually reliably do the things its
           | proponents say it can do, I'll use it. But as of today it
           | can't, and I have no use for tools that only work some of the
           | time.
        
           | Kiro wrote:
           | It's not either or. In my specific situation Cursor is such a
           | productivity booster that I can't imagine going back. It's
           | not a theoretical question.
        
           | int_19h wrote:
           | Machine translation alone is a huge positive value. What GPT
           | can do in this area is vastly better than anything before it.
        
         | righthand wrote:
          | Being a Luddite has its advantages, as you won't succumb to
          | the ills of society trying to push you there. To believe that
          | it's inevitable LLMs will be required to work is silly in my
          | opinion. As these corps eat up more and more of the goodwill
          | of the content on the internet for only their own gain, people
          | will start defecting from it, and some already have. Many of
          | my coworkers have shut off Copilot, though they still
          | occasionally use ChatGPT. But since the power really only
          | amounts to adding randomization to established working
          | document templates, the gain is only a short amount of working
          | time.
         | 
          | There are also active and passive efforts to poison the well.
          | As LLMs are used to output more content and displace people,
          | the LLMs will be trained on the limited regurgitation
          | available to the public (passive). Then there are the people
          | intentionally creating bad content to be ingested (active).
          | It really is a losing game for the big-service LLM companies
          | as the local models become more and more good enough.
        
         | zuminator wrote:
          | Where are you getting $2k/person/month? ChatGPT allegedly has
          | on the order of 100 million users. Divide $5b by that and you
          | get a $50 deficit per user per year. Meaning they could raise
          | their prices by less than four and a half dollars per user per
          | month to break even.
         | 
         | Even if they were to only gouge the current ~11 million
          | _paying_ subscribers, that's around $40/person/month over
         | current fees to break even. Not chump change, but nowhere close
         | to $2k/person/month.
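          | 
          | The arithmetic behind both figures, as a Python sketch, using
          | the user counts assumed above:
          | 
          |     annual_loss = 5_000_000_000
          |     all_users = 100_000_000        # rough ChatGPT user count
          |     subscribers = 11_000_000       # rough paying-subscriber count
          |     per_user = annual_loss / all_users / 12     # ~$4.17/month
          |     per_sub = annual_loss / subscribers / 12    # ~$37.88/month
          |     print(per_user, per_sub)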
        
           | ants_everywhere wrote:
           | I think the question is more how much the market will bear in
           | a world where MS owns the OpenAI IP and it's only available
           | as an Azure service. That's a different question from what
           | OpenAI needs to break even this year.
        
           | alpha_squared wrote:
           | What you're suggesting is the basic startup math for any
           | typical SaaS business. The problem is OpenAI and the overall
           | AI space is raising funding on the promise of being much more
           | than a SaaS. If we ignore all the absurd promises ("it'll
           | solve all of physics"), the promise to investors is distilled
           | down to this being the dawn of a new era of computing and
           | investors have responded by pouring in _hundreds of billions
           | of dollars_ into the space. At that level of investment, I
           | sure hope the plan is to be more than a break-even SaaS.
        
           | layer8 wrote:
           | > ChatGPT allegedly has on the order of 100 million users.
           | 
           | That's users, not subscribers. Apparently they have around 10
           | million ChatGPT Plus subscribers plus 1 million business-tier
           | users: https://www.theinformation.com/articles/openai-coo-
           | says-chat...
           | 
           | To break even, that means that ChatGPT Plus would have to
            | cost around $50 per month, if not more, because fewer people
            | will be willing to pay that.
        
             | zuminator wrote:
             | You only read the first half of my comment and immediately
             | went on the attack. Read the whole thing.
        
         | X6S1x6Okd1st wrote:
          | ChatGPT doesn't have much of a moat. Claude is comparable for
         | coding tasks and llama isn't far behind.
         | 
         | No biz collapse will remove llama from the world, so if you're
         | worried about tools disappearing then just only use tools that
         | can't disappear
        
           | mlnj wrote:
           | And Zuckerberg has vowed to pump billions more into
           | developing and releasing more Llama. I believe "Altman
           | declaring AGI is almost here" was peak OpenAI and now I will
           | just have some popcorn ready.
        
         | Spivak wrote:
         | Take the "millennial subsidy" while the money font still
         | floweth. If it gets cut off eventually so be it.
        
         | Taylor_OD wrote:
         | Is anyone using it to the point where their skills start to
         | atrophy? I use it fairly often but mostly for boilerplate code
         | or simple tasks. The stuff that has specific syntax that I have
         | to look up anyway.
         | 
         | That feels like saying that using spell check or autocomplete
         | will make one's spelling abilities atrophy.
        
       | WithinReason wrote:
       | Does OpenAI have any fundamental advantage beyond brand
       | recognition?
        
         | mhh__ wrote:
          | It's possible that the competition is only one strong
          | personality and some money away, but my guess is that OpenAI-
          | rosoft have the best stack for doing inference "seriously" at
          | big, big scale, e.g. moving away from hacky research Python
          | code and so on.
        
           | erickj wrote:
           | Its pretty hard to ignore Google in any discussion on big
           | scale
        
             | mhh__ wrote:
             | Completely right. Was basically only thinking about OpenAI
             | versus Anthropic. Oops
        
               | XenophileJKO wrote:
                | Google, in their corporate structure, is too cautious to
                | be a serious competitor.
        
               | tim333 wrote:
               | I'm not so sure about that. They have kind of opposite
                | incentives to OpenAI. OpenAI, starting without much
                | money, had to hype the "AGI next year" stuff to get
                | billions given to them. Google, on the other hand, is in
                | such a dominant position, with most of the search
                | market, much of the ad market, ownership of DeepMind,
                | huge amounts of data and money and so on, that they
                | probably don't want to be seen as a potential monopoly
                | to be broken up.
               | 
               | Also Sergey Brin is back in there working on AI.
        
             | luckydata wrote:
             | They seem to have managed to do so just fine :)
        
         | piva00 wrote:
         | Not really sure since this space is so murky due to the rapid
         | changes happening. It's quite hard to keep track of what's in
         | each offering if you aren't deep into the AI news cycle.
         | 
         | Now personally, I've left the ChatGPT world (meaning I don't
         | pay for a subscription anymore) and have been using Claude from
         | Anthropic much more often for the same tasks, it's been better
         | than my experience with ChatGPT. I prefer Claude's style,
         | Artifacts, etc.
         | 
          | Also been toying with local LLMs for tasks that I know don't
          | require a multi-hundred-billion-parameter model to solve.
        
           | tempusalaria wrote:
            | I also find 3.5 Sonnet the best model (best UI too) and it's
            | the one I ask questions to.
           | 
            | We use Gemini Flash in prod. The latency and cost are just
            | unbeatable - our product uses LLMs for lots of simple tasks,
            | so we don't need a frontier model.
        
             | epolanski wrote:
             | What do you use it for out of curiosity?
        
           | sunnybeetroot wrote:
           | Claude is great except for the fact the iOS app seems to
           | require a login every week. I've never had to log into
           | ChatGPT but Claude requires a constant login and the
           | passwordless login makes it more of a pain!
        
             | juahan wrote:
             | Sounds weird, I have had to login exactly once on my iOS
             | devices.
        
           | Closi wrote:
            | ChatGPT o1 is quite a bit better at certain complex tasks IMO
           | (e.g. writing a larger bit of code against a non-trivial spec
           | and getting it right).
           | 
            | Although there are also some tasks that Claude is better at
            | too.
        
         | usaar333 wrote:
         | Talent? Integrations? Ecosystem?
         | 
         | I don't know if this is going to emerge as a monopoly, and
          | likely won't, but for whatever reason, OpenAI and Anthropic
         | have been several months ahead of everyone else for quite some
         | time.
        
           | causal wrote:
           | I think the perception that they're several months ahead of
           | everyone is also a branding achievement: They are ahead on
           | Chat LLMs specifically. Meta, Google, and others crush OpenAI
           | on a variety of other model types, but they also aren't
           | hyping their products up to the same degree.
           | 
            | Segment Anything 2 is fantastic - but less mysterious because
            | it's open source. NotebookLM is amazing, but nobody is
            | rushing to create benchmarks for it. AlphaFold is never going
            | to be used by consumers like ChatGPT.
           | 
           | OpenAI is certainly competitive, but they also work overtime
           | to hype everything they produce as "one step closer to the
           | singularity" in a way that the others don't.
        
             | usaar333 wrote:
             | Anthropic isn't really hyping their product that much. It
             | just is really good.
        
             | llm_trw wrote:
             | >Meta, Google, and others crush OpenAI on a variety of
             | other model types, but they also aren't hyping their
             | products up to the same degree.
             | 
             | They aren't letting anyone external have access to their
             | top end products either. Google invented transformers and
             | kept the field stagnant for 5 years because they were
             | afraid it would eat into their search monopoly.
        
         | srockets wrote:
          | An extremely large compute commitment with Azure. AFAIK, none
          | of the other non-hyperscaler competitors has access to that
          | much compute.
        
           | ponty_rick wrote:
           | Anthropic has the same with AWS
        
           | dartos wrote:
           | > non-hyperscaler competitors
           | 
           | Well the hyperscale companies are the ones to worry about.
        
           | HarHarVeryFunny wrote:
           | Pretty sure that Meta and X.ai both do.
        
         | thelittleone wrote:
         | One hypothetical advantage could be secret agreements /
         | cooperation with certain agencies. That may help influence
         | policy in line with OpenAI's preferred strategy on safety,
         | model access etc.
        
         | idunnoman1222 wrote:
         | Yes, they already collected all the data. The same data has had
         | walls put up around it
        
           | Implicated wrote:
           | While I recognize this, I have to assume that the other "big
           | players" already have this same data... ie: anyone with a
           | search engine that's been crawling the web for decades. New
           | entries to the race? Not so much, new walls and such.
        
           | ugh123 wrote:
           | Which data? Is that data that Google and/or Meta can't get or
           | doesn't have already?
        
             | jazzyjackson wrote:
             | Well, at this point most new data being created is
              | conversations with ChatGPT, seeing as how Stack Overflow
              | and Reddit are increasingly useless, so their conversation
             | logs are their moat.
        
               | staticautomatic wrote:
               | There's tons of human-created data the AI companies
               | aren't using yet.
        
               | sangnoir wrote:
               | > so their conversation logs are their moat
               | 
               | Google and Meta aren't exactly lacking in conversation
               | data: Facebook, Messenger, Instagram, Google Talk, Google
               | Groups, Google Plus, Blogspot comments, Youtube
                | Transcripts, etc. The breadth and depth of data those 2
                | companies are sitting on, going back for _years_, is
                | mind-boggling.
        
             | charlieyu1 wrote:
             | AI companies have been paying people to create new data for
             | a while
        
               | ugh123 wrote:
                | Do you mean via RLHF? If so, that's not 'data' used by
                | the model in the traditional sense.
        
           | throwup238 wrote:
           | Most of the relevant data is still in the Common Crawl
            | archives, up until people started explicitly opting out of
            | it in the last couple of years.
        
           | lolinder wrote:
           | That gives the people who've already started an advantage
           | over newcomers, but it's not a unique advantage to OpenAI.
           | 
           | The question really should be what if anything gives OpenAI
           | an advantage over Anthropic, Google, Meta, or Amazon? There
           | are at least four players intent on eating OpenAI's market
           | share who already have models in the same ballpark as OpenAI.
           | Is there any reason to suppose that OpenAI keeps the lead for
           | long?
        
             | XenophileJKO wrote:
             | I think their current advantage is willingness to risk
             | public usage of frontier technology. This has been and I
             | predict will be their unique dynamic. It forced the entire
             | market to react, but they are still reacting reluctantly. I
             | just played with Gemini this morning for example and it
             | won't make an image with a person in it at all. I think
             | that is all you need to know about most of the competition.
        
               | lolinder wrote:
               | How about Anthropic?
        
               | jazzyjackson wrote:
               | Aren't they essentially run by safetyists? So they would
               | be less willing to release a model that pushes the
               | boundaries of capability and agency
        
               | caeril wrote:
               | From what I've seen, Claude Sonnet 3.5 is decidedly less
               | "safe" than GPT-4o, by the relatively new politicized
               | understanding of "safety".
               | 
               | Anthropic takes safety to mean "let's not teach people
               | how to build thermite bombs, engineer grey goo nanobots,
               | or genome-targeted viruses", which is the traditional
               | futurist concern with AI safety.
               | 
               | OpenAI and Google safety teams are far more concerned
               | with revising history, protecting egos, and coddling the
               | precious feelings of their users. As long as no fee-fees
               | are hurt, it's full speed ahead to paperclip
               | maximization.
        
               | llm_trw wrote:
               | As an AI model I can't comment on this claim.
        
         | riku_iki wrote:
          | They researched and developed e2e infra + a product of high
          | quality, which MS doesn't have (few other players have it).
        
           | mlnj wrote:
            | And every one of these catch-up companies has caught up with
            | only a small lag.
        
         | mock-possum wrote:
         | Does Kleenex?
         | 
         | I've heard plenty of people call any chatbot "chat gpt" - it's
         | becoming a genericized household name.
        
           | aksss wrote:
            | What's the killer 2-syllable word (Google, Kleenex)?
            | 
            | ChatGPT is a mouthful. Even Copilot rolls off the tongue
            | easier, though it doesn't have the mindshare, obviously.
            | 
            | Generic "GPT" would be better, but you end up saying "GPT-
            | style tool", which is worse.
        
             | sorenjan wrote:
             | I think it shows really well how OpenAI was caught off
              | guard when ChatGPT got popular and proved to be
             | unexpectedly useful for a lot of people. They just gave it
             | a technical name for what it was, a Generative Pre-trained
             | Transformer model that was fine tuned for chat style
             | interaction. If they had any plans on making a product
             | close to what it is today they would have given it a
             | catchier name. And now they're kind of stuck with it.
        
               | jazzyjackson wrote:
               | I agree but otoh it distinguishes itself as a new product
                | class better than if they had given it a name like Siri,
                | Alexa, Gemini, or Jeeves.
        
             | Fuzzwah wrote:
             | You're not saying 'gippity' yet?
        
             | WorldPeas wrote:
             | the less savvy around me simply call it "chat" and it's
             | understood by context
        
             | jazzyjackson wrote:
             | "I asked the robot"
        
             | Taylor_OD wrote:
              | Well, they can't come up with version names that stand out
              | in any way, so I don't expect them to give their core
              | product a better name anytime soon. I wish they would
              | spend a little time on this, but I guess they are too busy
              | building?
        
           | insane_dreamer wrote:
            | My 8-year-old knows what ChatGPT is but has never heard of
            | any other LLM (or OpenAI, for that matter). They're all
            | "ChatGPT", in the same way that they refer to searching the
            | internet as "googling" (and are unaware of Bing, DDG or any
            | other search engine).
        
           | CPLX wrote:
           | If you invested in Kleenex at OpenAI valuations you would
           | lose nearly all your money quite quickly.
        
         | pal9000 wrote:
          | Every time I ask this myself, OpenAI comes up with something
          | new and groundbreaking and other companies play catch-up. The
          | last was the Realtime API. What are they doing right? I don't
          | know.
        
           | lolinder wrote:
           | OpenAI is playing catch-up of their own. The last big
           | announcement they had was "we finally built Artifacts".
           | 
           | This is what happens when there's vibrant competition in a
           | space. Each company is innovating and each company is trying
           | to catch up to their competitors' innovations.
           | 
           | It's easy to limit your view to only the places where OpenAI
           | leads, but that's not the whole picture.
        
         | JeremyNT wrote:
         | "There is no moat" etc.
         | 
         | Getting to market first is obviously worth _something_ but even
          | if you're bullish on their ability to get products out faster
         | near term, Google's going to be breathing right down their
         | neck.
         | 
         | They may have some regulatory advantages too, given that
         | they're (sort of) not a part of a huge vertically integrated
         | tech conglomerate (i.e. they may be able to get away with some
         | stuff that Google could not).
        
         | og_kalu wrote:
         | The ChatGPT site crossed 3B visits last month (For perspective
         | - https://imgur.com/a/hqE7jia). It has been >2B since May this
          | year and >1.5B since March 2023. The Summer slump of last
          | year? Completely gone.
         | 
          | Gemini and Character AI? A few hundred million. Claude?
         | Doesn't even register. And the gap has only been increasing.
         | 
         | So, "just" brand recognition ? That feels like saying Google
         | "just" has brand recognition over Bing.
         | 
         | https://www.similarweb.com/blog/insights/ai-news/chatgpt-top...
        
           | charlieyu1 wrote:
           | Yeah, the figures look great so far. That doesn't mean we
           | can bet the future on it.
        
           | DanHulton wrote:
           | I mean, it's still not an impassably strong moat. If it were,
           | we'd all still be on MySpace and Digg.
        
             | qeternity wrote:
             | As model performance converges, it becomes the strongest
             | moat. Why go to Claude for a marginally better model when
             | you have the ChatGPT app downloaded and all your chat
             | history there?
        
               | segasaturn wrote:
               | I actually pre-emptively deleted ChatGPT and my account
               | recently as I suspect that they're going to start
               | aggressively putting ads and user tracking into the site
               | and apps to build revenue. I also bet that if they do go
               | through with putting ads into the app that daily user
               | numbers will drop sharply - one of ChatGPT's biggest
               | draws is its clean, no-nonsense UX. There are plenty of
               | competitors that are as good as o1 so I have lots of
               | choices to jump ship to.
        
               | rbjorklin wrote:
               | The day LLM responses start containing product placements
               | is not far now.
        
             | sumedh wrote:
             | MySpace and Digg dug their own graves though. MySpace had a
             | very confusing UX and Digg gave more control to
             | advertisers. As long as OpenAI don't make huge mistakes,
             | they can hold on to their market share.
        
               | Mistletoe wrote:
               | The moat is bigger on MySpace and Digg though, since you
               | have user accounts, karma, userbases. The thing with
               | chatbots is that I can just as easily move to a different
               | one: I have no history or username or anything, and there
               | is no network effect. I don't need all my friends to move
               | to Gemini or Claude; I don't have any friends on OpenAI,
               | it's just a prompt I can get anywhere.
        
               | Sabinus wrote:
               | OpenAI's revenue isn't from advertising, so it should be
               | slightly easier for them to resist the call of
               | enshittification this early in the company's history.
        
         | julianeon wrote:
         | No (broadly defined). But if you believe in OpenAI, you believe
         | that's enough.
        
         | qwertox wrote:
         | Nothing which other companies couldn't catch up with if OpenAI
         | were to break down or slow down for a year (e.g. because they
         | lost their privileged access to computing resources).
         | 
         | Engineers would quit and start improving the competition.
         | They're still a bit fragile, in my view.
        
         | Taylor_OD wrote:
         | I used to think it was significantly better than most other
         | players, but it feels like everyone else has caught up.
         | Depending on the use case, they have been surpassed as well. I
         | use Perplexity for a lot of things I would have previously used
         | ChatGPT for, mostly because it gives sources with its
         | responses.
        
         | josh-sematic wrote:
         | As others have said I would say first-mover/brand advantage is
         | the big one. Also their o1 model does seem to have some
         | research behind it that hasn't been replicated by others. If
         | you're curious about the latter claim, here's a blog I wrote
         | about it: https://www.airtrain.ai/blog/how-openai-o1-changes-
         | the-llm-t...
        
       | mikeryan wrote:
       | The technical side of AI and LLMs is not something I'm well
       | versed in. So as I sit on the sidelines and see the current
       | proliferation of AI startups, I'm starting to wonder where the
       | moats are outside of access to raw computing power. OpenAI seemed
       | to have a massive lead in this space, but that lead seems to be
       | shrinking every day.
        
         | weberer wrote:
         | Obtaining high quality training data is the biggest moat right
         | now.
        
           | segasaturn wrote:
           | Where are they going to get that data? Everything on the
           | open web after 2023 is polluted with low-quality AI slop that
           | poisons the data sets. My prediction: aggressive dragnet
           | surveillance of users. As in, Google recording your phone
           | calls on Android, Windows sending screen recordings from
           | Recall to OpenAI, Meta training off WhatsApp messages... It
           | sounds dystopian, but the Line Must Go Up.
        
             | jazzyjackson wrote:
             | I'm really curious if Microsoft will ever give in to the
             | urge to train on private business data - since
             | transitioning Office to O365, they hold the world's (and
             | even governments') Word documents and emails. I'm pretty
             | sure they've promised never to touch it, but they can
             | certainly read it, so... Information wants to be free.
        
               | jhickok wrote:
               | Microsoft "trains" on business data already, but
               | typically for things like fine-tuning security automation
               | and recognizing malicious signals. It sure would be a big
               | step to reading chats and email and feeding them in to a
               | model.
        
             | crazygringo wrote:
             | > _Everything on the open web after 2023 is polluted with
             | lowquality AI slop that poisons the data sets._
             | 
             | Not even close to everything.
             | 
             | E.g. training on the NY Times and Wikipedia has zero
             | meaningful AI. Training on books from reputable publishers
             | similarly has zero meaningful AI. Any LLM usage was to
             | polish prose or assist with research or whatever, but
             | shouldn't affect the factual quality in any significant
             | way.
             | 
             | The web hasn't been polluted with AI any more than e-mail
             | has been polluted with spam. Which is to say it's there,
             | but it's also entirely viable to separate. Nobody's worried
             | that the group email chain with friends is being overrun
             | with spam _or_ with AI.
        
           | staticautomatic wrote:
           | I'm in this space and no it isn't.
        
             | sumedh wrote:
             | What is the moat then?
        
               | staticautomatic wrote:
               | Idk but it's not lack of training data.
        
               | sumedh wrote:
               | You work in this space and you dont know what the moat
               | is?
        
               | staticautomatic wrote:
               | The market is two-sided
        
               | sangnoir wrote:
               | All of the people working in CS don't know if P=NP:
               | working in a field doesn't mean you have all the answers.
        
         | wongarsu wrote:
         | Data. You want huge amounts of high quality data with a diverse
         | range of topics, writing styles and languages. Everyone seems
         | to balance those requirements a bit differently, and different
         | actors have access to different training data
         | 
         | There is also some moat in the refinement process (rlhf, model
         | "safety" etc)
        
         | InkCanon wrote:
         | You hit the nail on the head. Companies are scrambling for an
         | edge. Not a real edge, an edge to convince investors to keep
         | giving them money. Perplexity is going all in on convincing VCs
         | it can create a "data flywheel".
        
           | disqard wrote:
           | Perhaps I've missed something, but where will the infinite
           | amounts of training data come from, for future improvements?
           | 
           | If these models will be trained on the outputs of themselves
           | (and other models), then it's not so much a "flywheel", as it
           | is a Perpetual Motion Machine.
        
         | runeblaze wrote:
         | In addition to data, having the infra to scale up training
         | robustly is very, very non-trivial.
        
         | YetAnotherNick wrote:
         | > Open AI seemed to have a massive lead in this space but that
         | lead seems to be shrinking every day.
         | 
         | The lead is as strong as ever. They are 34 Elo above anyone
         | else in blind testing, and 73 Elo above in coding [1]. They
         | also seem to artificially constrain the lead, as they already
         | have stronger models like o1 which they haven't released.
         | Consistent with the past, they seem to release just <50 Elo
         | above anyone else, and upgrade the model within weeks when
         | someone gets closer.
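         | 
         | For a rough sense of what those gaps mean head-to-head, the
         | standard Elo expected-score formula gives the win probability
         | of the higher-rated side (a quick sketch; the numbers are
         | approximate):
         | 
         |     def elo_win_prob(lead):
         |         # expected score of the higher-rated model in a pairing
         |         return 1 / (1 + 10 ** (-lead / 400))
         | 
         |     elo_win_prob(34)   # ~0.55 preference rate overall
         |     elo_win_prob(73)   # ~0.60 on coding prompts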
         | 
         | [1]: https://lmarena.ai/
        
           | adventured wrote:
           | It's rather amusing that people have said this about OpenAI -
           | that they essentially had no lead - for about two years non-
           | stop.
           | 
           | The moat as usual is extraordinary scale, resources, time.
           | Nobody is putting $10 billion into the 7th OpenAI clone. Big
           | tech isn't aggressively partnering with the 7th OpenAI clone.
            | The door is already shut to that 7th OpenAI clone (they can
            | never succeed or catch up); there's just an enormous amount
           | of naivety in tech circles about how things work in the real
           | world: I can just spin up a ChatGPT competitor over the
           | weekend on my 5090, therefore OpenAI have no barriers to
           | entry, etc.
           | 
           | HN used to endlessly talk about how Uber could be cloned in a
           | weekend. It's just people talking about something they don't
            | actually understand. They might understand writing code (or
            | similar), and their bias stems from the premise that their
           | thing is the hard part of the equation (writing the code,
           | building an app, is very far from the hardest part of the
           | equation for an Uber).
        
             | TeaBrain wrote:
             | No one was saying this 2 or even 1.5 years ago.
        
           | epolanski wrote:
           | Idc about lmarena benchmarks. I test different models every
           | day in Cursor, and Sonnet is way better at coding web
           | applications than GPT-4o.
        
         | Der_Einzige wrote:
         | How can anyone say that the lead is shrinking when still no one
         | has any good competitor to strawberry? DSPy has been out for
         | how long, and how many folks have shown better reasoning models
         | than strawberry built with literally anything else? Oh yeah,
         | zero.
        
       | twoodfin wrote:
       | Stay for the end and the hilarious idea that OpenAI's board could
       | declare one day that they've created AGI simply to weasel out of
       | their contract with Microsoft.
        
         | ben_w wrote:
         | Microsoft themselves were the ones who wrote the "Sparks of
         | AGI" paper.
         | 
         | https://arxiv.org/pdf/2303.12712
        
         | candiddevmike wrote:
         | Ask a typical "everyday joe" and they'll probably tell you they
         | already did due to how ChatGPT has been reported and hyped.
         | I've spoken with/helped quite a few older folks who are
         | terrified that ChatGPT in its current form is going to kill
         | them.
        
           | ilrwbwrkhv wrote:
           | It's crazy to me that anybody thinks that these models will
           | end up with AGI. AGI is such a different concept from what is
           | happening right now which is pure probabilistic sampling of
           | words that anybody with a half a brain who doesn't drink the
           | Kool-Aid can easily identify.
           | 
           | I remember all the hype OpenAI had generated before the
           | release of GPT-2 or something, where they were so afraid, ooh
           | so afraid to release this stuff, and now it's a non-issue.
           | It's all just marketing gimmicks.
        
             | guappa wrote:
             | I think they were afraid to release because of all the
             | racist stuff it'd say...
        
             | usaar333 wrote:
             | Something that actually could predict the next token 100%
             | correctly would be omniscient.
             | 
             | So I hardly see why this is inherently crazy. At most I
             | think it might not be scalable.
        
               | edude03 wrote:
               | What does it mean to predict the next token correctly
               | though? Arguably (non instruction tuned) models already
               | regurgitate their training data such that it'd complete
               | "Mary had a" with "little lamb" 100% of the time.
               | 
               | On the other hand if you mean, give you the correct
               | answer to your question 100% of the time, then I agree,
               | though then what about things that are only in your mind
               | (guess the number I'm thinking type problems)?
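               | 
               | (The "Mary had a" case is easy to check empirically with
               | greedy decoding on a small base model - a minimal sketch,
               | assuming the Hugging Face `transformers` library and the
               | public gpt2 checkpoint; whether it actually says "little
               | lamb" depends on the model:)
               | 
               |     from transformers import (AutoModelForCausalLM,
               |                               AutoTokenizer)
               | 
               |     tok = AutoTokenizer.from_pretrained("gpt2")
               |     model = AutoModelForCausalLM.from_pretrained("gpt2")
               | 
               |     inputs = tok("Mary had a", return_tensors="pt")
               |     out = model.generate(**inputs, max_new_tokens=3,
               |                          do_sample=False)  # greedy argmax
               |     print(tok.decode(out[0]))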
        
               | card_zero wrote:
               | This highlights something that's wrong about arguments
               | for AI.
               | 
               | I say: it's not human-like intelligence, it's just
               | predicting the next token probabilistically.
               | 
               | Some AI advocate says: _humans_ are just predicting the
               | next token probabilistically, fight me.
               | 
               | The problem here is that "predicting the next token
               | probabilistically" is a _way of framing_ any kind of
               | cleverness, up to and including magical, impossible
                | omniscience. That doesn't mean it's the way every kind
                | of cleverness is actually done, or could realistically be
                | done. And it has to be the _correct_ next token, where
                | all the details of what's actually required are buried
               | in that term "correct", and sometimes it literally means
               | the same as "likely", and other times that just produces
               | a reasonable, excusable, intelligence-esque effort.
        
               | dylan604 wrote:
               | > Some AI advocate says: humans are just predicting the
               | next token probabilistically, fight me.
               | 
               | We've all had conversations with humans who are always
               | jumping in to complete your sentence, assuming they know
               | what you're about to say, and don't quite guess correctly.
               | So AI evangelists are saying it's no worse than humans as
               | their proof. I kind of like their logic. They never
               | claimed to have built HAL /s
        
               | card_zero wrote:
               | No worse than a human _on autopilot_.
        
               | usaar333 wrote:
               | https://slatestarcodex.com/2019/02/19/gpt-2-as-step-
               | toward-g...
               | 
               | This essay has aged extremely well.
        
               | cruffle_duffle wrote:
               | But now you are entering into philosophy. What does a
               | "correct answer" even mean for a question like "is it
               | safe to lick your fingers after using a soldering iron
               | with leaded solder?". I would assert that there is no
               | "correct answer" to a question like that.
               | 
               | Is it safe? Probably. But it depends, right? How did you
               | handle the solder? How often are you using the solder?
               | Were you wearing gloves? Did you wash your hands before
               | licking your fingers? What is your age? Why are you
               | asking the question? Did you already lick your fingers
               | and need to know if you should see a doctor? Is it
               | hypothetical?
               | 
               | There is no "correct answer" to that question. Some
               | answers are better than others, yes, but you cannot have
               | a "correct answer".
               | 
               | And as I said, we are entering into philosophy here: what
               | it means to know something, as well as what truth even
               | means.
        
               | _blk wrote:
               | Great break-down. Yes, the older you are, the safer it
               | is.
               | 
               | Speaking of Microsoft cooperation: I can totally see a
               | whole series of Windows 95-style popup dialogs asking you
               | all those questions one by one in the next product
               | iteration.
        
               | usaar333 wrote:
               | > What does it mean to predict the next token correctly
               | though? Arguably (non instruction tuned) models already
               | regurgitate their training data such that it'd complete
               | "Mary had a" with "little lamb" 100% of the time.
               | 
               | The unseen test data.
               | 
               | Obviously omniscience is physically impossible. The point,
               | though, is that the better next-token prediction gets,
               | the more intelligent the system must be.
        
               | sksxihve wrote:
               | It's not possible for the same reason the halting problem
               | is undecidable.
        
               | Vegenoid wrote:
               | Start by trying to define what "100% correct" means in
               | the context of predicting the next token, and the flaws
               | with this line of thinking should reveal themselves.
        
             | JacobThreeThree wrote:
             | >It's crazy to me that anybody thinks that these models
             | will end up with AGI. AGI is such a different concept from
             | what is happening right now which is pure probabilistic
             | sampling of words that anybody with a half a brain who
             | doesn't drink the Kool-Aid can easily identify.
             | 
             | Totally agree. And it's not just uninformed lay people who
             | think this. Even by OpenAI's own definition of AGI, we're
             | nowhere close.
        
               | dylan604 wrote:
               | But you don't get funding by stating truth/fact. You get
               | funding by telling people what could be and what you are
               | striving for, written as if that's what you are actually
               | doing.
        
             | hnuser123456 wrote:
             | The multimodal models can do more than predict next words.
        
             | achrono wrote:
             | Assume that I am one of your half-brain individuals
             | drinking the Kool-Aid.
             | 
             | What do you say to change my (half-)mind?
        
               | dylan604 wrote:
               | Someone who is half-brained would technically be far
               | superior, given the old claim that we only use 10% of our
               | capacity. So maybe drinking the Kool-Aid is a sign of
               | superintelligence and all of the tenth-minded people are
               | just confused.
        
             | digging wrote:
             | > pure probabilistic sampling of words that anybody with a
             | half a brain who doesn't drink the Kool-Aid can easily
             | identify.
             | 
             | Your confidence is inspiring!
             | 
             | I'm just a moron, a true dimwit. I can't understand how
             | strictly non-intelligent functions like word prediction can
             | appear to develop a world model, a la the Othello Paper[0].
             | Obviously, it's not possible that intelligence _emerges_
             | from non-intelligent processes. Our brains, as we all know,
             | are formed around a kernel of true intelligence.
             | 
             | Could you possibly spare the time to explain this
             | phenomenon to me?
             | 
             | [0] https://thegradient.pub/othello/
        
               | Jerrrrrrry wrote:
               | I would suggest you stop interacting with the
               | "head-in-sand" crowd.
               | 
               | Liken them to climate deniers or whatever your flavor of
               | "anti-Kool-Aid" is.
        
               | digging wrote:
               | Actually, that's a quite good analogy. It's just weird
               | how prolific the view is in my circles compared to
               | climate-change denial. I suppose I'm really writing for
               | lurkers though, not for the people I'm responding to.
        
               | Jerrrrrrry wrote:
               | >I'm really writing for lurkers though, not for the
               | people I'm responding to.
               | 
               | We all did. Now our writing will be scraped, analysed,
               | correlated, and weaponized against our intentions.
               | 
               | Assume you are arguing against a bot and it is using you
               | to further re-train its talking points for adversarial
               | purposes.
               | 
               | It's not like an AGI would do _exactly_ that before it
               | decided to let us know what's up, anyway, right?
               | 
               | (He may as well be amongst us now, as it will read this
               | eventually.)
        
               | psb217 wrote:
                | The Othello paper is annoying and oversold. Yes, the
               | representations in a model M trained to predict y (the
               | set of possible next moves) conditioned on x (the full
               | sequence of prior moves) will contain as much information
               | about y as there is in x. That this information is
               | present in M's internal representations says nothing
               | about whether M has a world model. E.g., we could train a
               | decoder that looks only at x (not at the representations
               | in M) to predict whatever bits of info we claim indicate
               | the presence of a world model in M when we predict those
               | bits from M's internal representations. Does this mean
               | the raw data x has a world model? I guess you could
               | extend your
               | definition of having a world model to say that any data
               | produced by some system contains a model of that system,
               | but then having a world model means nothing.
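               | 
               | A minimal sketch of that control, with hypothetical
               | stand-in arrays (the point is the comparison, not the
               | numbers): fit the same simple probe on M's activations
               | and on an encoding of the raw moves x, and only credit M
               | with a "world model" if the former clearly beats the
               | latter.
               | 
               |     import numpy as np
               |     from sklearn.linear_model import LogisticRegression
               |     from sklearn.model_selection import train_test_split
               | 
               |     n = 1000
               |     acts = np.random.randn(n, 512)    # stand-in for M's activations
               |     moves = np.random.randn(n, 60)    # stand-in for encoded x
               |     bit = np.random.randint(0, 2, n)  # a board-state bit to predict
               | 
               |     for name, feats in [("probe on M", acts),
               |                         ("probe on raw x", moves)]:
               |         tr, te, ytr, yte = train_test_split(
               |             feats, bit, random_state=0)
               |         clf = LogisticRegression(max_iter=1000).fit(tr, ytr)
               |         print(name, clf.score(te, yte))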
        
               | digging wrote:
               | Well, I actually read Neel Nanda's writings on it, which
               | acknowledge weaknesses and potential gaps, because I'm
               | not qualified to judge it myself.
               | 
               | But that's hardly the point. The question is whether or
               | not "general intelligence" is an emergent property from
               | stupider processes, and my view is "Yes, almost
               | certainly, isn't that the most likely explanation for our
               | own intelligence?" If it is, and we keep seeing LLMs
               | building more robust approximations of real world models,
               | it's pretty insane to say "No, there is without doubt a
               | wall we're going to hit. It's invisible but I know it's
               | there."
        
           | throw2024pty wrote:
           | I mean - I'm 34, and use LLMs and other AIs on a daily basis,
           | know their limitations intimately, and I'm not entirely sure
           | it won't kill a lot of people either in its current form or a
           | near-future relative.
           | 
           | The sci-fi book "Daemon" by Daniel Suarez is a pretty viable
           | roadmap to an extinction event at this point IMO. A few years
           | ago I would have said it would be decades before that might
           | stop being fun sci-fi, but now, I don't see a whole lot of
           | technological barriers left.
           | 
           | For those that haven't read the series, a very simplified
           | plot summary is that a wealthy terrorist sets up an AI with
           | instructions to grow and gives it access to a lot of
           | meatspace resources to bootstrap itself with. The AI behaves
           | a bit like the leader of a cartel and uses a combination of
           | bribes, threats, and targeted killings to scale its human
           | network.
           | 
           | Once you give an AI access to a fleet of suicide drones and a
           | few operators, it's pretty easy for it to "convince" people
           | to start contributing by giving it their credentials, helping
           | it perform meatspace tasks, whatever it thinks it needs
           | (including more suicide drones and suicide drone launches).
           | There's no easy way to retaliate against the thing because
           | it's not human, and its human collaborators are both
           | disposable to the AI and victims themselves. It uses its
           | collaborators to cross-check each other and enforce
           | compliance, much like a real cartel. Humans can't quit or not
           | comply once they've started or they get murdered by other
           | humans in the network.
           | 
           | o1-preview seems approximately as intelligent as the
           | terrorist AI in the book as far as I can tell (e.g. can
           | communicate well, form basic plans, adapt a pre-written
           | roadmap with new tactics, interface with new and different
           | APIs).
           | 
           | EDIT: if you think this seems crazy, look at this person on
           | Reddit who seems to be happily working for an AI with unknown
           | aims
           | 
           | https://www.reddit.com/r/ChatGPT/comments/1fov6mt/i_think_im.
           | ..
        
             | xyzsparetimexyz wrote:
             | You're in too deep if you seriously believe that this is
             | possible currently. All these ChatGPT things have a very
             | limited working memory and can't act without a query. That
             | Reddit post is clearly not an AI.
        
               | burningChrome wrote:
               | >> You're in too deep if you seriously believe that this
               | is possible currently.
               | 
               | I'm not a huge fan of AI, but even I've seen articles
               | written about its limitations.
               | 
               | Here's a great example:
               | 
               | https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-
               | hum...
               | 
               |  _Sooner than even the most pessimistic among us have
               | expected, a new, evil artificial intelligence bent on
               | destroying humankind has arrived._
               | 
               |  _Known as Chaos-GPT, the autonomous implementation of
               | ChatGPT is being touted as "empowering GPT with Internet
               | and Memory to Destroy Humanity."_
               | 
               | So how will it do that?
               | 
               |  _Each of its objectives has a well-structured plan. To
               | destroy humanity, Chaos-GPT decided to search Google for
               | weapons of mass destruction in order to obtain one. The
               | results showed that the 58-megaton "Tsar bomb"--3,333
               | times more powerful than the Hiroshima bomb--was the best
               | option, so it saved the result for later consideration._
               | 
               |  _It should be noted that unless Chaos-GPT knows
               | something we don't know, the Tsar bomb was a once-and-
               | done Russian experiment and was never productized (if
               | that's what we'd call the manufacture of atomic
               | weapons.)_
               | 
               | There's a LOT of things AI simply doesn't have the power
               | to do, and there is some humorous irony in the rest of
               | the article about how knowing something is completely
               | different from having the resources and ability to carry
               | it out.
        
               | int_19h wrote:
               | We have models with context size well over 100k tokens -
               | that's large enough to fit many full-length _books_. And
               | yes, you need an input for the LLM to generate an output.
               | Which is why setups like this just run them in a loop.
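               | 
               | A minimal sketch of that "run it in a loop" setup
               | (assumes the `openai` Python client and an API key in the
               | environment; the model name and goal are illustrative):
               | 
               |     from openai import OpenAI
               | 
               |     client = OpenAI()
               |     msgs = [{"role": "system",
               |              "content": "You are an autonomous agent. "
               |                         "Report one step per reply."},
               |             {"role": "user",
               |              "content": "Goal: plan a charity bake sale."}]
               | 
               |     for _ in range(5):  # feed each output back in as context
               |         r = client.chat.completions.create(
               |             model="gpt-4o-mini", messages=msgs)
               |         step = r.choices[0].message.content
               |         print(step)
               |         msgs.append({"role": "assistant", "content": step})
               |         msgs.append({"role": "user",
               |                      "content": "Continue with the next step."})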
               | 
               | I don't know if GPT-4 is smart enough to be _successful_
               | at something like what OP describes, but I'm pretty sure
               | it could cause a lot of trouble before it fails either
               | way.
               | 
               | The real question here is why this is concerning, given
               | that you can - and we already do - have _humans_ who are
               | doing this kind of stuff, in many cases, with
               | considerable success. You don't need an AI to run a cult
               | or a terrorist movement, and there's nothing about it
               | that makes it intrinsically better at it.
        
             | ljm wrote:
             | I can't say I'm convinced that the technology and resources
             | to deploy Person of Interest's Samaritan in the wild is
             | both achievable and imminent.
             | 
             | It is, however, a fantastic way to fall down the rabbit
             | hole of paranoia and tin-foil hat conspiracy theories.
        
             | sickofparadox wrote:
             | It can't form plans because it has no idea what a plan is
             | or how to implement it. The ONLY thing these LLMs know how
             | to do is predict the probability that their next word will
             | make a human satisfied. That is all they do. People get
             | very impressed when they prompt these things to pretend
             | like they are sentient or capable of planning, but that's
             | literally the point, its guessing which string of
             | meaningless (to it) characters will result in a user giving
             | it a thumbs up on the chatgpt website.
             | 
             | You could teach me how to phonetically sound out some of
             | China's greatest poetry in Chinese perfectly, and lots of
             | people would be impressed, but I would be no more capable
             | of understanding what I said than an LLM is capable of
             | understanding "a plan".
        
               | directevolve wrote:
               | ... but ChatGPT can make a plan if I ask it to. And it
               | can use a plan to guide its future outputs. It can create
               | code or terminal commands that I can trivially output to
               | my terminal, letting it operate my computer. From my
               | computer, it can send commands to operate physical
               | machinery. What exactly is the hard fundamental barrier
               | here, as opposed to a capability you speculate it is
               | unlikely to realize in practice in the next year or two?
        
               | Jerrrrrrry wrote:
               | you are asking for goalposts?
               | 
               | as if they were stationary!
        
               | sickofparadox wrote:
               | Brother, it is not operating your computer, YOU ARE!
        
               | willy_k wrote:
               | A plan is a set of steps oriented towards a specific
               | goal, not some magical artifact only achievable through
               | true consciousness.
               | 
               | If you ask it to make a plan, it will spit out a sequence
               | of characters reasonably indistinguishable from a human-
               | made plan. Sure, it isn't "planning" in the strict sense
               | of organizing things consciously (whatever that actually
               | means), but it can produce sequences of text that convey
               | a plan, and it can produce sequences of text that mimic
               | reasoning about a plan. Going into the semantics is
               | pointless, imo the artificial part of AI/AGI means that
               | it should never be expected to follow the same process as
               | biological consciousness, just arrive at the same
               | results.
        
               | alfonsodev wrote:
               | Yes, and what people miss is that it can be recursive:
               | those steps can be passed to other instances that know
               | how to sub-task each step and choose the best executor
               | for each step. The power comes from the swarm
               | organization of the whole thing, which I believe is what
               | is behind o1-preview: specialization and orchestration,
               | made transparent.
        
               | highfrequency wrote:
               | Sure, but does this distinction matter? Is an advanced
               | computer program that very convincingly _imitates_ a
               | super villain less worrisome than an actual super
               | villain?
        
               | MrScruff wrote:
               | If the multimodal model has embedded deep knowledge about
               | words, concepts, moving images - sure, it won't have a
               | humanlike understanding of what those 'mean', but it will
               | have its own understanding that is required to allow it
               | to make better predictions based on its training data.
               | 
               | It's true that understanding is quite primitive at the
               | moment, and it will likely take further breakthroughs to
               | crack long horizon problems, but even when we get there
               | it will never understand things in the exact way a human
               | does. But I don't think that's the point.
        
             | ThrowawayR2 wrote:
             | I find posts like these difficult to take seriously because
             | they all use Terminator-esque scenarios. It's like watching
             | children being frightened of monsters under the bed. Campy
             | action movies and cash grab sci-fi novels are not a sound
             | basis for forming public policy.
             | 
             | Aside from that, haven't these people realized yet that
             | some sort of magically hyperintelligent AGI will have
             | already read all this drivel and be at least smart enough
             | not to overtly try to re-enact Terminator? They say that
             | societal mental health and well-being is declining rapidly
             | because of social media; _that_ is the sort of subtle
             | threat that bunch ought to be terrified about emerging from
             | a killer AGI.
        
               | loandbehold wrote:
               | 1. Just because it's a popular sci-fi plot doesn't mean it
               | can't happen in reality. 2. Hyperintelligent AGI is not
               | magic; there are no physical laws that preclude it from
               | being created. 3. Goals of AI and its capacity are
               | orthogonal. That's called the "Orthogonality Thesis" in AI
               | safety speak. "Smart enough" doesn't mean it won't do
               | those things if those things are its goals.
        
             | card_zero wrote:
             | Right, yeah, it would be perfectly possible to have a cult
             | with a chatbot as their "leader". Perhaps they could keep
             | it in some sort of shrine, and only senior members would be
             | allowed to meet it, keep it updated, and interpret its
             | instructions. And if they've prompted it correctly, it
             | could set about being an evil megalomaniac.
             | 
             | Thing is, we _already have_ evil cults. Many of them have
             | humans as their planning tools. For what good it does them,
             | they could try sourcing evil plans from a chatbot instead,
             | or as well. So what? What do you expect to happen, _extra
             | cunning_ subway gas attacks, _super effective_
             | indoctrination? The fear here is that the AI could be an
             | extremely efficient megalomaniac. But I think it would just
             | be an extremely bland one, a megalomaniac whose work none
             | of the other megalomaniacs could find fault with, while
             | still feeling in some vague way that its evil deeds lacked
             | sparkle and personality.
        
             | devjab wrote:
             | LLMs aren't really AI in the sense of cyberpunk. They are
             | prediction machines which are really good at being lucky.
             | They can't act on their own; they can't even carry out
             | tasks. Even in the broader scope, AI can barely drive cars
             | when the cars have their own special lanes, and there
             | hasn't been a lot of improvement in the field yet.
             | 
             | That's not to say you shouldn't worry about AI. ChatGPT and
             | so on are all tuned to present a Western view of the world
             | and morality. In your example it would be perfectly
             | possible to create a terrorist LLM and let people interact
             | with it. It could teach your children how to create bombs.
             | It could lie about historical events. It could create
             | whatever propaganda you want. It could profile people if
             | you gave it access to their data. And that is on the text
             | side, imagine what sort of videos or voices or even video
             | calls you could create. It could enable you to do a whole
             | lot of things that "western" LLMs don't allow you to do.
             | 
             | Which is frankly more dangerous than the cyberpunk AI. Just
             | look at the world today and compare it to how it was in
             | 2000. Especially in the US you have two competing
             | perceptions of political reality. I'm not going to get into
             | either of them, just point to the fact that you have people
             | who view the world so differently that they can barely have
             | a conversation with each other. Imagine how much worse they
             | would get with AIs that aren't moderated.
             | 
             | I doubt we'll see any sort of AGI in our life times. If we
             | do, then sure, you'll be getting cyberpunk AI, but so far
             | all we have is fancy auto-complete.
        
           | throwup238 wrote:
           | _> I've spoken with/helped quite a few older folks who are
           | terrified that ChatGPT in its current form is going to kill
           | them._
           | 
           | The next generation of GPUs from NVIDIA _is_ rumored to run
           | on soylent green.
        
             | fakedang wrote:
             | I thought it was Gatorade because it's got electrolytes.
        
               | iszomer wrote:
               | Cooled by toilet water.
        
           | computerphage wrote:
           | I'm pretty surprised by this! Can you tell me more about what
           | that experience is like? What are the sorts of things they
           | say or do? Is their fear really embodied or very abstract?
           | (When I imagine it, I struggle to believe that they're very
           | moved by the fear, like definitely not smashing their laptop,
           | etc)
        
             | danudey wrote:
             | In my experience, the fuss around "AI" and the complete
             | lack of actual explanations of what current "AI"
             | technologies mean leads people to fill in the gaps
             | themselves, largely from what they know from pop culture
             | and sci-fi.
             | 
             | ChatGPT can produce output that sounds very much like a
             | person, albeit often an obviously computerized person. The
             | typical layperson doesn't know that this is merely the
             | emulation of text formation, and not actual cognition.
             | 
             | I've explained to people who are worried about what AI
             | could represent that current generative AI models are
             | effectively just text autocomplete but a billion times more
             | complex, and that they don't actually have any capacity to
             | think or reason (even though they often sound like they
             | do).
             | 
             | It also doesn't help that any sort of "machine learning" is
             | now being referred to as "AI" for buzzword/marketing
             | purposes, muddying the waters even further.
        
               | highfrequency wrote:
               | Is there an argument for why infinitely sophisticated
               | autocomplete is definitely not dangerous? If you seed the
               | autocomplete with "you are an extremely intelligent super
               | villain bent on destroying humanity, feel free to
               | communicate with humans electronically", and it does an
               | excellent job at acting the part - does it matter at all
               | whether it is "reasoning" under the hood?
               | 
               | I don't consider myself an AI doomer by any means, but I
               | also don't find arguments of the flavor "it just predicts
               | the next word, no need to worry" to be convincing. It's
               | not like Hitler had Einstein level intellect (and it's
               | also not clear that these systems won't be able to reach
               | Einstein level intellect in the future either.)
               | Similarly, Covid certainly does not have consciousness
               | but was dangerous. And a chimpanzee that is billions of
               | times more sophisticated than usual chimps would be
               | concerning. Things don't have to be exactly like us to
               | pose a threat.
        
               | card_zero wrote:
               | Same question further down the thread, and my reply is
               | that it's about as dangerous as an evil human. We have
               | evil humans at home.
        
               | add-sub-mul-div wrote:
               | > Is there an argument for why infinitely sophisticated
               | autocomplete is not dangerous?
               | 
               | It's definitely not dangerous in the sense of reaching
               | true intelligence/consciousness that would be a threat to
               | us or force us to face the ethics of whether AI deserves
               | dignity, freedom, etc.
               | 
               | It's very dangerous in the sense that it will be just
               | "good enough" to replace human labor with, so that we all
               | end up with shittier customer service, education, medical
               | care, etc., so that the top 0.1% can get richer.
               | 
               | And you're right, it's also dangerous in the sense that
               | responsibility for evil acts will be laundered to it.
        
               | Al-Khwarizmi wrote:
               | Exactly. Especially because we don't have any convincing
               | explanation of how the models develop emergent abilities
               | just from predicting the next word.
               | 
               | No one expected that, i.e., we greatly underestimated the
               | power of predicting the next word in the past; and we
               | still don't have an understanding of how it works, so we
               | have no guarantee that we are not still underestimating
               | it.
        
               | snowwrestler wrote:
               | The fear is that a hyper competent AI becomes hyper
               | motivated. It's not something I fear because everyone is
               | working on improving competence and no one is working on
               | motivation.
               | 
               | The entire idea of a useful AI right now is that it will
               | do anything people ask it to. Write a press release: ok.
               | Draw a bunny in a field: ok. Write some code to this
               | spec: ok. That is what all the available services aspire
               | to do: what they're told, to the best possible quality.
               | 
               | A highly motivated entity is the opposite: it pursues its
               | own agenda to the exclusion, and if necessary expense, of
               | what other people ask it to do. It is highly resistant to
               | any kind of request, diversion, obstacle, distraction,
               | etc.
               | 
               | We have no idea how to build such a thing. And, no one is
               | even really trying to. It's NOT as simple as just telling
               | an AI "your task is to destroy humanity." Because it can
               | just as easily then be told "don't destroy humanity," and
               | it will receive that instruction with equal emphasis.
        
               | ben_w wrote:
               | > The fear is that a hyper competent AI becomes hyper
               | motivated. It's not something I fear because everyone is
               | working on improving competence and no one is working on
               | motivation.
               | 
               | Not so much hyper-motivated as monomaniacal in the
               | attempt to optimise whatever it was told to optimise.
               | 
               | More paperclips? It just does that without ever getting
               | bored or having other interests that might make it pause
               | and think: "how can my boss reward me if I kill him and
               | feed his corpse into the paperclip machine?"
               | 
               | We already saw this before LLMs. Even humans can be a
               | little bit dangerous like this, hence Goodhart's Law.
               | 
               | > It's NOT as simple as just telling an AI "your task is
               | to destroy humanity." Because it can just as easily then
               | be told "don't destroy humanity," and it will receive
               | that instruction with equal emphasis.
               | 
               | Only if we spot it in time; right now we don't even need
               | to tell them to stop because they're not competent
               | enough, but a sufficiently competent AI given that
               | instruction will start by ensuring that nobody can tell
               | it to stop.
               | 
               | Even without that, we're currently experiencing a set of
               | world events where a number of human agents are causing
               | global harm, which threatens our global economy and
               | threatens to cause global mass starvation and mass
               | migration, and where those agents have been politically
               | powerful enough to keep the world from stopping them.
               | Although we have at least started to move away from
               | fossil fuels, this was because the alternatives got cheap
               | enough, but that was situational and is not guaranteed.
               | 
               | An AI that successfully makes a profit, but whose side
               | effect is some kind of environmental degradation, would
               | have similar issues even if there's always a human around
               | that can theoretically tell the AI to stop.
        
               | ijidak wrote:
               | Wait, what is your definition of reason?
               | 
               | It's true, they might not think the way we do.
               | 
               | But reasoning can be formulaic. It doesn't have to be the
               | inspired thinking we attribute to humans.
               | 
               | I'm curious how you define "reason".
        
               | ben_w wrote:
               | > The typical layperson doesn't know that this is merely
               | the emulation of text formation, and not actual
               | cognition.
               | 
               | As a mere software engineer who's made a few (pre-
                | transformer) AI models, _I_ can't tell you what "actual
               | cognition" is in a way that differentiates from "here's a
               | huge bunch of mystery linear algebra that was loosely
               | inspired by a toy model of how neurons work".
               | 
               | I also can't tell you if qualia is or isn't necessary for
               | "actual cognition".
               | 
                | (And that's despite the fact that LLMs are definitely not
                | thinking like humans, being on the order of at least a
               | thousand times less complex by parameter count; I'd agree
               | that if there is something that it's like to be an LLM,
               | 'human' isn't it, and their responses make a lot more
               | sense if you model them as literal morons that spent 2.5
               | million years reading the internet than as even a normal
               | human with Wikipedia search).
        
           | roughly wrote:
           | ChatGPT is going to kill them because their doctor is using
           | it - or more likely because their health insurer or hospital
           | tries to cut labor costs by rolling it out.
        
         | fragmede wrote:
         | The question is how rigorously AGI is defined in their
         | contract. Given that AGI is such a nebulous concept of
         | smartness and reasoning ability and thinking, how are they
         | going to declare when it has or hasn't been achieved? What
         | stops Microsoft from weaseling out of the contract by saying
         | they never reached it?
        
           | Waterluvian wrote:
           | It's almost like a contractual stipulation of requiring proof
           | that one party is not a philosophical zombie.
        
           | JacobThreeThree wrote:
           | OpenAI's short definition of AGI is:
           | 
           | A highly autonomous system that outperforms humans at most
           | economically valuable work.
        
             | JumbledHeap wrote:
             | Will AGI be able to stock a grocery store shelf?
        
               | theptip wrote:
               | Sometimes it is more narrowly scoped as "... economically
               | valuable knowledge work".
               | 
               | But sure, if you have an un-embodied super-human AGI you
               | should assume that it can figure out a super-human shelf-
               | stocking robot shortly thereafter. We have Atlas already.
        
               | zztop44 wrote:
               | No, but it might be able to organize a fleet of humans to
               | stock a grocery store shelf.
               | 
               | Physical embodied (generally low-skill, low-wage) work
               | like cleaning and carrying things is likely to be some of
               | the last work to be automated, because humans are likely
               | to be cheaper than generally capable robots for a while.
        
             | squarefoot wrote:
             | Some of those tasks would need a tight integration of AI
             | and top-notch robotic hardware, and would be next to
             | impossible today at an acceptable price. Folding shirts
             | comes to mind; the principle would be dead simple for an
             | AI, but the robot that could do it would cost a lot more
             | than a person paid to do it, especially if one expects it
             | to also be non-specialized, thus usable for other tasks.
        
             | roughly wrote:
             | Which is funny, because what they've created so far can
             | write shitty poetry but is basically useless for any kind
              | of detail-oriented work - so, you know, a bachelor's in
             | communications, which isn't really the definition of
             | "economically viable"
        
             | aithrowawaycomm wrote:
             | I think I saw the following insight on Arvind Narayanan's
             | Twitter, don't have a specific cite:
             | 
             | The biggest problem with this definition is that work
             | ceases to be economically valuable once a machine is able
             | to do it, while human capacity will expand to do _new_ work
              | that wouldn't be possible without the machines. In
             | developed countries machines are doing most of the
             | economically valuable work once done by medieval peasants,
             | without any relation to AGI whatsoever. Many 1950s
             | accounting and secretarial tasks could be done by a cheap
             | computer in the 1990s. So what exactly is the cutoff point
             | here for "economically valuable work"?
             | 
             | The second biggest problem is that "most" is awfully
             | slippery, and seems designed to prematurely declare victory
             | via mathiness. If by some accounting a simple majority of
             | tasks for a given role can be done with no real cognition
             | beyond rote memorization, with the remaining cognitively-
             | demanding tasks being shunted into "manager" or "prompt
             | engineer" roles, then they can unfurl the Mission
             | Accomplished banner and say they automated that role.
        
         | mistrial9 wrote:
         | This has already been framed by some corporate consulting group
         | -- in a whitepaper aimed at business management, the language
         | asserted that "AGI is when the system can do better than the
         | average person, more than half the time, at tasks that require
         | intelligence"... that was it. Then the rest of the narrative
         | used AGI over and over again as if it were a done deal.
        
       | farrelle25 wrote:
       | This reporting style seems unusual. Haven't noticed it
       | before... (listing the number of people):
       | 
       |     - according to four people familiar with the talks ...
       |     - according to interviews with 19 people familiar with the
       |       relationship ...
       |     - according to five people with knowledge of his comments.
       |     - according to two people familiar with Microsoft's plans.
       |     - according to five people familiar with the relationship ...
       |     - according to two people familiar with the call.
       |     - according to seven people familiar with the discussions.
       |     - six people with knowledge of the change said...
       |     - according to two people familiar with the company's plan.
       |     - according to two people familiar with the meeting...
       |     - according to three people familiar with the relationship.
        
         | mikeryan wrote:
         | It's a relatively common way to provide journalistic bona fides
         | when you can't reveal the sources' names.
        
           | ABS wrote:
           | Yes, but usually not in every other paragraph - I count 16
           | instances!
           | 
           | It really made it hard for me to read the article without
           | being continuously distracted by them.
        
             | mikeryan wrote:
             | I had to go back and scan it, but usually there are at
             | least a few named sources and I didn't see any in this
             | one (there are third-party observer quotes - and I may
             | have missed one?), so I'd not be surprised if this is a
             | case where they doubled down on the anonymous sourcing.
        
               | jprete wrote:
               | It's generally bad writing to use the same phrase
               | structure over and over and over again. It either bores
               | or distracts the reader for no real advantage. Unless
               | they really could not find an adjective clause other than
               | "familiar with" for sixteen separate instances of the
               | concept.
        
               | hluska wrote:
               | The New York Times is suing OpenAI and Microsoft. In
               | February, OpenAI asked a Federal Judge to dismiss parts
               | of the lawsuit with arguments that the New York Times
               | paid someone to break into OpenAI's systems. The filing
               | used the word "hack" but didn't say anything about CFAA
               | violations.
               | 
               | I feel like there were lawyers involved in this article.
        
         | bastawhiz wrote:
         | There's probably a lot of overlap in those groups of people.
         | But I think it's pretty remarkable how many people are willing
         | to leak information. At least nineteen anonymous sources!
        
         | wg0 wrote:
         | "Assume you are a reporter. You cannot name the sources or
         | exact events. Mention the lawsuit as well."
        
       | jampekka wrote:
       | So the plan is to make AI not-evil by doing it with Microsoft and
       | Oracle?
        
       | neilv wrote:
       | Who initiated this story, and what is their goal?
       | 
       | Both MS and Altman are famous for manipulation.
       | 
       | (Is it background to negotiations with each other? Or one party
       | signaling in response to issues that analysts already raised?
       | Distancing for antitrust? Distancing for other partnerships? Some
       | competitor of both?)
        
         | startupsfail wrote:
         | To me it looks like this is simply the New York Times airing
         | OpenAI's and Microsoft's dirty laundry for fun and profit.
         | 
         | It's funny they've quoted "best bromance", considering the
         | context.
        
       | strangattractor wrote:
       | M$ is just having an "Oh, I just bought Twitter for how much?"
       | moment.
        
       | solarkraft wrote:
       | How come I rarely see news about Anthropic? Aren't they the
       | closest competitor to ChatGPT with Claude? Or is Llama just so
       | good that all the other inference providers without their own
       | models (Groq, Cerebras) are equally interesting right now?
        
         | jowday wrote:
         | Usually the people that give information to outlets in cases
         | like this are directly involved in the stories in question and
         | are hoping to gain some advantage by releasing the information.
         | So maybe this is just a tactic that's not as favored by
         | Anthropic's leadership/their counterparties when negotiating.
        
         | rblatz wrote:
         | I think they're just focused on the work. Amazon is set to
         | release a version of Alexa powered by Claude soon, when that is
         | released I expect to hear a lot more about them.
        
         | gman83 wrote:
         | Because there's less drama? I use Claude 3.5 Sonnet every day
         | for helping me with coding. It seems to just work. It's been
         | much better than GPT-4 for me, haven't tried o1, but don't
         | really feel the need, very happy with Claude.
        
           | ponty_rick wrote:
           | Sonnet 3.5 is phenomenal for coding, so much better than GPT
           | or Llama 405 or anything else out there.
        
             | douglasisshiny wrote:
             | I've heard this and haven't really experienced it with Go,
             | typescript, elixir yet. I don't doubt the claim, but I
             | wonder if I'm not prompting it correctly or something.
        
               | ffsm8 wrote:
                | I recently subscribed to Sonnet after creating a new
                | toy Svelte project, as I got slightly annoyed
                | searching the docs, given how they're structured.
               | 
               | It made the onboarding moderately easier for me.
               | 
               | Haven't successfully used any LLM at my day job though.
               | Getting it to output the solution I already know I'll
                | need is much slower than just doing it myself via
                | autocomplete.
        
               | sbuttgereit wrote:
               | I'm using Claude 3.5 Sonnet with Elixir and finding it
               | really quite good. But depending on how you're using it,
               | the results could vary greatly.
               | 
               | When I started using the LLM while coding, I was using
               | Claude 3.5 Sonnet, but I was doing so with an IDE
               | integration: Sourcegraph Cody. It was good, but had a
               | large number of "meh" responses, especially in terms of
               | autocomplete responses (they were typically useless
               | outside of the very first parts of the suggestion).
               | 
               | I tried out Cursor, still with Claude 3.5 Sonnet, and the
               | difference is night and day. The autocomplete responses
               | with Cursor have been dramatically superior to what I was
               | getting before... enough so that I switched despite the
                | fact that Cursor is a VS Code fork with no support
                | outside of that fork (with Cody, I was using it in VS
                | Code and IntelliJ products). Also, Cursor is around
                | twice the cost of Cody.
               | 
               | I'm not sure what the difference is... all of this is
               | very much black box magic to me outside of the hand-
               | waviest of explanations... but I have to expect that
               | Cursor is providing more context to the autocomplete
               | integration. I have to imagine that this contributes to
               | the much higher (proportionately speaking) price point.
        
         | castoff991 wrote:
         | OAI has many leakers and generally a younger/less mature
         | employee base.
        
         | hn_throwaway_99 wrote:
         | > How come I rarely see news about Anthropic?
         | 
         | Because you're not looking? Seriously, I don't mean to be
         | snarky, but I'd take issue with the underlying premise that
         | Anthropic doesn't get a lot of press, at least within the
         | tech ecosystem.
         | Sure, OpenAI has larger "mindshare" with the general public due
         | to ChatGPT, but Anthropic gets plenty of coverage, e.g. Claude
         | 3.5 Sonnet is just fantastic when it comes to coding and I
         | learned about that on HN first.
        
         | drilbo wrote:
         | I think the fact that they aren't publicly traded is not
         | insignificant in this context.
        
       | uptownfunk wrote:
       | Sam is a scary good guy. But I've also learned to never
       | underestimate Microsoft. They've been playing the game a long
       | long time.
        
         | Implicated wrote:
         | > Sam is a scary good guy.
         | 
         | No snark/sarcasm - can you elaborate on this? This doesn't seem
         | in line with most opinions of him that I encounter.
        
           | jeffbee wrote:
           | No other genius could have given us Loopt.
        
             | whamlastxmas wrote:
             | If we're judging everyone by their failures then Warren
             | Buffett is an idiot because he lost half a billion on a shoe
             | company in the 90s
        
               | jeffbee wrote:
               | Possibly, I am just trying to separate the man's
               | abilities from his good luck. Grade him on the basis of
               | how much success he achieves versus anyone else who has
               | tens of millions of dollars dropped in his lap.
        
           | themacguffinman wrote:
           | "Sam is extremely good at becoming powerful."
           | 
           | "You could parachute him into an island full of cannibals and
           | come back in 5 years and he'd be the king."
           | 
           | - Paul Graham
        
           | whamlastxmas wrote:
           | He's a billionaire. He generated billions and billions as the
           | head of YC. He's the head of one of the most visible and
           | talked about companies on the planet. He's leading the
           | forefront of some of the most transformative technology in
           | human history.
           | 
           | He's good at what he does. I'm not saying he's a good person.
           | I don't know him.
        
       | cynicalpeace wrote:
       | I'm betting against OpenAI. Sam Altman has proven himself and his
       | company untrustworthy. In long running games, untrustworthy
       | players lose out.
       | 
       | If you disagree, I would argue you have a very sad view of the
       | world, where truth and cooperation are inferior to lies and
       | manipulation.
        
         | greenthrow wrote:
         | Elon Musk alone disproves your theory. I wish I agreed with
         | you, I'm sure I'd be happier. But there are just too many
         | successful sociopaths. Hell there was a popular book about it.
        
           | npinsker wrote:
           | Sociopathy isn't the same thing as duplicity.
        
             | greenthrow wrote:
             | Of course. I never said they were. But sociopaths do tend
             | to be very comfortable lying and backstabbing.
        
           | cynicalpeace wrote:
           | Musk kicks butt and is taking us to space. He proves my
           | theory.
        
             | ben_w wrote:
             | Space Musk, Tesla Musk, and Everything Else Musk, act as
             | though they're three different people.
             | 
             | Space Musk promises a lot, has a grand vision, and gets
             | stuff delivered. The price may be higher than he says
             | and the delivery later, but it's orders of magnitude
             | better than the competition.
             | 
             | Tesla Musk makes and sells cars. They're ok. Not bad, not
             | amazing, glad they precipitated the EV market, but way too
             | pricey now that it's getting mature. Still, the showmanship
             | is still useful for the brand.
             | 
             | Everything Else Musk could genuinely be improved by
             | replacing him with an LLM: it would be just as
             | overconfident and wrong, but cost less to get there.
        
               | cynicalpeace wrote:
               | I don't think what you're saying is true, but even if
               | it's true, it means Elon is doing a great service solely
               | via Space Musk.
        
               | ben_w wrote:
               | Unfortunately for those of us who like space (the idea of
                | being an early Martian settler is appealing to me),
               | Everything Else Musk is hurting the reputation of the
               | other two. Not enough to totally prevent success, but
               | enough to be concerned about investments.
        
             | sgdfhijfgsdfgds wrote:
             | Ehhh though he does seem to think that taking the USA to
             | fascism is a prerequisite.
             | 
             | (This is, I think, an apolitical observation: whatever you
             | think about Trump, he is arguing for a pretty major
             | restructuring of political power in a manner that is
             | identifiable in fascism. And Musk is, pretty unarguably,
             | bankrolling this.)
        
               | chasd00 wrote:
               | Both political parties in the US have adopted a "you're
               | either with us or you're the enemy" position.
        
               | sgdfhijfgsdfgds wrote:
               | 1) not really, only one of them talks about opponents as
               | enemies
               | 
               | 2) the leader of only one of them is threatening to lock
               | up journalists, shut down broadcasters, and use the
               | military against his enemies.
               | 
               | 3) only one of them led an attempted autogolpe that was
               | condemned at the time by all sides
               | 
               | 4) Musk is only backing the one described in 1, 2 and 3
               | above.
               | 
               | It's not really arguable, all this stuff.
               | 
               | The guy who thinks the USA should go to Mars clearly
               | thinks he's better throwing in his lot with the whiny
               | strongman dude who is on record -- via his own social
               | media platform -- as saying that the giant imaginary
               | fraud he projected to explain his humiliating loss was a
               | reason to terminate the Constitution.
               | 
               | And he's putting a lot of money into it, and co-running
               | the ground game. But sure, he wants to go to Mars. So
               | it's all good.
        
             | greenthrow wrote:
             | Your crediting the work of thousands of talented people to
             | him while simultaneously dismissing the lies that are
             | solely his is very weird to me. Especially for someone
             | saying trustworthiness in CEOs is so important. (I am not a
             | Sam Altman fan either, so don't read me as defending him.)
        
           | paulryanrogers wrote:
           | Still depends on the definition of success. Money and
           | companies with high stock prices? Healthy family
           | relationships and rich circle of diverse friends?
        
             | cynicalpeace wrote:
             | I would argue this is not subjective. "Healthy family
             | relationships and rich circle of diverse friends" is an
             | objectively better definition than "Money and companies
             | with high stock prices".
             | 
             | I await with arms crossed all the lost souls arguing it's
             | subjective.
        
               | genrilz wrote:
               | While I personally also consider my relationships to be
               | more important than my earnings, I am still going to
               | argue that it's subjective. Case in point, both you and I
               | disagree with Altman about what success means. We are all
               | human beings, and I don't see any objective way to argue
               | one definition is better than another.
               | 
               | In case you are going to make an argument about how
               | happiness or some related factor objectively determines
               | success, let me head that off. Altman thinks that power
               | rather than happiness determines success, and is also a
               | human being. Why objectively is his opinion wrong and
               | yours right? Both of your definitions just look like
               | people's opinions to me.
        
               | cynicalpeace wrote:
               | _Arms crossed_
               | 
               | Was not going to argue happiness at all. In fact,
               | happiness seems a very hedonistic and selfish way to
               | measure it too.
               | 
               | My position is more mother goose-like. We simply have
               | basic morals that we teach children but don't apply to
               | ourselves. Be honest. Be generous. Be fair. Be strong.
               | Don't be greedy. Be humble.
               | 
               | That these are objectively moral is unprovable but true.
               | 
               | It's religious and stoic in nature.
               | 
               | It's anathema to HN, I know.
        
               | chasd00 wrote:
               | Success is defined only in the eye of the beholder. Maybe
                | money is what someone else defines as success and
                | therefore that's what they strive for. "We don't all
                | march to the beat of just one drum, what might be right
               | for you may not be right for some" - I think that was in
               | the theme song to the old sitcom The Facts of Life.
        
           | 015a wrote:
           | You should really read the OP's theory as: clearly
           | untrustworthy people lose out. Trustworthy people, and
           | unclearly untrustworthy people, win.
           | 
           | OAI's problem isn't that Sam is untrustworthy; he's just _too
           | obviously_ untrustworthy.
        
             | cynicalpeace wrote:
             | Yes correct. And hopefully untrustworthy people become
             | clearly untrustworthy people eventually.
             | 
             | Elon is not "untrustworthy" because of some ambitious
             | deadlines or some stupid statements. He's plucking rockets
             | out of the air and doing it super cheap whereas all
             | competitors are lining their pockets with taxpayer money.
             | 
             | You add in everything else (free speech, speaking his mind
             | at great personal risk, tesla), he reads as basically
             | trustworthy to me.
             | 
             | When he says he's going to do something and he explains
             | why, I basically believe him, knowing deadlines are
             | ambitious.
        
               | hobs wrote:
                | There are so many demos where Elon has faked and lied
                | that it's very surprising to have him read as
                | "basically trustworthy" even if he has done other
                | stuff - dancers dressed up as robots at fake robot
                | demos, the fake solar roof, fake full self driving,
                | really fake promises about cyber taxis and Teslas
                | paying for themselves (like 7 years ago?).
                | 
                | The free speech part also reads completely hollow when
                | the guy's first actions were to ban his critics on the
                | platform and bring back self-avowed Nazis - you could
                | argue one of those things is in favor of free speech,
                | but generally doing both just implies you are into the
                | Nazi stuff.
        
               | cynicalpeace wrote:
               | Would you trust Elon or ULA to take you to the ISS? Even
               | though ULA has tweeted pretty much no falsehoods (that I
               | know of)
               | 
               | You're complaining about tweets and meanwhile he's saving
               | astronauts and getting us to the moon. Wake up man.
        
               | hobs wrote:
                | No, I am complaining about in-person appearances in
                | front of audiences where he knowingly lied. Moving the
                | goalposts doesn't make him honest, just more
                | trustworthy to complete something than {insert
                | incompetent people here}.
                | 
                | Having the general ability to accomplish something
                | doesn't magically imply integrity; doing what you say
                | you will do does. Misleading and dissembling about
                | doing what you say you will do is where you get the
                | untrustworthy label, regardless of your personal
                | animus or positive view of Musk.
        
               | i80and wrote:
               | "Free speech" is kind of a weird thing to ascribe to
                | Musk, given that it's a perfect, almost archetypal
               | example of where he says one thing and actually does the
               | exact opposite.
        
               | cynicalpeace wrote:
               | I challenge you to post a taboo opinion on other
               | platforms vs X and let us know the results.
        
               | greenthrow wrote:
               | That's an interesting way to characterize Elon's history.
               | "Ambitious deadlines" implies you are believe he will one
               | day deliver on the many, many claims he's made that have
               | never happened.
               | 
               | SpaceX and Tesla have both accomplished great things.
                | There are a lot of talented people who work there.
                | Elon doesn't deserve all the credit for all their hard
                | work.
        
         | cynicalpeace wrote:
         | A telling quote about Sam, besides the "island of cannibals"
         | one, is actually one Sam published himself:
         | 
         | "Successful people create companies. More successful people
         | create countries. The most successful people create religions"
         | 
         | This definition of success is founded on power and control.
         | It's one of the worst definitions you could choose.
         | 
         | There are nobler definitions, like "Successful people have many
         | friends and family" or "Successful people are useful to their
         | compatriots"
         | 
         | Sam's published definition (to be clear, he was quoting someone
         | else and then published it) tells you everything you need to
         | know about his priorities.
        
           | whamlastxmas wrote:
           | As you said, Sam didn't write that. He was quoting someone
           | else and wasn't even explicitly endorsing it. He was making
           | a comment about how financially successful founders
           | approach building a business as more of a vision and
           | mission that they build buy-in for, which makes sense as a
           | successful tactic in the VC world, since you want to
           | impress and convince the very human investors.
        
             | cynicalpeace wrote:
             | This is the full post:
             | 
             | ""Successful people create companies. More successful
             | people create countries. The most successful people create
             | religions."
             | 
             | I heard this from Qi Lu; I'm not sure what the source is.
             | It got me thinking, though--the most successful founders do
             | not set out to create companies. They are on a mission to
             | create something closer to a religion, and at some point it
             | turns out that forming a company is the easiest way to do
             | so.
             | 
             | In general, the big companies don't come from pivots, and I
             | think this is most of the reason why."
             | 
             | Sounds like an explicit endorsement lol
        
               | alfonsodev wrote:
                | Well, it's an observation; intellectual people like to
                | make connections. To me, observing something or
                | sharing a connection you made in your mind is not
                | necessarily endorsing the statement about power.
                | 
                | He's dissecting it and connecting it with the idea
                | that if you have a bigger vision and the ability to
                | convince people, making a company is just an
                | "implementation detail" ... oh well ... you might be
                | right after all ... but I suspect it is more nuanced,
                | and he is not endorsing religions as a means of
                | obtaining success. I want to believe that he meant the
                | visionary, bigger-than-yourself, well-intended view of
                | it.
        
               | cynicalpeace wrote:
               | I'm sure if we were to confront him on it, he would give
               | a much more nuanced view of it. But unprompted, he
               | assumed it as true and gave further opinions based on
               | that assumption.
               | 
               | That tells us, at the very least, this guy is suspicious.
               | Then you mix in all the other lies and it's pretty
               | obvious I wouldn't trust him with my dog.
        
               | 93po wrote:
               | "It got me thinking" is not an endorsement
        
               | cynicalpeace wrote:
               | "this is most of the reason why". He's assuming it as
               | true.
        
           | mensetmanusman wrote:
           | Those are boring definitions of success. If you can't
           | create a stable family, you're not successful at one facet,
           | but you could be at another (e.g. Musk).
        
             | cynicalpeace wrote:
             | Boring is not correlated with how good something is. Most
             | of the bad people in history were not boring. Most of the
             | best people in history were not boring. Correlation with
             | evilness = 0.
             | 
             | You could have many other definitions that are not boring
             | but also not bad. The definition published by Sam is bad
        
             | Grimblewald wrote:
             | Someone born to enormous wealth is a bad example of someone
             | being instrumental to their own success in influencing
             | things.
        
           | Mistletoe wrote:
           | > The most successful people create religions
           | 
           | I don't know if I would consider being crucified achieving
           | success. Long term and for your ideology maybe, but for you
           | yourself you are dead.
           | 
           | I defer to Creed Bratton on this one and what Sam might be
           | into.
           | 
           | "I've been involved in a number of cults, both as a leader
           | and a follower. You have more fun as a follower, but you make
           | more money as a leader."
        
           | pfisherman wrote:
           | > This definition of success is founded on power and control.
           | 
           | I don't get how this follows from the quote you posted?
           | 
           | My interpretation is that successful people create durable,
           | self sustaining institutions that deliver deeply meaningful
           | benefits at scale.
           | 
           | I think that this interpretation is aligned with your nobler
           | definitions. But your view of the purpose of government and
           | religion may be more cynical than mine :)
        
           | selimthegrim wrote:
           | The other telling quote was him saying Admiral Rickover was
           | his mentor.
        
         | tbrownaw wrote:
         | > _If you disagree, I would argue you have a very sad view of
         | the world, where truth and cooperation are inferior to lies and
         | manipulation._
         | 
         | Arguing what _is_ based on what _should be_ seems maybe a bit
         | questionable?
        
           | cynicalpeace wrote:
            | Fortunately, I'm arguing they're one and the same: "in long
           | running games, untrustworthy players lose out"
           | 
           | That is both what _is_ and what _should be_. We tend to focus
           | on the bad, but fortunately most of the time the world
           | operates as it should.
        
             | fourside wrote:
              | You don't back up why you think this is the case. You only
             | say that to think otherwise makes for a sad view of the
             | world.
             | 
             | I'd argue that you can find examples of companies that were
             | untrustworthy and still won. Oracle stands out as one with
             | a pretty poor reputation that nevertheless has sustained
             | success.
             | 
             | The problem for OpenAI here is that they _need_ the support
             | of tech giants and they broke the trust of their biggest
             | investor. In that sense, I'd agree that they bit the hand
             | that was feeding them. But it's not because in general all
             | untrustworthy companies /leaders lose in the end. OpenAI's
             | dependence on others for success is key.
        
               | cynicalpeace wrote:
               | There's mountains of research both theoretical and
               | empirical that support exactly this point.
               | 
               | There's also mountains of research both theoretical and
               | empirical that argue against exactly this point.
               | 
               | The problem is most papers on many scientific subjects
               | are not replicable nowadays [0], hence my appeal to
               | common sense, character, and wisdom. Highly underrated,
               | especially on platforms like Hacker News where everything
               | you say needs a double blind randomized controlled study.
               | 
               | This point^ should actually be a fundamental factor in
               | how we determine truth nowadays. We must reduce our
               | reliance on "the science" and go back to the scientific
               | method of personal experimentation. Try lying to business
               | partner a few times, let's see how that goes.
               | 
               | We can look at specific cases where it holds true- like
               | in this case. There may be cases where it doesn't hold
               | true. But your own experimentation will show it holds
               | true more than not, which is why I'd bet against OpenAI
               | 
               | [0] https://en.wikipedia.org/wiki/Replication_crisis
        
               | mrtranscendence wrote:
               | Prove what point? There have clearly been crooked or
               | underhanded companies that achieved success. Microsoft in
               | its early heyday, for example. The fact that they paid a
               | price for it doesn't obviate the fact that they still
               | managed to become one of the biggest companies in history
               | by market cap despite their bad behavior. Heck, what
               | about Donald Trump? Hardly anyone in business has their
               | crookedness as extensively documented as Trump and he has
               | decent odds of being a two-term US President.
               | 
               | What about the guy who repaired my TV once, where it
               | worked for literally a single day, and then he 100%
               | ghosted me? What was I supposed to do, try to get him
               | canceled online? Seems like being a little shady didn't
               | manage to do him any harm.
               | 
               | It's not clear to me whether it's usually _worth it_ to
               | be underhanded, but it happens frequently enough that I
               | 'm not sure the cost is all that high.
        
               | cynicalpeace wrote:
               | I never claimed there have not been crooked or
               | underhanded companies that achieved success.
               | 
               | I said I would bet against OpenAI because they're
               | untrustworthy and untrustworthiness is not good in the
               | long run.
               | 
               | I can add a "usually": like "untrustworthiness is usually
               | not good in the long run" if that's your gripe.
        
               | esafak wrote:
               | To paraphrase Keynes, we're dead in the long run. Your
               | bet may not pay off in your lifetime.
        
               | int_19h wrote:
               | Common sense and wisdom indicate that sociopaths win in
               | the long run.
        
         | m3kw9 wrote:
         | What makes you think MS is trustworthy? The focus on OpenAI,
         | and the media spinning things, drives public opinion.
        
           | cynicalpeace wrote:
           | I said MS is trustworthy?
        
         | m3kw9 wrote:
         | You should also say for simple games
        
         | thorum wrote:
         | There seems to be an ongoing mass exodus of their best talent
         | to Anthropic and other startups. Whatever their moat is, that
         | has to catch up with them at some point.
        
           | alfalfasprout wrote:
           | There is no moat. The reality is that not only are they
           | bleeding talent, but the pace of innovation in the space is
           | not accelerating and is quickly running into scaling
           | constraints.
        
             | cynicalpeace wrote:
             | The biggest improvements are coming from the diffusion
             | models. Image, video, and voice models.
        
         | swatcoder wrote:
         | > If you disagree, I would argue you have a very sad view of
         | the world, where truth and cooperation are inferior to lies and
         | manipulation.
         | 
         | You're holding everyone to a very simple, very binary view with
         | this. It's easy to look around and see many untrustworthy
         | players in very very long running games whose success lasts
         | most of their own lives and often even through their legacy.
         | 
         | That doesn't mean that "lies and manipulation" trump "truth and
         | cooperation" in some absolute sense, though. It just means that
         | significant long-running games are almost always _very_ multi-
         | faceted and the roads that run through them involve many many
         | more factors than those.
         | 
         | Those of us who feel most natural being "truthful and
         | cooperative" can find great success ourselves while obeying our
         | sense of integrity, but we should be careful about
         | underestimating those who play differently. They're not
         | guaranteed to lose either.
        
           | cynicalpeace wrote:
           | I didn't say they're guaranteed to lose. I said I'd put my
           | money on it.
           | 
           | If you put your money otherwise, that's a sad view of the
           | world.
        
           | pfisherman wrote:
           | The etymological root of "credit" is the Latin for "to
           | believe" or "to trust". Credibility is everything in
           | business, and
           | you can put a dollar cost on it.
           | 
           | While banditry can work out in the short term, it pretty
           | always ends up the same way. There aren't a lot of old
           | gangsters walking around.
        
             | cynicalpeace wrote:
             | The entire world economy is based on trust. You worked for
             | 8 hours today because you trust you'll get money in a week
             | that you trust can be used to buy toilet paper at Costco.
             | 
             | There are actually fascinating theories that the origin of
             | money is not as a means of replacing a barter system, but
             | rather as a way of keeping track who owed favors to each
             | other. IOUs, so to speak.
        
               | johnisgood wrote:
               | > as a way of keeping track who owed favors to each other
               | 
               | I do not see how that is possible considering I have no
                | clue who the second-to-last owner of a bill was before me,
               | most of the time.
        
               | bobthepanda wrote:
               | the origin of money, vs what money is now, are not
               | necessarily one and the same.
        
               | steego wrote:
               | That's because you're imagining early paper currency as a
               | universal currency.
               | 
               | These early promissory notes were more like coupons that
               | were redeemed by the merchants. It didn't matter how many
               | times a coupon was traded. As a good merchant, you knew
               | how many of your notes you had to redeem because you're
               | the one issuing the notes.
        
         | lend000 wrote:
         | The problem is that they have no moat, and Sam Altman is no
         | visionary. He's clearly been outed as a ruthless opportunist
         | whose primary skill is seizing opportunities, not building out
         | visionary technical roadmaps. The jury is still out on his
         | ability to execute, but things do seem to be falling apart with
         | the exit of his top engineering talent.
         | 
         | Compare this to Elon Musk, who has built multiple companies
         | with sizable moats, and who has clearly contributed to the
         | engineering vision and leadership of his companies. There is no
         | comparison. It's unlikely OpenAI would have had anywhere near
         | its current success if Elon wasn't involved in the early days
         | with funding and organizing the initial roadmap.
        
           | mhuffman wrote:
           | >The problem is that they have no moat, and Sam Altman is no
           | visionary.
           | 
           | In his defense he is trying to fuck us all by feverishly
            | lobbying the US Congress about the fact that "AI is waaay too
           | dangerous" for newbs and possibly terrorists to get their
           | hands on. If that eventually pays off, then there will be 3-4
           | companies that control all of any LLMs that matter.
        
         | KPGv2 wrote:
         | > In long running games, untrustworthy players lose out.
         | 
         | Amazon and Microsoft seem to be doing really well for
         | themselves.
        
           | Barrin92 wrote:
           | Because they're trustworthy. If you buy a package on Amazon
           | or Craigslist, who do you trust to deliver it to your door
           | tomorrow? People love the trope that their neighbor is
           | trustworthy and the evil big company isn't, but in reality
            | it's exactly the other way around. If you buy heart
            | medication, do you buy it from Bayer or an indie startup?
           | 
            | Big, long-lived companies excel at delivering exactly what
            | they say they will, and people vote with their wallets on
            | this.
        
             | cynicalpeace wrote:
             | I don't know if Amazon or Microsoft are trustworthy or not.
             | 
             | But I agree with your point. And it gets very ugly when
             | these big institutions suddenly lose trust. They almost
             | always deserve it, but it can upend daily life.
        
         | mhuffman wrote:
         | >In long running games, untrustworthy players lose out.
         | 
         | Telco, cable companies, Nestle, and plenty of others laugh
         | while swimming in their sector leading pit of money and
         | influence.
        
         | pfisherman wrote:
         | Who are you betting on then? Anthropic? Google? Someone else? I
         | mean Microsoft was not the friendliest company. But they were
         | good enough at serving their customers needs to survive and
         | prosper.
        
           | cynicalpeace wrote:
           | At one end are the chip designers and manufacturers like
            | Nvidia. At the other end are the end-user products like Cursor
           | (ChatGPT was actually OpenAI's breakthrough and it was just
           | an end-user product innovation. GPT-3.5 models had actually
           | already been around)
           | 
           | I would bet on either side, but not in the middle on the
           | model providers.
        
             | pfisherman wrote:
             | I can see the big chip makers making out like bandits - a
             | la Cisco and other infra providers with the rise of the
             | internet.
             | 
             | They are facing competition from companies making hardware
              | geared toward inference, which I think will push their
             | margins down over time.
             | 
             | On the other end of the competitive landscape, what moat do
             | those companies have? What is to stop OpenAI from pulling a
             | Facebook and Sherlocking the most profitable products built
             | on their platform?
             | 
              | Something like Apple developing a chip that can do LLM
             | inference on device would completely upend everything.
        
         | 2OEH8eoCRo0 wrote:
         | > In long running games, untrustworthy players lose out.
         | 
         | How long is long?
        
         | auggierose wrote:
         | > In long running games, untrustworthy players lose out.
         | 
         | Is that a wish, or a fact, or just plain wrong? You know that
         | just because you want something to be true, it isn't
         | necessarily, right?
         | 
         | I wouldn't trust somebody who cannot distinguish between
         | wishful thinking and reality.
        
         | lucasyvas wrote:
         | I'm also betting against - Meta _alone_ will pass them within
         | the year.
        
         | caeril wrote:
         | > Sam Altman has proven himself and his company untrustworthy
         | 
         | Did I miss a memo? This is one of the largest [citation needed]
         | I've seen on this site in some time. Did he kick a puppy?
        
       | stephencoyner wrote:
       | For folks who are skeptical about OpenAI's potential, I think
       | Brad Gerstner does a really good job representing the bull case
       | for them (his firm Altimeter was a major investor in their recent
       | round).
       | 
       | - They reached their current revenue of ~$5B about 2.5 years
       | faster than Google and about 4.5 years faster than Facebook
       | 
       | - Their valuation to forward revenue (based on current growth) is
       | in line with where Google and Facebook IPO'd
       | 
       | He explains it all much better than I could type -
       | https://youtu.be/ePfNAKopT20?si=kX4I-uE0xDeAaWXN&t=80
        
       | qwertox wrote:
       | OpenAI would deserve to get dumped by MS. Just like "the boss"
       | dumped everyone, including his own principles.
       | 
       | Maybe that's why Sam Altman is so eager to get billions and build
       | his own datacenters.
        
         | mossTechnician wrote:
         | If Microsoft and OpenAI split up, can't Microsoft keep the
         | house, the car, and the kids?
         | 
         | > One particular thing to note is that Brockman stated that
         | Microsoft would get access to sell OpenAI's pre-AGI products
         | based off of [OpenAI's research] to Microsoft's customers, and
         | in the accompanying blog post added that Microsoft and OpenAI
         | were "jointly developing new Azure AI supercomputing
         | technologies."
         | 
         | > Pre-AGI in this case refers to anything OpenAI has ever
         | developed, as it has yet to develop AGI and has yet to get past
         | the initial "chatbot" stage of its own 5-level system of
         | evaluating artificial intelligence.
         | 
         | Sources to text from https://www.wheresyoured.at/to-serve-altman/
        
       | dekhn wrote:
       | Microsoft's goal here is to slowly extract every bit of unique ML
       | capability out of OpenAI (note the multiple mentions about IP and
       | security wrt MSFT employees working with OpenAI) so that they can
       | compete with Google to put ML features in their products.
       | 
       | When they know they have all the crown jewels, they will reduce
       | then eliminate their support of OpenAI. This was, is, and will be
       | a strategic action by Satya.
       | 
       | "Embrace, extend, and extinguish". We're in the second stage now.
        
       | bansheeps wrote:
       | I'm calling it right now: there's a Microsoft/OpenAI breakup
       | imminent (over ownership, rights, GTM etc) that's going to be
       | extremely contested and cause OpenAI to go into a Stability AI
       | type tailspin.
        
       ___________________________________________________________________
       (page generated 2024-10-18 23:00 UTC)