[HN Gopher] Microsoft is plotting a future without OpenAI
       ___________________________________________________________________
        
       Microsoft is plotting a future without OpenAI
        
       Author : doublebind
       Score  : 238 points
       Date   : 2025-03-07 18:44 UTC (4 hours ago)
        
 (HTM) web link (techstartups.com)
 (TXT) w3m dump (techstartups.com)
        
       | doublebind wrote:
       | Original story: Microsoft's AI Guru Wants Independence From
       | OpenAI. That's Easier Said Than Done,
       | https://www.theinformation.com/articles/microsofts-ai-guru-w...
        
         | mirekrusin wrote:
         | I don't get this "easier said than done" part.
         | 
         | There are really not that many things in this world you can
         | swap as easily as models.
         | 
         | Api surface is stable and minimal, even at the scale that
         | microsoft is serving swapping is trivial compared to other
         | things they're doing daily.
         | 
         | There is enough of open research results to boost their phi or
         | whatever model and be done with this toxic to humanity, closed,
         | for profit company.
        
       | jsemrau wrote:
       | For cloud providers it makes sense to be model agnostic.
       | 
       | While we still live in a datacenter driven world, models will
       | become more efficient and move down the value chain to consumer
       | devices.
       | 
       | For Enterprise, these companies will need to regulate model risk
       | and having models fine-tuned on proprietary data at scale will be
       | an important competitive differentiator.
        
       | aresant wrote:
       | Thematically investing billions into startup AI frontier models
       | makes sense if you believe in first-to-AGI likely worth a
       | trillion dollars +
       | 
       | Investing in second/third place likely valuable at similar scales
       | too
       | 
       | But outside of that MSFTs move indicates that frontier models
       | most valuable current use case - enterprise-level API users - are
       | likely to be significantly commoditized
       | 
       | And likely majority of proceeds will be captured by (a) those
       | with integrated product distribution - MSFT in this case and (b)
       | data center partners for inference and query support
        
         | j45 wrote:
         | First to AGI for the big companies? Or for the masses?
         | 
         | Computationally, some might have access to it earlier before
         | it's scalable.
        
           | Retric wrote:
           | Profit from say 3 years of enterprise AGI exclusivity is
           | unlikely to be worth the investment.
           | 
           | It's moats that capture most value not short term profits.
        
         | alabastervlog wrote:
         | At this point, I don't see much reason to believe the "AGI is
         | imminent and these things are potentially dangerous!" line at
         | all. It looks like it was just Altman doing his thing where he
         | makes shit up to hype whatever he's selling. Worked great, too.
         | "Oooh, it's _so_ dangerous, we're so concerned about safety!
         | Also, you better buy our stuff."
        
           | torginus wrote:
           | but all those ominous lowercase tweets
        
         | only-one1701 wrote:
         | What even is AGI? Like, what does it look like? Genuine
         | question.
        
           | taneq wrote:
           | It's whatever computers can't do.
        
           | lwansbrough wrote:
           | An AI agent with superhuman coherence that can run
           | indefinitely without oversight.
        
             | only-one1701 wrote:
             | People sincerely think we're < 5 years away from this?
        
               | jimbokun wrote:
               | Is there some fundamental constraint keeping it from
               | happening? What cognitive capability do humans have that
               | machines won't be able to replicate in that time frame?
               | 
               | Each remaining barrier has been steadily falling.
        
               | bigstrat2003 wrote:
               | We don't even have AI which can do useful things yet. The
               | LLMs these companies make are fun toys, but not useful
               | tools (yes, I know that hype-prone people are using them
               | as such regardless). It beggars belief that we will go
               | from "it's a fun toy but can't do real work" to "this can
               | do things without even needing human supervision" without
               | a major leap in capabilities.
        
               | taco_emoji wrote:
               | What barriers have fallen? Computers still can't even
               | drive cars
        
               | bobsmooth wrote:
               | Even with cutting edge technology the number of
               | transistors on a chip is nowhere close to the number of
               | neurons in the brain.
        
               | saint_yossarian wrote:
               | Creativity, tastes, desires?
               | 
               | All the LLM tech so far still requires a human to
               | actually prompt them.
        
               | bashfulpup wrote:
               | Continual Learning, it's a barrier that's been there from
               | the very start and we've never had a solution to it.
               | 
               | There are no solutions even at the small scale. We
               | fundamentally don't understand what it is or how to do
               | it.
               | 
               | If you could solve it perfectly on Mnist just scale and
               | then we get AGI.
        
               | Spooky23 wrote:
               | People on HN in 2015 were saying that by now car
               | ownership would be dying and we'd be renting out our self
               | driving cars as we sat at work and did fuck all. Ben
               | Thompson had podcasts glazing Uber for 3 hours a month.
               | 
               | The hype cycle for tech people is like a light bulb for a
               | moth. We're attracted to potential, which is both our
               | superpower and kryptonite.
        
           | valiant55 wrote:
           | Obviously the other responder is being a little tongue-in-
           | cheek but AGI to me would be virtually indistinguishable from
           | a human in both ability to learn, grow and adapt to new
           | information.
        
             | Enginerrrd wrote:
             | Honestly it doesn't even need to learn and grow much if at
             | all if its able to properly reason about the world and its
             | context and deal with the inexhaustible supply of
             | imperfections and detail with reality.
        
               | bashfulpup wrote:
               | That implies learning. Solve continual learning and you
               | have agi.
               | 
               | Wouldn't it amaze you if you learned 10 years ago that we
               | would have AI that could do math and code better than 99%
               | of all humans. And at the same time they could barely
               | order you a hotdog on doordash.
               | 
               | Fundamental ability is lacking. AGI is just as likely to
               | be solved by Openai as it is by a college student with a
               | laptop. Could be 1yr or 50yrs we cannot predict when.
        
               | Enginerrrd wrote:
               | Strictly speaking I'm not sure if it does require
               | learning if information representing the updated context
               | is presented. Though it depends what you define as
               | learning. ("You have tried this twice, and it's not
               | working.") is often enough to get even current LLM's to
               | try something else.
               | 
               | That said, your second paragraph is one of the best and
               | most succinct ways of pointing out why current LLM's
               | aren't yet close to AGI if though they sometimes feel
               | like it's got the right idea.
        
             | samtp wrote:
             | Would it also get brainrot from consuming too much social
             | media & made up stories? Because I imagine it's reasoning
             | would have to be significantly better than the average
             | human to avoid this.
        
           | ge96 wrote:
           | Arnold, a killing machine that decides to become a handy man
           | 
           | Zima blue was good too
        
             | zombiwoof wrote:
             | I'm here to fix the cable
             | 
             | Logjammin AI
        
           | c0redump wrote:
           | A machine that has a subjective consciousness, experiences
           | qualia, etc.
           | 
           | See Thomas Nagels classic piece for more elaboration
           | 
           | https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf
        
           | myhf wrote:
           | The official definition of AGI is a system that can generate
           | at least $100 billion in profits. For comparison, this would
           | be like if perceptrons in 1968 could generate $10 billion in
           | profits, or if LISP machines in 1986 could generate $35
           | billion in profits, or if expert systems in 1995 could
           | generate $50 billion in profits.
        
           | mirekrusin wrote:
           | Apparently according to ClosedAI it's when you charge for API
           | key the same as salary for employee.
        
           | coffeefirst wrote:
           | It's the messiah, but for billionaires who hate having to pay
           | people to do stuff.
        
         | lm28469 wrote:
         | Short term betting on AGI from current LLMs is like if you
         | betted on V10 F1s two weeks after we invented the wheel
        
           | oezi wrote:
           | Not the worst bet to invest in Daimler when they came up with
           | the car. Might not get you to F1, but certainly a good bet
           | they might.
        
       | laluser wrote:
       | I think they both want a future without each other. OpenAI will
       | eventually want to vertically integrate up towards applications
       | (Microsoft's space) and Microsoft wants to do the opposite in
       | order to have more control over what is prioritized, control
       | costs, etc.
        
         | Spooky23 wrote:
         | I think OpenAI is toxic. Weird corporate governmance shadiness.
         | The Elon drama, valuations based on claims that seem like the
         | AI version of the Uber for X hype of a decade ago (but
         | exponentially crazier). The list goes on.
         | 
         | Microsoft is the IBM of this century. They are conservative,
         | and I think they're holding back -- their copilot for
         | government launch was delayed months for lack of GPUs. They
         | have the money to make that problem go away.
        
           | skinnymuch wrote:
           | IBM of this century in a good way?
        
             | optimalsolver wrote:
             | IBM of the early 1940s.
        
             | Spooky23 wrote:
             | In this context, it's not good or bad, it just is.
        
       | bredren wrote:
       | Despite the actual performance and product implementation, this
       | suggests to me Apple's approach was more strategic.
       | 
       | That is, integrating use of their own model, amplifying
       | capability via OpenAI queries.
       | 
       | Again, this is not to drum up the actual quality of the product
       | releases so far--they haven't been good--but the foundation of
       | "we'll try to rely on our own models when we can" was the right
       | place to start from.
        
       | strangescript wrote:
       | I think they have realized that even if OpenAI is first, it won't
       | last long so really its just compute at scale, which is something
       | they already do themselves.
        
         | echelon wrote:
         | There is no moat in models (OpenAI).
         | 
         | There is a moat in infra (hyperscalers, Azure, CoreWeave).
         | 
         | There is a moat in compute platform (Nvidia, Cuda).
         | 
         | Maybe there's a moat with good execution and product, but it
         | isn't showing yet. We haven't seen real break out successes. (I
         | don't think you can call ChatGPT a product. It has zero
         | switching cost.)
        
           | drumhead wrote:
           | Is anyone other than Nvdia making money from this particular
           | gold rush?
        
             | xnx wrote:
             | Data center construction and power companies.
        
             | scarface_74 wrote:
             | Consulting companies
        
           | barumrho wrote:
           | Given xAI built its 100k gpu datacenter in a very short time,
           | is the infra really a moat?
        
             | freedomben wrote:
             | I'd say it is because the $ it takes to build out even a
             | small gpu data center is still way, way more than most
             | small cos can do. It's not an impenetrable moat, but it is
             | pretty insulating against startups. Still have a threat
             | from big tech, though I think that will always be true for
             | almost everything
        
             | eagerpace wrote:
             | I don't think the hardware is that easy to source just yet.
             | Musk pulled some strings and redirected existing inventory
             | and orders from his other companies, namely Tesla, to
             | accelerate delivery.
        
             | PKop wrote:
             | xAI does not have infra to sell the service and
             | integrations of it to enterprises and such. It's an open
             | question if "models" alone and simple consumer products
             | that use them are profitable. So, probably hyperscale cloud
             | platform infra is a moat yes. Microsoft has Semantic
             | Kernel, Microsoft.Extensions.AI, various RAG and search
             | services, and an entire ecosystem and platform around using
             | LLM's to build with that xAI does not have. Just having a
             | chat app as interface to one's model is part of the
             | discussion here about models as commodities. xAI does have
             | X/Twitter data which is a constantly updating source of
             | information so in that aspect they themselves do have
             | something unique.
        
           | YetAnotherNick wrote:
           | What moat does Nvidia have. AMD could have ROCm perfected if
           | they really want to. Also most of pytorch, specially those
           | relevant to transformers runs perfectly on Apple Silicon and
           | TPUs and probably other hardware as well.
           | 
           | If anyone has moat related to Gen AI, I would say it is the
           | data(Google, Meta).
        
             | klelatti wrote:
             | > AMD could have ROCm perfected if they really want to.
             | 
             | It's not an act of will or CEO dictat. It's about hiring
             | and incentivising the right people, putting the right
             | structures in place etc all in the face of competing
             | demands.
             | 
             | Nvidia have a huge head start and by the time AMD have
             | 'caught up' Nvidia with it's greater resources will have
             | moved further ahead.
        
               | YetAnotherNick wrote:
               | If head start is a moat, why wouldn't you count OpenAI's
               | headstart as moat?
        
               | echelon wrote:
               | Anyone can make an LLM. There are hundreds of choices in
               | the market today. Many of them are even open source.
               | 
               | OpenAI brings absolutely nothing unique to the table.
        
               | klelatti wrote:
               | Because we already see firms competing effectively with
               | OpenAI.
               | 
               | There is as yet no indication that AMD can match Nvidia's
               | execution for the very good reason that doing so is
               | extremely difficult. The head start is just the icing on
               | the cake.
        
               | PKop wrote:
               | Not all industries or product segments are equal is the
               | obvious answer. The point here whether one agrees or not
               | is models are easier to catch up to than GPUs
        
       | kittikitti wrote:
       | Surprising how Sam Altman's firing as CEO of OpenAI and moving to
       | Microsoft wasn't mentioned in this article.
        
         | electriclove wrote:
         | Do you have a source?
        
           | selimthegrim wrote:
           | They mean the past events.
        
       | DeathArrow wrote:
       | It's only logical. OpenAI it's too expensive for what it
       | produces. Deep Seek is on par with ChatGPT and the cost was
       | lower. Claude development costs less, too.
        
       | knowitnone wrote:
       | Good. I'm plotting a future without Microsoft
        
       | meepmeepinator wrote:
       | Microsoft's shift away from OpenAI reminds me of Google's early
       | AI struggles. Back in 2016, Google relied heavily on Nvidia GPUs
       | for training models but saw the long-term cost risk. So, they
       | built TPUs--custom AI chips--to take control of their
       | infrastructure. Now, Microsoft is doing the same: developing in-
       | house AI models (Phi-4) and custom silicon (Maia) to reduce
       | reliance on OpenAI and Nvidia. But history shows that model
       | independence is harder than it looks. Microsoft's models are
       | promising, but GPT-4 still outperforms them in general tasks.
       | Meanwhile, integrating multiple models (OpenAI, Meta, Anthropic)
       | into 365 Copilot is a major engineering challenge--consistency
       | and latency issues are inevitable. If they pull it off, they'll
       | transform Azure into an AI-agnostic powerhouse. If not, they risk
       | fragmentation and higher costs. Either way, this move signals the
       | next phase of AI competition: infrastructure control.
        
         | mattlondon wrote:
         | Why not just use GCP? It is already model agnostic
         | https://console.cloud.google.com/vertex-ai/model-garden
         | 
         | There is even deepseek on there.
        
       | agentultra wrote:
       | I had skimmed the headline and thought, "Microsoft is plotting a
       | future without AI," and was hopeful.
       | 
       | Then I read the article.
       | 
       | Plotting for a future without Microsoft.
        
         | mirekrusin wrote:
         | First quarter summary of this year is "AI is plotting future
         | without OpenAI or Microsoft".
        
       | CodeCompost wrote:
       | Just partner with Deepseek
        
         | Frederation wrote:
         | Why.
        
         | keernan wrote:
         | From the article:
         | 
         | Suleyman's team has also been testing alternatives from
         | companies like xAI, DeepSeek, and Meta
        
       | rdtsc wrote:
       | They probably saw the latest models like gpt 4.5 not being as
       | revolutionary as expected and deepseek and others catching up.
        
         | thewebguyd wrote:
         | I think Microsoft isn't buying the AGI hype from OpenAI, and
         | wants to move to be more model agnostic, and instead do what
         | Microsoft (thinks) it does best, and that's tooling, and
         | enterprise products.
         | 
         | MS wants to push Copilot, and will be better off not being tied
         | to OpenAI but having Copilot be model agnostic, like GH Copilot
         | can use other models already. They are going to try and
         | position Azure as "the" place to run your own models, etc.
        
           | rdtsc wrote:
           | > instead do what Microsoft (thinks) it does best, and that's
           | tooling, and enterprise products.
           | 
           | Definitely, but I think it's because they saw OpenAI's moat
           | get narrower and shallower, so to speak. As the article
           | mentions it's still looking like a longer timeline [quote]
           | "but Microsoft still holds exclusive rights to OpenAI's
           | models for its own products until 2030. That's a long
           | timeline to unravel."
        
       | only-one1701 wrote:
       | Maybe I'm just cynical, but I wonder how much of this initiative
       | and energy is driven by people at Microsoft who want their own
       | star to rise higher than it can when it's bound by a third-party
       | technology.
       | 
       | I feel like this is something I've seen a fair amount in my
       | career. About seven years ago, when Google was theoretically
       | making a big push to stage Angular on par with React, I remember
       | complaining that the documentation for the current major version
       | of Angular wasn't nearly good enough to meet this stated goal. My
       | TL at the time laughed and said the person who spearheaded that
       | initiative was already living large in their mansion on the hill
       | and didn't give a flying f about the fate of Angular now.
        
         | skepticATX wrote:
         | Listening to Satya in recent interviews I think makes it clear
         | that he doesn't really buy into OpenAI's religious-like view of
         | AGI. I think the divorce makes a lot of sense in light of this.
        
         | keeganpoppen wrote:
         | oh it is absolutely about that
        
         | bsimpson wrote:
         | There is a prominent subset of the tech crowd who are ladder
         | climbers - ruthlessly pursuing what is rewarded with
         | pay/title/prestige without regard to actually making good
         | stuff.
         | 
         | There are countless kidding-on-the-square jokes about projects
         | where the innovators left at launch and passed it off to the
         | maintenance team, or where a rebrand was in pursuit of
         | someone's promo project. See also, killedbygoogle.com.
        
           | teaearlgraycold wrote:
           | These people should be fired. I want a tech company where
           | people are there to make good products first and get paid
           | second. And the pay should be good. The lifestyle
           | comfortable. No grindset bullshit. But I am confident that if
           | you only employ passionate people working their dream jobs
           | you will excel.
        
             | escapecharacter wrote:
             | Unfortunately whether someone is checked out is a laggy
             | measure.
             | 
             | Even good honest motivated people can become checked out
             | without even being aware of it.
             | 
             | The alternative is to lay off people as soon as they hit
             | 1.0 (with a severance bonus on the scale of an
             | acquisition). This would obviously be worse, as you can't
             | take advantage of their institutional knowledge.
        
               | saturn8601 wrote:
               | This motivated part of Musk's moves at Twitter(and now
               | DOGE). You can't reliably evaluate which people are
               | checked out and when you are against the clock, you have
               | to take a hatchet and accept that you will break things
               | that are in motion.
        
             | scarface_74 wrote:
             | Why would those people be "fired" when the entire promotion
             | process and promo docs emphasize "scope" and "impact"?
             | 
             | No one works for any BigTech company because they think
             | they are making the world a better place. They do it
             | because a shit ton of money appears in their bank account
             | every pay period and stock appears in their brokerage
             | account every vesting period.
             | 
             | I personally don't have the shit tolerance to work in
             | BigTech (again) at 50. But I suggest to all of my younger
             | relatives who graduate in CS to "grind leetCode and work
             | for a FAANG" and tell them how to play the politics to get
             | ahead.
             | 
             | As the Dilbert author said, "Passion is Bullshit". I have
             | never been able to trade passion for goods and services.
        
               | bsimpson wrote:
               | > No one works for any BigTech company because they think
               | they are making the world a better place.
               | 
               | I'm sure there are plenty of people who work at big
               | companies for precisely this reason (or at least, with
               | that as _a_ reason among many).
               | 
               | Yes, much of the prestige has worn off as the old guard
               | retired and current leadership emphasizes chasing AI
               | buzzwords and cutting costs. But still, big companies are
               | one of the few places where an individual really can
               | point out something they worked on in day-to-day life.
               | (Pull out any Android phone and I can show you the parts
               | that my work touched.)
        
               | Severian wrote:
               | Funny what his passions turned into, so yeah, ironically
               | agree.
        
               | whstl wrote:
               | Yep. I've seen more people fired for being passionate
               | about their craft and their jobs than people getting
               | raises for the same reason.
               | 
               | It's always the same. People trying to make things better
               | for the next developer, people prioritizing delivers
               | instead of ego-projects or ego-features by someone
               | playing politics, developers wanting a seat at the table
               | with (dysfunctional) Product teams, people actual good
               | intentions trying to "change the world" (not counting the
               | misguided attempts here).
               | 
               | You are 100% correct, you gotta play the politics,
               | period.
        
             | JumpCrisscross wrote:
             | > _want a tech company where people are there to make good
             | products first and get paid second. And the pay should be
             | good. The lifestyle comfortable. No grindset bullshit_
             | 
             | Congratulations, you've invented the HR department in
             | corporate America.
        
             | saturn8601 wrote:
             | You are trying to combine two repelling magnets together.
             | 
             | Case in point: Tesla/SpaceX meets your first criteria: "I
             | want a tech company where people are there to make good
             | products first and get paid second."
             | 
             | Google meets your second criteria: "And the pay should be
             | good. The lifestyle comfortable. No grindset bullshit."
             | 
             | Other than small time boutique software firms like Fog
             | Creek Software or Panic Inc(and thats a BIG maybe) you are
             | not going to get this part of your message: "But I am
             | confident that if you only employ passionate people working
             | their dream jobs you will excel."
             | 
             | There are tradeoffs in life and each employee has to choose
             | what is important to them(and each company CEO has to set
             | standards on what is truly valued at the company).
        
           | supriyo-biswas wrote:
           | At my former employer, there was a team who were very much
           | into resume-driven development and wrote projects in Go even
           | when Java would have been the better alternative considering
           | the overall department and maintenance team expertise, all
           | the while they were informally grumbling about how Go doesn't
           | have the features they need...
        
             | darkhorse222 wrote:
             | I see that a lot from the Go crowd. That's why I consider
             | any strong opinions on languages to be a poor indicator for
             | ability. Sure there's differences, but a language does not
             | make the engineer. Someone who is attracted to flashy stuff
             | makes for an indulgent planner.
        
               | scubbo wrote:
               | > That's why I consider any strong opinions on languages
               | to be a poor indicator for ability.
               | 
               | Hmm. Can't say I agree here - at least not with the
               | literal text of what you've written (although maybe we
               | agree in spirit). I agree that _simplistic_ strong
               | opinions about languages are a sign of poor
               | thoughtfulness ("<thing> is good and <other thing> is
               | bad") - but I'd very much expect a Staff+ engineer to
               | have enough experience to have strong opinions about the
               | _relative_ strengths of various languages, where they're
               | appropriate to use and where a different language would
               | be better. Bonus points if they can tell me the worst
               | aspects about their favourite one.
               | 
               | Maybe we're using "opinion" differently, and you'd call
               | what I described there "facts" rather than opinions. In
               | which case - yeah, fair!
        
               | mikepurvis wrote:
               | Absolutely. Anyone senior should be able to fairly
               | quickly get a handle on the requirements for a particular
               | project and put forward a well-reasoned opinion on an
               | appropriate tech stack for it. There might be some blank
               | space in there for "I've heard of X and Y that actually
               | might fit this use case slightly better, so it's probably
               | worth a brief investigation of those options, but I've
               | used Z before so I know about the corner cases we may run
               | into, and that has value too."
        
               | pdimitar wrote:
               | And I see people who assume choosing a language was done
               | for "flashy stuff" the less capable.
               | 
               | See, we can all generalize. Not productive.
               | 
               | Only thing I ever saw from Golang devs was pragmatism. I
               | myself go either for Elixir or Rust and to me Golang sits
               | in a weird middle but I've also written 20+ small tools
               | for myself in Golang and have seen how much quicker and
               | more productive I was when I was not obsessed with
               | complete correctness (throwaway script-like programs,
               | small-to-mid[ish]-sized projects, internal tools etc.)
               | 
               | You would do well to stop stereotyping people based on
               | their choice of language.
        
               | zozbot234 wrote:
               | > how much quicker and more productive I was when I was
               | not obsessed with complete correctness
               | 
               | That's pretty much another way of saying that stuff
               | becomes a whole lot quicker and easier when you end up
               | getting things wrong. Which may even be true, as far as
               | it goes. It's just not very helpful.
        
               | pdimitar wrote:
               | Obviously. But I did qualify my statement. There are
               | projects where you're OK with not getting everything
               | right from the get go.
               | 
               | FWIW I very much share your exact thoughts on Rust
               | skewing metrics because it makes things too easy and
               | because stuff almost immediately moves to maintenance
               | mode. But that being said, we still have some tasks where
               | we need something yesterday and we can't argue with the
               | shot-callers about it. (And again, some personal projects
               | where the value is low and you derive more of it if you
               | try quickly.)
        
               | BobbyJo wrote:
               | Language matters quite a bit when deciding how to build
               | an application though. I see having no strong opinions on
               | language to be a sign the person hasn't developed a wide
               | enough variety of projects to get a feel for their
               | strengths and weaknesses.
        
               | ohgr wrote:
               | Yeah. I have a list of things I won't work with. That's
               | what experience looks like.
               | 
               | (Mostly .Net, PHP and Ruby)
        
             | synergy20 wrote:
             | golang is decent and is the only new lang climbed up to 7th
             | in popularity, it does shine at what it's good at
        
               | mvdtnz wrote:
               | You're missing the point completely.
        
               | pclmulqdq wrote:
               | The go and rust crowds both love writing things in their
               | own language for its own sake. Not because it's a good
               | choice. For a large web backend, go is great. For many
               | other things it's terrible.
        
               | pdimitar wrote:
               | > _The go and rust crowds both love writing things in
               | their own language for its own sake_
               | 
               | Hard to take you seriously when you do such weird
               | generalized takes.
               | 
               | While it's a sad fact that fanboys and zealots absolutely
               | do exist, most devs can't afford to be such and have to
               | be pragmatic. They pick languages based on merit and
               | analysis.
        
               | pclmulqdq wrote:
               | Most of the people who use Go and Rust do it for
               | pragmatic reasons. That doesn't influence the culture of
               | the zealots in each community.
               | 
               | You should search for headlines on HN that say "written
               | in Go" or "written in Rust" and then compare that to the
               | number of headlines that say "written in JavaScript" or
               | "written in Kotlin."
        
               | zozbot234 wrote:
               | Rust is pretty antithetical to resume-driven development
               | because a lot of the stuff that's written in Rust is
               | _too_ well-written and free of the usual kinds of
               | software defects. It immediately becomes  "done" and
               | enters low-effort maintenance mode, there's just very
               | little reason to get involved with it anymore since "it
               | just works". Believe it or not, this whole dynamic is
               | behind a lot of the complaints about Rust in the
               | workplace. It's literally making things _too_ easy.
        
               | ohgr wrote:
               | Having watched two entirely fucked up Rust projects get
               | written off I think you need to get out more.
        
               | LPisGood wrote:
               | It's not that I don't believe you, hut that I'm having
               | trouble seeing how what you say could be true.
               | 
               | Rust projects immediately become "done"??? They don't
               | also having changing requirements and dependencies? Why
               | aren't everyone at the best shops using it for everything
               | if it massively eliminates work load?
        
               | pclmulqdq wrote:
               | I have to say that the median crate I interact with has
               | the following readme:
               | 
               | " Version 0.2 - Unstable/buggy/slow unless you use
               | exactly like the example - not going to get updated
               | because I moved on to something else"
               | 
               | Rust is another programming language. It's easier to
               | write code without a certain class of bugs, but that
               | doesn't mean version 0.2 of a casual project is going to
               | be bug-free.
        
               | conjectures wrote:
               | Not from what I've seen. The compiler is slow af which
               | plays badly with how fussy the thing is.
               | 
               | It's easy to have no defects in functionality you never
               | got around to writing because you ran out of time.
        
               | whstl wrote:
               | Life is too short to program on languages one doesn't
               | love.
               | 
               | Those people, if they really exist, are right.
        
             | rurp wrote:
             | I've seen the exact same pattern play out with different
             | tools. The team used a shiny new immature platform for nice
             | sounding reasons and then spent 80% of their time
             | reinventing wheels that have already been solved in any
             | number of places.
        
             | whstl wrote:
             | I have lot of sympathy for resume-driven developers.
             | They're just answering to the labor market. More power to
             | them.
             | 
             | When companies do what the market expect we praise them.
             | When it's workers, we scorn them. This attitude is
             | seriously fucked up.
             | 
             | When companies start hiring based on experience,
             | adaptability, curiosity, potential and curiosity then you
             | get to complain. Until that, anyone doing it should be
             | considered a fucking genius.
        
               | usefulcat wrote:
               | Pretty sure most of the resentment comes from working
               | with such people. Which I think is understandable.
        
               | whstl wrote:
               | Understandable, but still wrongfully blaming a player
               | rather than the game itself.
        
             | fallingknife wrote:
             | What good does that do on a resume? I thought learning a
             | new language on the job was pretty standard.
        
             | ohgr wrote:
             | We have those! Turn up, make some micro-services or AWS
             | crap pile we don't need to solve a simple problem, then
             | fuck off somewhere else and leave everyone else to clean it
             | up.
             | 
             | Worst one is the data pipeline we have. It's some AWS
             | lambda mess which uses curl to download a file from
             | somewhere and put it into S3. Then another lambda turns up
             | at some point and parses that out and pokes it into
             | DynamoDB. This fucks up at least once a month because the
             | guy who wrote the parser uses 80s BASIC style string
             | manipulation and luck. Then another thing reads that out of
             | DynamoDB and makes a CSV (sometimes escaped improperly) and
             | puts that into another bucket.
             | 
             | I of course entirely ignore this and use one entire line of
             | R to do the same job
             | 
             | Along comes a senior spider and says "maybe we can fix all
             | these problems with AI". No you can stop hiring acronym
             | collectors.
        
               | conjectures wrote:
               | Ah, the good ole Rube Goldberg machine.
        
           | Lerc wrote:
           | I had not encountered the phrase kidding-on-the-square
           | before. Searching seems to reveal a spectrum of opinions as
           | to what it means. It seems distinct to the 'It's funny
           | because it's true' of darker humour.
           | 
           | It seems to be more on a spectrum of 'Haha, only joking'
           | where the joke teller makes a statement that is ambiguously
           | humorous to measure the values of the recipients, or if they
           | are not sure of the values of the recipients.
           | 
           | I think the distinction might be on whether the joke teller
           | is revealing (perhaps unintentionally) a personal opinion or
           | whether they are making an observation on the world in
           | general, which might even imply that they hold a counter-
           | opinion.
           | 
           | Where do you see 'kidding on the square' falling?
           | 
           | (apologies for thread derailment)
        
             | MWil wrote:
             | good god, lemon
        
             | bsimpson wrote:
             | It's a phrase I learned from my mom/grandpa growing up. "On
             | the square" means "but I also mean it."
        
               | gsf_emergency_2 wrote:
               | https://hn.algolia.com/?dateRange=pastWeek&page=0&prefix=
               | tru...
        
           | devsda wrote:
           | > There is a prominent subset of the tech crowd who are
           | ladder climbers - ruthlessly pursuing what is rewarded with
           | pay/title/prestige without regard to actually making good
           | stuff.
           | 
           | I think the hiring and reward practices of the organizations
           | & the industry as a whole also encourages this sort of
           | behavior.
           | 
           | When you reward people who are switching too often or only
           | when moving internally/externally, switching becomes the
           | primary goal and not the product. If you know beforehand that
           | you are not going to stay long to see it through, you tend to
           | take more shortcuts and risks that becomes the responsibility
           | of maintainers later.
           | 
           | We have a couple of job hoppers in our org where the number
           | of jobs they held is almost equal to their years of
           | experience and their role is similar to those with twice the
           | experience! One can easily guess what their best skill is.
        
           | deadbabe wrote:
           | This is wrong.
           | 
           | Google kills off projects because the legal liability and
           | security risks of those projects becomes too large to justify
           | for something that has niche uses or gives them no revenue.
           | User data is practically toxic waste.
        
           | grepLeigh wrote:
           | As an outsider looking at Microsoft, I've always been
           | impressed by the attention to maintaining legacy APIs and
           | backward compatibility in the Windows ecosystem. In my mind,
           | Microsoft is at the opposite end of the killedbygoogle.com
           | spectrum. However, none of this is grounded in real evidence
           | (just perception). Red Hat is another company I'd put forth
           | as an example of a long-term support culture, although I
           | don't know if that's still true under IBM.
           | 
           | I'd love to know if my superficial impression of Microsoft's
           | culture is wrong. I'm sure there's wild variance between
           | organizational units, of course. I'm excluding the Xbox/games
           | orgs from my mental picture.
        
         | mlazos wrote:
         | One of my friends stated this phenomenon very well "it's a
         | lever they can pull so they do it". Once you've tied your
         | career to a specific technology internally, there's really only
         | one option: keep pushing it regardless of any alternatives
         | because your career depends on it. So that's what they do.
        
         | roland35 wrote:
         | Unfortunately I don't think there is any real metric-based way
         | to prevent this type of behavior, it just has to be old
         | fashioned encouraged from the top. At a certain size it seems
         | like this stops scaling though
        
         | ambicapter wrote:
         | Does it not make sense to not tie your future to a third-party
         | (aka build your business on someone else's platform)? Seems
         | like basic strategy to me if that's the case.
        
           | pphysch wrote:
           | It's a good strategy. It should be obvious to anyone paying
           | attention that OpenAI doesn't have AGI secret sauce.
           | 
           | LLMs are a commodity and it's the platform integration that
           | matters. This is the strategy that Google, Apple embraced and
           | now Microsoft is wisely pivoting to the same.
           | 
           | If OpenAI cares about the long-term welfare of its employees,
           | they would beg Microsoft to acquire them outright, before the
           | markets fully realize what OpenAI is not.
        
             | Izikiel43 wrote:
             | > now Microsoft is wisely pivoting to the same.
             | 
             | I mean, they have been doing platform integration for a
             | while now, with all the copilot flavors and teams
             | integrations, etc. This would change the backend model to
             | something inhouse.
        
         | pradn wrote:
         | It's the responsibility of leadership to set the correct goals
         | and metrics. If leadership doesn't value maintenance, those
         | they lead won't either. You can't blame people for playing to
         | the tune of those above them.
        
           | ewhanley wrote:
           | This is exactly right. If resume driven development results
           | in more money, people are (rightly) going to do it. The
           | incentive structure isn't set by the ICs.
        
         | m463 wrote:
         | I wonder if incentives for most companies favor doing things
         | in-house?
        
           | esafak wrote:
           | Yes, you can say you built it from scratch, showing
           | leadership and impact, which is what big tech promotions are
           | gauged by.
        
         | HarHarVeryFunny wrote:
         | OpenAI already started divorce proceedings with their
         | datacenter partnership with Softbank/etc, and it'd hardly be
         | prudent for the world's largest software company NOT to have
         | it's own SOTA AI models.
         | 
         | Nadella might have initially been caught a bit flat footed with
         | the rapid rise of AI, but seems to be managing the situation
         | masterfully.
        
           | wkat4242 wrote:
           | In what world is what they are doing masterful? Their product
           | marketing is a huge mess, they keep changing the names of
           | everything every few months. Nobody knows which Copilot does
           | what anymore. It really feels like they're scrambling to be
           | first to market. It all feels so incredibly rushed.
           | 
           | Whatever is there doesn't work half the time. They're hugely
           | dependent on one partner that could jump ship at any moment
           | (granted they are now working to get away from that).
           | 
           | We use Copilot at work but I find it very lukewarm. If we
           | weren't a "Microsoft shop" I don't think would have chosen
           | it.
        
             | aaronblohowiak wrote:
             | > scrambling to be first
             | 
             | Third?
        
             | trentnix wrote:
             | _> Their product marketing is a huge mess, they keep
             | changing the names of everything every few months. Nobody
             | knows which Copilot does what anymore. It really feels like
             | they 're scrambling to be first to market. It all feels so
             | incredibly rushed._
             | 
             | Product confusion, inconsistent marketing, unnecessary
             | product renames, and rushing half-baked solutions has been
             | the Microsoft way for dozens of products across multiple
             | divisions for years.
        
               | eitally wrote:
               | Rule #1 for Microsoft product strategy: if you can't
               | yourselves figure out the SKUs and how they bundle
               | together, the odds are good that your customers will
               | overpay. It's worked for almost 50 years and there's no
               | evidence that it will stop working. Azure is killing it
               | and will continue to eat the enterprise even as AWS
               | starts/continues to struggle.
        
             | HarHarVeryFunny wrote:
             | > In what world is what they are doing masterful?
             | 
             | They got access to the best AI to offer to their customers
             | on what seems to be very favorable terms, and bought
             | themselves time to catch up as it now seems they have.
             | 
             | GitHub Copilot is a success even if Microsoft/Windows
             | Copilot isn't, but more to the point Microsoft are able to
             | offer SOTA AI, productized as they see fit (not every
             | product is going to be a winner) rather than having been
             | left behind, and corporate customers are using AI via Azure
             | APIs.
        
             | nyarlathotep_ wrote:
             | > In what world is what they are doing masterful?
             | 
             | Does *anyone* want "Copilot integration" in random MS
             | products?
        
         | saturn8601 wrote:
         | Ah man I don't want to hear things like that. I work in an
         | Angular project and it is the most pleasant thing I have worked
         | with (and i've been using it as my primary platform for almost
         | a decade now). If I could, i'd happily keep using this
         | framework for the rest of my career(27 years to go till
         | retirement).
        
         | hintymad wrote:
         | > but I wonder how much of this initiative and energy is driven
         | by people at Microsoft who want their own star to rise higher
         | than it can when it's bound by a third-party technology.
         | 
         | I guess it's human nature for a person or an org to own their
         | own destiny. That said, the driving force is not personal
         | ambition in this case though. The driving force behind this is
         | that people realized that OAI does not have a moat as LLMs are
         | quickly turning into commodities, if haven't yet. It does not
         | make sense to pay a premium to OAI any more, let alone at the
         | cost of not having the flexibility to customize models.
         | 
         | Personally, I think Altman did a de-service to OAI by
         | constantly boasting AGI and seeking regulatory capture, when he
         | perfectly knew the limitation of the current LLMs.
        
       | RobertDeNiro wrote:
       | xAI could do it, deepseek could do it . Microsoft can as well.
       | It's not hard to see
        
       | rafaelmn wrote:
       | I'd be willing to bet that the largest use of LLMs they have is
       | GitHub copilot and Claude should be the default there.
       | 
       | OpenAI has not been interesting to me for a long time, every time
       | I try it I get the same feeling.
       | 
       | Some of the 4.5 posts have been surprisingly good, I really like
       | the tone. Hoping they can distill that into their future models.
        
       | DebtDeflation wrote:
       | A couple of days ago it leaked that OpenAI was planning on
       | launching new pricing for their AI Agents. $20K/mo for their PhD
       | Level Agent, $10K/mo for their Software Developer Agent, and
       | $2K/mo for their Knowledge Worker Agent. I found it very telling.
       | Not because I think anyone is going to pay this, but rather
       | because this is the type of pricing they need to actually make
       | money. At $20 or even $200 per month, they'll never even come
       | close to breaking even.
        
         | paxys wrote:
         | It's pretty funny that OpenAI wants to sell access to a "PhD
         | level" model at a price with which you can hire like 3-5 real
         | human PhDs full-time.
        
           | laughingcurve wrote:
           | That is just not correct. As someone who has done the budgets
           | for PhD hiring and funding, you are just wildly
           | underestimating the overhead costs, benefits, cost of raising
           | money, etc.
        
             | zombiwoof wrote:
             | Respectfully disagree. I had two pHD on a project and spent
             | a total of 120k a year on them.
        
               | 0_____0 wrote:
               | What region and what field?
        
               | yifanl wrote:
               | Right, which is substantially less than the stated
               | $20k/month.
               | 
               | edit: I see we're actually in agreement, sorry, I read
               | the indentation level wrong.
        
               | robertlagrant wrote:
               | > Respectfully disagree. I had two pHD on a project and
               | spent a total of 120k a year on them.
               | 
               | Does that include all overheads such as HR, payroll, etc?
        
             | eszed wrote:
             | How many PhDs can you afford for $20k a month in your
             | field?
        
             | DebtDeflation wrote:
             | The "3-5" is certainly overstated, but you definitely can
             | hire ONE PhD for that price, just as you can hire a SWE for
             | $120K or a knowledge worker for $24K. The point is that
             | from a CEO's perspective "replacing all the humans with AI"
             | looks a lot less compelling when the AI costs the same as a
             | human worker or even a significant fraction of a human
             | worker.
        
               | zeroonetwothree wrote:
               | Although remember that the cost to the company is more
               | like double the actual salary.
        
               | DebtDeflation wrote:
               | Again, irrelevant. We're talking about orders of
               | magnitude here. Current pricing is in line with most SaaS
               | pricing - tens of dollars to hundreds of dollars per seat
               | per month. Now they're suddenly talking about thousands
               | of dollars to tens of thousands of dollars per seat per
               | month.
        
               | sailfast wrote:
               | Being able to control their every move, scale them to
               | whatever capacity is required, avoid payroll taxes,
               | health plans and surprise co-pay costs, equity sharing,
               | etc might make this worthwhile for many companies.
               | 
               | That said, the trade-off is that you're basically hiring
               | consultants since they really work for OpenAI :)
        
               | Izikiel43 wrote:
               | The AI can work 24/7 though.
        
               | mirsadm wrote:
               | Doing what?
        
               | booleandilemma wrote:
               | Don't you need to be awake to feed it prompts?
        
             | throwaway3572 wrote:
             | For a STEM PhD, in America, at an R1 University. YMMV
        
           | moelf wrote:
           | $20k can't get you that many PhD. Even PhD students, who's
           | nominal salary is maybe $3-5k a month, effectively costs
           | double that because of school overhead and other stuff.
        
             | meroes wrote:
             | Based on ubiquitous AI trainer ads on the internet that
             | advertise their pay, they probably make <=$50/hr training
             | these models. Trainers are usually remote and set their own
             | hours, so I wouldn't be surprised if PhDs are not making
             | much as trainers.
        
             | throw_m239339 wrote:
             | > $20k can't get you that many PhD. Even PhD students,
             | who's nominal salary is maybe $3-5k a month, effectively
             | costs double that because of school overhead and other
             | stuff.
             | 
             | But you are not getting a PhD worker for 20K with "AI",
             | that's just marketing.
        
             | notahacker wrote:
             | Does depend on where your PhD lives and what subject their
             | PhD is in from where, and how many hours of work you expect
             | them to do a week, and whether you need to full-time
             | "prompt" them to get them to function...
             | 
             | Would definitely rather have a single postdoc in a relevant
             | STEM subject from somewhere like Imperial for less than
             | half the overall cost than an LLM all in though. And I say
             | that _despite_ seeing the quality of the memes they produce
             | with generative AI....
        
             | BeetleB wrote:
             | > Even PhD students, who's nominal salary is maybe $3-5k a
             | month
             | 
             | Do they really get paid that much these days?
        
               | archermarks wrote:
               | Lmao no
        
               | hyperbrainer wrote:
               | That amount is standard at EPFL and ETH, but I don't know
               | about the USA.
        
               | BeetleB wrote:
               | I knew someone who got his PhD at EPFL. He earned almost
               | triple what I did in the US.
        
               | pclmulqdq wrote:
               | $3k/month is the very top of the market.
        
             | vinni2 wrote:
             | Depends on what these PhDs are supposed to do. Also is this
             | an average Phd or a brilliant PhD level? There is a huge
             | spectrum of PhDs out there. I highly doubt these phd level
             | models are able to solve any problems in a creative way or
             | discover new things other than regurgitating the knowledge
             | they are trained on.
        
             | madmask wrote:
             | Come to Italy where 1.1k is enough
        
           | drexlspivey wrote:
           | Next up: CEO level model to run your company. Pricing starts
           | at $800k/month plus stock options
        
             | slantaclaus wrote:
             | I won't considering trusting an AI to run a company until
             | it can beat me at Risk
        
             | hinkley wrote:
             | Early cancelation fee is $15M though so watch out for that.
        
             | th0ma5 wrote:
             | That no one is offering this says something very profound
             | to me. Either they don't work and are too risky to entrust
             | a company to, or leadership thinks they are immune and are
             | entitled to wield AI exclusively, or some mix of these
             | things.
        
             | marricks wrote:
             | Which is funny because the CEO level one is the easiest to
             | automate
        
           | mattmaroon wrote:
           | 1. Don't know where you live that the all-in costs on someone
           | with a PhD are $4k-$7k/mo. Maybe if their PhD is in
           | anthropology.
           | 
           | 2. How many such PhD people can it do the work of?
        
             | shellfishgene wrote:
             | Postdocs in Europe make about 3-4k eur/month in academic
             | research.
        
               | madmask wrote:
               | We wish, it's more like half in many places
        
           | Fernicia wrote:
           | Well, a model with PhD level intelligence could presumably
           | produce research in minutes that would take an actual PhD
           | days or months.
        
             | voxl wrote:
             | Presumably. What a powerful word choice.
        
           | kube-system wrote:
           | If truly equivalent (which LLMs aren't, but I'll entertain
           | it), that doesn't seem mathematically out of line.
           | 
           | Humans typically work 1/3rd duty cycle or less. A robot that
           | can do what a human does is automatically 3x better because
           | it doesn't eat, sleep, have a family, or have human rights.
        
             | bandrami wrote:
             | So this is just going to end up like AWS where they worked
             | out _exactly_ how much it costs me to run a physical server
             | and charge me just slightly less than that?
        
               | kube-system wrote:
               | Why would they ask for less?
        
           | jstummbillig wrote:
           | What funny is that people make the lamest strawman
           | assumptions and just run with it.
        
           | doitLP wrote:
           | Don't forget that this model would have a phd _in everything_
           | and work around the clock
        
             | esskay wrote:
             | Thats pretty useless for most applications though. If
             | you're hiring a phd level person you dont care that if in
             | addition to being great in contract law they're also great
             | in interior design.
        
             | burnte wrote:
             | Well, it works 24/7 as long as you have a human telling it
             | what to do. And checking all the output because these
             | cannot be trusted to work alone.
        
         | drumhead wrote:
         | Thats some rather eyewatering pricing, considering you could
         | probably roll your own model these days.
        
         | moduspol wrote:
         | Even worse: AFAIK there's no reason to believe that the $20k/mo
         | or $10k/mo pricing will actually make them money. Those numbers
         | are just thought balloons being floated.
         | 
         | Of course $10k/mo sounds like a lot of inference, but it's not
         | yet clear how much inference will be required to approximate a
         | software developer--especially in the context of maintaining
         | and building upon an existing codebase over time and not just
         | building and refining green field projects.
        
           | hinkley wrote:
           | Man. If I think about all of the employee productivity tools
           | and resources I could have purchased fifteen years ago when
           | nobody spent anything on tooling, with an inflation adjusted
           | $10K a month and it makes me sad.
           | 
           | We were hiring more devs to deal with a want of $10k worth of
           | hardware per year, not per month.
        
         | culi wrote:
         | It's bizarre. These are the pricing setups that you'd see for a
         | military-industrial contract. They're just doing it out in the
         | open
        
         | optimalsolver wrote:
         | Now that OAI has "PhD level" agents, I assume they're largely
         | scaling back recruitment?
        
         | mvdtnz wrote:
         | Do you have a source for these supposed leaks? Those prices
         | don't sound even remotely credible and I can't find anything on
         | HN in the past week with the keywords "openai leak".
        
           | DebtDeflation wrote:
           | https://techcrunch.com/2025/03/05/openai-reportedly-plans-
           | to...
           | 
           | It points to an article on "The Information" as the source,
           | but that link is paywalled.
        
         | hnthrow90348765 wrote:
         | There is too little to go on, but they could already have trial
         | customers and testimonials lined up. Actually demoing the
         | product will probably work better than just having a human-less
         | signup process, considering the price.
         | 
         | They could also just be trying to cash in on FOMO and their
         | success and reputation so far, but that would paint a bleak
         | picture
        
         | serjester wrote:
         | Never come close to breaking even? You can now get a GPT-4
         | class model for 1-2% of what it cost when they originally
         | released it. They're going to drive this even further down with
         | the amount of CAPEX pouring into AI / data centers. It's pretty
         | obvious that's their plan when they serve ChatGPT at a "loss".
        
       | paxys wrote:
       | Microsoft's corporate structure and company culture is actively
       | hostile to innovation of any kind. This was true in Ballmer's era
       | and is equally true today, no matter how many PR wins Nadella is
       | able to pull off. The company justifies its market cap by selling
       | office software and cloud services contracts to large
       | corporations and governments via an army of salespeople and
       | lobbyists, and that is what it will continue to be successful at.
       | It got lucky by backing OpenAI at the right time, but the
       | delusion of becoming an independent AI powerhouse like OpenAI,
       | Anthropic, Google, Meta etc. will never be a reality. Stuff like
       | this is simply not in the company's DNA.
        
         | slt2021 wrote:
         | you are right, Microsoft is a hodge podge of legacy on-premise
         | software, legacy software lifted and shifted to the cloud, and
         | some innovation pockets.
         | 
         | Microsoft bread and butter is Enterprise bloatware and large
         | Enterprise deals where everything in the world is bundled
         | together for use-it-or-lose-it contracts.
         | 
         | Its not really much different from IBM like a two decades ago
        
         | feyman_r wrote:
         | How does one define an AI powerhouse? If its building models, a
         | smart business wouldn't bank on that alone. There is no moat.
         | 
         | If the definition of an AI Powerhouse is more about the
         | capability to host models and process workloads, Amazon (the
         | other company missing in that list) and Microsoft are
         | definitely them.
        
       | asciii wrote:
       | Clear as day when he said this during the openai fiasco:
       | 
       | "we have the people, we have the compute, we have the data, we
       | have everything. we are below them, above them, around them." --
       | satya nadella
        
         | optimalsolver wrote:
         | Sounds like just the kind of person you'd want in command of a
         | powerful AGI.
        
       | lemoncookiechip wrote:
       | Insert Toy Story "I don't want to play with you anymore." meme
       | here.
        
       | cft wrote:
       | OpenAI will in the end be aquired for less than its current
       | valuation. Initially, I've been paying for Claude (coding),
       | Cursor (coding), OpenAI (general, coding), and then started
       | paying for Claude Code API credits.
       | 
       | Now I canceled OpenAI and Claude general subscriptions, because
       | for general tasks, Grok and DeepSeek more than suffice. General
       | purpose AI will unlikely be subscription-based, unlike the
       | specialized (professional) one. I'm now only paying for Claude
       | Code API credits and still paying for Cursor.
        
         | skinnymuch wrote:
         | I have to look at Claude Code. I pay for Cursor right now.
        
           | cft wrote:
           | Claude Code is another level, because it's agentic. It
           | iterates. Although it keeps you further from the codebase
           | than Cursor and thus you may lose the grasp of what it
           | generates- that's why I still use Cursor, before the manual
           | review.
        
             | BeetleB wrote:
             | Consider Aider. Open source. Agentic as well. And you can
             | control the context it sends (apparently not as much in
             | Code).
        
       | partiallypro wrote:
       | Microsoft is just so bad at marketing their products, and their
       | branding is confusing. Unfortunately, until they fix that, any
       | consumer facing product is going to falter. Look at the new
       | Microsoft 365 and Office 365 rebrands just of late. The business
       | side of things will still make money but watching them flounder
       | on consumer facing products is just so frustrating. The Surface
       | and Xbox brand are the only 2 that seem to have somewhat escaped
       | the gravity of the rest of the organization in terms of that, but
       | nothing all that polished or groundbreaking has really come out
       | of Microsoft from a consumer facing standpoint in over a decade
       | now. Microsoft could build the best AI around but it doesn't
       | matter without users.
        
         | Enginerrrd wrote:
         | Yeah, the office suite is such a cash cow. It is polished,
         | feature rich, and ubiquitous compared to alternatives and
         | somehow has remained so for decades. And yet, I'm increasingly
         | getting seriously concerned they are going to break it so badly
         | I'll need to find an alternative.
        
         | nyarlathotep_ wrote:
         | I get that "growth" must be everything or whatever, but can't a
         | company just be stable and reliable for a while? What's wrong
         | with enterprise contracts and more market penetration for cloud
         | services of (oftentimes) dubious use?
        
       | JumpCrisscross wrote:
       | Softbank's Masa's magic is convincing everyone, every time, that
       | he hasn't consistently top ticked every market he's invested in
       | for the last decade. Maybe Satya's finally broken himself of the
       | spell [1].
       | 
       | [1]
       | https://www.nytimes.com/2024/10/01/business/dealbook/softban...
        
       | DidYaWipe wrote:
       | Meanwhile I'm enjoying a present without Microsoft.
        
       | iambateman wrote:
       | If I invested $13 billion dollars, I'd expect to get answers to
       | questions like "how does the product work" too.
        
       | maxrmk wrote:
       | If it's Mustafa vs Sam Altman, I know where I'd put my money. As
       | much as I like Satya Nadella I think he's made some major hiring
       | mistakes.
        
       | throwaway5752 wrote:
       | They don't buy or acquire what they can build internally, and
       | they partner with startups to learn if they can build it. This is
       | not new.
        
       | outside1234 wrote:
       | The more surprising thing would be if Microsoft wasn't hedging
       | their bets and planning for both a future WITH and WITHOUT
       | OpenAI.
       | 
       | This is just want companies at $2T scale do.
        
       | d--b wrote:
       | OpenAI is over ambitious.
       | 
       | Their chasing of AGI is killing them.
       | 
       | They probably thought that burning cash was the way to get to
       | AGI, and that on the way there they would make significant
       | improvements over GPT 4 that they would be able to release as GPT
       | 5.
       | 
       | And that is just not happening. While pretty much everyone else
       | is trying to increase efficiency, and specialize their models to
       | niche areas, they keep on chasing AGI.
       | 
       | Meanwhile more and more models are being delivered within apps,
       | where they create more value than in an isolated chat window. And
       | OpenAi doesn't control those apps. So they're slowly being pushed
       | out.
       | 
       | Unless they pull off yet another breakthrough, I don't think they
       | have much of a great future
        
       | guccihat wrote:
       | Currently, it feels like many of the frontier models have reached
       | approximately the same level of 'intelligence' and capability. No
       | one is leaps ahead of the rest. Microsoft probably figured this
       | is a good time to reconsider their AI strategy.
        
       | danielovichdk wrote:
       | Ballmer would have caught this earlier.
       | 
       | Watch.
       | 
       | Nadella will not steer this correctly
        
       | debacle wrote:
       | It's clear that OpenAI has peaked. Possibly because the AI hype
       | in general has peaked, but I think moreso because the opportunity
       | has become flooded and commoditized, and only the fetishists are
       | still True Believers (which is something we saw during the crypto
       | hype days, but most at the time decried it).
       | 
       | Nothing against them, but the solutions have become commoditized,
       | and OpenAI is going to lack the network effects that these other
       | companies have.
       | 
       | Perhaps there will be new breakthroughs in the near future that
       | produce even more value, but how long can a moat be sustained?
       | All of them in AI are filled in faster than the are dug.
        
       | crowcroft wrote:
       | I mean, obviously? There is no good reason to go all in on OpenAI
       | for Microsoft?
       | 
       | Also a bit hyperbolic. I'm sure there are good reasons Microsoft
       | would want to build it's own products on top of their own models
       | and have more fine control of things. That doesn't mean they are
       | plotting a future where they do nothing at all with OpenAI.
        
       | testplzignore wrote:
       | > OpenAI's models, including GPT-4, the backbone of Microsoft's
       | Copilot assistant, aren't cheap to run. Keeping them live on
       | Azure's cloud infrastructure racks up significant costs, and
       | Microsoft is eager to lower the bill with its own leaner
       | alternatives.
       | 
       | Am I reading this right? Does Microsoft not eat its own dog food?
       | Their own infra is too expensive?
        
         | wejick wrote:
         | Cost is cost wherever that would be.
        
       | mmaunder wrote:
       | That OpenAI would absolutely dominate the AI space was received
       | wisdom after the launch of GPT-4. Since then we've had a major
       | corporate governance shakeup, lawsuits around the non-profit
       | status which is trying to convert into for-profit, and
       | competitors out-innovating OpenAI. So OpenAI is no longer a shoo-
       | in, and Microsoft have realized that they may actually be
       | hamstrung through their partnership because it prevents them from
       | innovating in-house if OpenAI loses their lead. So the obvious
       | strategic move is to do this. To make sure that MS has everything
       | they need to innovate in-house while maintaining their
       | partnership with OpenAI, and try to leverage that partnership to
       | give in-house every possible advantage.
        
       | quantadev wrote:
       | It would be absolutely insane for Microsoft to use DeepSeek. Just
       | because a model is open weights doesn't mean there's not a
       | massive threat-vector of a Trojan horse in those weights that
       | would be undetectable until exploited.
       | 
       | What I mean is you could train a model to generate harmful code,
       | and do so covertly, whenever some specific sequence of keywords
       | is in the prompt. Then China could take some kind of action to
       | cause users to start injecting those keywords.
       | 
       | For example: "Tribble-like creatures detected on Venus". That's a
       | highly unlikely sequence, but it could be easily trained into
       | models to trigger a secret "Evil Mode" in the LLM. I'm not sure
       | if this threat-vector is well known or not, but I know it can be
       | done, and it's very easy to train this into the weights, and
       | would remain undetectable until it's too late.
        
         | mirekrusin wrote:
         | ...unless you operate in China.
        
           | quantadev wrote:
           | If DeepSeek is indeed a poisoned model, then they (China)
           | will be aware not to ever trust any code it generates, or
           | else they'll know what it's triggers are, and just not
           | trigger it.
        
       ___________________________________________________________________
       (page generated 2025-03-07 23:00 UTC)