[HN Gopher] Acquisitions, consolidation, and innovation in AI
       ___________________________________________________________________
        
       Acquisitions, consolidation, and innovation in AI
        
       Author : pfarago
       Score  : 74 points
       Date   : 2025-04-24 18:31 UTC (4 hours ago)
        
 (HTM) web link (frontierai.substack.com)
 (TXT) w3m dump (frontierai.substack.com)
        
       | no_wizard wrote:
        | I read the article, and while it doesn't say or imply this,
        | this is my takeaway, though correct me if I'm wrong:
       | 
        | Model innovation is effectively converging and slowing down
        | considerably. The big companies in this space doing the
        | research are not making leap after leap with each release, and
        | the downstream open source projects are coming closer to the
        | same quality, or can in fact match it (e.g. DeepSeek or Llama),
        | which is why it's becoming a commodity.
       | 
        | Around the edges, model innovation - particularly speedups in
        | returning accurate results - will help companies differentiate,
        | but fundamentally all this tech is shovels in search of miners,
        | i.e. you aren't really going to make money hand over fist
        | simply by being an LLM provider.
       | 
        | In other words, this latest innovation has hit commodity level
        | within a few short years of going mainstream, and the winners
        | are going to be the companies that build products on top of
        | this tech. As the tech continues to become a commodity, the
        | value proposition for pure research companies drops
        | considerably relative to application builders.
       | 
        | To me this leaves a central question: when does it hit a
        | relative equilibrium, where the technology and the applications
        | on top of it have largely maxed out the utility they can add in
        | the situations where they apply? That's the next question, and
        | I think the far more important one.
       | 
        | One other thing: at the end of the article, they wrote:
       | 
       | >Ultimately, businesses won't rearrange themselves around AI --
       | the AI systems will have to meet businesses where they are.
       | 
        | This is demonstrably untrue. CEOs are chomping at the bit to
        | reorganize their businesses around AI - as in, AI doing things
        | humans used to do and getting the same effective results or
        | better - so they can reduce staff across the board while
        | supposedly maintaining the same output or better.
       | 
        | Look at the leaked Shopify memo for an example, or the trend of
        | _"I can vibe code with an LLM, making software engineers
        | obsolete"_ that has taken off as of late, if LinkedIn is to be
        | believed.
        
         | epistasis wrote:
         | I would agree with this and also say that it's been clear this
          | is true for at least a year. Innovations like DeepSeek may not
         | have been around a year ago, but it was very clear that "AI" is
         | actually information retrieval and transformation, that the
         | chat UI had limited applicability (nobody wants to "chat with
         | their documents"), and that those who could shape the tech to
         | match use cases would be the ones capturing the value. Just as
         | SaaS uses databases, but creates and captures value by shaping
         | the database to the particular use case.
        
           | nemomarx wrote:
            | So when do we get to the point where AI apps are essentially
            | just CRUD apps? RAG kinda feels like a better version of
            | those to me.
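            | 
            | (To make that concrete, here's a rough Python sketch of the
            | "RAG is basically CRUD plus retrieval" idea. The embed()
            | below is a toy character-frequency vector standing in for a
            | real embedding model, and build_prompt() stops where an
            | actual LLM call would go - purely illustrative, not any
            | particular framework's API.)
            | 
            |     import numpy as np
            | 
            |     # the "database" side: a handful of documents to search
            |     DOCS = [
            |         "Invoices are due 30 days after issue.",
            |         "Refunds are processed within 5 business days.",
            |         "Support is available Monday through Friday.",
            |     ]
            | 
            |     def embed(text):
            |         # toy embedding: normalized char-frequency vector
            |         v = np.zeros(256)
            |         for ch in text.lower():
            |             v[ord(ch) % 256] += 1
            |         return v / (np.linalg.norm(v) + 1e-9)
            | 
            |     # precompute the index (the "read" path of the CRUD app)
            |     INDEX = np.stack([embed(d) for d in DOCS])
            | 
            |     def retrieve(query, k=2):
            |         # cosine similarity; vectors are already normalized
            |         scores = INDEX @ embed(query)
            |         return [DOCS[i] for i in np.argsort(-scores)[:k]]
            | 
            |     def build_prompt(query):
            |         # retrieval-augmented prompt; hand it to any LLM API
            |         context = "\n".join(retrieve(query))
            |         return ("Answer using only this context:\n" +
            |                 context + "\n\nQuestion: " + query)
            | 
            |     print(build_prompt("How long do refunds take?"))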
        
             | babelfish wrote:
             | now!
        
         | skeeter2020 wrote:
          | I agree with this, but I think it's still an open question
          | whether anyone can build a successful product on top of the
          | tech. There will likely be some, but it feels eerily similar
          | to the dot-com boom (and then bust), when the vast majority
          | of new products built on top of that (internet) technology
          | didn't deliver and didn't survive. Most AI products so far
          | are fun toys or interesting proofs of concept, and mediocre
          | when evaluated against other options. They'll need to be
          | applied to a much smaller set of problems (which doesn't
          | support the current level of investment) or find some new
          | miracle set of problems where they change the rules.
         | 
          | Businesses are definitely rearranging themselves structurally
          | around AI - at least to try to get the AI valuation
          | multiplier - and executives have levels of FOMO I've never
          | seen before. I report to a CTO, and the combination of
          | 100,000-foot hype and down-in-the-weeds focus on the
          | "protocol du jour" (with nothing in between that looks like a
          | strategy) is astounding. I just find it exhausting.
        
           | adpirz wrote:
            | The dot-com boom is an apt analogy: the internet took off, we
           | understood it had potential, but the innovation didn't all
           | come in the first wave. It took time for the internet to
           | bake, and then we saw another boom with the advent of mobile
           | phones, higher bandwidth, and more compute per user.
           | 
           | It is still simply too early to tell exactly what the new
           | steady state is, but I can tell you that where we're at
           | _today_ is already a massive paradigm shift from what my day-
           | to-day looked like 3 years ago, at least as a SWE.
           | 
           | There will be lots of things thrown at the wall and the
           | things that stick will have a big impact.
        
             | dingnuts wrote:
              | Other than constantly feeling gaslit about the quality of
              | these tools, I can tell you that where we are _today_ is
              | basically the same in my day-to-day as it was three years
              | ago.
              | 
              | Oh, except sometimes someone tells me I could use the bot
              | to generate a thing, and it doesn't work, and I waste
              | some time, and then do it manually.
        
         | vonneumannstan wrote:
          | >Model innovation is effectively converging and slowing down
          | considerably. The big companies in this space doing the
          | research are not making leap after leap with each release,
          | and the downstream open source projects are coming closer to
          | the same quality, or can in fact match it (e.g. DeepSeek or
          | Llama), which is why it's becoming a commodity.
         | 
         | You're just showing how disconnected from the progress of the
         | field you are. o3/o4 aren't even in the same universe as
          | anything from open source. DeepSeek R1, Llama 4? Are you
         | joking?
        
           | no_wizard wrote:
            | Depends on how they're applied. I've had success using
            | Llama, and while we check to see if OpenAI's models or
            | Google's Gemini would give us any noticeable improvement,
            | they really don't for our use case.
           | 
            | While newer models are certainly more capable on the whole,
            | that doesn't mean I need all that capability to accomplish
            | the business goal.
        
             | vonneumannstan wrote:
              | This is kind of a useless statement. If your use case is
              | so easy that "old" models work for it, then obviously you
              | won't care about or be following the latest developments,
              | but it's just not accurate to say that DeepSeek R1 is
              | equivalent to o3 or Gemini 2.5.
        
               | no_wizard wrote:
                | Producing quality results is not the same thing as
                | saying DeepSeek R1 is the equivalent of o3 or Gemini
                | 2.5.
                | 
                | Again, it's not about capabilities alone (on this, many
                | models lag behind; I already said as much). I follow
                | these developments quite closely, and I purposely said
                | _results_ so as not to say they're equivalent in
                | capability. They aren't.
               | 
                | However, if a business is getting acceptable results
                | from older or cheaper models, then capability doesn't
                | matter; the results do. Gemini 2.5 can be best of
                | breed, but why switch if it shows no meaningful
                | improvement in results for the business?
                | 
                | If I need more capability, or results are substandard,
                | I can always upgrade. Otherwise it's like saying
                | there's no room for cheaper processors and you'd be out
                | of your mind not to be using only the latest at all
                | times, no matter the results.
        
               | luckylion wrote:
               | That's not what GP was saying though. To stay with that
               | analogy, the assertion was that "all processors are kinda
               | the same, there's no real qualitative difference", which
               | sounds pretty strange. It's somewhat accurate if your
               | use-case is covered by the average processor and the
               | faster one doesn't benefit you. They're not equal, but
               | all of them surpass your needs.
               | 
               | > If I need more capability or results are substandard, I
               | can always upgrade
               | 
               | You wouldn't be able to upgrade (and see improved
               | results) if the model you use today was close to equal to
               | the top of the line.
        
               | no_wizard wrote:
                | That wasn't the assertion. My point was about the
                | results - not the models themselves, nor strictly
                | speaking their overall capabilities. If the results
                | show no meaningful improvement from moving to a newer
                | model, why would I want to switch?
        
           | Der_Einzige wrote:
            | Your belief that o3 and o4 are that superior to open source
           | models comes from the fact that models are often using shit,
           | garbage, trash samplers like top_p and top_k.
           | 
           | Switch them to good samplers and write the tool calling code
           | to allow tool calls in the reasoning chain and you'll see
           | close to parity in performance.
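            | 
            | (To make the sampler point concrete, here's a toy numpy
            | sketch of the three truncation rules: top_k and top_p cut
            | with fixed budgets, while min_p scales its cutoff with the
            | model's confidence. The logits are random - no model
            | involved - so this only illustrates what the knobs do.)
            | 
            |     import numpy as np
            | 
            |     def top_k_filter(probs, k=50):
            |         # keep only the k highest-probability tokens
            |         keep = np.argsort(-probs)[:k]
            |         out = np.zeros_like(probs)
            |         out[keep] = probs[keep]
            |         return out / out.sum()
            | 
            |     def top_p_filter(probs, p=0.9):
            |         # keep the smallest set of tokens whose cumulative
            |         # probability reaches p
            |         order = np.argsort(-probs)
            |         cut = np.searchsorted(np.cumsum(probs[order]), p) + 1
            |         out = np.zeros_like(probs)
            |         out[order[:cut]] = probs[order[:cut]]
            |         return out / out.sum()
            | 
            |     def min_p_filter(probs, min_p=0.05):
            |         # keep tokens with prob >= min_p * max(prob); the
            |         # cutoff tightens when the model is confident and
            |         # relaxes when it isn't
            |         out = np.where(probs >= min_p * probs.max(), probs, 0.0)
            |         return out / out.sum()
            | 
            |     rng = np.random.default_rng(0)
            |     logits = rng.normal(scale=3.0, size=1000)
            |     probs = np.exp(logits - logits.max())
            |     probs /= probs.sum()
            |     for name, f in [("top_k", top_k_filter),
            |                     ("top_p", top_p_filter),
            |                     ("min_p", min_p_filter)]:
            |         print(name, "keeps", int((f(probs) > 0).sum()), "tokens")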
           | 
            | The remaining advantages of closed source come from better
            | long-context handling and later data cutoff points.
           | 
            | If you don't believe me, let's see the receipts of your ICLR
           | or NeurIPS publications - otherwise sit down and listen to
           | your elders.
        
         | bongodongobob wrote:
          | > This is demonstrably untrue. CEOs are chomping at the bit
          | to reorganize their businesses around AI - as in, AI doing
          | things humans used to do and getting the same effective
          | results or better - so they can reduce staff across the
          | board while supposedly maintaining the same output or better.
         | 
          | Nah. Maybe tech CEOs. Companies are blocking AI wholesale at
          | the direction of their security teams, and/or only allowing
          | an instanced version of MS Copilot, if anything. Other than
          | writing emails, it doesn't do much for the average office
          | worker, and we all know it.
         | 
         | The value is going to be the apps that build on AI, as you
         | said.
        
           | borski wrote:
            | > Companies are blocking AI wholesale at the direction of
            | > their security teams
           | 
           | What companies?
        
             | epistasis wrote:
             | I know many IP-heavy and health-centric companies are
             | blocking AI use severely. For example, pharma depends on
             | huge amounts of secrecy and does not want _any_ data leaked
             | to OpenAI, and often has barely-competent IT and security
              | staff that don't know what "threat model" means. Those who
             | deal with controlled health data also block with a heavy
             | hand.
        
               | no_wizard wrote:
                | I imagine it'll take time for any of this tech to
                | permeate. The areas with a lower barrier to entry will
                | see adoption faster - as is usually the case with new
                | tech - but it'll make its way eventually. On-premise
                | AI will be a thing.
        
               | borski wrote:
                | Once upon a time, they blocked Docker too. Things change.
        
           | no_wizard wrote:
            | It certainly isn't "maybe" - look at the recent Shopify
            | memo leak and the way that lots of companies are talking
            | about AI.
            | 
            | Any company with any sort of large customer service
            | presence is looking at AI to start replacing a lot of
            | customer service roles, for example. There is huge demand
            | for this across many industries, not only tech. Whether it
            | actually delivers is the question, but the demand is there.
        
           | warkdarrior wrote:
           | Claiming these AIs "don't do much" overlooks the very real
           | productivity gains already happening - automating tedious
           | tasks and accelerating content creation. This isn't trivial
            | and will lead to deeper integrations and streamlined
           | (read: downsized) workforces. The reorganization isn't a
           | distant fantasy; it's already here.
        
         | o1inventor wrote:
         | One of the possible alternative routes is this:
         | 
          | Model providers and model labs stop open-sourcing/publishing
          | their innovations/papers and start patenting instead.
        
       | nc wrote:
        | One thing this article gets wrong is the idea that OpenAI isn't
        | an application-layer company: they built the original ChatGPT
        | "app" with model innovation to power it. They're good at UX and
        | actually have the strongest shot at owning the most common apps
        | (like codegen).
        
         | mattmanser wrote:
         | I personally find their UX frustrating, basically a junior
         | developer's attempt at doing a front end. What do you think is
         | so good about it?
         | 
         | It's also janky as hell and crashes regularly.
        
           | monoid73 wrote:
            | I think the UX of ChatGPT works because it's familiar, not
            | because it's good. It lowers friction for new users but
            | doesn't scale well for more complex workflows. If you're
            | building anything beyond Q&A or simple tasks, you run into
            | limitations fast. There's still plenty of space for apps
            | that treat the model as a backend and build real
            | interaction layers on top -- especially for use cases that
            | aren't served by a chat metaphor.
        
         | bilbo0s wrote:
         | I don't disagree. But that's a pretty good reason to make sure
         | you're making something _other_ than the obvious common apps if
         | you want a big chunk of acquisition money.
        
       | xnx wrote:
        | There's a lot of opportunity to apply leading-edge AI models to
       | specific business applications, but success here is determined
       | more by experience with those business domains than with AI
       | generally.
       | 
        | An AI startup could still be a useful "resume" to get
        | acqui-hired by one of the big players.
        
         | lenerdenator wrote:
         | I think too many people are focused on the idea of AGI instead
         | of doing what you're suggesting, which is where the _real_
         | value-add is for customers.
         | 
          | I don't need God in a datacenter. I need help diagnosing an
          | Elasticsearch problem.
        
           | riku_iki wrote:
            | It's just that it's not easy for outsiders to break into a
            | specific space that's already occupied.
        
       | dismalaf wrote:
       | The LLM space was never going to be kind to those without deep
       | pockets. And right now there's no point getting in it because
       | it's hit a wall. So yeah, startups should steer clear of trying
        | to make frontier LLMs.
       | 
       | On the other hand, there's a ton of hype and money looking for
       | the next AI related thing. If someone creates the next
       | transformer, or a different AI paradigm that pushes things
       | forward, they'll get billions.
        
       | lemax wrote:
        | This take doesn't really highlight the fact that the most
        | competitive foundation model companies _are_ innovative
        | application builders. Anthropic and OpenAI are vying for
        | consumers to use their models by building these sorts of super
        | applications (ChatGPT, Claude) that can run code, plot graphs,
        | spin up text editors, create geographic maps, etc. These are
        | well-staffed and strategically important areas of their
        | businesses. There's competition to attract consumers to these
        | apps, and they will grow more capable and commoditize more
        | complements along the way. Who needs Jasper when you can edit
        | copy in ChatGPT, or an AI Python notebook app, or, now, Cursor?
        
       | imoreno wrote:
        | This focuses on the case where the acquirer seeks to capture
        | the value of the startup's business. But this is not always the
        | case: sometimes the startup is dubious, but a cash-rich
        | enterprise can purchase startups simply to eliminate potential
        | avenues of competition. They may not be interested in adding a
        | better product to their portfolio, only in quashing any nascent
        | attempts at building a better product so they can keep selling
        | their own mediocre one.
       | 
       | Also, "model innovation" strikes me as missing the point these
       | days. The models are really good already. The majority of
       | applications is capturing only a tiny bit of their value.
       | Improving the models is not that important because model
       | capability is not the bottleneck anymore, what matters is how the
       | model is used. We just don't have enough tools to use them fully,
       | and what we have is not even close to penetrating the market,
       | while all the dominant tools are garbage. Of course application
       | innovation is the place to be!
        
       | blitzar wrote:
        | Every startup should generate a shitty AI wrapper product,
        | write one or two lines of code, generate hype, and have 2025's
        | version of SoftBank give them a billion dollars.
        | 
        | Frankly, it's bordering on irresponsible not to be targeting
        | acquisition in this climate.
        
       | Bloating wrote:
       | 1) Collect Underpants
       | 
       | 2) ?
       | 
       | 3) Profit!
        
       | stuart_real wrote:
       | The fact that a VSCode-based GPT-wrapper is being offered $3B
       | tells you how desperate the LLM companies are.
       | 
       | Anthropic and xAI will also make similar acquisitions to increase
       | their token usage.
        
       | paulsutter wrote:
       | Work just to be a part of it. This is the most consequential time
       | in history.
       | 
       | It's the best time ever to build. Don't work on anything that
       | could have been done two years ago.
       | 
       | Learn the current tools - so that you can adapt to the new tools
       | that much faster as they come out.
        
       ___________________________________________________________________
       (page generated 2025-04-24 23:01 UTC)