[HN Gopher] Microsoft drops AI sales targets in half after sales...
___________________________________________________________________
Microsoft drops AI sales targets in half after salespeople miss
their quotas
Author : OptionOfT
Score : 333 points
Date : 2025-12-04 15:31 UTC (7 hours ago)
(HTM) web link (arstechnica.com)
(TXT) w3m dump (arstechnica.com)
| jqpabc123 wrote:
| _AI agent technology likely isn't ready for the kind of high-
| stakes autonomous business work Microsoft is promising._
|
| It's unbelievable to me that tech leaders lack the insight to
| recognize this.
|
| So how do we explain the current AI mania being so widely
| promoted?
|
| I think the best-fit explanation is simple con artistry. They
| know the product is fundamentally flawed and won't perform as
| promised. But the money to be made selling the fantasy is
| simply too good to ignore.
|
| In other words --- pure greed. Over the longer term, this is a
| weakness, not a strength.
| ahartmetz wrote:
| Imagine your supplier effectively telling you that they don't
| even value you (and your money) enough to bother a real human.
| jollyllama wrote:
| They've gotten away with shipping garbage for years and still
| getting paid for it. They think we're all stupid.
| jqpabc123 wrote:
| _They think we're all stupid._
|
| As time goes by, I'm starting to think they may be right more
| than they're wrong.
|
| And this is a sad and depressing statement about humanity.
| somenameforme wrote:
| Not really. It's just that the point you have to push
| people to get them to start pushing back on something tends
| to be quite high. And it's very different for different
| people on different topics.
|
| In the past this wasn't such a big deal because businesses
| weren't so large or so frequently run by myopic sociopaths.
| Ebenezer Scrooge was running some small local business, not
| a globe spanning empire entangling itself with government
| and then imposing itself on everybody and everything.
| sallveburrpi wrote:
| Scrooge is a fictional person, and Microsoft have been
| getting away with it for as long as I've been alive, with
| people hating them probably just as long. So I think GP
| definitely has a point.
| stocksinsmocks wrote:
| Given that they aren't meeting their sales targets at all, I
| guess that's a little bit encouraging about the discernment
| of their customers. I'm not sure how Microsoft
| has managed to escape market discipline for so long.
| SAI_Peregrinus wrote:
| Their customers largely aren't their users. Their customers
| are the purchasing departments at Dell, Lenovo, and other
| OEMs. Their customers are the purchasing departments at
| large enterprises who want to buy Excel. Their customers
| are the advertisers. The products where the customers and
| the users are the same people (Excel, MS flight simulator,
| etc.) tend to be pretty nice. The products where the
| customers aren't the users inevitably turn to shit.
| thewebguyd wrote:
| > I'm not sure how Microsoft has managed to escape market
| discipline for so long.
|
| How would they? They are a monopoly, and partake in
| aggressive product bundling and price manipulation tactics.
| They juice their user numbers by enabling things in
| enterprise tenants by default.
|
| If a product of theirs doesn't sell, they bundle it for
| "free" in the next tier up of license to drive adoption and
| upgrades. Case in point: the Intune suite (includes Entra ID
| P2, remote assistance, endpoint privilege management) will
| now be included in E5, and the price of E5 is going up (by
| $10/user/month, less than the now bundled features cost
| when bought separately). People didn't buy it otherwise, so
| now there's an incentive to move customers off E3 and into
| E5.
|
| Now their customers are in a place where Microsoft can
| check boxes, even if the products aren't good, so there's
| little incentive to switch.
|
| Try to price out Google Workspace (and also, an office
| license still because someone will need Excel), Identity,
| EDR, MDM for Windows, mac, mobile, slack, VoIP, DLP, etc.
| You won't come close to Microsoft's bundled pricing by
| piecing together the whole M365 stack yourself.
|
| So yeah, they escape market discipline because they are the
| _only_ choice. Their customers are fully captive.
| zdragnar wrote:
| I was just in a thread yesterday with someone who genuinely
| believed that we're only seeing the beginnings of what the
| current breed of AI will get us, and that it's going to be as
| transformative as the introduction of the internet was.
|
| Everything about the conversation felt like talking to a true
| believer, and there's plenty out there.
|
| It's the hopes and dreams of the Next Big Thing after
| blockchain and web3 fell apart and everyone is desperate to
| jump on the bandwagon because ZIRP is gone and everyone who is
| risk averse will only bet on what everyone else is betting on.
|
| Thus, the cycle feeds itself until the bubble pops.
| MengerSponge wrote:
| All these boosters think we're on the leading edge of an
| exponential, when it's far more likely that we're somewhere
| between the midpoint and tail of a logistic curve.
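The exponential-vs-logistic point can be sketched numerically. This is a minimal illustration with made-up parameters (L=1, k=1, t0=0), not tied to any real AI metric: far to the left of its midpoint, a logistic curve is approximately L*exp(k*(t - t0)), so from the inside, "early on an exponential" and "early on a logistic" are indistinguishable.

```python
import math

def logistic(t, L=1.0, k=1.0, t0=0.0):
    """Standard logistic curve: L / (1 + exp(-k * (t - t0)))."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Well left of the midpoint t0, the denominator 1 + exp(-k(t - t0))
# is dominated by the exponential term, so the curve behaves like
# L * exp(k * (t - t0)): pure exponential growth.
for t in (-6.0, -5.0, -4.0):
    exact = logistic(t)
    approx = math.exp(t)  # exponential approximation for L=1, k=1, t0=0
    rel_err = abs(exact - approx) / exact
    print(f"t={t}: logistic={exact:.6f} exp={approx:.6f} rel_err={rel_err:.4f}")
```

At t = -6 the relative error between the two curves is about e^-6, roughly 0.25%, which is why early data alone cannot tell the two regimes apart.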
| rm_-rf_slash wrote:
| AI research has always been a series of occasional great
| leaps between slogs of iterative improvements, from Turing
| and Rosenblatt to AlexNet and GPT-3. The LLM era will
| result in a few things becoming invisible architecture* we
| stop appreciating and then the next big leap starts the
| hype cycle anew.
|
| *Think toll booths ("exact change only!") replaced by
| automated license plate readers in just the span of a
| decade. Hardly noticeable now.
| empath75 wrote:
| Two things can be true:
|
| 1) We have barely scratched the surface of what is possible
| to do with existing AI technology.
|
| 2) Almost all of the money we are spending on AI now is
| ineffectual and wasted.
|
| ---
|
| If you go back to the late 1990s, that is the state that most
| companies were at with _computers_. Huge, wasteful projects
| that didn't improve productivity at all. It took 10 years of
| false starts sometimes to really get traction.
| rizzom5000 wrote:
| It's interesting to think that Microsoft was around back
| then too; it took them approximately 14 years to recover
| from losing approximately 58% of their valuation.
| mikkupikku wrote:
| > _" someone who genuinely believed that we're only seeing
| the beginnings of what the current breed of AI will get us,
| and that it's going to be as transformative as the
| introduction of the internet was."_
|
| I think that. It's new technology and it always takes some
| years before all the implications and applications of new
| technology are fully worked out. I also think that we're in a
| bubble that will hose a lot of people when it pops.
| treis wrote:
| I don't see how people don't see it. LLMs are a revolutionary
| technology and, for the first time since the iPhone, are
| changing how we interact with computers. This isn't
| blockchains. This is something we're going to use until
| something better replaces it.
| solumunus wrote:
| I agree to some extent, but we're also in a bubble. It
| seems completely obvious that huge revenue numbers aren't
| around the corner, not enough to justify the spend.
| Gormo wrote:
| > In other words --- pure greed.
|
| _Pure_ greed would have a strong incentive to understand what
| the market is actually demanding in order to maximize profits.
|
| These attempts to try to steer demand despite clear indicators
| that it doesn't want to go in that direction aren't just driven
| by greed, they're driven by abject incompetence.
|
| This isn't pure greed, it's _stupid_ greed.
| wubrr wrote:
| Pure greed is stupid greed.
|
| Also, if the current level of AI investment and valuations
| isn't justified by market demand (which I believe is the
| case), many of these people/companies are getting more money
| than they would without the unreasonable hype.
| paganel wrote:
| > Pure greed would have a strong incentive to understand what
| the market is actually demanding in order to maximize
| profits.
|
| Not necessarily; just look at this clip [1] from _Margin
| Call_, an excellent movie on the GFC. As Jeremy Irons says
| in that clip, the market (as usually understood in classical
| economics, with producers making things for clients/customers
| to purchase) is of no importance to today's market economy.
| Almost all that matters, at the hundreds-of-billions to
| multi-trillion-dollar level, is for your company "to play
| the music" as well as the other (necessarily very big)
| market participants, "nothing more, nothing less" (again, to
| quote Irons in that movie).
|
| There's nothing in it about "making what people/customers
| want" and all that, which is regarded as secondary, if it is
| taken into consideration at all. As another poster mentions
| in this thread, this is all the direct result of the
| financialization of much of the Western economy; this is how
| things work at this level, given these (financialized)
| inputs.
|
| [1] https://www.youtube.com/watch?v=UOYi4NzxlhE
| binary132 wrote:
| you seem to be committing the error of believing that the
| problem here is just that they're not selling what people
| want to buy, instead of identifying the clear intention to
| _create_ the market.
| empath75 wrote:
| It's not "fundamentally flawed". It is brilliant at what it
| does. What is flawed is how people are applying it to solve
| specific problems. It isn't a "do anything" button that you can
| just push. Every problem you apply AI to still has a ton of
| engineering work that needs to be done to make it useful.
| stingraycharles wrote:
| You're correct, you need to learn how to use it. But for some
| reason HN has an extremely strong anti-AI sentiment, unless
| it's about fundamental research.
|
| At this point, I consider these AI tools to be an invaluable
| asset to my work in the same way that search engines are.
| It's integrated into my work. But it takes practice on how to
| use it correctly.
| rtp4me wrote:
| My suspicion is that they (HN) are very concerned this
| technology is pushing hard into their domain expertise and
| feel threatened (rightfully so).
| seanw444 wrote:
| While it will suck when that happens (and inevitably it
| will), that time is not now. I'm not one to say LLMs are
| useless, but they aren't all they're being marketed to
| be.
| LtWorf wrote:
| Or they might know better than you. A painful idea.
| rtp4me wrote:
| Painful? What's painful when someone has a different
| opinion? I think that is healthy.
| bigstrat2003 wrote:
| > for some reason HN has an extremely strong anti-AI
| sentiment
|
| It's because I've used it and it doesn't come even close to
| delivering the value that its advocates claim it does.
| Nothing mysterious about it.
| ToValueFunfetti wrote:
| I think what it comes down to is that the advocates
| making false claims are relatively uncommon on HN. So,
| for example, I don't know what advocates you're talking
| about here. I know people exist who say they can vibe-
| code quality applications with 100k LoC, or that guy at
| Anthropic who claims that software engineering will be a
| dead profession in the first half of '26, and I know that
| these people tend to be the loudest on other platforms. I
| also know sober-minded people exist who say that LLMs
| save them a few hours here and there per week trawling
| documentation, writing a 200 line SQL script to seed data
| into a dev db, or finding some off-by-one error in a
| haystack. If my main or only exposure to AI discourse was
| HN, I would really only be familiar with the latter group
| and I would interpret your comment as very biased against
| AI.
|
| Alternatively, you are referring to the latter group and,
| uh, sorry.
| mrob wrote:
| There is no scenario where AI is a net benefit. There are
| three possibilities:
|
| 1. AI does things we can already do but cheaper and worse.
|
| This is the current state of affairs. Things are mostly the
| same except for the flood of slop driving out quality. My
| life is moderately worse.
|
| 2. Total victory of capital over labor.
|
| This is what the proponents are aiming for. It's disastrous
| for the >99% of the population who will become economically
| useless. I can't imagine any kind of universal basic income
| when the masses can instead be conveniently disposed of
| with automated killer drones or whatever else the victors
| come up with.
|
| 3. Extinction of all biological life.
|
| This is what happens if the proponents succeed better than
| they anticipated. If recursively self-improving ASI pans
| out then nobody stands a chance. There are very few goals
| an ASI can have that aren't better accomplished with
| everybody dead.
| ToValueFunfetti wrote:
| What is the motivation for killing off the population in
| scenario 2? That's a post-scarcity world where the elites
| can have everything they want, so what more are they
| getting out of mass murder? A guilty conscience,
| potentially for some multiple of human lifespans?
| Considerably less status and fame?
|
| Even if they want to do it for no reason, they'll still
| be happier if their friends and family are alive and
| happy, which recurses about 6 times before everybody on
| the planet is alive and happy.
| mrob wrote:
| It's not a post-scarcity world. There's no obvious upper
| bound on resources AGI could use, and there's no obvious
| stopping point where you can call it smart enough. So
| long as there are other competing elites, the incentive is
| to keep improving it. All the useless people will be
| using resources that could be used to make more
| semiconductors and power plants.
| dbspin wrote:
| I'd consider hallucinations to be a fundamental flaw that
| sets hard limits on the current utility of LLMs in any
| context.
| SoftTalker wrote:
| I thought this for a while, but I've also been thinking
| about all the stupid, false stuff that actual humans
| believe. I'm not sure AI won't get to a point where, even if
| it's not perfect, it's no worse than people are about
| selectively observing policies, having wrong beliefs about
| things, or just making something up when they don't know.
| bigstrat2003 wrote:
| > Every problem you apply AI to still has a ton of
| engineering work that needs to be done to make it useful.
|
| Ok, but that isn't useful to me. If I have to hold the bot's
| hand to get stuff done, I'll just _do it myself_ , which will
| be both faster and higher quality.
| solumunus wrote:
| That's not my experience at all, I'm getting it done much
| faster and the quality is on par. It's hard to measure, but
| as a small business owner it's clear to me that I now
| require fewer new developers.
| nightski wrote:
| I think on some level it is being done on the premise that
| further advancement requires an enormous capital investment,
| and that if they can find a way to fund it with today's
| sales, the tech will have the opportunity to get there
| (quite a gamble).
| giancarlostoro wrote:
| I have a feeling that Microsoft is setting themselves up for
| a serious antitrust lawsuit if they do what they intend to.
| They should really be careful about introducing products
| into the OS that take business away from all other AI shops.
| I fear this would also cripple innovation if allowed, since
| Microsoft has a drastically fatter wallet than most of their
| competition.
| delfinom wrote:
| Under the current US administration the only thing Microsoft
| is getting is numerous piles of taxpayer bailouts.
| shevy-java wrote:
| Corruption is going strong in the current corporate-
| controlled US group of lame actors posing as government. At
| the least, Trump is now regularly falling asleep - that's
| the best evidence that you can use any surrogate puppet and
| the underlying policies will still continue.
| eastbound wrote:
| If I mention a president who was more of a general
| secretary of the party, taking notes of decisions taken
| for him by lobbies from the largest corporations, falling
| asleep and having incoherent speech to the point that he
| seems to be way past the point of stroke, I don't think
| anyone will guess Trump.
| dreamcompiler wrote:
| There's no such thing as antitrust in the US right now.
| Google's recent slap on the wrist is all the proof you need.
| Eddy_Viscosity2 wrote:
| Trump has ushered in a truly lawless phase of American
| politics. I mean, it was kind of bad before, but at least
| there was a pretense of rule of law. A trillion dollar
| company can easily just buy its way out of any enforcement of
| such antitrust action.
| danans wrote:
| > So how to explain the current AI mania being widely promoted?
|
| > I think the best fit explanation is simple con artistry.
|
| Yes, perhaps, but many industries are built on a little bit of
| technology and a lot of stories.
|
| I think of it as us all being caught in one giant infomercial.
|
| Meanwhile, as long as investors buy the hype, it's a great
| story to use for trimming payrolls.
| h2zizzle wrote:
| It's part of a larger economic con centered on the financial
| industry and the financialization of American industry. If you
| want this stuff to stop, you have to be hoping for (or even
| working toward) a correction that wipes out the incumbents
| who absolutely are working to maintain the masquerade.
|
| It will hurt, and they'll scare us with the idea that it will
| hurt, but the secret is that we get to choose where it hurts -
| the same as how they've gotten to choose the winners and losers
| for the past two decades.
| hereme888 wrote:
| It's like when a child doesn't want something, you "give them
| a choice": would you like to put on your red or white shoes?
| tech_ken wrote:
| > correction that wipes out the incumbents who absolutely are
| working to maintain the masquerade
|
| You need to also have a robust alternative that grows quickly
| in the cleared space. In 2008 we got a correction that
| cleared the incumbents, but the ensuing decade of policy
| choices basically just allowed the thing to re-grow in a new
| form.
| h2zizzle wrote:
| I thought we pretty explicitly bailed out most of the
| incumbents. A few were allowed to be sacrificed, but most
| of the risk wasn't realized, and instead rolled into new
| positions that diffused it across the economy. 2008's
| "correction" should have seen the end of most of our
| investment banks and auto manufacturers. Say what you want
| to about them (and I have no particular love for either),
| but Tesla and Bitcoin are ghosts of the timeline where
| those two sectors had to rebuild themselves from scratch.
| There should have been more, and Goldman Sachs and GM et
| al. should not currently exist.
| tech_ken wrote:
| > A few were allowed to be sacrificed, but most of the
| risk wasn't realized, and instead rolled into new
| positions that diffused it across the economy.
|
| Yeah that's a more accurate framing, basically just
| saying that in '08 we put out the fire and rehabbed the
| old growth rather than seeding the fresh ground.
|
| > Tesla and Bitcoin are ghosts of the timeline where
| those two sectors had to rebuild themselves from scratch
|
| I disagree, I think they're artifacts of the rehab
| environment (the ZIRP policy sphere). I think in a world
| where we fully ate the loss of '08 and started in a new
| direction you might get Tesla, but definitely not TSLA,
| and the version we got is really (Tesla+TSLA) IMO.
| Bitcoin to me is even less of a break with the pre-08
| world; blockchain is cool tech, but Bitcoin looks very much
| like "Financial Derivatives, Online". I think an honest
| correction to '08 would have been far more of a focus on
| "hard tech and value finance", rather than inventing new
| financial instruments even further distanced from the
| value-generation chain.
|
| > Goldman Sachs and GM et al. should not currently exist.
|
| Hard agree here
| h2zizzle wrote:
| I would say yes and no on Tesla. Entities that survived
| because of the rehab environment actually expected it to
| fail, and shorted it heavily. TSLA as it currently exists
| is a result of the short squeeze on the stock that ensued
| when it became clear that the company was likely to
| become profitable. Its current, ridiculous valuation
| isn't a product of its projected earnings, but recoil
| from those large shorts blowing up.
|
| In our hypothetical alternate timeline, I imagine that
| there would have still been capital eager to fill the
| hole left by GM, and possibly Ford. Perhaps Tesla would
| have thrived in that vacuum, alongside the likes of
| Fisker, Mullen, and others, who instead faced incumbent
| headwinds that sunk their ventures.
|
| Bitcoin, likewise, was warped by the survival of
| incumbents. IIUC, those interests influenced governance
| in the early 2010s, resulting in a fork of the project's
| original intent from a transactional medium that would
| scale as its use grew, to a store of value, as controlled
| by them as traditional currencies. In our hypothetical,
| traditional banks collapsed, and even survivors lost all
| trust. The trustless nature of Bitcoin, or some other
| cryptocurrency, maybe would have allowed it to supersede
| them. Deprived of both retail and institutional deposits,
| they simply did not have the capital to warp the crypto
| space as they did in the actual 2010s.
|
| I call them "ghosts" because, yes, whatever they might
| have been, they're clearly now just further extensions of
| that pre-2008 world, enabled by our post-2008
| environment (including ZIRP).
| vjvjvjvjghv wrote:
| "In 2008 we got a correction that cleared the incumbents,"
|
| I thought in 2008 we told the incumbents "you are the most
| important component of our economy. We will allow everybody
| to go down the drain but you. That's because you caused the
| problem, so you are the only ones to guide us out of it"
| mason_mpls wrote:
| This assumes fair competition in the tech industry, which
| evaporated, with no path of return, years ago.
| jbs789 wrote:
| Looking forward to the OpenAI (and Anthropic) IPOs. It's
| funny to me that this info is being "leaked" - they are
| sussing out the demand. If they wait too long, they won't be
| able to pull off the caper (at these valuations). And we will
| get to see who has staying power.
|
| It's obvious to me that all of OpenAI's announcements about
| partnerships and spending are gearing up for this. But I do
| wonder how Altman retains the momentum through to next year.
| What's the next big thing? A rocket company?
| the__alchemist wrote:
| Hell yes! Would love to short.
| cmiles8 wrote:
| Increasing signs the ship has sailed on the IPO window for
| these folks but let's see.
| piva00 wrote:
| > But I do wonder how Altman retains the momentum through
| to next year. What's the next big thing? A rocket company?
|
| Hmm, there was news about Sam Altman wanting to buy/invest
| in a rocket company. [0]
|
| [0] https://www.wsj.com/tech/ai/sam-altman-has-explored-
| deal-to-...
| ranger207 wrote:
| How do you guarantee your accelerationism produces the right
| results after the collapse? If the same systems of regulation
| and power are still in place then it would produce the same
| result afterwards
| pjmlp wrote:
| Yeah, it started with the whole Wall Street thing, with all
| the depression and wars it brought, and it hasn't stopped.
| At each cycle the curve has to go up, with exponential
| expectations of growth, until it explodes, taking the world
| economy to the ground.
| LeifCarrotson wrote:
| Agreed! I recently listened to a podcast (video) from the
| "How Money Works" channel on this topic:
|
| "How Short Term Thinking Won" - https://youtu.be/qGwU2dOoHiY
|
| The author argues that this con has been caused by three
| relatively simple levers: Low dividend yields, legalization
| of stock buybacks, and executive compensation packages that
| generate lots of wealth under short pump-and-dump timelines.
|
| If those are the causes, then simple regulatory changes to
| make stock buybacks illegal again, limit the kinds of
| executive compensation contracts that are valid, and
| incentivize higher dividend yields/penalize sales yields
| should return the market to the previous long-term-optimized
| behavior.
|
| I doubt that you could convince the politicians and
| financiers who are currently pulling value out of a fragile
| and inefficient economy under the current system to make
| those changes, and if the changes were made I doubt they
| could last or be enforced given the massive incentives to
| revert to our broken system. I think you're right that it
| will take a huge disaster that the wealthy and powerful are
| unable to dodge and unable to blame on anything but their own
| actions, I just don't know what that event might look like.
| baggachipz wrote:
| One need only look at 1929 to understand what's in store.
| Of course, the rich/powerful will say "who could have seen
| this coming?"
| robot-wrangler wrote:
| Stupidity, greed, and straight-up evil intentions do a
| bunch of the work, but ultimately short-term thinking wins
| because it's an attractor state. The influence of the
| wealthy/powerful is always outsized, but attractors and
| common-knowledge also create a natural conspiracy that
| doesn't exactly have a center.
|
| So with AI, the way the natural conspiracy works out is
| like this. Leaders at the top might suspect it's bullshit,
| but don't care, they always fail upwards anyway. Middle
| management at non-tech companies suspect their jobs are in
| trouble on _some_ timeline, so they want to "lead a
| modernization drive" to bring AI to places they know don't
| need it, even if it's a doomed effort that basically
| defrauds the company owners. Junior engineers see a tough
| job market, want to devalue experience to compete.. decide
| that only AI matters, everything that came before is the
| old way. Owners and investors hate expensive senior
| engineers who don't have to bow and scrape, think they have
| too much power, and would love to put them in their place.
| Senior engineers who are employed, and maybe the most clear-
| eyed about the actual capabilities of the technology, see
| the writing on the wall.. you _have_ to make this work even
| if it's handed to you in a broken state, because literally
| everyone is gunning for you. Those who are unemployed are
| looking around like well.. this is apparently the game one
| must play. Investors will invest in any horrible doomed
| thing regardless of what it is because they all think they
| are smarter than other investors and will get out in just
| in time. Owners are typically too disconnected from
| whatever they own, they just want to exit/retire and
| already mostly in the position of listening to lieutenants.
|
| At every level, for every stakeholder, once things have
| momentum they don't need to be a
| healthy/earnest/noble/rational endeavor any more than the
| advertising or attention economy did before. Regardless of
| the ethics there or the current/future state of any specific
| tech.. it's a huge problem when being locally rational pulls
| us into a state that's globally irrational.
| LeifCarrotson wrote:
| Yes, that "attractor state" you describe is what I meant
| by "if the changes were made I doubt they could last or
| be enforced given the massive incentives to revert to our
| broken system". The older I get and the more I learn, the
| less I'm willing to ascribe faults in our society to
| individual evils or believe in the existence of
| intentionally concealed conspiracies rather than just
| seeing systemic flaws and natural conspiracies.
| h2zizzle wrote:
| I disagree. Those place the problem at the corporate level,
| when it's clearly extended through to being a monetary
| issue. The first thing I would like to see is the various
| Fed and banking liquidity and credit facilities go away.
| They don't facilitate stability, but a fiscal shell game
| that has allowed numerous zombie companies to live far past
| their solvency. This in turn encourages widespread fiscal
| recklessness.
|
| We're headed for a crunch anyway. My observation is that a
| controlled demolition has been attempted several times over
| the past few years, but in every instance, someone has
| stepped up to cry about the disaster that would occur if
| incumbents weren't shored up. Of course, that just makes
| the next occurrence all the more dire.
| somat wrote:
| What is wrong with stock buybacks?
|
| Genuine question. I don't understand the economics of the
| stock market, and as such I participate very little
| (probably to my detriment). I sort of figure the original
| theory went like this:
|
| "We have an idea to run a for profit endeavor but do not
| have money to set it up. If you buy from us a portion of
| our future profit we will have the immediate funds to set
| up the business and you will get a payout for the
| indefinite future."
|
| And the stock market is for third party buying and selling
| of these "shares of profit"
|
| Under these conditions, aren't all stocks a sort of
| millstone of perpetual debt for the company, and wouldn't it
| behoove them to remove that debt, that is, buy back the
| stock? Naively, I assume this is a good thing.
| oblio wrote:
| My view is that you don't want more layers. Chasing ever-
| increasing share prices favors shareholders (a limited
| number of generally rich people) over customers (likely to
| be average people). The incentives get out of whack.
| tokioyoyo wrote:
| There was a long-standing illusion that people care about
| long-term thinking. But given the opportunity, people seem
| to take the short-term road with high risks, instead of
| chasing a long-term gain, as they, themselves, might not
| experience the gain.
|
| The timeframe of expectations has just shifted, as
| everyone wants to experience everything. Just knowing the
| possibility of things that can happen already affects our
| desires. And since everyone has a limited time in life, we
| try to maximize our opportunities to experience as many
| things as possible.
|
| It's interesting to talk about this with the older generation
| (like my parents in their 70s), because there wasn't such a
| rush back then. I took my mom out to some cities around the
| world, and she mentioned how she really never even dreamed
| of a possibility of being in such places. On the other
| hand, when you grow up in a world of technically unlimited
| possibilities, you have more dreams.
|
| Sorry for rambling, but in my opinion this somewhat affects
| the economics of the new generation as well. Who cares about
| long-term gains if there's a chance of nobody experiencing
| the gain? Might as well risk it on the short-term one for a
| possibility of some reward.
| theflyinghorse wrote:
| The problem with "it will hurt" is that it will actually
| hurt the middle class by completely wiping it out, and maybe
| slightly inconvenience the rich. More like annoy the rich,
| really.
| bluefirebrand wrote:
| > you have to be hoping (or even working toward) a correction
| that wipes out the incumbents who absolutely are working to
| maintain the masquerade.
|
| I'm not hoping for a market correction personally, I'm hoping
| that mobs reinvent the guillotine
|
| They deserve nothing less by now. If they get away with
| nothing worse than "a correction" then they have still made
| out like bandits
| h2zizzle wrote:
| I tend to agree, but there's something to be said for a
| retribution focus taking time and energy away from problem-
| solving. When market turmoil hits, stand up facilities to
| guarantee food and healthcare access, institute a
| nationwide eviction moratorium, and then let what remains
| of the free market play out. Maybe we pursue justice by
| actually prosecuting corporate malfeasance this time. The
| opposite of 2008.
| stingraycharles wrote:
| Don't attribute to malice that which can equally be
| attributed to incompetence.
|
| I think you're over-estimating the capabilities of these tech
| leaders, especially when the whole industry is repeating the
| same thing. At that point, it takes a lot of guts to say "No,
| we're not going to buy into the hype, we're going to wait and
| see" because it's simply a matter of corporate politics: if AI
| fails to deliver, it fails to deliver for _everyone_ and the
| people that bought into the hype can blame the consultants /
| whatever.
|
| If, however, AI ended up delivering and they missed the boat,
| they're going to be held accountable.
|
| It's much less risky to just follow industry trends. It
| takes a lot of technical knowledge, guts, and confidence in
| your own judgement to push back against an industry-wide
| trend at that level.
| foobarchu wrote:
| > if AI fails to deliver, it fails to deliver for everyone
| and the people that bought into the hype can blame the
| consultants / whatever.
|
| Understatement of the year. At this point, if AI fails to
| deliver, the US economy is going to crash. That would not be
| the case if executives hadn't bought in so hard earlier on.
| saubeidl wrote:
| And if it does deliver, everyone's gonna be out of a job
| and the US economy is _also_ going to crash.
|
| Nice cul-de-sac our techbro leaders have navigated us into.
| rwyinuse wrote:
| Yep, either way things are going to suck for ordinary
| people.
|
| My country has had a bad economy and high unemployment for
| years, even though the rest of the world is doing mostly
| OK.
| I'm scared to think what will happen once AI bubble
| either bursts or eats most white collar jobs left here.
| anjel wrote:
| Race to "Too big to fail" on hype and your losses are
| socialized
| tokioyoyo wrote:
| There's also a case that without the AI rush, US economy
| would look even weaker now.
| morkalork wrote:
| It's mass delusion
| Teever wrote:
| Ultimately it's a distinction without a difference.
| Maliciously stupid or stupidly malicious invariably leads to
| the same place.
|
| The discussion we should be having is how we can come
| together to remove people from power and minimize the
| influence they have on society.
|
| We don't have the carbon budget to let billionaires who
| conspire from island fortresses in Hawaii do this kind of
| reckless stuff.
|
| It's so dismaying to see these industries muster the capital
| and political resources to make these kinds of infrastructure
| projects a reality when they've done nothing comparable
| w.r.t. climate change.
|
| It tells me that the issue around the climate has always
| been a lack of will, not ability.
| avidiax wrote:
| I suspect that AI is in an "uncanny valley" where it is
| definitely good enough for some demos, but will fail pretty
| badly when deployed.
|
| If it works 99% of the time, then a demo of 10 runs is 90%
| likely to succeed. Even if it fails, as long as it's not
| spectacular, you can just say "yeah, but it's getting better
| every day!", and "you'll still have the best 10% of your
| human workers in the loop".
|
| When you go to deploy it, 99% is just not good enough. The
| actual users will be much more noisy than the demo executives
| and internal testers.
|
| When you have a call center with 100 people taking 100 calls
| per day, replacing those 10,000 calls with 99% accurate AI
| means you have to clean up after 100 bad calls per day. Some
| percentage of those are going to be really terrible, like the
| AI did reputational damage or made expensive legally binding
| promises. Humans will make mistakes, but they aren't going to
| give away the farm or say that InsuranceCo believes it's
| cheaper if you die. And your 99% accurate-in-a-lab AI isn't
| 99% accurate in the field with someone with a heavy accent on
| a bad connection.
|
| So I think that the parties all "want to believe", and to an
| untrained eye, AI seems "good enough" or especially "good
| enough for the first tier".
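The demo-versus-deployment arithmetic in the comment above can be checked directly; a minimal sketch (Python, using the comment's assumed figures of 99% accuracy, a 10-run demo, and 100 agents handling 100 calls each):

```python
# Odds that a 10-run demo of a 99%-accurate agent succeeds
# on every run (figures assumed from the comment above).
per_call_accuracy = 0.99
demo_runs = 10
demo_success = per_call_accuracy ** demo_runs
print(f"flawless demo odds: {demo_success:.1%}")  # ~90.4%

# At deployment scale the same 1% error rate becomes a
# daily cleanup job: 100 agents x 100 calls per day.
calls_per_day = 100 * 100
bad_calls = round(calls_per_day * (1 - per_call_accuracy))
print(f"bad calls per day: {bad_calls}")  # 100
```

So a vendor demo is very likely to look flawless while the same system generates a hundred failures a day in production, which is exactly the gap the comment describes.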
| gdulli wrote:
| Agreed, but 99% is being very generous.
| s1mplicissimus wrote:
| And that's for tasks it's actually suited for
| lawlessone wrote:
| >I suspect that AI is in an "uncanny valley" where it is
| definitely good enough for some demos
|
| Sort of a repost on my part, but the LLMs are all really
| good at marketing and other similar things that fool CEOs
| and executives. So they think it must be great at
| everything.
|
| I think that's what is happening here.
| inetknght wrote:
| > _Don't attribute to malice that which can equally be
| attributed to incompetence._
|
| At this point I think it might actually be _both_ rather than
| just one or the other.
| bwfan123 wrote:
| "Worldly wisdom teaches that it is better for reputation to
| fail conventionally than to succeed unconventionally." -
| Keynes.
|
| Convention here is that AI is the next sliced bread. And big-
| tech managers care about their reputation.
| bluefirebrand wrote:
| It's pretty pathetic that they can build a brand based on
| "doing the exact same thing everyone else is doing" though
| brazukadev wrote:
| > Don't attribute to malice that which can equally be
| attributed to incompetence.
|
| This discourse needs to die. Incompetence + lack of empathy
| is malice. Even competence in the scenario they want to
| create is malice. It's time to stop sugar-coating it.
| bluefirebrand wrote:
| > At that point, it takes a lot of guts to say "No, we're not
| going to buy into the hype, we're going to wait and see"
| because it's simply a matter of corporate politics
|
| Isn't that the whole mythos of these corporate leaders
| though? They are the ones with the vision and guts to cut
| against the fold and stand out among the crowd?
|
| I mean it's obviously bullshit, but you would think at least
| a couple of them actually would do _something_ to distinguish
| themselves. They all want to be Steve Jobs but none of them
| have the guts to even try to be visionary. It is honestly
| pathetic
| kevin_thibedeau wrote:
| What you have is a lot of middle managers imposing change
| with random fresh ideas. The ones that succeed rise up the
| ranks. The ones that fail are forgotten, leading to
| survivorship bias.
| ReptileMan wrote:
| It was the same with the cloud adoption. And I still think that
| cloud is expensive, wasteful and in the vast majority of cases
| not needed.
| dehrmann wrote:
| > In other words --- pure greed.
|
| It's the opposite; it's FOMO.
| bgwalter wrote:
| They want to exfiltrate the customers' data under the guise of
| getting better "AI" responses.
|
| No company or government in the EU should use this spyware.
| GoblinSlayer wrote:
| Fake it till you make it.
| dfedbeef wrote:
| outside of the recovery community, this is known as 'fraud'
| didibus wrote:
| Thing is, it's hard to predict what can be done and what
| breakthrough or minor tweak can suddenly open up an avenue for
| a profitable use-case.
|
| The cost of missing that opportunity is why they're heavily
| investing in AI, they don't want to miss the boat if there's
| going to be one.
|
| And what else would they do? What's the other growth path?
| binary132 wrote:
| this idea that AI is the only thing anyone could possibly do
| that might be useful has absolutely got to go
| layer8 wrote:
| > And what else would they do? What's the other growth path?
|
| Are you arguing that if LLMs didn't exist as a technology,
| they wouldn't find anything to do and collapse?
| rsynnott wrote:
| _Number would not go up sufficiently steeply_, would be the
| major concern, not collapse. Microsoft might end up valued
| as (whisper it) a normal mature stable company. That would
| be something like a quarter to a half what it's currently
| valued. For someone paid mostly in options, this is clearly
| a problem (and people at the top in these companies mostly
| _are_ compensated with options, not RSUs; if the stock
| price halves, they get _nothing_).
| solumunus wrote:
| The cost of the boat sinking is also very high and that's
| looking like the more likely scenario. Watching your
| competitors sink huge amounts of capital into a probably
| sinking boat is a valid strategy. The growth path they were
| already on was fine no?
| layer8 wrote:
| US technocapitalism is built on the premise of technological
| innovation driving exponential growth. This is why they are
| fixated on whatever provides an outlook for that. The risk that
| it might not work out is downplayed, because (a) they don't
| want to hazard _not_ being at the forefront in the event that
| it does work out, and (b) if it doesn't work out, nobody will
| really hold them accountable for it, not the least because
| everybody does it.
|
| With the mobile and cloud revolutions having run out of
| steam, AI is what promises the most growth by far, even if
| it is a dubious promise.
|
| It's a gamble, a bet on "the next big thing". Because they
| would never be satisfied with there not being another "big
| thing", or not being prominently part of it.
| binary132 wrote:
| Riding hype waves forever is the most polar opposite thing to
| "sustainable" that I can imagine
| apercu wrote:
| It's not just AI mania, it's been this way for over a decade.
|
| When I first started consulting, organizations were afraid
| enough of lack of ROI in tech implementations that projects
| needed an economic justification in order to be approved.
|
| Starting with cloud, leadership seemed to become rare, and
| everything was "us too!".
|
| After cloud it was data/data visualization, then it was
| over-hiring during Covid, then it was RTO, and now it's AI.
|
| I wonder if we will ever return to rationalization? The
| bellwether might be Tesla stock price (at a rational
| valuation).
| eastbound wrote:
| If rationalization comes back, everyone will talk like in
| Michael Moore's documentary about GM and Detroit. A manager's
| salary after half a career will be around $120k, like in an
| average bank, and that would be succeeding. I don't think we
| even imagine how much of a tsunami we've been surfing since
| 2000.
| Balinares wrote:
| > So how to explain the current AI mania being widely promoted?
|
| Probably individual actors have different motivations, but
| let's spitball for a second:
|
| - LLMs are genuinely a revolution in natural language
| processing. We can do things now in that space that were
| unthinkable single-digit years ago. This opens new opportunity
| spaces to colonize, and some might turn out quite profitable.
| Ergo, land rush.
|
| - Even if the new spaces are not that much of a value leap
| intrinsically, some may still end up obsoleting earlier-
| generation products pretty much overnight, and no one wants to
| be the next Nokia. Ergo, _defensive_ land rush.
|
| - There's a non-zero chance that someone somewhere will
| actually manage to build the tech up into something close
| enough to AGI to serve, which in essence means deprecating the
| labor class. The benefits (to that specific someone, anyway...)
| would be staggering enough to make that a goal worth pursuing
| even if the odds of reaching it are unclear and arguably quite
| low.
|
| - The increasingly leveraged debt that's funding the land
| rush's capex needs to be paid off somehow and I'll venture
| everyone knows that the winners will possibly be able to, but
| not everyone will be a winner. In that scenario, you really
| don't want to be a non-winner. It's kind of like that joke
| where you don't need to outrun the lions, you only need to
| outrun the other runners, except in this case the harder
| everyone runs, the bigger the lions become. (Which is a
| funny thought _now_, sure, but the feasting, when it comes,
| will be a bloodbath.)
|
| - A few, I'll daresay, have perhaps been huffing each other's
| farts too deep and too long and genuinely believe the words of
| ebullient enthusiasm coming out of their own mouths. That,
| and/or they think everyone's job except theirs is simple
| actually, and therefore just this close to being replaceable
| (which is a distinct flavor of fart, although coming from
| largely the same sources).
|
| So basically the mania is for the most part a natural
| consequence of what's going on in the overlap of the tech
| itself and the incentive structure within which it exists,
| although this might be a good point to remember that cancer and
| earthquakes too are natural. Either way, take care of
| yourselves and each other, y'all, because the ride is only
| going to get bouncier for a while.
| 12_throw_away wrote:
| > There's a non-zero chance that someone somewhere will
| actually manage to build the tech up into something close
| enough to AGI
|
| Bullshit
| adventured wrote:
| It's not "pure greed." It's keeping up with the Joneses. It's
| fear.
|
| There are three types of humans: mimics, amplifiers,
| originators. ~99% of the population are basic mimics, and
| they're always terrified - to one degree or another - of being
| out of step with the herd. The hyper mimicry behavior can be
| seen everywhere and at all times, from classrooms to Tiktok &
| Reddit to shopping behaviors. Most corporate leadership are
| highly effective mimics; very few are originators. They
| desperately follow the herd ('nobody ever got fired for
| buying IBM').
|
| This is the dotcom equivalent: every business had to be
| "e-" and "@"-ified (the advertising was aggressively
| targeted at that at the time). 1998-2000, you had to be
| e-ready. Your hotdog stand had to have its own web site.
|
| It is not greed-driven, it's fear-driven.
| Workaccount2 wrote:
| People think that because AI cannot replace a senior dev, it's
| a worthless con.
|
| Meanwhile, pretty much _every single person in my life is using
| LLMs almost daily_.
|
| Guys, these things are not going away, and people will pay
| more money to use them in the future.
|
| Even my mom asks ChatGPT to make a baking applet from a
| picture she uploads of the recipe, which creates a simple
| checklist for adding ingredients (she forgets ingredients
| pretty often). She loves it.
|
| This is where LLMs shine for regular people. She doesn't need
| it to create a 500k LOC turn-key baking tracking SaaS AWS back-
| end 5 million recipes on tap kitchen assistant app.
|
| She just needs a bespoke one off check list.
| trollbridge wrote:
| Is she going to pay enough to fund the multitrillion dollars
| it costs to run the current AI landscape?
| Workaccount2 wrote:
| Yeah, she is, because when reality sets in, these models
| will probably have monthly cellphone/internet level costs.
| And training is the main money sink, whereas inference is
| cheap.
|
| 500,000,000 people paying $80/mo is roughly a 5-yr ROI on a
| $2T investment.
|
| I cannot believe on a tech forum I need to explain the "Get
| them hooked on the product, then jack up the price"
| business model that probably 40% of people here are kept
| employed with.
|
| Right now they are (very successfully) getting everyone
| dependent on LLMs. They will pull the rug, and people will
| pay to get it back. And none of the labs care if 2% of
| people use local/Chinese models.
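The back-of-envelope payback math in the comment above can be verified directly (Python; the subscriber count, monthly price, and $2T investment figure are all the comment's assumptions, not reported numbers):

```python
# Assumed figures from the comment: 500M subscribers paying
# $80/mo against a $2T investment.
subscribers = 500_000_000
monthly_price = 80
investment = 2_000_000_000_000

# Annual revenue and simple payback period (ignoring
# inference costs, churn, and the time value of money).
annual_revenue = subscribers * monthly_price * 12  # $480B/yr
payback_years = investment / annual_revenue
print(f"annual revenue: ${annual_revenue / 1e9:.0f}B")
print(f"payback period: {payback_years:.1f} years")  # ~4.2
```

At those assumed numbers the investment does pay back in roughly the five years claimed; the contested part is whether half a billion people would actually pay $80 a month.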
| joshstrange wrote:
| I think there are 2 things at play here. LLMs are, without a
| doubt, absolutely useful/helpful but they have shortcomings
| and limitations (often worth the cost of using). That said,
| businesses trying to add "AI" into their products have a much
| lower success rate than LLM-use directly.
|
| I dislike almost every AI feature in software I use but love
| using LLMs.
| hagbarth wrote:
| > People think that because AI cannot replace a senior dev,
| it's a worthless con.
|
| Quite the strawman. There are many points between "worthless"
| and "worth 100s of billions to trillions of investment".
| billywhizz wrote:
| is this really the best use case you could come up with? says
| it all really if so.
| sensanaty wrote:
| Are your mother's cooking recipes gonna cover the billions
| and even trillions being spent here? I somehow doubt that,
| and it's funny to me that the killer usecase the hypesters
| use is stupid inane shit like this (no offense to your mom,
| but a recipe generator isn't something we should be
| speedrunning global economic collapse for)
| YY843792387 wrote:
| This false dichotomy is still frustratingly all over the
| place. LLMs are useful for a variety of benign everyday use
| cases; that doesn't mean they can replace a human at
| anything. And if those benign use cases are all they're good
| at, then the entire AI space right now is maybe worth
| $2B/year, tops. Which is still a good amount of money! Except
| that's roughly the amount of money OpenAI spends every
| minute, and it's definitely not "the next invention of fire"
| like Sam Altman says.
| ncr100 wrote:
| Use case == Next iteration of "You're Fired" may be more
| like it.
| keeda wrote:
| Even these everyday use-cases are infinitely varied and can
| displace entire industries. E.g. ChatGPT helped me get $500
| in airline delay compensation after multiple companies like
| AirHelp blew me off:
| https://news.ycombinator.com/item?id=45749803
|
| For reference, AirHelp alone had a revenue of $153M last
| year (even without my money ;-P):
| https://rocketreach.co/airhelp-profile_b5e8e078f42e8140
|
| This single niche industry as a whole is probably worth
| billions alone.
|
| Now multiply that by the number of niches that exist in
| this world.
|
| Then consider the entire universe of formal knowledge work,
| where large studies (from self-reported national surveys to
| empirical randomized controlled trials on real-world tasks)
| have already shown significant productivity boosts, in the
| range of 30%. Now consider their salaries, and how much
| companies would be willing to pay to make their employees
| more productive.
|
| Trillions is not an exaggeration.
| rsynnott wrote:
| I mean, see Windows Vista. It was eventually patched up to the
| point where it was semi-usable (and then quietly killed off),
| but on introduction it was a complete mess. But... something
| had to be shipped, and this was something, so it was shipped.
|
| (Vista wasn't the only one; Windows ME never even made it to
| semi-usable, and no-one even remembers that Windows 8
| _existed_.)
|
| Microsoft has _never_, as far as I know, been a company to be
| particularly concerned about product quality. The copilot stuff
| may be unusually bad, but it's not that aberrant for MS.
| DebtDeflation wrote:
| >So how to explain the current AI mania being widely promoted?
|
| CEOs have been sold on the ludicrous idea that "AI" will
| replace 60-80% of their total employee headcount over the next
| 2-3 years. This is also priced into current equity valuations.
| int_19h wrote:
| At this point, the people in charge have signed off on so much
| AI spending that they _need_ it to succeed, otherwise they are
| the ones responsible for massive losses.
| ChrisArchitect wrote:
| [dupe] https://news.ycombinator.com/item?id=46135388
| forks wrote:
| If you click through to the article shared yesterday[0]:
|
| > Microsoft denies report of lowering targets for AI software
| sales growth
|
| This Ars Technica article cites the same reporting as that
| Reuters piece but doesn't (yet) include anything about MSFT's
| rebuttal.
|
| [0]: https://news.ycombinator.com/item?id=46135388
| mwkaufma wrote:
| Semantics + Spin
| nba456_ wrote:
| made up story
| shevy-java wrote:
| Have we finally reached peak AI already? In that event we will
| see the falling down phase next.
| verdverm wrote:
| Yea, we're getting there. I've had some people reach out to
| me who only do so once a hype bubble is well formed
| ares623 wrote:
| What do you do and why do people reach out to you?
| meindnoch wrote:
| Top signal. Phase transition is imminent.
| justonceokay wrote:
| Blaming slow sales on salespeople is almost always
| scapegoating. The reality is that either the product sells
| or it doesn't.
|
| Not saying that sales is useless, far from it. But with an
| established product that people know about, the sales team is
| more of a conduit than they are a resource-gathering operation.
| seanw444 wrote:
| > Reality is that either the product sells or it doesn't.
|
| Why do people use this useless phrase template?
|
| Yeah, the point is that it's not selling, and it's not
| selling because people are getting increasingly skeptical
| about its actual value.
| Ylpertnodi wrote:
| > it's not selling because people are getting increasingly
| skeptical about its actual value.
|
| So why are the sales-peops being blamed?
| jccooper wrote:
| I think the point of this headline is that they're not
| being blamed in this one instance.
| throwawaylaptop wrote:
| I worked car sales for years. The same large dealership can
| have a person anyone would call a decent salesperson, and
| they made $4k a month. There were also two people at that
| dealership making $25k+ a month each.
|
| If your organization is filled with the $4k type and not the
| $25k type, you're going to have a bad time.
|
| I was #7 in the US while working at a small dealership. I
| moved to the large dealership mentioned above and instantly
| that dealership became #1 for that brand in the country,
| something they had never done before. Because not only did I
| sell 34 cars a month without just cannibalizing others'
| sales, I showed others that you can show up one day and do
| well, so there weren't many excuses. The output of the
| entire place went up.
|
| So, depending on the pay plan and hiring process, who exactly
| is working at Microsoft right now selling AI? I honestly have
| no idea. It could be rock stars and it could be the $4k guys
| happy they're making $10k at Microsoft.
| cosmicgadget wrote:
| Lol "Microsoft can't make something work ergo the technology is
| not feasible".
| parliament32 wrote:
| "The technology is not useful", at least in enterprise
| contexts, is what this comes out to. Which is really where
| the money is, because some vibecoder paying $20/mo for Claude
| really doesn't matter (especially when it costs $100/mo to
| run inference for his queries in the first place). Enterprise
| is the only place this could possibly make money.
|
| Think about it: MS has a giant advantage over every other AI
| vendor, that they can directly insert the product into the OS
| and LOB apps _without_ the business needing to onboard a new
| vendor. This is best case scenario, and by far the easiest
| sell for these tools. Given how badly they 're failing, yeah,
| turns out orgs just don't see the value in it.
|
| Next year will be interesting too: I suspect a large portion
| of the meager sales they managed to make will not renew;
| it'll be a bloodbath.
| cosmicgadget wrote:
| MS has a giant advantage over every other vendor for all
| kinds of products (including defunct ones). Sometimes they
| function well, sometimes they do not. Sometimes they make
| money, sometimes they do not. MS isn't the tech (or even
| enterprise tech) bellwether.
|
| Considering enterprise typically is characterized by
| perfunctory tasks, information silos, and bit rot, they're
| a perfect application of LLMs. It's just Microsoft kind of
| sucks at a lot of things.
| lysace wrote:
| Is "The Information" credible? It's the sole source.
| type0 wrote:
| But has it sold enough to regular Windows Home users? If MS brings
| an ultimatum: "you need to buy AI services to use Windows", they
| might get a bunch more clueless subscribers. In the same way as
| there's no ability to set up Windows without internet connection
| and MS account they could make it mandatory to subscribe to
| Copilot.
| N19PEDL2 wrote:
| I think Microsoft's long-term plan is exactly that: to make
| Windows itself a subscription product. Windows 12 Home for
| $4.99 a month, Copilot included. It will be called OSaaS.
| add-sub-mul-div wrote:
| > In the same way as there's no ability to set up Windows
| without internet connection and MS account
|
| Not true. They're clearly unwilling or unable to remove this
| code path fully, or they would have done so by now. There's
| just a different workaround for it every few years.
| mythz wrote:
| Despite having an unlimited war chest and the necessary
| resources, I'm not expecting Microsoft to come out as a
| winner of this AI race. The easy investment was to throw
| billions at OpenAI to gain access to their tech, but that
| puts them in the weird position of not investing heavily in
| cultivating their own AI talent, and of not being in
| control of their own destiny by having their own horse in
| the race with their own SOTA models.
|
| Apple's having a similar issue, unlimited wealth that's
| outsourcing to external SOTA model providers.
| ceroxylon wrote:
| As someone who appreciates machine learning, the main
| dissonance I feel when interacting with Microsoft's
| implementation of AI is "don't worry, we will do the
| thinking for you".
|
| This appears everywhere, with every tool trying to autocomplete
| every sentence and action, creating a very clunky ecosystem where
| I am constantly pressing 'escape' and 'backspace' to undo some
| action that is trying to rewrite what I am doing to something I
| don't want or didn't intend.
|
| It is wasting my time, and none of the things I want are
| optimized; their tools feel like they are helping people
| team, today we are going to do a Business, but first we must
| discuss the dinner reservations" emails.
| xnorswap wrote:
| I broadly agree. They package "copilot" in a way that
| constantly gets in your way.
|
| The one time I thought it could be useful, in diagnosing why
| two Azure services seemingly couldn't talk to each other, it
| was completely useless.
|
| I had more success describing the problem in vague terms to
| a different LLM than with an AI supposedly plugged into the
| Azure organisation that could directly query information.
| yoyohello13 wrote:
| I had that experience too. Working with Azure is already a
| nightmare, but the copilot tool built in to Azure is
| completely useless for troubleshooting. I just pasted log
| output into Claude and got actual answers. Microsoft's
| first-party stuff just seems so half-assed and poorly
| thought out.
| foobarian wrote:
| Why is this, I wonder? Aren't the models trained on about
| the same blob of huggingface web scrapes anyway? Does one
| tool do a better job of pre-parsing the web data, or pre-
| parsing the prompts, or enhancing the prompts? Or a better
| sequence of self-repair in an agent-like conversation? Or
| maybe more precision in the weights and a more expensive
| model?
| blibble wrote:
| > Why is this, I wonder?
|
| because that's Microsoft's business model
|
| their products are just good enough to allow them to put a
| checkbox in a feature table so they can be sold to someone
| who will then never have to use them
|
| but not even a penny more will be spent than the absolute
| bare minimum to allow that
|
| this explains Teams, Azure, and everything else of theirs
| you can think of
| elorant wrote:
| Probably compute isn't enough to serve everyone from a
| frontier LLM.
| smileson2 wrote:
| that's what happens when everyone is under the guillotine and
| their lives depend on overselling this shit ASAP instead of
| playing/experimenting to figure things out
| vjvjvjvjghv wrote:
| "They package "copilot" in a way that constantly gets in your
| way."
|
| And when you try to make it something useful, the response is
| usually "I can't do that"
| greazy wrote:
| I asked copilot in outlook webmail to search my emails for
| something I needed.
|
| I can't do that.
|
| That's the one use case where an LLM would be helpful!
| Arwill wrote:
| I had a WTF moment last week. I was writing SQL, and there
| was no autocomplete at all. Then a chunk of autocompleted
| code appeared that looked like an SQL injection attack,
| with some "drop table" mixed in. The code would not have
| worked, it was syntactically rubbish, but it still looked
| spooky; I should have taken a screenshot of it.
| xnorswap wrote:
| This is the most annoying thing, and it's even happened to
| JetBrains' Rider too.
|
| Some stuff that used to work well with smart autocomplete /
| intellisense got worse with AI based autocomplete instead,
| and there isn't always an easy way to switch back to the
| old heuristic based stuff.
|
| You can disable it entirely and get dumb autocomplete, or
| get the "AI powered" rubbish, but they had a very
| successful heuristic / statistics based approach that
| worked well without suggesting outright rubbish.
|
| In .NET we've had IntelliSense for 25 years that would only
| suggest properties that could exist, and then a while ago I
| found that VS Code auto-completed properties that don't
| exist.
|
| It's maddening! The least they could have done is put in a
| roslyn pass to filter out the impossible.
| cyberax wrote:
| The regular JetBrains IDEs have a setting to disable the
| AI-based inline completion, you can then just assign it
| to a hotkey and call it when needed.
|
| I found that it makes the AI experience so much better.
| mrguyorama wrote:
| There is no setting to revert back to the very reliable
| and high quality "AI" autocomplete that reliably did not
| recommend class methods that _do not exist_ and reliably
| figured out the pattern I was writing 20 lines of without
| randomly suggesting _100 lines of new code_ that only
| disrupts my view of the code I am trying to work on.
|
| I even clicked the "Don't do multiline suggestions"
| checkbox because the above was so absurdly anti-
| productive, but it was _ignored_
| blackadder wrote:
| This is my biggest frustration. Why not check with the
| compiler to generate code that would actually compile?
| I've had this with Go and .Net in the Jetbrains IDE. Had
| to turn ML auto-completion off. It was getting in the
| way.
| harvey9 wrote:
| Loosely related: voice control on Android with Gemini is
| complete rubbish compared to the old assistant. I used to
| be able to have texts read out and dictate replies whilst
| driving. Now it's all nondeterministic which adds
| cognitive load on me and is unsafe in the same way touch
| screens in cars are worse than tactile controls.
| vel0city wrote:
| I've been immensely frustrated by no longer being able to
| set reminders by voice. I got so used to saying "remind
| me in an hour to do x" and now that's just entirely not
| an option.
|
| I'm a very forgetful person and easily distracted. This
| feature was incredibly valuable to me.
| netsharc wrote:
| I got Gemini Pro (or whatever it's called) for free for a
| year on my new Pixel phone, but there's an option to keep
| Assistant, which I'm using.
|
| Gotta love the enshittification: "new and better" being
| more CPU cycles being burned for a worse experience.
|
| I just have a shortcut to the Gemini webpage on my home
| screen if I want to use it, and for some reason I can't
| just place a shortcut (maybe it's my ancient launcher
| that's not even in the play store anymore), so I have to
| make a tasker task that opens the webpage when run.
| tbd23 wrote:
| The most WTF moment for me was that recent Visual Studio
| versions hooked up the "add missing import" quick fix
| suggestion to AI. The AI would spin for 5s, then delete
| the entire file and only leave the new import statement.
|
| I'm sure someone on the VS team got a pat on the back for
| increasing AI usage but it's infuriating that they broke
| a feature that worked perfectly for a decade+ without AI.
| Luckily there was a switch buried in settings to disable
| the AI integration.
| zoeysmithe wrote:
| The problem with scraping the web for training AI is that
| the web is full of 'little bobby tables' jokes.
| a_t48 wrote:
| The last time I asked Gemini to assist me with some SQL I
| got (inside my postgres query form): This
| task cannot be accomplished USING standard
| SQL queries against the provided database schema.
| Replication slots managed through PostgreSQL system
| views AND functions, NOT through user-defined
| tables. Therefore, I must return
|
| It feels almost haiku-like.
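A side note for anyone hitting the same wall: the refusal above is at least half right about PostgreSQL internals. Replication slots really are exposed through a system view rather than user-defined tables, so a plain query like this works (standard PostgreSQL, no schema assumptions):

```sql
-- Replication slots live in a system view, not user tables.
SELECT slot_name, slot_type, active
FROM pg_replication_slots;
```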
| wubrr wrote:
| Gemini weirdly messes things up, even though it seems to
| have the right information - something I started noticing
| more often recently. I'd ask it to generate a curl
| command to call some API, and it would describe
| (correctly) how to do it, and then generate the
| code/command, but the command would have obvious things
| missing, like the 'https://' prefix in some cases,
| sometimes the API path, sometimes the auth header/token -
| even though it mentioned all of those things correctly in
| the text summary it gave above the code.
|
| I feel like this problem was far less prevalent a few
| months/weeks ago (before gemini-3?).
|
| Using it for research/learning purposes has been pretty
| amazing though, while claude code is still best for
| coding based on my experience.
| mk89 wrote:
| Same thing happened to me today in vs code. A simple helm
| template:
|
| ```{{ .default .Values.whatever 10 }}``` instead of the
| correct ```{{ default 10 .Values.whatever }}```.
|
| Pure garbage that should be a solved problem by now. I
| don't understand how it can make such a mistake.
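For anyone double-checking: helm's `default` function does take the fallback first and the value under test second, as the corrected form in the comment shows. A minimal sketch (the `.Values.whatever` name is just the commenter's placeholder):

```yaml
# Renders 10 when .Values.whatever is unset; otherwise renders its value.
replicas: {{ default 10 .Values.whatever }}
```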
| mk89 wrote:
| My 2 cents: this is what happens when OKRs are executed
| without a vision, or when the vision is "AI everywhere" and,
| well, it sucks.
|
| If the goal is AI everywhere, then everyone, top-down, will
| implement it and be rewarded for doing so; there are
| incentives for each team to do it - money, promotions,
| budget.
|
| 100 teams? 100 AI integrations or more. It's not 10 entry
| points as it should be (maybe).
|
| This means that for a year or more there will be a lot of
| AI everywhere, impossible to avoid, and usability will sink.
|
| Now, if this was only done by Microsoft, I would not mind.
| The issue is that this behavior is getting widespread.
|
| Things are becoming increasingly unusable.
| wlesieutre wrote:
| Reminds me of when Google's core mission was to put Google
| Plus integrations in everything
| raw_anon_1111 wrote:
| I have had great luck with ChatGPT trying to figure out a
| complex AWS issue with
|
| "I am going to give you the problem I have. I want you to
| help me work backwards step by step and give me the AWS cli
| commands to help you troubleshoot. I will give you the output
| of the command".
|
| It's a combination of advice that ChatGPT gives me and my own
| rubberducking.
| lostphilosopher wrote:
| This seems like what should be a killer feature: Copilot
| having access to configuration and logs and being able to
| identify where a failure is coming from. This stuff is
| tedious manually since I basically run through a checklist of
| where the failure could occur and there's no great way to
| automate that plus sometimes there's subtle typo type issues.
| Copilot can generate the checklist reasonably well but can't
| execute on it, even from Copilot within Azure. Why not??
| butlike wrote:
| That's because in its current form, that's all it's good for
| reliably. Can't sell that it might hallucinate the numbers in
| the Q4 report
| PyWoody wrote:
| > ...Microsoft's implementation of AI feels like "don't worry,
| we will do the thinking for you"
|
| I feel like that describes nearly all of the "productivity"
| tools I see in AI ads. Sadly enough, it also aligns with how
| most people use it, in my personal experience. Just a total
| off-boarding of needing to think.
| netsharc wrote:
| The term is "cognitive offloading".
| https://duckduckgo.com/?q=cognitive+offloading
|
| Sheesh, I notice I also just ask an assistant quite a bit
| rather than putting effort to think about things. Imagine
| people who drive everywhere with GPS (even for routine
| drives) and are lost without it, and imagine that for
| everything needing a little thought...
| latchkey wrote:
| Dissonance runs straight through from the top of the org
| chart.
|
| https://x.com/satyanadella/status/1996597609587470504
|
| Just 22 hours ago...
| https://news.ycombinator.com/item?id=46138952
| Edmond wrote:
| >As someone who appreciates machine learning, the main
| dissonance I have with interacting with Microsoft's
| implementation of AI feels like "don't worry, we will do the
| thinking for you".
|
| This is the nightmare scenario with AI, i.e. people settling
| for Microsoft/OpenAI et al. to do the "thinking" for them.
|
| It is alluring but of course it is not going to work. It is
| similar to what happened to the internet via social media,
| i.e. "kick back and relax, we'll give you what you really
| want, you don't have to take any initiative".
|
| My pitch against this is to vehemently resist the chatbot-style
| solutions/interfaces and demand intelligent workspaces:
|
| https://codesolvent.com/botworx/intelligent-workspace/
| pupppet wrote:
| Too many companies have bolted AI on to their existing products
| with the value-prop _Let us do the work (poorly) for you_.
| jfarmer wrote:
| I've worked in tech and lived in SF for ~20 years and there's
| always been something I couldn't quite put my finger on.
|
| Tech has always had a culture of aiming for "frictionless"
| experiences, but friction is necessary if we want to maneuver
| and get feedback from the environment. A car can't drive if
| there's no friction between the tires and the road, despite
| being helped when there's no friction between the chassis and
| the air.
|
| Friction isn't fungible.
|
| John Dewey described this rationale in Human Nature and Conduct
| as thinking that "Because a thirsty man gets satisfaction in
| drinking water, bliss consists in being drowned." He concludes:
|
| "It is forgotten that success is success of a specific effort,
| and satisfaction the fulfillment of a specific demand, so that
| success and satisfaction become meaningless when severed from
| the wants and struggles whose consummations they are, or when
| taken universally."
|
| In "Mind and World", McDowell criticizes this sort of thinking,
| too, saying:
|
| > We need to conceive this expansive spontaneity as subject to
| control from outside our thinking, on pain of representing the
| operations of spontaneity as a frictionless spinning in a void.
|
| And that's really what this is about, I think. Friction-free is
| the goal but friction-free "thought" isn't thought at all. It's
| frictionless spinning in a void.
|
| I teach and see this all the time in EdTech. Imagine if
| students could just ask the robot XYZ and how much time it'd
| free up! That time could be spent on things like relationship-
| building with the teacher, new ways of motivating students,
| etc.
|
| Except...those activities supply the "wants and struggles whose
| consummations" build the relationships! Maybe the robot could
| help the student, say, ask better questions to the teacher, or
| direct the student to peers who were similarly confused but
| figured it out.
|
| But I think that strikes many tech-minded folks as
| "inefficient" and "friction-ful". If the robot knows the answer
| to my question, why slow me down by redirecting me to another
| person?
|
| This is the same logic that says making dinner is a waste of
| time and we should all live off nutrient mush. The purpose
| of preparing dinner is to make something you can eat and the
| purpose of eating is nutrient acquisition, right? Just beam
| those nutrients into my bloodstream and skip the rest.
|
| Not sure how to put this all together into something pithy, but
| I see it all as symptoms of the same cultural impulse. One
| that's been around for decades and decades, I think.
| greenavocado wrote:
| People want the cookie, but they also want to be healthy.
| They want to never be bored, but they also want to have
| developed deep focus. They want instant answers, but they
| also want to feel competent and capable. Tech optimizes for
| revealed preference in the moment. Click-through rates,
| engagement metrics, conversion funnels: these measure
| immediate choices. But they don't measure regret, or what
| people wish they had become, or whether they feel their life
| is meaningful.
|
| Nobody woke up in 2005 thinking "I wish I could outsource my
| spatial navigation to a device." They just wanted to not be
| lost. But now a generation has grown up without developing
| spatial awareness.
| seg_lol wrote:
| > They want to never be bored
|
| This is the problem. Learning to embrace boredom is the
| best thing I have ever done.
| phantasmish wrote:
| > Tech optimizes for revealed preference in the moment.
|
| I appreciate the way you distinguish this from actual
| revealed preference, which I think is key to understanding
| why what tech is doing is so wrong (and, bluntly, evil)
| despite it being what "people want". I like the term
| "revealed impulse" for this distinction.
|
| It's the difference between choosing not to buy a bag of
| chips at the store or a box of cookies, because you know
| it'll be a problem _and your actual preference is not to
| eat those things_, and having someone leave chips and
| cookies at your house without your asking, and giving in to
| the impulse to eat too many of them when _you did not want
| them in the first place_.
|
| Example from social media: My "revealed preference" is that
| I sometimes look at and read comments from shit on my
| Instagram algo feed. My _actual_ preference is that I have
| no algo feed, just posts on my "following" tab, or at
| least that I could default my view to that. But IG's gone
| out of their way (going so far as disabling deep link
| shortcuts to the following tab, which used to work) to make
| sure I don't get any version of my preference.
|
| So I "revealed" that my preference is to look at those algo
| posts sometimes, but _if you gave me the option_ to use the
| app to follow the few accounts I care about (local
| businesses, largely) but never see algo posts at all, ever,
| I'd hit that toggle and never turn it off. That's my
| _actual_ preference, despite whatever was "revealed". That
| other preference isn't "revealed" because it's not even an
| option.
| greenavocado wrote:
| Just like with the chips and cookies, the costs of social
| media are delayed and diffuse. Eating/scrolling feels good
| now. The costs (diminished attention span, shallow
| relationships, health problems) show up gradually over
| years.
| TheOtherHobbes wrote:
| I think that's partially true. The point is to have the
| freedom to pursue higher-level goals. And one thing tech
| doesn't do - and education in general doesn't do either - is
| give experience of that kind of goal setting.
|
| I'm completely happy to hand over menial side-quest
| programming goals to an AI. Things like stupid little
| automation scripts that require a lot of learning from poor
| docs.
|
| But there's a much bigger issue with tech products - like
| Facebook, Spotify, and AirBnB - that promise lower friction
| and more freedom but actually destroy collective and cultural
| value.
|
| AI is a massive danger to that. It's not just about
| forgetting how to think, but how to _desire_ - to make
| original plans and have original ideas that aren't
| pre-scripted and unconsciously enforced by algorithmic
| control over motivation, belief systems, and general
| conformity.
|
| Tech has been immensely destructive to that impulse. Which is
| why we're in a kind of creative rut where too much of the
| culture is nostalgic and backward-looking, and there isn't
| that sense of a fresh and unimagined but inspiring future to
| work towards.
| ecshafer wrote:
| I don't think I could agree with you more. I think more
| people in tech and business should think about and read
| about philosophy, the mind, social interactions, and
| society.
|
| EdTech, for example, really seems to neglect the kind of
| bonds that people form when they go through difficult
| things together, and pushing through difficulties is how we
| improve. Asking a robot xyz does not improve us. AI and
| LLMs do not know how to teach; they are not Socratic,
| pushing and prodding at our weaknesses and assessing us to
| improve. They just say how smart we are.
| isk517 wrote:
| In my experience part of the 'frictionless' experience is
| also to provide minimal information about any issues and no
| way to troubleshoot. Everything works until it doesn't, and
| when it doesn't you are at the mercy of the customer support
| queue and of getting an agent with the ability to fix your
| problem.
| davidivadavid wrote:
| Looking at it from a slightly different angle, one I find
| most illuminating, removing "friction" is like removing
| "difficulty" from a game, and "friction free" as an ideal is
| like "cheat codes from the start" as an ideal. It's making a
| game where there's a single button that says "press here to
| win." The goal isn't the remove "friction", it's the remove a
| specific type of valueless friction, to replace it with
| valuable friction.
| whatever1 wrote:
| I don't know. You can be banging your head against the wall
| to demolish it or you can use manual/mechanical equipment to
| do so. If the wall is down, it is down. Either way you did
| it.
| bwfan123 wrote:
| > but friction is necessary if we want to maneuver and get
| feedback from the environment
|
| You are positing that we are active learners whose goal is
| clarity of cognition, and that friction and cognitive
| struggle are part of that. Clarity is attempting to
| understand the "know-how" of things.
|
| Tech, and dare I say the natural laziness inherent in us,
| instead wants us to be zombies fed the "know-that", as that
| is deemed sufficient - i.e. the dystopia portrayed in The
| Matrix, or the rote student regurgitating memes. But
| know-that is not the same as know-how, and know-how keeps
| evolving, requiring a continuously learning agent.
| jjkaczor wrote:
| This is perhaps one of the most articulate takes on this I
| have ever read - thank-you!
|
| And - for myself, it was friction that kickstarted my
| interest in "tech" - I bought a janky modem, and it had IRQ
| conflicts with my Windows 3 mouse at the time - so, without
| internet (or BBSs at that time), I had to troubleshoot and
| test different settings with the 2-page technical manual that
| came with it.
|
| It was friction that made me learn how to program and read
| manuals/syntax/language/framework/API references to
| accomplish things for hobby projects - which then led to
| paying work. It was friction not having my "own" TV and
| access to all the visual media I could consume "on-demand" as
| a child, therefore I had to entertain myself by reading
| books.
|
| Friction is good.
| stogot wrote:
| The disappointing thing is I'd rather they spend the time
| improving security, but it sounds like all cycles are shoved
| into making AI shovels. Last year, the CEO promised security
| would come first, but that's not the case.
|
| https://www.techspot.com/news/102873-microsoft-now-security-...
| dustingetz wrote:
| Dear MS please use AI to autocomplete my billing address
| correctly when I fill out web forms, thanks
| justapassenger wrote:
| AI is people looking at EV hype and saying - I'll 100x it.
|
| It has all the same components, just on a much higher scale:
|
| 1. A billionaire con man convincing a large part of the
| market and industry (Altman in AI vs. Musk in EV) that the
| new tech will take over in a few years.
|
| 2. Insane valuations not supported by an actual ROI.
|
| 3. Very interesting and amazing underlying technology.
|
| 4. Governments jumping on the hype and enabling it.
| cosmicgadget wrote:
| The valuations are based on value, not revenue.
| grim_io wrote:
| What can you even do in the MS enterprise ecosystem with
| their Copilot integration?
|
| Is it just for chatting? Is it a glorified RAG?
|
| Can you tell Copilot to create a presentation? Make a
| visualisation in a spreadsheet?
| cosmicgadget wrote:
| It wants to help create things in Office documents, I imagine
| just saving you the copy and paste from the app or web form.
| The one thing I tried to get it to do was to take a spreadsheet
| of employees and add a column with their office numbers (it has
| access to the company directory). The response was something
| like "here's how you would look up a office number, you're
| welcome!"
|
| It is functional at RAG stuff on internal docs but definitely
| not good - not sure how much of this is Copilot vs corporate
| disarray and access controls.
|
| It won't send emails for me (which I would think is the agentic
| mvp) but that is likely a switch my organization daren't turn
| on.
|
| TL;DR: it's valuable as a normal LLM, very limited as an
| add-on to Microsoft's software ecosystem.
| jabroni_salad wrote:
| Chatting and everything you normally do in chats is there.
| Needle-hunting info out of all my Teams group chats is
| probably my favorite thing. It can retrieve info out of
| SharePoint, I guess.
|
| Biggest complaint for me personally is that you run out of
| context very quickly. If you are used to having longer running
| chats on other platforms you won't be happy when Copilot tells
| you to make a new chat like 5 messages in.
|
| Most of my clients are only interested in meeting minutes,
| and Otter does that for 25% of the price. I think in any
| given business the number of people who actually use textgen
| regularly is pretty low. My workplace is looking to downsize
| licenses and asking people to use it or lose it, because
| $21/user/mo is too much for an every-now-and-then novelty.
| LtWorf wrote:
| It's basically clippy without the funny animations.
| glimshe wrote:
| The difference between poison and medicine is the amount. AI is
| great and very useful, but they want the AI to replace you
| instead of supporting your needs.
|
| "AI everywhere" is worse than "AI nowhere". What we need is "AI
| somewhere".
| gdulli wrote:
| That's what we had before LLMs. Without the financially imposed
| contrivance of it needing to be used everywhere, it was free to
| be used where it made sense.
| pjmlp wrote:
| Even the dev blogs and anything related to Java, .NET, C++,
| and Python out of Redmond seem to be all about AI; anything
| else is now a low-priority ticket on their roadmaps.
|
| No wonder there is this exhaustion.
| kryogen1c wrote:
| I went to Ignite a few weeks ago, and the theme of the event and
| most talks was "look at how we're leveraging AI in this product
| to add value".
|
| Separately, the theme from talking to Every. Single. Person
| on the buy-side was _gigantic eye roll_ yes, I can't wait
| for AI to solve all my problems.
|
| Companies I support are being directed by their presidents
| to use AI - literally a solution in search of a problem.
| derekcheng08 wrote:
| Super interesting how this arc has played out for Microsoft. They
| went from having this massive advantage in being an early OpenAI
| partner with early access to their models to largely losing the
| consumer AI space: Copilot is almost never mentioned in the same
| breath as Claude and ChatGPT. Though I guess their huge stake in
| OpenAI will still pay out massively from a valuation perspective.
| Zigurd wrote:
| Microsoft seems to be actively discarding the consumer PC
| market for Windows. It's gamers and enterprise, it seems.
| Enterprise users don't get a lot of say in what's on their
| desktop.
| xnorswap wrote:
| It's because Copilot isn't (just) a model, it's a brand that's
| been slapped on any old rubbish.
|
| If Clippy were still around, that'd have been rebranded as
| Copilot by now.
| blitzar wrote:
| If they resurrected Clippy and made it the face of their AI,
| I would switch in a heartbeat.
| int_19h wrote:
| https://felixrieseberg.github.io/clippy/
| blitzar wrote:
| That is impressive! I really want Clippy to chime in and
| tell me it looks like I am writing a letter and offer to
| help.
| downrightmike wrote:
| They made Copilot the term for AI and smeared it everywhere to
| the point that it has no meaning, and therefore no use, when
| talking about AI.
| cmiles8 wrote:
| Hearing similar stories play out elsewhere too with targets being
| missed left and right.
|
| There's definitely something there with AI, but there's a
| giant chasm between reality and the sales expectations
| required to make the current financial engineering on AI
| make any sense.
| more_corn wrote:
| I wonder if it's because Microsoft is hyper focused on a bunch of
| crap people don't want or need?
| rewilder12 wrote:
| Anyone who has had the pleasure of being forced to migrate to
| their new Fabric product can tell you why sales are low. It's
| terrible, not just because it's a rushed, buggy pile of
| garbage they want customers to alpha-test, but because of the
| "AI First" design they are forcing into it. They hide so much
| of what's happening in the background that it is hard to feel
| like you can trust any of it. Like agentic "thinking" models
| with zero way to see what they did to reach a conclusion.
| yoyohello13 wrote:
| Every new Microsoft product is like this. It all has that
| janky, slapped together at the last minute feeling.
| codr7 wrote:
| I can see why Microsoft likes AI and thinks it's great for
| writing code.
|
| The kind of code AI writes is the kind of code Microsoft has
| always written.
| mayhemducks wrote:
| Hopefully this is the beginning of the trough of disillusionment,
| and the steady return of rationalism.
| Bluescreenbuddy wrote:
| Good. Go make your OS useful and stop alienating your enterprise
| customers.
| lawlessone wrote:
| People are wondering how we got here when these AIs make so
| many mistakes.
|
| But the one thing they're really good at is marketing.
|
| That's why it's all over linkedin etc, marketing people see how
| great it is and think it must be great at everything else too.
| aabajian wrote:
| I think MSFT really needs some validated user stories. How
| many users want to "Improve my writing," "Create an image,"
| "Understand what is changed" (e.g. recent edits), or
| "Visualize my data"?
|
| Those are the four use cases featured by the Microsoft 365
| Copilot App (https://m365.cloud.microsoft/).
|
| Conversely, I bet there are a lot of people who want AI to
| improve things _they are already doing repeatedly._ For example,
| I click the same button in Epic every day because Epic can't
| remove a tab. Maybe Copilot could learn that I do this and
| just...do it for me? Like, Copilot could watch my daily habits
| and offer automation for recurring things.
| blitzar wrote:
| > Copilot could watch my daily habits and offer automation for
| recurring things
|
| Pretty sure the advertising department already watches you and
| helpfully suggests things that you need to buy.
| trollbridge wrote:
| I can't find any use case for Copilot at all, and I frequently
| "sell" people Microsoft 365. (I don't earn a commission; I just
| help them sign up for it.) I cannot come up with a reason
| anyone needs Copilot.
|
| Meanwhile I spent 3-4 hours working with a client yesterday
| using Dreamhost's free AI tools to get them up and running with
| a website quickly whilst I configured Microsoft 365,
| Cloudflare, email and so forth for them.
| gibsonsmog wrote:
| >improve things they are already doing repeatedly. For example,
| I click the same button in Epic every day because Epic can't
| remove a tab. Maybe Copilot could learn that I do this and
| just...do it for me?
|
| You could solve that issue (and probably lots of similar
| issues) with something like AutoHotkey. It seems like extreme
| overkill to have an autonomous agent watch everything you do
| so it might possibly click a button.
| fainpul wrote:
| And in an ideal world, one could report this as a bug or
| improvement and get it fixed for every single user without
| them needing to do anything at all.
| aabajian wrote:
| Well, it isn't every user. We use a version of Epic called
| Epic Radiant. It's designed for radiologists. The tab that
| always opens is the radiologist worklist. The thing is, we
| don't use that worklist for procedures (I'm an
| interventional radiologist). So that tab is always there,
| always opens first, and always shows an empty list. It
| can't be removed in the Radiant version of Epic.
| PenguinCoder wrote:
| I'm sure you have, but try bringing that up with Epic
| instead of introducing AI slop and data gathering into
| HIPAA workflows.
| MengerSponge wrote:
| But why would Epic spend money improving or fixing their
| software? If they spend money developing their product then
| they can't spend that money on their adult playground of a
| campus!
| aabajian wrote:
| AutoHotkey doesn't work well for Epic manipulation because
| Epic runs inside a Citrix virtual machine. You can't just
| read window information and navigate that way. You'd have to
| have some sort of on-screen OCR to detect whether Epic is
| open, has focus, and is showing the tab that I want to close.
| Also, the tab itself can't be closed...I'm just clicking on
| the tab next to it.
| BeetleB wrote:
| Doable in AutoHotkey. You can take a screenshot of what to
| look for, and tell AutoHotkey to navigate the mouse to it
| on the screen if it finds it.
|
| I've done similar things.
| mey wrote:
| But do you (or MSFT) trust it to do that correctly,
| consistently, and handle failure modes (what happens when the
| meaning of that button/screen changes)?
|
| I agree, an assistant would be fantastic in my life, but
| LLMs aren't AGI. They cannot reason about my intentions,
| don't ask clarifying questions (bring back ELIZA), and
| handle state in an interesting way (are there designs out
| there that automatically prune/compress context?).
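On the context-pruning question: such designs do exist. The simplest is a sliding window that keeps the system prompt and the most recent turns, collapsing everything older into a summary stub. A toy sketch (the summary text here is a placeholder; a real system would have a model write it):

```python
# Toy sliding-window context pruner for a chat history.
# messages: list of (role, text) tuples; the first is the system prompt.
def prune_context(messages, window=4):
    if len(messages) <= window + 1:
        return list(messages)  # nothing to prune
    system, rest = messages[0], messages[1:]
    dropped, kept = rest[:-window], rest[-window:]
    # Placeholder stub; a real system would summarize `dropped` with a model.
    stub = ("system", f"[summary of {len(dropped)} earlier messages]")
    return [system, stub] + kept

history = [("system", "You are helpful.")] + [
    ("user", f"msg {i}") for i in range(10)
]
pruned = prune_context(history, window=4)
print(len(pruned))  # 6: system prompt + summary stub + last 4 turns
```

The trade-off is the usual one: a hard window keeps token costs bounded but loses detail, which may be why Copilot just asks for a new chat instead.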
| abrichr wrote:
| > Like, Copilot could watch my daily habits and offer
| automation for recurring things.
|
| We're working on it at
| https://github.com/openadaptai/openadapt.
| ManuelKiessling wrote:
| I think what people want in the long term is truly malleable
| software: https://manuel.kiessling.net/2025/11/04/what-if-
| software-shi...
| BeetleB wrote:
| I actually would like it to improve my writing. Problem is LLMs
| aren't particularly good for this (yet).
| mwkaufma wrote:
| Meanwhile, divisions that make actual products people want
| are expected to subsidize the hype department:
| https://www.geekwire.com/2025/new-report-about-crazy-xbox-pr...
| niceworkbuddy wrote:
| I wonder what part of these failed sales is due to GDPR
| requirements in the enterprise IT industry. I have my own
| European view, and it seems our governments are treating the
| matter very seriously. How do you ensure an AI agent won't
| leak anything? It has happened that one wiped an entire
| database or cleared a disk and was later very "sorry" about
| it. Is the risk worth it?
| knowitnone3 wrote:
| Why do they have salespeople when AI could have done the job?
| zkmon wrote:
| >> The Information notes that much of Microsoft's AI revenue
| comes from AI companies themselves renting cloud infrastructure
| rather than from traditional enterprises adopting AI tools for
| their own operations.
|
| And MS spends on buying AI hardware. That's a full circle.
| drumhead wrote:
| Too much money is being spent on a technology that isn't
| ready to do what they're saying it can do. It feels like the
| 3G era all over again: billions spent on 3G licences which
| didn't deliver what they expected.
| breve wrote:
| Why wasn't AI able to help them meet their sales targets?
|
| Can't Microsoft supercharge its workflow with these five weird
| prompts that bring a new layer of intelligence to its
| productivity:
|
| https://fortune.com/2025/09/02/billionaire-microsoft-ceo-sat...
| maxdo wrote:
| It's almost a revenge of the engineers. The big players' path to
| "success" has been to slap together some co-pilot loaded with
| enterprise bloat and try to compete with startups that solve the
| same problems in a much cleaner way.
|
| Meanwhile, they believed the market was already theirs--so their
| logic became: fire the engineers, buy more GPUs.
|
| I have mixed feelings about this. I've interviewed several people
| who were affected by these layoffs, and honestly, many of them
| were mediocre engineers by most measures. But that still doesn't
| make this a path to success.
| kace91 wrote:
| >I've interviewed several people who were affected by these
| layoffs, and honestly, many of them were mediocre engineers by
| most measures. But that still doesn't make this a path to
| success.
|
| How mediocre are we talking about here? (I'm curious)
| fingerlocks wrote:
| Very poor & mediocre.
|
| You can find secret little pockets within Microsoft where
| individuals & small teams do nothing at all, day in and day
| out. I mean literally nothing. The game is to maximize life
| and minimize work at the expense of the company. The managers
| are in on the game and help with the cover-up. I find it
| hilariously awesome and kind of sad at the same time.
|
| Anyway, one round of layoffs this year was specifically
| targeted at finding these pockets and snuffing them out. The
| evidence used to identify said pocket was slowly built out
| over a year ahead of time. It's very likely that these
| pockets also harbored poor & mediocre developers; it stands
| to reason that a poor or mediocre developer is more likely to
| gravitate to such a place.
|
| Not saying all the developers that were laid off were in a
| free-loader pocket, or that this cohort must be the ones that
| were interviewed. I'm only suggesting that the mediocre
| freeloaders form a significant slice of the Venn diagram.
| lawlessone wrote:
| Damn, that is crazy. How do you measure it? AI use? I hope
| your saying this doesn't affect the employment prospects of
| the ones that aren't "mediocre" but happened to be on those
| teams.
| lawlessone wrote:
| Aren't most of us mediocre?
| tinyhouse wrote:
| Microsoft is strange because it reports crazy growth numbers
| for Azure, but I never hear about any tech company using
| Azure (AWS and GCP dominate here). I know it's more popular
| in big enterprises, banks, pharma, government, etc., and
| companies like OpenAI use their GPU offerings. Then there's
| all the Office stuff (SharePoint, OneDrive, etc.). Who knows
| what they include under the Azure numbers. Even GitHub can
| be considered "cloud".
|
| My point is, outside of Copilot, very few consider Microsoft
| when they are looking for AI solutions, and if you're not
| already using Azure, why would you even bother checking what
| they offer? At this point, their biggest ticket is their
| OpenAI stake.
|
| With that being said, I should give them some credit. They do
| some interesting research and have some useful open source
| libraries they release and maintain in the AI space. But that's
| very different than building AI products and solutions for
| customers.
| ChrisArchitect wrote:
| Again? https://news.ycombinator.com/item?id=46135388
|
| Not only that but the headline and story changed by the time Ars
| went to print:
|
| _Microsoft denies report of lowering targets for AI software
| sales growth_
| gwerbret wrote:
| A bit tangential and pedantic, but:
|
| > At the heart of the problem is the tendency for AI language
| models to confabulate, which means they may confidently generate
| a false output that is stated as being factual.
|
| "Confabulate" is precisely the correct term; I don't know how we
| ended up settling on "hallucinate".
| kace91 wrote:
| >fabricate imaginary experiences as compensation for loss of
| memory
|
| Uh, TIL. This is wildly different from the Spanish meaning:
| confabular means to plot something bad (as in a conspiracy).
|
| Which is a weird evolution in both languages, as the Latin root
| seems to mean simply "talking together".
| rsynnott wrote:
| I mean, neither is a great term, in that they both refer to
| largely dissimilar psychological phenomena, but confabulate is
| at least a lot _closer_.
| delecti wrote:
| The bigger problem is that, whichever term you choose
| (confabulate or hallucinate), that's what they're _always_
| doing. When they produce a factually correct answer, that's
| just as much of a random fabrication based on training data as
| when they're factually incorrect. Either of those terms falsely
| implies that they "know" the answer when they get it right, but
| "confabulate" is worse because there aren't "gaps in their
| memory"; they're just always making things up.
| gjmveloso wrote:
| "They just have no taste" - Steve Jobs
|
| Microsoft had a great start with the exclusive rights over OpenAI
| tech, but they're not capable of really talking with developers
| within those large companies the way Google and AWS are, and both
| are rapidly catching up.
| cleandreams wrote:
| For the first time I have begun to doubt Microsoft's chosen
| course. (I am a retired MS principal engineer.) Their integration
| of copilot shows all the taste and good tradeoff choices of Teams
| but to far greater consequence. Copilot is irritating. MS
| dependence on OpenAI may well become dicey because that company
| is going to be more impacted by the popping of the AI bubble than
| any other large player. I've read that MS can "simply" replace
| ChatGPT by rolling their own -- maybe they can. I wouldn't bet
| the company on it. Is Google going to be eager to license Gemini?
| Why would they?
| keeda wrote:
| This is annoying because Ars is one of the better tech blogs out
| there, but it still has instances of biased reporting like this
| one. It's interesting to decipher this article with an eye on
| what they said, what they implied, and what they _didn't say_.
|
| Would be good if a salesperson could chime in to keep me honest,
| but:
|
| 1. There is a difference between _sales quotas_ and _sales growth
| targets_. The former is a goal; the latter is aspirational, a
| "stretch goal". They were not hitting their stretch goals.
|
| 2. The stretch goals were, like, _doubling_ the sales in a year.
| And they dropped it to 25% or 50% growth. No idea what the
| adoption of such a product should be, but doubling sounds pretty
| ambitious? I really can't say, and neither did TFA.
|
| 3. Only a fraction met their _growth goals_, but I guess it's
| safe to assume most hit their sales quotas, otherwise that's what
| the story would be about. Also, this implies some DID hit their
| growth goals, which implies at least some doubled their sales in
| a year. Could be they started small so doubling was easy, or
| could be a big deal, we don't know.
|
| 4. Sales quotas get revised all the time, especially for new
| products. Apparently, this was for a single product, Foundry,
| which was launched a year ago, so I expect some trial and error
| to figure out the real demand.
|
| 5. From the reporting it seems Foundry is having problems
| connecting to internal data sources... indicating it's a problem
| with engineering, and not a problem with the _AI_ itself. But TFA
| focuses on AI issues like hallucinations.
|
| 6. No reporting on the dozens of other AI products that MSFT has
| churned out.
|
| As an aside, it seems data connectivity issues are a stickier
| problem than most realize (e.g. organizational issues) and I
| believe Palantir created the FDE role for just this purpose:
| https://nabeelqu.substack.com/p/reflections-on-palantir
|
| Maybe without that strategy it would be hard for a product like
| this to work.
| naves wrote:
| It truly looks like they didn't learn anything from Clippy...
___________________________________________________________________
(page generated 2025-12-04 23:00 UTC)