[HN Gopher] Apple's AI isn't a letdown. AI is the letdown
       ___________________________________________________________________
        
       Apple's AI isn't a letdown. AI is the letdown
        
       Author : ndr42
       Score  : 94 points
       Date   : 2025-03-29 20:45 UTC (2 hours ago)
        
 (HTM) web link (www.cnn.com)
 (TXT) w3m dump (www.cnn.com)
        
       | ndr42 wrote:
       | The Quote: "AI can never fail, it can only be failed" is
       | something to think about
        
       | bigyabai wrote:
       | Ooh I like this one. "Apple's chips aren't slowing down. TSMC
       | is."
        
       | gibbitz wrote:
        | I think AI is just running up against a company whose mantra
        | was "it just works" and finding that consumers who expect a
        | working product won't tolerate the lack of quality "AI" has
        | delivered. Welcome to reality, venture capitalists...
        
       | upcoming-sesame wrote:
       | No, Apple AI is a letdown regardless
        
         | taytus wrote:
         | I agree. Both things could be true at the same time.
        
       | bbarnett wrote:
       | "Hey, I know! We should spend billions replacing code and data
        | that provide the exact same output every time (or random from
       | data we choose), with completely random, uncurated data that
       | changes with every new model, because why not! It's awesome!",
       | says every company now.
       | 
        | AI is not useful if you want curated facts, if you want
        | consistent output, or if you want repeatable quality.
       | 
        | How about training an AI on 1990s-style encyclopedias, with
        | their low error rate?
       | 
        | Even Wikipedia has random yahoos coming in and changing pages
       | about the moon landing, to say it was filmed in a studio.
       | 
        | AI is being trained on randomness, so it outputs randomness.
        
         | simmerup wrote:
          | Surely even if you train it on an encyclopedia, if you ask a
          | question that isn't in the encyclopedia it'll still just make
          | something up
        
       | nixpulvis wrote:
       | If you can't explain how it works, I don't want it.
       | 
       | If your explanation boils down to a bunch of "it should do..." or
       | "most of the time it does..." then I still don't want it.
        
       | MarkusWandel wrote:
       | The scenario in the article, about how AI is "usually" right in
       | queries like "which airport is my mom's flight landing at and
       | when?" is exactly the problem with Google's AI summaries as well.
       | Several times recently I've googled something really obscure like
       | how to get fr*king suspend working in Linux on a recent-ish
        | laptop, and it's given me generic pablum instead of the actual,
       | obscure trick that makes it work (type a 12-key magic sequence,
       | get advanced BIOS options, pick an option way down a scrolling
       | list to nuke fr*king modern suspend and restore S3 sleep...
       | happiness in both Windows and Linux in the dual boot
       | environment). So it just makes the answers harder to find,
       | instead of helping.
        
       | pram wrote:
        | I've been experiencing "AI" making things worse. Grammarly
        | worked fine for a decade-plus, but now that (I guess) they've
        | been trying to cram more LLM junk into it, the recommendations
        | have become a lot less reliable. It's sometimes missing even
        | obvious typos now.
        
       | martinald wrote:
       | I just do not understand this attitude. ChatGPT alone has
       | hundreds of millions of active users that are clearly getting
       | value from it, despite any mistakes it may make.
       | 
        | To me the almost unsolvable problem Apple has is wanting to do
        | as much as possible on device, while also having been
        | historically very stingy with RAM (on iOS and Mac devices - iOS
        | more understandably, given it didn't really need huge amounts
        | of RAM until LLMs came along). This gives them a very real
        | problem: having to use very small models, which hallucinate a
        | lot more than giant cloud-hosted ones.
       | 
        | Even if they did manage to get 16GB of RAM on their new
        | iPhones, that is still only going to fit a 7B-parameter model
        | at a push (leaving 8GB for 'system' use).
       | 
        | In my experience even the best open source 7B local models are
        | close to unusable. They'd have been mind-blowing a few years
        | ago, but when you are used to "full size" cutting edge models
        | it feels like an enormous downgrade. And I assume this will
        | always be the case; while small models are always improving, so
        | are the full size ones, so there will always be a big delta
        | between them, and people are already used to the large ones.
       | 
        | So I think Apple probably needs to shift to using cloud
        | services more, like their Private Cloud Compute idea, but they
        | have an issue there in that they have 1B+ users, and it is not
        | trivial at all to handle that level of cloud usage for core
        | iOS/Mac features (I suspect this is why virtually nothing uses
        | Private Cloud Compute at the moment). Even if each iOS user
        | only did 10 "cloud LLM" requests a day, that's over 10B
        | requests a day (10x the scale that OpenAI currently handles).
        | And in reality it'd ideally be orders of magnitude more than
        | that, given how many possible integration options there are for
        | mobile devices alone.
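        | 
        | A rough back-of-the-envelope for both numbers, in Python; the
        | 8-bit quantization and the per-user request rate are
        | illustrative assumptions, not Apple's figures:
        | 
        |     # Back-of-the-envelope, not Apple's actual numbers.
        |     params = 7e9                   # 7B-parameter model
        |     weights_gb = params * 1 / 1e9  # 1 byte/param (8-bit)
        |     print(f"~{weights_gb:.0f} GB for weights alone")  # ~7 GB
        | 
        |     users, reqs_per_day = 1e9, 10  # 1B+ users, 10 reqs each
        |     total = users * reqs_per_day
        |     print(f"~{total:.0e} cloud requests/day")  # 10B+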
        
         | inetknght wrote:
         | > _ChatGPT alone has hundreds of millions of active users that
         | are clearly getting value from it, despite any mistakes it may
         | make._
         | 
         | You assume hundreds of millions of users could identify serious
         | mistakes when they see them.
         | 
         | But humans have demonstrated repeatedly that they can't.
         | 
          | I don't think it can _ever_ be overstated how dangerous this
          | is.
         | 
         | > _I think Apple probably needs to shift to using cloud
         | services more_
         | 
          | You ignore lessons from the recent spat between Apple and the
          | UK.
        
         | eddythompson80 wrote:
         | > ChatGPT alone has hundreds of millions of active users that
         | are clearly getting value from it
         | 
          | True, but it's been years now since the debut of chat-
          | interface AI to the general public, and we have yet to figure
          | out another interface for generative AI that would work for
          | the general public. I'd say the only other example is Adobe
          | and what they are doing with generative AI in their photo
          | editing tools, but that's a far cry from a "general public"
          | type thing. You have all the bumbling nonsense coming out of
          | Microsoft and Google trying to shove AI into whatever tools
          | they are selling while still getting zero adoption. The
          | Copilot and Gemini corporate sales teams have both been
          | "restructured" this year because they managed to sign up so
          | many clients in 2023/2024 and all those clients refused to
          | renew.
         | 
         | When it comes to the general public, we have yet to find a
         | better application of AI than a chat interface. Even outside of
          | the general public, I oversee a few teams that are building
         | "agentic AI tools/workflows" and the amount of trouble they
         | have to go through to make something slightly coherent is
         | insane. I still believe that the right team with the right
         | architecture and design can probably achieve things that are
         | incredible with LLMs, but it's not as easy as the term "AI"
         | makes it sound.
        
         | crooked-v wrote:
         | I suspect an issue at least as big is that they're running into
         | a lot of prompt injection issues (even totally accidentally)
         | with their attempts at personal knowledge base/system awareness
         | stuff, whether remotely processed or not. Existing LLMs are
         | already bad at this even with controlled inputs; trying to
         | incorporate broad personal files in a Spotlight-like manner is
         | probably terribly unreliable.
        
           | sethhochberg wrote:
            | This is my experience as a pretty heavy speech-to-text user
            | (voice keyboard) - as they've introduced more AI features,
            | I've started to have all sorts of nonsense from recent
            | emails or contacts get mixed into simple transcriptions.
           | 
           | It used to have no problem with simple phrases like "I'm
           | walking home from the market" but now I'll just as often have
           | it transcribe "I'm walking home from the Mark Betts",
           | assuming Mark Betts was a name in my contacts, despite that
           | sentence making much less structural sense
           | 
           | It's bad enough that I'm using the feature much less because
           | I have to spend as much time copyediting transcribed text
           | before sending as I would if I just typed it out by hand. I
           | can turn off stuff like the frequently confused notification
           | summaries, but the keyboard has no such control as far as I
           | know
        
         | csdvrx wrote:
         | > In my experience even the best open source 7B local models
          | are close to unusable. They'd have been mind-blowing a few
         | years ago but when you are used to "full size" cutting edge
         | models it feels like an enormous downgrade
         | 
          | Everything has limits - the only difference is where they
          | are, and therefore how often you meet them.
         | 
         | If you are working with AI, using local models shows you where
         | the problems can (and will) happen, which helps you write more
         | robust code because you will be aware of these limits!
         | 
         | It's like how you write more efficient code if you have to use
         | a resource constrained system.
        
         | jajko wrote:
          | It's just another tool (or toy), great at some stuff, almost
          | useless or worse for other stuff, and it's fucking shoved
          | down our throats at every corner, from every direction. I'm
          | starting to hate everything AI-infused with a passion. Even
          | here on HN, many people are not rational. I am willing to pay
          | _less_ for AI-anything, not the same and f_cking definitely
          | not more.
         | 
          | Cargo-culting by clueless managers makes the long-term
          | usability of products much worse: everything requires some
          | stupid cloud, basic features are locked up, and you will be
          | analyzed. This is just another layer of shit on top.
         | 
          | With any massive hype, you normally get this shit. Once the
          | big wave dies down, with unavoidable sad moments for some,
          | and tech progresses further (as it will), real added value
          | for everybody may show up.
         | 
          | As for work - in my corporation, despite having a pure senior
          | dev role, coding is 10-20% of the work, and it's the part I
          | can handle just fine on my own; I don't need babysitting from
          | almost-correct statistical models. In fact I learn and stay
          | fresh much better when still doing it on my own. You don't
          | become or stay senior when solutions are handed down to you.
          | Same reason I use git on the command line instead of clicking
          | around. For code sweatshops I can imagine much more added
          | value, but not here in this massive banking corporation.
          | Politics, relationships, and knowing processes and their
          | quirks and limitations are what move stuff forward and get it
          | done. AI won't help here; if anybody thinks differently they
          | have f_cking no idea what I'm talking about. In 10 years it
          | may be different; let's open the discussion again then.
        
         | mingus88 wrote:
          | Private Cloud Compute is Apple's solution. It doesn't matter
          | what specs your phone has, because the inference is sent to a
          | data center.
          | 
          | They literally have data centers' worth of devices running
          | inference anonymously.
        
         | manquer wrote:
          | > ChatGPT alone has hundreds of millions of active users that
          | are clearly getting value from it
         | 
          | So do OG Siri and Alexa. Letdown does not mean completely
          | useless; it just means what the users are getting is far less
          | than what they were promised, not that they get nothing.
         | 
          | In this context AI will be a letdown regardless of
          | improvements in offline or even cloud models. It is not only
          | because of the additional complexity of offline models that
          | Apple will not deliver; their product vision just does not
          | look achievable in the current state of LLM tech [1].
         | 
          | Apple itself, while more grounded than peers who regularly
          | talk about building AGI or God etc., has still been showing
          | public concept demos akin to what gaming studios or early-
          | stage founders do. Reality usually falls short when marketing
          | runs ahead of product development; it will be no different
          | for Apple.
         | 
          | This is a golden rule of brand and product development -
          | never show the public what you have not fully built if you
          | want them to trust your brand.
         | 
          | To be clear, it is not bad for the company per se to do this;
          | top-tier AAA gaming studios do just fine as businesses
          | despite letting down fans game after game by overselling and
          | under-delivering, but they suffer as brands: nobody has a
          | good thing to say about Blizzard or EA or any other major
          | studio.
         | 
          | Apple monetizes its brand very well by being able to price
          | its products at a premium compared to peers; that will be at
          | risk if users feel let down.
         | 
          | [1] Perhaps new innovations will make radical improvements
          | even in the near future; regardless, that will not change
          | what Apple can ship in 2025 or even 2026, so it is still a
          | letdown for users who have been promised things for the last
          | 2 years already.
        
         | Hikikomori wrote:
         | > ChatGPT alone has hundreds of millions of active users that
         | are clearly getting value from it
         | 
         | Idk about that, wouldn't pay for it.
        
           | brulard wrote:
            | What do you mean? Lots of people pay (me included) and are
           | getting value. If you use it but don't pay, you still get
           | value, otherwise you would be wasting your time. If you don't
           | use it at all, that's your choice to make.
        
         | jostmey wrote:
         | The very fact that Apple thought they were going to run AI on
         | iPhones says that leadership doesn't understand AI technology
         | and simply mandated requirements to engineers without wanting
         | to be bothered by details. In other words, Apple seems to be
         | badly managed
        
           | jackvalentine wrote:
           | > The very fact that Apple thought they were going to run AI
           | on iPhones
           | 
           | Nope
           | 
           | https://security.apple.com/blog/private-cloud-compute/
        
           | iknowstuff wrote:
           | Shoulda kept Scott Forstall
        
         | mentalgear wrote:
          | There are thresholds for every technology where it is "good
          | enough", same with LLMs or SLMs (on-device). Machine learning
          | is already running on-device for photo
          | classification/search/tagging, and even 1.5B models are
          | getting really good fast, as long as they are well trained
          | and used for the right task. Tasks like email writing, TTS,
          | and rewriting should be easily doable; the "semantic search"
          | aspect of chatbots is basically a new form of Google/web
          | search and will probably stay in the cloud, but that's not
          | their most crucial use.
         | 
          | Not a big fan of Apple's monopoly, but I like their privacy-
          | focused on-device handling. I don't care for Apple, but on-
          | device models are definitely the way to go from a consumer
          | point of view.
        
         | MrMcCall wrote:
         | Do you also judge crack cocaine's value by its number of users?
         | 
         | I don't think most people are capable of doing a cost/benefit
         | ratio calculation on how what they do affects the rest of the
         | world, and the wealthy are far and away the worst abusers of
         | this sadass truth.
        
       | rossdavidh wrote:
       | The worst thing about "AI" is its name. It isn't intelligent, it
       | isn't even dumb. If the current wave had been called "neural
       | networks" or "large language models", then the hype wouldn't have
       | been as breathless, but the disappointment wouldn't be as sharp
       | either, because it wouldn't be used for things it isn't suited
       | for.
       | 
       | It's an algorithm; it's just an algorithm. It's useful for a few
       | things. It isn't useful for most things. Like MVC, or relational
       | databases, or finite state machines, or OOP, it's not something
       | you should have to (or want to) tell the end user that you are
        | using in the internals. The reason most "AI" products brag
        | about using "AI" is that there isn't anything else interesting
        | about them.
        
       | pedalpete wrote:
       | This is Apple's spin machine working overtime trying to say
       | "we're not failing at AI, everyone is failing at AI".
       | 
       | I'm not sure anyone is going to buy it, but it doesn't cost them
       | anything to get a few of their PR hacks to give it a try.
       | 
       | It's about as convincing as "we didn't build a bad phone, you're
       | just holding it wrong!".
        
       | Workaccount2 wrote:
       | >If it's 100% accurate, it's a fantastic time saver. If it is
       | anything less than 100% accurate, it's useless.
       | 
       | The insane levels of hypocrisy hearing this come from a
       | mainstream media source. The damage that has been done to all of
       | society by misrepresenting and half-truthing about events to
       | appease audiences is unrivaled, yet here they are on the high
       | horse of "anything less than 100% accurate is useless"
       | 
       | Take note CNN, take fucking note.
        
       | flippy_flops wrote:
        | With Apple/iOS, I can't help but think of the Joker's quote,
        | "You have nothing... Nothing to do with all your strength." The
        | efficiency half is excellent, but what about the power? AR?
        | Gaming? AI seems like the first broad fit. And where was Apple?
        | Literally chasing cars and an ill-conceived VR headset.
       | 
        | I say this as a massive Apple fanboy. AI was heavily advertised
        | as a selling point of the iPhone 15 Pro and is completely MIA 6
       | months later. It's a major letdown. It's not the end of the
       | world, but let's just call it what it is.
       | 
       | For those saying Apple doesn't release imperfect products, may I
       | introduce to you Siri? It was average when they bought it and
       | it's become a punch line.
       | 
        | And there are so many uses of AI that don't have to be at the
        | risk level of "Oops, AI left grandma at LaGuardia." Apple
        | should go back to its roots, provide high-quality LLM/MCP and
        | other API SDKs to developers, and let them go nuts. Then just
        | clone or buy the apps that work, like they always do.
        
       | epolanski wrote:
       | Apple went from $170 to $220 after the Apple Intelligence bs
       | promises.
       | 
        | It still sits there despite revenue having long plateaued, and
        | it is still priced for some impressive revenue growth.
       | 
       | Go figure.
        
       | roughly wrote:
       | Two thoughts:
       | 
       | The first is that LLMs are bar none the absolute best natural
       | language processing and producing systems we've ever made. They
       | are absolutely fantastic at taking unstructured user inputs and
       | producing natural-looking (if slightly stilted) output. The
       | problem is that they're not nearly as good at almost anything
       | else we've ever needed a computer to do as other systems we've
       | built to do those things. We invented a linguist and mistook it
       | for an engineer.
       | 
       | The second is that there's a maxim in media studies which is
       | almost universally applicable, which is that the first use of a
        | new medium is to recapitulate the old. The first TV was radio
       | shows, the first websites looked like print (I work in synthetic
       | biology, and we're in the "recapitulating industrial chemistry"
       | phase). It's only once people become familiar with the new medium
       | (and, really, when you have "natives" to that medium) that we
       | really become aware of what the new medium can do and start
       | creating new things. It strikes me we're in that recapitulating
       | phase with the LLMs - I don't think we actually know what these
       | things are good for, so we're just putting them everywhere and
       | redoing stuff we already know how to do with them, and the
       | results are pretty lackluster. It's obvious there's a "there"
       | there with LLMs (in a way there wasn't with, say, Web 3.0, or
       | "the metaverse," or some of the other weird fads recently), but
       | we don't really know how to actually wield these tools yet, and I
       | can't imagine the appropriate use of them will be chatbots when
       | we do figure it out.
        
         | artyom wrote:
         | Most grounded and realistic take on the AI hype I've read
         | recently.
        
         | brulard wrote:
         | It was a mistake to call LLMs "AI". Now people expect it to be
         | generic.
        
           | exe34 wrote:
            | They're pretty AI to me. I've been using ChatGPT to explain
            | things to me while learning a foreign language, and a
            | native speaker has been overseeing the comments from it. It
            | hasn't said anything that the native has disagreed with
            | yet.
        
             | stevenally wrote:
             | Like the OP said "LLMs are bar none the absolute best
             | natural language processing and producing systems we've
             | ever made".
             | 
             | They may not be good at much else.
        
             | caseyohara wrote:
             | I reckon you're proving their point. You're using a large
             | language model for language-specific tasks. It ought to be
             | good at that, but it doesn't mean it is generic artificial
             | intelligence.
        
               | exe34 wrote:
                | Generic artificial intelligence is a sufficiently
                | large bag of tricks. That's what natural intelligence
                | is. There's no evidence that it's not just tricks all
                | the way down.
               | 
               | I'm not asking the model to translate from one language
               | to another - I'm asking it to explain to me why a certain
               | word combination means something specific.
               | 
                | It can also solve/explain a lot of things that aren't
                | language. Bag of tricks.
        
             | brulard wrote:
              | Yes, but your use case is language. I use LLMs for all
              | kinds of stuff - programming, creative work, etc. - so I
              | know they're useful elsewhere too. But since the generic
              | term "AI" is being used, people expect it to be good at
              | everything a human can be good at and then whine about
              | how stupid the "AI" is.
        
             | Shorel wrote:
             | I tried the same with another foreign language. Every
              | native speaker has told me the answers are crap.
        
               | exe34 wrote:
                | Could you give an example?
        
           | lolinder wrote:
           | OpenAI has been pushing the idea that these things are
           | generic--and therefore the path to AGI--from the beginning.
           | Their entire sales pitch to investors is that they have the
           | lead on the tech that is most likely to replace all jobs.
           | 
           | If the whole thing turns out to be a really nifty commodity
           | component in other people's pipelines, the investors won't
           | get a return on any kind of reasonable timetable. So OpenAI
           | keeps pushing the AGI line even as it falls apart.
        
           | MR4D wrote:
           | I wonder.
           | 
           | People primarily communicate thru words, so maybe not.
           | 
            | Of course, pictures, body language, and tone are also
            | communication methods.
           | 
           | So far it looks like these models can convert pictures into
           | words reasonably well, and the reverse is improving quickly.
           | 
            | Tone might be next - there are already models that can
            | detect stress, so that's a good first start.
           | 
            | Body language is probably a bit farther in the future, but
            | it might be as simple as image analysis (that's only a
            | wild guess - I have no idea).
        
         | holoduke wrote:
          | I actually believe the practical use of transformers,
          | diffusers, etc. is already as impactful as the wide adoption
          | of the internet. Or smartphones or cars. It's already used by
          | hundreds of millions and has become an irreplaceable tool for
          | enhancing work output. And it has just started. Five years
          | from now it will dominate every single part of our lives.
        
         | soulofmischief wrote:
         | Transformers still excel at translation, which is what they
         | were originally designed to do. It's just no longer about
         | translating only language. Now it's clear they're good at all
         | sorts of transformations, translating ideas, styles, etc. They
         | represent an incredibly versatile and one-shot programmable
         | interface. Some of the most successful applications of them so
         | far are as some form of interface between intent and action.
         | 
         | And we are still just barely understanding the potential of
         | multimodal transformers. Wait till we get to metamultimodal
         | transformers, where the modalities themselves are assembled on
         | the fly to best meet some goal. It's already fascinating
         | scrolling through latent space [0] in diffusion models, now
         | imagine scrolling through "modality space", with some arbitrary
         | concept or message as a fixed point, being able to explore
         | different novel expressions of the same idea, and sample at
         | different points along the path between imagery and sound and
         | text and whatever other useful modalities we discover. Acid
         | trip as a service.
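          | 
          | A minimal sketch of what a latent-space walk is mechanically:
          | pick two latent points and decode samples along an
          | interpolation path between them (the slerp helper and random
          | vectors below are illustrative, not the linked Keras
          | example):
          | 
          |     import numpy as np
          | 
          |     def slerp(t, a, b):
          |         # Spherical interpolation between two latent
          |         # vectors; smoother than a straight-line walk.
          |         omega = np.arccos(np.clip(
          |             np.dot(a / np.linalg.norm(a),
          |                    b / np.linalg.norm(b)), -1.0, 1.0))
          |         so = np.sin(omega)
          |         return (np.sin((1 - t) * omega) / so) * a \
          |              + (np.sin(t * omega) / so) * b
          | 
          |     z0, z1 = np.random.randn(2, 64)   # two latent points
          |     path = [slerp(t, z0, z1) for t in np.linspace(0, 1, 8)]
          |     # Each point in `path` would be decoded into a sample.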
         | 
         | [0]
         | https://keras.io/examples/generative/random_walks_with_stabl...
        
           | gsf_emergency_2 wrote:
           | Something that has been bugging me is that, applications-
           | wise, the exploitative end of the "exploitation-exploration"
            | trade-off (for lack of a better summary) has gotten way
            | more attention than the other side.
           | 
           | So, besides the complaints about accuracy, hallucinations
           | (you said "acid trip") are dissed much more than would have
           | been necessary.
        
         | caseyy wrote:
         | I haven't read Understanding Media by Marshall McLuhan, but I
         | think he introduced your second point in that book, in 1964. He
         | claims that the content of each new medium is a previous
         | medium. Video games contain film, film contains theater,
         | theater contains screenplay, screenplay contains literature,
         | literature contains spoken stories, spoken stories contain
         | folklore, and I suppose if one were an anthropologist, they
         | could find more and more chain links in this chain.
         | 
         | I use this idea when designing narratives for video games. The
         | stories must draw from other commonly understood media.
         | Otherwise, the players won't be able to relate. For example,
         | you may end up with game stories that only a few people
         | understand and value, like stories based on pop culture,
         | politics, or modern identities.
         | 
         | It's probably the same in AI -- the world needs AI to be chat
         | before it can grow meaningfully beyond. Once people understand
         | neural networks, we can broadly advance to new forms of mass-
         | application machine learning. I am hopeful that that will be
         | the next big leap.
         | 
         | Here's Marc Andreessen applying it to AI and search on Lex
         | Fridman's podcast: https://youtu.be/-hxeDjAxvJ8?t=160
        
           | cma wrote:
           | Oral cultures had theater.
        
         | armada651 wrote:
         | > It's obvious there's a "there" there with LLMs (in a way
         | there wasn't with, say, Web 3.0, or "the metaverse," or some of
         | the other weird fads recently)
         | 
         | There is a "there" with those other fads too. VRChat is a
         | successful "metaverse" and Mastodon is a successful
         | decentralized "web3" social media network. The reason these
         | concepts are failures is because these small grains of success
         | are suddenly expanded in scope to include a bunch of dumb ideas
         | while the expectations are raised to astronomical levels.
         | 
          | That in turn causes investors to throw stupid amounts of
          | money at these concepts, which attracts all the grifters of
          | the tech world. It smothers nascent new tech in the crib, as
          | it is suddenly assigned a valuation it can never realize
          | while the grifters soak up all the investments that could've
          | gone to competent startups.
        
       | originalvichy wrote:
        | AI working with your OS is absolutely the letdown. I do not
        | want to give a direct feed of my personal computer's data to
        | the same developers who lie about copyright abuses when mining
        | data.
       | 
       | 90% of the mass consumer AI tech demos in the past 2-3 years are
       | the exact same demos that voice assistants used to do with just
        | speech-to-text + search functions. And these older tech demos
        | are already things that probably only 10% of users did
        | regularly. So they are adding AI features to halo features
        | that look good in marketing but that people never use.
       | 
       | Keep the OS secure and let me use an Apple AI app in 2-3 years
       | when they have rolled their own LLM.
        
       | 4ndrewl wrote:
       | "Apple made a rare stumble"
       | 
       | Auto. Vision Pro. AI.
       | 
       | Is there a pattern emerging here?
        
         | reliabilityguy wrote:
         | No need to go that far.
         | 
          | Search in Mail has been abysmal since forever. Everyone knows
          | it. Apple knows it. Still no change. So, no surprise here.
        
           | bdangubic wrote:
            | Search in Outlook is abysmal, and it is part of Microsoft's
            | core business... :)
        
             | reliabilityguy wrote:
              | Well, I think the expectations for Apple's products are
              | on a much higher level, which makes the situation with
              | search even more embarrassing.
        
               | bdangubic wrote:
                | How interesting... I expect the opposite. Just like
                | maps, search is just not Apple's "thing", and I don't
                | expect much at all.
        
             | hu3 wrote:
              | Whataboutism won't save Apple when Gmail search has been
              | great since forever.
        
               | bdangubic wrote:
                | Hm, a search company's search is decent; that's
                | surprising...
        
         | LeoPanthera wrote:
         | I don't know if the Vision Pro counts as a stumble. If they
         | were planning to make a mass-market product, they wouldn't have
         | priced it so high. Apple doesn't reveal sales targets, but I
         | bet they sold about as many Vision Pros as they expected to.
        
           | henry2023 wrote:
            | Everyone said the same when Apple introduced the iPhone.
           | It was expensive and it didn't have a keyboard. Clearly made
           | for a small niche market.
        
       | ichiwells wrote:
        | One of Apple's biggest misses with "AI", in my opinion, is not
        | building a universal search.
       | 
       | For all the hype LLM generation gets, I think the rise of LLM-
       | backed "semantic" embedding search does not get enough attention.
       | It's used in RAG (which inherits the hallucinatory problems), but
       | seems underutilized elsewhere.
       | 
        | The worst searches I've seen (and, coincidentally/
        | paradoxically, the ones I use the most) are Gmail and Dropbox,
        | both of which cannot find emails or files that I know exist,
        | even when using the exact email subject and file name keywords.
       | 
       | Apple could arguably solve this with a universal search SDK, and
       | I'd value this far more than yet-another-summarize-this-paragraph
       | tool.
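        | 
        | The core of such a search is small. A minimal sketch, assuming
        | some on-device embedding model behind a hypothetical embed()
        | function (none of the names here are a real Apple or Gmail
        | API):
        | 
        |     import numpy as np
        | 
        |     def embed(text: str) -> np.ndarray:
        |         # Stand-in for an on-device embedding model.
        |         raise NotImplementedError
        | 
        |     def semantic_search(query, docs, top_k=5):
        |         # Rank docs by cosine similarity to the query.
        |         q = embed(query)
        |         vecs = [embed(d) for d in docs]
        |         sims = [np.dot(q, v) /
        |                 (np.linalg.norm(q) * np.linalg.norm(v))
        |                 for v in vecs]
        |         order = np.argsort(sims)[::-1][:top_k]
        |         return [docs[i] for i in order]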
        
         | brulard wrote:
          | I have this same issue with Gmail. I cannot find e-mails by
          | an exact word from the text or subject. The e-mail is there,
          | but search won't show it. I don't understand how the number
          | one email provider can fail at that.
        
           | squid_ca wrote:
           | Or, say, a number one search provider ;)
        
           | fragmede wrote:
           | and search provider! Of all the companies in the world, why
           | is Gmail search just not better?
        
       | LeoPanthera wrote:
       | AI might be disappointing, but Apple Intelligence is definitely a
       | stumble. I've been playing with Gemini and it works shockingly
       | well. I fully expect Apple to catch up, but it will take a while
       | for them to recover from the reputational damage.
        
       | aaomidi wrote:
       | Yes and no.
       | 
       | Siri didn't need to suck all these years. Even before the LLM
       | craze.
        
       | seydor wrote:
       | Trough of disillusionment
        
       | andrewstuart wrote:
       | AI is at the Web 1.0 stage when people didn't really know how to
       | make the most of it.
       | 
       | It sounds ridiculous now but Web 1.0 was mostly about putting
       | companies paper brochures onto websites.
       | 
       | It sounds doubly ridiculous that Web 1.0 came to an end when the
       | market crashed because no one could figure out how to make money
       | from the internet.
       | 
       | Web 1.0 started in 1994 and it would be ten years until Facebook
       | arrived.
       | 
        | So AI has some really, really big surprises in store that no
        | one has thought of yet, and when someone does, fortunes will
        | be made.
        
       | tallytarik wrote:
       | "Hey Siri open the curtains"
       | 
       | "I found some web results. I can show them if you ask again from
       | your iPhone"
       | 
       | Nah, Apple is the letdown, and has been since before ChatGPT.
        
       | kittikitti wrote:
       | Why is a source about AI from CNN being taken seriously? Isn't
       | their "journalism" just clickbait?
        
       | amelius wrote:
       | Only if you are susceptible to the RDF.
        
       | stavros wrote:
       | To me, "AI is the letdown" is the letdown. The sheer lack of
       | imagination and wonder you must have to see what are almost
       | virtual people, something that was _unthinkable_ five years ago,
       | and to say it's a letdown, I will never understand.
       | 
       | We have programs, actual programs that you can run on your
       | laptop, that will understand images and describe them to you,
       | understand your voice, talk to you, explain things to you. We
       | have experts that will answer your every question, that will
       | research things for you, and all we keep saying is how
       | disappointing it is that they aren't better than humans.
       | 
       | To me, this is very much the old joke of "wow, your dog can
       | sing?!" "Eh, it's not that impressive, he's pitchy". To go from
       | "AI that can converse fluently is impossible, basically science
       | fiction" to "AI is a letdown" just shows me the infinite
       | capability humans have to find anything disappointing, no matter
       | how jaw-droppingly amazing it is.
        
         | namaria wrote:
          | Frankly, the "this didn't exist before" and extend-the-line
          | "it will keep getting better" arguments are not only bad
          | reasoning, they're getting tired.
         | 
         | Yeah transformers doing NLP well is pretty impressive. No, it
         | is not worth burning hundreds of billions of dollars on GPU
         | data centers. And please, stop the hype. Non-technical decision
         | makers are really spoiling everything with magical thinking
         | about "artificial intelligence". We still have to learn domains
         | and engineer products. There is no silver bullet.
        
           | voidspark wrote:
           | It is worth burning hundreds of billions, because users are
           | demanding it and getting value from it.
           | 
           | Grok has already overloaded their 200k GPU cluster and is
           | struggling to keep up with the demand.
        
         | throwawa14223 wrote:
         | But we've had Eliza since the 60's so what is there to get
         | excited about?
        
           | stavros wrote:
           | We've also had sand, so I'm not sure what this "CPU" hype is
           | about.
        
       | bradgessler wrote:
        | Apple would be much better off saying to the world, "we're
        | going to make Siri better". That's concrete, people get it,
        | LLMs are good at it, and it's something we'd all appreciate.
       | 
       | Instead they're failing to build a bunch of stuff that nobody
       | asked for under the banner, "Apple Intelligence".
       | 
       | Please Apple, just make Siri better.
        
       | throwawa14223 wrote:
       | I've noticed that Siri has gotten far worse at playing a song
       | based on a verbal request. Frequently Siri now assures me that
       | songs are not downloaded to my phone only for me to discover that
       | they have been the whole time.
        
       | lolinder wrote:
       | This is a really good insight:
       | 
       | > There's a popular adage in policy circles: "The party can never
       | fail, it can only be failed." It is meant as a critique of the
       | ideological gatekeepers who may, for example, blame voters for
       | their party's failings rather than the party itself.
       | 
       | > That same fallacy is taking root among AI's biggest backers. AI
       | can never fail, it can only be failed. Failed by you and me, the
       | smooth-brained Luddites who just don't get it.
       | 
       | This attitude has been prevalent on HN from the beginning of the
       | hype cycle. Those of us who say that we keep trying AI and it
       | keeps letting us down invariably are told that we're just holding
       | it wrong. Did you try giving enough context? Did you try the
       | latest fancy tool? Did you use the latest models?
       | 
       | The answer is yes on all counts, and AI still has failed to
       | transform my work in any meaningful way, but that's _clearly_ a
       | me problem because LLMs are so _clearly_ the future.
       | 
       | It's nice to see this attitude encompassed in a little maxim, and
       | somewhat appropriate that the maxim comes from politics.
        
       | deadbabe wrote:
       | We keep trying to find justifications for business use of LLMs.
       | 
       | We keep getting shut down by simpler, purpose built tools that
       | work predictably.
       | 
        | LLMs are just good for synthesizing vague inputs.
        
       | ohso4 wrote:
       | > Apple's obsession with privacy and security is the reason most
       | of us don't think twice to scan our faces, store bank account
       | information or share our real-time location via our phones.
       | 
        | Uh, do you have any freaking idea what happens with your
        | location data? Bank account information is a matter of
        | security. So is Face ID data.
        
       | icu wrote:
       | Where exactly is the Apple Intelligence that was advertised? Siri
       | absolutely cannot go into your phone's calendar and see who you
       | bumped into at some bar or cafe. I've been using the Pixel 9 Pro
       | as my daily driver and while I really wanted to install CalyxOS
       | on it, I've found Gemini to be actually useful (and I'm generally
       | biased against Google).
       | 
       | Apple is behind the curve like Google was prior to Gemini 2.5
       | Pro, but unlike Google, I cannot see Apple having the talent to
       | catch up unless they make some expensive acquisitions and even
       | then they will still be behind. I was shocked at how good Gemini
       | 2.5 Pro is. The cost and value for money difference is so big
       | that I'm considering switching away from my API usage of Claude
       | Sonnet 3.7 to Gemini 2.5 Pro.
        
       | lvl155 wrote:
        | This sort of takeaway is from people who do not have experience
        | at the cutting edge. AI is developing at such a rapid pace
        | right now.
       | I've seen some amazing things in the past three months.
       | 
       | I will say Apple AI completely sucks for a company with all the
       | resources available to them.
        
       | krackers wrote:
        | No, frontier models are very mind-blowing. And DeepSeek's open-
        | weight V3 right now is as good as a frontier model. It's just
        | Apple's tiny models that are useless for anything. And it's not
        | just that they didn't have good open models; there's no excuse
        | for shipping the joke that is "Image Playground" when you can
        | run any number of diffusion models on a Mac and get better
        | quality.
        
       ___________________________________________________________________
       (page generated 2025-03-29 23:00 UTC)