[HN Gopher] OpenAI and Apple Announce Partnership
       ___________________________________________________________________
        
       OpenAI and Apple Announce Partnership
        
       Author : serjester
       Score  : 353 points
       Date   : 2024-06-10 18:55 UTC (4 hours ago)
        
 (HTM) web link (openai.com)
 (TXT) w3m dump (openai.com)
        
       | talldayo wrote:
        | > The ChatGPT integration, _powered by GPT-4o_, will come to
       | iOS, iPadOS, and macOS later this year.
       | 
       | Jensen Huang must be having the time of his life right now.
       | Nvidia's relationship with Apple went from pariah to prodigal
       | son, _real_ fast.
        
         | dereg wrote:
         | If we know anything about Apple, they're going after Nvidia. If
         | anyone can pull it off, it's going to be them.
        
           | talldayo wrote:
           | Well, good luck to Apple then. Hopefully this attempt at
           | killing Nvidia goes better than the first time they tried, or
            | when they tried and gave up on making OpenCL.
           | 
           | I just don't understand how they can compete on their own
            | merits without purpose-built silicon; the M2 Ultra can't
            | hold a candle to a single GB200. Once you consider how
           | Nvidia's offerings are networked with Mellanox and CUDA
           | universal memory, it feels like the only advantage Apple has
           | in the space is setting their own prices. If they want to be
           | competitive, I don't think they're going to be training Apple
           | models on Apple Silicon.
        
             | 0xWTF wrote:
              | S&P 500 average P/E - 20 to 25
              | 
              | NASDAQ average P/E - 31
              | 
              | Nvidia's P/E - 71
             | 
             | That's a market of 1 vendor. That's ripe for attack.
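
A back-of-the-envelope way to read those multiples (a minimal sketch; the 40%/year earnings growth rate is an illustrative assumption, not a figure from the thread):

```python
# If the share price stayed flat, how many years of earnings growth at
# rate g would it take for Nvidia's P/E of 71 to fall to the S&P 500
# average of ~25?  P/E shrinks as earnings compound: pe_t = pe_0 / (1+g)^t.
import math

nvda_pe = 71
sp500_pe = 25
growth = 0.40  # assumed annual earnings growth (illustrative)

years = math.log(nvda_pe / sp500_pe) / math.log(1 + growth)
print(f"{years:.1f} years")  # -> 3.1 years
```

In other words, the premium multiple prices in several more years of exceptional earnings growth, which is exactly what makes it "ripe for attack" if a competitor dents that growth.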
        
               | talldayo wrote:
               | Let's check in with OpenCL and see how far it got
               | disrupting CUDA.
               | 
               | You see, I _want_ to live in a world where GPU
                | manufacturers aren't perpetually hostile against each
               | other. Even Nvidia would, judging by their decorum with
               | Khronos. Unfortunately, some manufacturers would rather
               | watch the world burn than work together for the common
               | good. Even if a perfect CUDA replacement existed like it
               | did with DXVK and DirectX, Apple will ignore and deny it
               | while marketing something else to their customers. We've
                | watched this happen for years, and it's why macOS
                | perennially cannot run many games or reliably support
                | open source software: Apple is an unreasonably fickle
                | OEM, and their users constantly pay
               | the price for Apple's arbitrary and unnecessary
               | isolationism.
               | 
               | Apple thinks they can disrupt AI? It's going to be like
               | watching Stalin try to disrupt Wal-Mart.
        
               | labcomputer wrote:
               | > Let's check in with OpenCL and see how far it got
               | disrupting CUDA.
               | 
               | That's entirely the fault of AMD and Intel fumbling the
               | ball in front of the other team's goal.
               | 
               | For _ages_ the only accelerated backend supported by
               | PyTorch and TF was CUDA. Whose fault was that? Then there
               | was buggy support for a subset of operations for a while.
               | Then everyone stopped caring.
               | 
                | Why I think it will go differently this time: Nvidia's
               | competitors seem to have finally woken up and realized
               | they need to support high level ML frameworks. "Apple
               | Silicon" is essentially fully supported by PyTorch these
               | days (via the "mps" backend). I've heard OpenCL works
               | well now too, but have no hardware to test it on.
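
For reference, targeting the "mps" backend mentioned above is a one-line device choice in PyTorch; a minimal sketch with fallbacks (the tensor sizes are arbitrary):

```python
import torch

# Pick the best available backend: Apple's Metal Performance Shaders
# ("mps") on Apple Silicon, CUDA on Nvidia hardware, plain CPU otherwise.
if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

x = torch.randn(256, 256, device=device)
y = (x * x).sum()  # sum of squares, runs on whichever backend was selected
print(device.type, float(y))
```

The same script runs unmodified on an M-series Mac, an Nvidia box, or a plain CPU, which is the kind of portability a CUDA-only backend never offered.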
        
               | anvuong wrote:
               | It's ripe for attack. But Nvidia is still in its growing
               | phase, not some incumbent behemoth. The way Nvidia
                | ruthlessly handled AMD tells us that they are ready for
               | competition.
        
               | riquito wrote:
               | > That's a market of 1 vendor. That's ripe for attack.
               | 
                | it's just a monopoly[1], how hard can it be?
               | 
               | /s
               | 
               | - [1] practically, because of how widespread cuda is
        
               | baq wrote:
               | cuda is x86. the only way from 100% market share is down.
               | 
               | ...though it took two solid decades to even make a dent
               | in x86.
        
               | talldayo wrote:
                | CUDA is also ARM:
                | https://developer.nvidia.com/cuda-downloads?target_os=Linux
        
           | whimsicalism wrote:
           | i would strongly take the other side of that bet
        
             | dereg wrote:
             | Nvidia obviously has an enormous, enormous moat but I do
             | think this is one of the areas in which Apple may actually
             | GAF. The rollout of Apple Intelligence is going to make
             | them the biggest provider of "edge" inference on day one.
             | They're not going to be able to ride on optimism in
             | services growth forever.
        
               | whimsicalism wrote:
               | Apple simply does not have the talent pool to take on
               | either nvidia or the big LLM providers anywhere on the
               | stack except for edge inference.
               | 
               | If you're saying Apple is going to 'take on nvidia' in
               | edge inference, then I don't disagree but I would hardly
               | even count that as taking on nvidia.
        
               | dereg wrote:
               | I can't really dispute any of that.
               | 
               | It took almost a decade but the PA Semi acquisition
               | showed that Apple was able to get out of the shadow of
               | its PowerPC era.
               | 
               | Nvidia will remain a leader in this space for a long
                | time. But things are going to play out wonky, and Apple,
                | when determined, is actually pretty good at executing on
                | longer-term roadmaps.
        
             | Ancapistani wrote:
             | Personally, I'm taking _both_ sides of that bet.
             | 
             | I think Apple is going to make rapid and substantial
             | advancements in on-device AI-specific hardware. I also
             | think nVIDIA is going to continue to dominate the cloud
             | infrastructure space for training foundational models for
             | the foreseeable future, and serving user-facing LLM
             | workloads for a long time as well.
        
               | whimsicalism wrote:
               | edge inference? sure - but nvidia is not even a major
               | player in that space now so i wouldn't really count that
               | as 'taking on nvidia'.
        
           | MR4D wrote:
           | Why do you think that?
           | 
           | You seem to be positioning this as a Ford vs Chevy duel, when
           | (to me at least) the comparison should be to Ford vs Exxon.
           | 
           | Nvidia is an infrastructure company. And a darned good one.
           | Apple is a user facing company and has outsourced
           | infrastructure for decades (AWS & Azure being two of the well
           | known ones).
        
             | dereg wrote:
             | Apple outsourced chips to IBM (PowerPC) for a long time and
             | floundered all the while. They went into the game
             | themselves w/ the PA Semi acquisition and now they have
             | Apple Silicon to show for it.
        
               | MR4D wrote:
                | But Apple is vertically integrating. That's like Ford
               | buying Bridgestone.
               | 
               | The only way it hurts Nvidia is if Apple becomes the
               | runaway leader of the pc market. Even then, Apple hasn't
               | shown any intent of selling GPUs or AI processors to the
               | likes of AWS, or Azure or Oracle, etc.
               | 
                | Nvidia has a much bigger threat from Intel/AMD or
               | the cloud providers backward integrating and then not
               | buying Nvidia chips. Again, no signs that Apple wants to
               | do this.
        
           | 01100011 wrote:
           | Apple could have moved on Nvidia but instead they seem to
           | have thrown in the towel and handed cash back to investors.
           | The OpenAI deal seems like further admission by Apple that
           | they missed the AI boat.
        
           | gizmo wrote:
            | Exactly. Apple really needs new growth drivers, and Nvidia
            | has a $3tn market cap Apple wants to take a bite out of. It's
            | one of the few huge tech growth areas Apple can expand into.
        
           | ThinkBeat wrote:
            | I am of course wrong frequently, but I cannot see how that
            | would happen. If they create CPUs/GPUs that are faster/better
            | than what Nvidia sells, but only sell them as part of Mac
            | desktop or laptop systems, it won't really compete.
            | 
            | For that they would have to develop servers that have a mass
            | amount of whatever it is, or sell the chips in the same manner
            | Nvidia does today.
            | 
            | I don't see that future for Apple.
            | 
            | Microsoft / Google / or other major cloud companies would do
            | extremely well if they could develop it and just keep it as a
            | major win for their cloud products.
            | 
            | Azure is running OpenAI as far as I have heard.
            | 
            | Imagine if M$ made a crazy fast GPU/whatever. It would be a
            | huge competitive advantage.
            | 
            | Can it happen? I don't think so.
        
         | cube2222 wrote:
         | Eh, it seems from the keynote that ChatGPT will be very
         | selectively used, while most features will be powered by on-
         | device processing and Apple's own private cloud _running apple
         | silicon_.
         | 
         | So all in all, not sure if it's that great for Nvidia.
        
           | 01100011 wrote:
           | If OpenAI is furiously buying GPUs to train larger models and
           | Apple is handing OpenAI cash, then this seems like a win for
           | Nvidia. You can argue about how big of a win, but it seems
           | like a positive development.
           | 
           | What would not have been positive for Nvidia is Apple saying
           | they've adapted their HW to server chips and would be
           | partnering with OpenAI to leverage them, but that didn't
           | happen. Apple is busy handing cash back to investors and not
           | seriously pursuing anything but inference.
        
         | ra7 wrote:
         | Didn't Apple say they're using their own hardware for serving
         | some of the AI workloads? They dubbed it 'Private Cloud
         | Compute'. Not sure how much of a vote of confidence it is for
         | Nvidia.
        
           | whimsicalism wrote:
            | not for gpt4o workloads, they aren't
        
             | jsheard wrote:
             | Plus even if Apple is using their own chips for
             | inferencing, they're still driving more demand for
             | training, which Nvidia still has locked down pretty tight.
        
               | ra7 wrote:
               | Apple said they're using their own silicon for training.
               | 
               | Edit: unless I misunderstood and they meant only
               | inference.
        
               | talldayo wrote:
               | They trained GPT-4o on Apple Silicon? I find that hard to
               | believe, surely they only mean that _some_ models were
               | trained with Apple Silicon.
        
               | ra7 wrote:
               | Not GPT-4o, their own models that power some (most?) of
               | the "Apple Intelligence" stuff.
        
               | jsheard wrote:
               | Interesting, I thought Apple Silicon mainly excelled at
               | inferencing. Though I suppose the economics of it are
               | unique for Apple themselves since they can fill racks
               | full of barebones Apple Silicon boards without having to
               | pay their own retail markup for complete assembled
               | systems like everyone else does.
        
               | whimsicalism wrote:
               | without more details hard to say, but i seriously doubt
               | they trained any significantly large LM on their own
               | hardware
               | 
               | people on HN routinely seem to overestimate Apple's
               | capabilities
               | 
               | e: in fact, iirc just last month Apple released a paper
                | unveiling their 'OpenELM' language models and they were
               | all trained on nvidia hardware
        
             | ra7 wrote:
             | Which is only a subset of requests Apple devices will serve
             | and only with explicit user permission. That's going to
              | shrink over time as Apple continues to advance their own
             | models and silicon.
        
             | stetrain wrote:
             | Right, but are those going to run on Apple-owned hardware
             | at all? It seems like Apple will first prioritize their
             | models running on-device, then their models running on
             | Apple Silicon servers, and then bail out to ChatGPT API
             | calls specifically for Siri requests that they think can be
             | better answered by ChatGPT.
             | 
             | I'm sure OpenAI will need to beef up their hardware to
             | handle these requests - even as filtered down as they are -
             | coming from all of the Apple users that will now be
             | prompting calls to ChatGPT.
        
               | whimsicalism wrote:
               | they're going to be using nvidia (or maybe AMD if they
               | ever catch up) to train these models anyways
        
               | kolinko wrote:
                | not necessarily so; in terms of tflops per $ (at Apple's
                | cost of GPUs, not consumer prices) and tflops per watt,
                | their apple silicon is comparable if not better
        
               | whimsicalism wrote:
               | flops/$ is simply not all (or even most) that matters
               | when it comes to training LLMs.... Apple releases LLM
               | research - all of their models are trained on nvidia.
        
               | talldayo wrote:
               | > and tflops per watt their apple silicon is comparable
               | if not better
               | 
               | If Apple currently ships a single product with better AI
               | performance-per-watt than Blackwell, I will eat my hat.
        
           | lxgr wrote:
           | They're even explicitly saying:
           | 
           | > These models run on servers powered by Apple silicon [...]
           | 
           | That doesn't mean that there are no Nvidia GPUs in these
           | servers, of course.
        
             | Dunedan wrote:
             | That quote is about their own LLMs, not about the use of
             | ChatGPT.
        
               | lxgr wrote:
               | Yes, but GP was talking about the AI workloads Apple will
               | be running on their own servers (which are indeed
               | distinct from those explicitly labeled as ChatGPT).
        
             | bbatsell wrote:
             | They say user data remains in the Secure Enclave at all
             | times, which Nvidia GPUs would not be able to access. I am
             | quite certain that their private cloud inference runs only
             | Apple silicon chips. (The pre-WWDC rumors were that they
             | built custom clusters using M2 Ultras.)
        
               | talldayo wrote:
               | Not that it matters anyways, since Apple refuses to sign
               | Nvidia GPU drivers for MacOS in the first place. So if
               | they own any Nvidia hardware themselves, then they also
               | own more third-party hardware to support it.
        
         | Whatarethese wrote:
          | ChatGPT will only be invoked if on-device and Apple
          | Intelligence servers can't handle the request.
        
           | croes wrote:
           | To be useful Apple has to share the data with OpenAI
        
             | Workaccount2 wrote:
              | I can only imagine Apple has some kind of siloing agreement
              | with OpenAI; Apple can easily afford whatever price to do
              | so.
        
               | talldayo wrote:
               | Surely Apple wouldn't simply _market_ privacy while lying
               | to their users about who can access their data:
               | https://arstechnica.com/tech-policy/2023/12/apple-admits-
               | to-...
        
               | noahtallen wrote:
                | Yes, it was also covered explicitly in the keynote that
                | Apple users' requests to OpenAI are not tracked. (Plus you
                | have the explicit opt-in to even access ChatGPT via Siri
                | in the first place.)
        
         | KeplerBoy wrote:
         | Not sure Nvidia is too happy with Apple.
         | 
         | They are the first ones to ship on-device inference at scale on
         | non-nvidia hardware. Apple also has the means to build data
         | center training hardware using apple silicon if they want to do
         | so.
         | 
         | If they are serious about the OAI partnership they could also
         | start to supply them with cloud inference hardware and
         | strongarm them into only using apple servers to serve iOS
         | requests.
        
           | whimsicalism wrote:
           | > Apple also has the means to build data center training
           | hardware using apple silicon if they want to do so.
           | 
           | i'm seeing people all over this thread saying stuff like
           | that, it reads like fantasyland to me. Apple doesn't have the
           | talent or the chips or suppliers or really any of the
           | capabilities to do this, where are people getting it from?
        
             | KeplerBoy wrote:
             | Apple is already one of the largest (if not the largest)
             | customers of TSMC and they have plenty of experience
             | designing some of the best chips on the most modern nodes.
             | 
             | Their ability to design a chip and networking fabric which
             | is fast/efficient at training a narrow set of model
              | architectures is not far-fetched by any means.
        
               | talldayo wrote:
                | It's worth noting that one of Apple's largest competitors
               | at TSMC is, in fact, Nvidia. And when you line the
               | benchmarks up, Nvidia is one of the few companies that
               | consistently beats Apple on performance-per-watt even
                | when they _aren't_ on the same TSMC node:
               | https://browser.geekbench.com/opencl-benchmarks
        
           | mistersquid wrote:
           | > Apple also has the means to build data center training
           | hardware using apple silicon if they want to do so.
           | 
           | > If they are serious about the OAI partnership they could
           | also start to supply them with cloud inference hardware and
           | strongarm them into only using apple servers to serve iOS
           | requests
           | 
           | Apple addressed both these points in today's preso.
           | 
           | 1. They will send requests that require larger contexts to
           | their own Apple Silicon-based servers that will provide Apple
           | devices a new product platform called Private Cloud Compute.
           | 
           | 2. Apple's OS generative AI request APIs won't even talk to
           | cloud compute resources that do not attest to infrastructure
           | that has a publicly available privacy audit.
        
             | wmf wrote:
             | I'm pretty sure those points do not apply to ChatGPT
             | integration. ChatGPT is still running on Nvidia.
        
               | mistersquid wrote:
               | > I'm pretty sure those points do not apply to ChatGPT
               | integration.
               | 
               | You're absolutely right. I got too excited about Apple's
               | strategy to encourage developers to use Apple Private
               | Cloud Compute.
               | 
               | The UX for ChatGPT as shown for iOS 18 makes it obvious
               | that you are sending data outside the Apple Silicon
               | walled garden.
        
           | talldayo wrote:
           | > They are the first ones to ship on-device inference at
           | scale on non-nvidia hardware
           | 
           | Which is neat, but it's not CUDA. It's an application-
           | specific accelerator good at a small subset of operations,
            | controlled by a high-level library the industry is unfamiliar
            | with, and it's too underpowered to run LLMs or image
            | generators. The NPU is a novelty, and today's presentation
            | more or less confirmed how useless it is for rich local-only
            | operations.
           | 
           | > Apple also has the means to build data center training
           | hardware using apple silicon if they want to do so.
           | 
           | They could, but that's not a competitor against an NVL72 with
           | hundreds of terabytes of unified GPU memory. And then they
           | would need a CUDA competitor, which could either mean
           | reviving OpenCL's rotting corpse, adopting Tensorflow/Pytorch
           | like a sane and well-reasoned company, or reinventing the
           | wheel with an extra library/Accelerate Framework/MPS solution
           | that nobody knows about and has to convert models to use.
           | 
           | So they _can_ make servers, but Xserve showed us pretty
            | clearly that you can lead a sysadmin to macOS but you can't
            | make them use it.
           | 
           | > they could also start to supply them with cloud inference
           | hardware and strongarm them into only using apple servers to
           | serve iOS requests.
           | 
           | I wonder how much money they would lose doing that, over just
           | using the industry-standard Nvidia servers. Once you factor
           | in the margins they would have made selling those chips as
            | consumer systems, it's probably in the tens of millions.
        
             | KeplerBoy wrote:
              | You're approaching this from a developer's point of view.
              | 
              | Users absolutely don't care if their prompt response has
              | been generated by a CUDA kernel or some poorly documented
              | Apple-specific silicon that a poor team at Cupertino
              | almost lost their sanity porting the model to.
              | 
              | And haven't they already spent quite a bit of money on
              | their PyTorch-like MLX framework?
        
               | talldayo wrote:
               | > Users absolutely don't care if their prompt response
               | has been generated by a CUDA kernel or some poorly
               | documented apple specific silicon
               | 
               | They most certainly will. If you run GPT-4o on an iPhone
               | with MLX, it will suck. Users will tell you it sucks, and
               | they won't do so in developer-specific terms.
               | 
                | The entire point of this thread is that Apple _can't_
               | make users happy with their Neural Engine. They require a
               | stopgap cloud solution to make up for the lack of local
               | power on iPhone.
               | 
               | > And haven't they already spent quite a bit on money on
               | their pytorch-like MLX framework?
               | 
               | As well as Accelerate Framework, Metal Performance
               | Shaders and previously, OpenCL. Apple can't decide where
               | to focus their efforts, least of which in a way that
               | threatens CUDA as a platform.
        
             | PartiallyTyped wrote:
             | Imho, the stronghold of cuda is slowly eroding.
             | 
              | Inference can run without it, and could do so for years
              | via ONNX. Now we are starting to see more back-ends
              | becoming available.
             | 
             | see https://github.com/openxla
        
             | alextheparrot wrote:
             | Bit of a detail, but where are you deriving "with hundreds
             | of terabytes of unified GPU memory" from?
        
               | talldayo wrote:
                | I was an order of magnitude off, at least in the case of
                | the NVL72:
                | https://www.nvidia.com/en-us/data-center/gb200-nvl72/
               | 
               | But the point stands, these systems occupy a niche that
               | Apple Silicon is poorly suited to filling. They run
               | normal Linux, they support common APIs, and network to
               | dozens of other machines using Infiniband.
        
             | Miraste wrote:
             | > reinventing the wheel with an extra library/Accelerate
             | Framework/MPS solution that nobody knows about and has to
             | convert models to use.
             | 
             | This is Apple's favorite thing in the world. They already
             | have an Apple-Silicon-only ML framework as of a few months
             | ago, called MLX. Does anyone know about it? No. Do you need
             | to convert models to use it? Yes.
        
           | wmf wrote:
           | I would say MS Copilot+ is shipping on-device inference a few
           | months before Apple, although at 1000x lower volume.
        
         | swatcoder wrote:
         | Apple's put ChatGPT integration on the very edge of Apple
         | Intelligence. It's a win for OpenAI to have secured that
         | opportunity, and Nvidia wins by extension (as long as OpenAI
         | continues to rely on them themselves), but the vast majority of
         | what Apple announced today appears to run entirely on Apple
         | Silicon.
         | 
         | It's not especially big news for Nvidia at all.
        
       | summarity wrote:
       | It's an interesting vote of confidence in OpenAI's maturity (from
        | a scale and tech perspective) to integrate it as a system-wide,
       | third-party dependency available to all users for free.
        
         | willsmith72 wrote:
         | "interesting" is the right adjective. openai's reliability is
         | worse than the typical 2-person startup, but the quality of
         | their ml is just that good.
        
       | machinekob wrote:
       | Nvidia stock going to grow another 10% after that.
        
       | candiddevmike wrote:
       | Is Siri becoming another "frontend" to ChatGPT?
        
         | shironandon wrote:
         | for most requests.. yes.
        
           | qeternity wrote:
           | No, for selectively few requests.
        
             | adolph wrote:
             | Just the requests that make for a great keynote demo under
             | ideal conditions in SF
        
         | quintes wrote:
         | I found some web results that may be useful if you ask me again
         | from your iphone
        
         | visarga wrote:
         | It was cool how you can just ask Siri to connect you to another
         | LLM.
        
       | durpleDrank wrote:
       | Kind of funny that we got a double LLM situation happening lol.
        
       | jaskaransainiz wrote:
       | Clippy crying
        
       | mupuff1234 wrote:
       | GOOG stock seems to be ok with the announcement.
        
         | kccqzy wrote:
         | I'm convinced that GOOG has the necessary engineering chops to
         | pull the same thing off (or to put it less charitably, copy
         | Apple), but hitherto they were hindered by bad product manager
         | decisions leading them to engineer the wrong thing.
        
         | kernal wrote:
          | And why wouldn't it be? The strain on Microsoft's servers and
          | the free use of their resources by iOS users, with very little,
          | if any, in return, is a win for Apple. Not so much for OpenAI
          | or Microsoft.
        
       | quintes wrote:
        | I just need Apple to clearly indicate which settings will
        | completely disable this.
        
         | xylol wrote:
         | Some for now, none in the future I fear.
        
         | thepasswordis wrote:
         | I will disable it as soon as it tells me how to also
         | permanently disable live photos.
        
           | noahtallen wrote:
           | Settings > Camera > Preserve Settings. Switch "Live Photo" to
           | on. Then disable live photos when taking a picture.
        
         | lxgr wrote:
         | From the announcement, it seems like it's opt-in, not opt-out:
         | 
         | > Apple users are asked before any questions are sent to
         | ChatGPT,
        
       | ChrisArchitect wrote:
       | Related:
       | 
       |  _Introducing Apple Intelligence, the personal intelligence
       | system_
       | 
       | https://news.ycombinator.com/item?id=40636844
        
       | blueelephanttea wrote:
       | IMO this really feels like the Facebook / Twitter integration
       | from early iOS. That only lasted a few years.
       | 
       | Apple clearly thinks it needs a dedicated LLM service atm. But
       | still thinks it is only supplemental as they handle a bunch of
       | the core stuff without it. And require explicit user consent to
       | use OpenAI. And Apple clearly views it as a partial commodity
       | since they even said they plan to add others.
       | 
       | Tough to bet against OpenAI right now...but this deal does not
       | feel like a 10 year deal...
        
         | lanza wrote:
         | Ditto. They'll use it now while they stand to benefit and in 3
         | years they'll be lambasting OpenAI publicly for not being
         | private enough with data and pretend that they never had
         | anything to do with them.
        
           | nextworddev wrote:
           | This partnership is structured so that no data is logged or
           | sent to OpenAI.
        
             | wmf wrote:
             | The UI shows a "do you want your data to be sent to
             | OpenAI?" popup.
        
               | noahtallen wrote:
               | The parent is partially right, the keynote mentioned that
               | OpenAI agreed to not track Apple user requests.
        
               | toomuchtodo wrote:
               | I would like to see that codified in a binding agreement
               | regulators can surface in discovery if needed. Trust but
               | verify.
        
               | astrange wrote:
               | California and EU law require keeping data like that to
               | be opt-in afaik, so it doesn't need a promise to not do
               | it.
        
             | dosinga wrote:
             | That won't stop Apple from lambasting later
        
             | moralestapia wrote:
              | Some people here somehow think they will simultaneously
              | outsmart:
             | 
             | * The CEO of a _three trillion_ dollar company that employs
             | 100,000+ of the best talent you could find around the
             | world, with the best lawyers in the world one phone call
             | away. Also, one of the best performing CEOs in modern
             | times.
             | 
             | AND
             | 
              | * The CEO of the AI company (ok ... non-profit) that pretty
              | much brought the current wave of AI into existence, and who
              | has also spent the best part of his life building and
              | growing 1,000s of startups in SF.
             | 
             | Lol.
        
               | observationist wrote:
               | You make it sound like it's merit or competence that
               | landed Cook in that position, and that he somehow has
               | earned the prestige of the position?
               | 
               | I could buy that argument about Jobs. Cook is just a guy
               | with a title. He follows rules and doesn't get fired, but
               | otherwise does everything he can with all the resources
               | at his disposal to make as much money as possible. Given
               | those same constraints and resources, most people with an
               | IQ above 120 would do as well. Apple is an institution
               | unto itself, and you'd have to repeatedly, rapidly, and
               | diabolically corrupt many, many layers of corporate
               | protections to hurt the company intentionally. Instead,
               | what we see is simple complacency and bureaucracy
               | chipping away at any innovative edge that Apple might
               | once have had.
               | 
               | Maintenance and steady piloting is a far different
               | skillset than innovation and creation.
               | 
               | Make no mistake, Cook won the lottery. He knew the right
               | people, worked the right jobs, never screwed up anything
               | big, and was at the right place at the right time to land
               | where he is. Good for him, but let's not pretend he got
               | where he is through preternatural skill or competence.
               | 
               | I know it's a silicon valley trope and all, but the
               | c-class mythos is so patently absurd. Most of the best
               | leaders just do their best to not screw up. Ones that
               | actually bring an unusual amount of value or intellect to
               | the table are rare. Cook is a dime a dozen.
        
               | A_D_E_P_T wrote:
               | I was with you until your last sentence. By all accounts
               | Cook was one of the world's most effective managers of
               | production and logistics -- a rare talent. He famously
               | streamlined Apple's stock-keeping practices when he was a
               | new hire at Apple. How much he exercises that talent in
               | his day-to-day as CEO is not perfectly clear; it may
               | perhaps have atrophied.
               | 
               | In any case, "dime a dozen" doesn't do him justice -- he
                | was _very_ accomplished, in ways you can't fake, before
               | becoming CEO.
        
               | observationist wrote:
               | I look at it from a perspective of interchangeability -
               | if you swapped Steve Ballmer in for Cook, nothing much
               | would have changed. Same if you swapped Nadella in for
               | Pichai, or Pichai for Cook. Very few of these men are
               | exceptional; they are ordinary men with exceptional
               | resources at hand. What they can do, what they should do,
               | and what they can get away with, unseen, govern their
               | impact. Leaders that actually impact their institutions
               | are incredibly rare. Our current crop of ship steadying
               | industry captains, with few exceptions, are not towering
               | figures of incredible prowess and paragons of leadership.
               | They're regular guys in extraordinary circumstances. Joe
               | Schmo with an MBA, 120 IQ, and the same level of
               | institutional knowledge and 2 decades of experience at
               | Apple could have done the same as Cook; Apple wouldn't
               | have looked much different than it does now.
               | 
               | There's a tendency to exaggerate the qualities of men in
               | positions like this. There's nothing inherent to their
               | positions requiring greatness or incredible merit. The
               | extraordinary events already happened; their job is to
               | simply not screw it up, and our system is such that you'd
               | have to try really, really hard to have any noticeable
               | impact, let alone actually hurt a company before the
               | institution itself cuts you out. Those lawyers are a
               | significant part of the organism of a modern mega
               | corporation; they're the substrate upon which the
               | algorithm that _is_ a corporation is running. One of the
               | defenses modern corporations employ is to limit the
               | impact any individual in the organization can have,
               | positive or otherwise, and to employ intense scrutiny and
               | certainty of action commensurate with the power of a
               | position.
               | 
                | Throw Cook into a start-up arena against Musk, Gates,
                | Altman, Jobs, Buffett, etc., and he'd get eaten alive. Cook
               | isn't the scrappy, agile, innovative, ruthless start-up
               | CEO. He's the complacent, steady, predictable
               | institutional CEO coasting on the laurels of his betters,
               | shielded from the trials they faced through the sheer
               | inertia of the organization he currently helms.
               | 
               | They're different types of leaders for different phases
               | of the megacorp organism, and it's OK that Cook isn't
               | Jobs 2.0 - that level of wildness and unpredictability
               | that makes those types of leaders their fortunes can also
               | result in the downfall of their companies. Musk acts with
               | more freedom; the variance in behavior results in a
               | variance of fortunes. Apple is more stable because of
               | Cook, but it's not because he's particularly special.
               | Simply steady and sane.
        
             | kokanee wrote:
             | The partnership is structured so that Apple can legally
             | defend including language in their marketing that says
             | things like "users' IP addresses are obscured." These
             | corporations have proven time and time again that we need
             | to read these statements with the worst possible
             | interpretation.
             | 
             | For example, when they say "requests are not stored by
             | OpenAI," I have to wonder how they define "requests," and
             | whether a request not having been stored by OpenAI means
             | that the request data is not accessible or even outright
             | owned by OpenAI. If Apple writes request data to an S3
             | bucket owned by OpenAI, it's still defensible to say that
             | OpenAI didn't store the request. I'm not saying that's the
             | case; my point is that I don't trust these parties and I
             | don't see a reason to give them the benefit of the doubt.
             | 
             | The freakiest thing about it is that I probably have no way
             | to prevent this AI integration from being installed on my
             | devices. How could that be the case if there was no profit
             | being extracted from my data? Why would they spend untold
             | amounts on this deal and forcibly install expensive
             | software on my personal devices at no cost to me? The
             | obvious answer is that there is a cost to me, it's just not
             | an immediate debit from my bank account.
        
             | LtWorf wrote:
             | Sure
        
           | chaosmanorism wrote:
           | Careful, Elon got outed by his own xAI:
           | 
           | https://grook.ai/saved_session?id=e269e88a7b1a71eff4f176c864.
           | ..
        
         | adolph wrote:
         | Not looking forward to the equivalent of the early Apple Maps
         | years.
        
         | hehdhdjehehegwv wrote:
         | There's a lot I don't like about Sam Altman. There's a lot I
         | don't like about OpenAI.
         | 
         | But goddamn they absolutely leapfrogged Google and Apple and
         | it's completely amazing to see these trillion dollar companies
         | play catch-up with a start-up.
         | 
         | I want to see more of this. Big Tech has been holding back
         | innovation for too long.
        
           | swatcoder wrote:
            | They "leapfrogged" _Google_ on providing a natural language
            | interface to the world knowledge we'd gotten used to
            | retrieving through web search. But Apple's never done more
            | than toy in that space.
           | 
           | Apple's focus has long been on a lifestyle product experience
           | across their portfolio of hardware, and Apple Intelligence
           | appears to be focused exactly on that in a way that has
           | little overlap with OpenAI's offerings. The partnership
           | agreement announced today is just outsourcing an accessory
           | tool to a popular and suitably scaled vendor, the same as
           | they did for web search and social network integration in the
           | past. Nobody's leapfrogging anybody between these two because
           | they're on totally different paths.
        
             | itishappy wrote:
             | Siri is a toy, but I don't think that was Apple's intent.
             | It's been a long-standing complaint that using Siri to
              | search the web sucks compared to other companies' offerings.
        
               | swatcoder wrote:
               | Apple's product focus is on getting Siri to bridge your
               | first-party and third-party apps, your 500GB of on-device
               | data, and your terabyte of iCloud data with a nice
               | interface, all of which they're trying to deliver using
               | their own technology.
               | 
               | Having Siri answer your trivia question about whale
               | songs, or suggest a Pad Thai recipe modification when you
               | ran out of soy sauce, is just not where they see the
               | value. Poor web search has been an easy critique to weigh
               | against Siri for the last many years, and the ChatGPT
               | integration (and Apple's own local prompt prep) should
               | fare far better than that, but it doesn't have any
               | relevance to "leapfrogging" because the two companies
               | just aren't trying to do the same thing.
        
               | itishappy wrote:
               | That's the complaint! They play in the same space, they
               | just don't seem to be trying. Siri happily returns links
               | to Pad Thai recipes, it's not like they didn't expect
               | this to be a use-case. They just haven't made a UX that
               | competes with others.
               | 
               | And it's not just web search! Siri's context is abysmal.
                | My dad routinely has to correct the spelling of _his own
                | name_. It's a common name, there are multiple spellings,
                | _but it's his phone_!
        
               | hehdhdjehehegwv wrote:
               | My favorite thing with names is I have some people in my
               | contacts who have names that are phonetically similar to
               | English words. When I type those words in a text or
               | email, Siri will change those words to people's names.
        
               | hehdhdjehehegwv wrote:
               | Ah yes, them saying "we're bad at it on purpose, but are
               | scrambling to throw random features in our next release"
               | is definitely a great defense.
        
             | hehdhdjehehegwv wrote:
             | Apple bought Siri 14 years ago, derailed the progress and
             | promise it had by neglect, and ended up needing a bail out
             | from Sam once he kicked their ass in assistants.
             | 
             | Call it whatever you want.
        
           | bee_rider wrote:
           | Isn't MS heavily invested in them and also letting them use
           | Azure pretty extensively? Rather, I think this is more like
           | an interesting model of a big tech company actually managing
           | to figure out exactly how hands off they need to be, in order
           | to not suffocate any ember of innovation. (In this mixed
           | analogy people often put out fires with their bare hands I
           | guess, don't think too hard about it).
        
           | beoberha wrote:
           | Big Tech is the only reason OpenAI can run. Microsoft is
           | propping them up with billions of dollars worth of compute
           | and infrastructure
        
             | prince_nerd wrote:
             | And the foundational tech (Transformers) came from Big
             | Tech, aka Google
        
               | hehdhdjehehegwv wrote:
               | It came from Google _employees_ who _left_ to found
               | startups.
               | 
               | Google _had_ technical founders, now it's run by MBAs and
               | they are having a Kodak Moment.
        
           | Fricken wrote:
           | Change is inevitable in the AI space, and the changes come in
           | fits and starts. In a decade OpenAI too may become a hapless
           | fiefdom lorded over by the previous generation's AI talent.
        
         | Ancapistani wrote:
         | My gut says that it's a stopgap solution to implement the
         | experience they want.
         | 
         | I think Apple's ultimate goal is to move as much of the AI
         | functionality as possible on-device.
        
           | romeros wrote:
            | yup.. and that's good for consumers as well because they
            | don't have to worry about their private data sitting on
            | OpenAI servers.
        
             | kokanee wrote:
             | The idea that they would give ChatGPT away to consumers for
             | free without mining the data in some form or another is
             | naive.
        
         | helsinkiandrew wrote:
         | But they've also signalled they'll probably support
         | Google/Anthropic in the future
         | 
         | > Apple demoed other generative features beyond the OpenAI
         | integration and said it plans to announce support for other AI
         | models in the future.
        
         | wg0 wrote:
          | Actually, in just three to five years, lots of "AI boxes" and
          | those magical sparkling icons next to input fields summoning AI
          | will be silently removed.
          | 
          | LLMs are not accurate; they aren't subject matter experts
          | that'll be within maybe a 5% error margin.
          | 
          | People will gradually learn and discover this, and the cost of
          | keeping a model updated and running won't drastically reduce,
          | so we'll most likely see the dust settling.
        
           | bee_rider wrote:
           | How do you define a percent error margin on the typical
           | output of something like ChatGPT? IIRC the image generation
            | folks have started using metrics like subjective user
           | ratings because this stuff is really difficult to quantify
           | objectively.
        
             | airstrike wrote:
              | IMHO the terribly overlooked issue with generative AI is
              | that end users' views of the response generated by the
              | LLM often differ greatly from the opinion of the person
              | actually interacting with the model
             | 
             | this is particularly evident with image generation, but I
             | think it's true across the board. for example, you may
             | think something I created on midjourney "looks amazing",
             | whereas I may dislike it because it's so far from what I
             | had in mind and was actually trying to accomplish when I
             | was sending in my prompt
        
               | epolanski wrote:
               | Your last paragraph is true regardless of how the image
               | was generated.
               | 
               | One can find anything YOU produce to have different
               | qualities from you.
        
               | airstrike wrote:
               | True, but generally what art I produce IRL is objectively
               | terrible, whereas I can come up with some pretty nice
               | looking images on Midjourney.... which are still terrible
               | to me when I wanted them to look like something else, but
               | others may find them appealing because they don't know
               | how I've failed at my objective
               | 
               | In other words, there are two different objectives in a
               | "drawing": (1) portraying that which I meant to portray
               | and (2) making it aesthetically appealing
               | 
               | People who only see the finished product may be impressed
               | by #2 and never consider how bad I was at #1
        
           | RodgerTheGreat wrote:
           | I truly hope the reckless enthusiasm for LLMs will cool down,
           | but it seems plausible that discretized, compressed versions
           | of today's cutting-edge models will eventually be able to run
           | entirely locally, even on mobile devices; there are no
            | guarantees that they'll get _better_, but many promising
           | opportunities to get the same unreliable results faster and
           | with less power consumption. Once the models run on-device,
           | there's less of a financial motivation to pull the plug, so
           | we could be stuck with them in one form or another for the
           | long haul.
        
             | namaria wrote:
             | I don't believe this scenario to be very likely because a
             | lot of the 'magic' in current LLMs (emphasis on 'large') is
             | derived from the size of the training datasets and amount
             | of compute they can throw at training and inference.
        
               | valine wrote:
               | Llama 3 8B captures that 'magic' fairly well and runs on
               | a modest gaming PC. You can even run it on an iPhone 15
               | if you're willing to sacrifice floating point precision.
                | Three years from now I fully expect GPT4 quality models
               | running locally on an iPhone.
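The "sacrifice floating point precision" step valine mentions generally refers to quantization. A minimal toy sketch, assuming symmetric 8-bit quantization (the function names are illustrative, not any real inference library's API):

```python
# Toy symmetric 8-bit quantization: each float32 weight is mapped to a
# signed byte plus one shared scale factor, cutting weight memory roughly
# 4x at the cost of small rounding errors.

def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1              # 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    return [q * scale for q in quants]

weights = [0.02, -1.27, 0.64, 0.003]
quants, scale = quantize(weights)
restored = dequantize(quants, scale)
# each restored value lies within scale/2 (~0.005 here) of the original
```

Applying this per weight tensor is roughly how 8-bit (and more aggressive 4-bit) builds of Llama-class models shrink enough to fit in phone-sized memory; the rounding error is the precision being sacrificed.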
        
               | TeMPOraL wrote:
               | Three years is _more than twice_ the time since GPT-4 was
               | released to now. Almost twice the time ChatGPT existed.
                | At this rate, even if we end up with GPT-4
               | equivalents runnable on consumer hardware, the top models
               | made available by big players via API will make local
               | LLMs feel useless. For the time being, the incentive to
               | use a service will continue.
               | 
                | It's like a graphics designer being limited to choosing
                | between local MS Paint and Adobe Creative Cloud. Okay,
               | so Llama 3 8B, if it's really as good as you say,
               | graduates to local Paint.NET. Not useless per se, but
               | still not even in the same class.
        
               | refulgentis wrote:
                | They're extremely pessimistic; 3 years is 200% of how
                | long it took ChatGPT 3.5.
               | 
               | Llama 8B is ChatGPT 3.5 (18 months before L3), running on
                | all new iPhones released since October 2022 (19 months
               | before L3). That includes multimodal variants (built
               | outside Facebook).
        
               | valine wrote:
               | No one knows how it will all shake out. I'm personally
               | skeptical scaling laws will hold beyond GPT4 sized
               | models. GPT4 is likely severely undertrained given how
               | much data facebook is using to train their 8B parameter
               | models. Unless OpenAI has a dramatic new algorithmic
               | discovery or a vast trove of previously unused data, I
               | think GPT5 and beyond will be modest improvements.
               | 
               | Alternatively synthetic data might drive the next
               | generation of models, but that's largely untested at this
               | point.
        
               | Aerbil313 wrote:
               | The one thing people overlook is the user data on
               | ChatGPT. That's OpenAI's real moat. That data is "free"
               | RLHF data and possibly, training data.
        
           | bongodongobob wrote:
           | I don't think that's really what Apple is going to do with it
           | though, it's not going to be for factual question and answer
           | stuff. It will be used more like a personal assistant, what's
           | on my calendar this week, who is the last person who called
           | me etc. I think it will more likely be an LLM in the
           | background that uses tools to query iCloud and such, ie,
           | making Siri actually useful.
        
           | onion2k wrote:
            | _LLMs are not accurate, they aren't subject matter experts
            | that'll be maybe within 5% error margin._
           | 
           | You're asserting that the AI features will be removed in 3 to
           | 5 years because they're not accurate enough _today_ , but you
           | actually need them to remain inaccurate in 3 years time for
           | your prediction to be correct.
           | 
           | That seems unlikely. I agree that people will start to
           | realize the cost, but the accuracy will improve, so people
           | might be willing to pay.
        
             | jhallenworld wrote:
             | The same argument can be used for Tesla full self driving:
             | basically it has to be (nearly) perfect, and after years of
             | development, it's not there yet. What's different about
             | LLMs?
        
               | jaapbadlands wrote:
               | They don't have to be perfect to be useful, and death
               | isn't the price of being wrong.
        
               | ale42 wrote:
                | Death actually _can_ be the price of being wrong. Just
                | wait for someone to do the wrong thing with an AI tool
                | they weren't supposed to use for what they were doing,
                | and the AI to spit out the worst possible "hallucination"
                | (in terms of outcome).
        
               | ben_w wrote:
               | What you say is true, however with self-driving cars
               | death, personal injury, and property damage are much more
               | immediate, much more visible, and many of the errors are
               | of a kind where most people are qualified to immediately
               | understand what the machine did wrong.
               | 
               | An LLM that gives you a detailed plan for removing a
               | stubborn stain in your toilet that involves mixing the
               | wrong combination of drain cleaners and accidentally
               | releasing chlorine, is going to happen if it hasn't
               | already, but a lot of people will read about this and go
               | "oh, I didn't know you could gas yourself like that" and
               | then continue to ask the same model for recipes or
               | Norwegian wedding poetry because "what could possibly go
               | wrong?"
               | 
               | And if you wonder how anyone can possibly read about such
               | a story and react that way, remember that Yann LeCun says
               | this kind of thing despite (a) working for Facebook and
                | (b) Facebook's algorithm getting flak not only for the
               | current teen depression epidemic, but also from the UN
               | for not doing enough to stop the (ongoing) genocide in
               | Myanmar.
               | 
               | It's a cognitive blind spot of some kind. Plenty smart,
               | still can't recognise the connection.
        
               | ToValueFunfetti wrote:
               | GPT-4 is 1 year old; 3.5 is 1 and a half. Before 3.5,
               | this wasn't really a useful technology. 7 years ago it
               | was a research project that Google saw no value in
               | pursuing.
        
               | dpkirchner wrote:
                | There are hundreds of companies making LLMs we can choose
               | from, and the switching cost is low. There's only one
               | company that can make self-driving software for Tesla.
               | Basically, competition should lead to improvements.
        
               | ben_w wrote:
               | Tesla aren't the only people trying to make self-driving
               | cars, famously Uber tried and Waymo looks like they're
               | slowly succeeding. Competition can be useful, but it's
               | not a panacea.
        
             | wg0 wrote:
             | Anyone claiming that accuracy of AI models WILL improve is
             | either unaware of how they really work or is a snake oil
             | salesman.
             | 
              | Forget about a model that knows EVERYTHING. Let's just
              | train a model that is an expert not in all the law of the
              | United States, nor even of one state, but one that FULLY
              | understands the tax law of just one state, to the extent
              | that whatever documents you throw at it, it beats a tax
              | consultancy firm every single time.
              | 
              | If even that were possible, OpenAI et al. would be playing
              | this game differently.
        
           | ben_w wrote:
           | > LLMs are not accurate, they aren't subject matter experts
           | that'll be maybe within 5% error margin.
           | 
           | The Gell Mann amnesia effect suggests people will have a very
           | hard time noticing the difference. Even if the models never
           | improve, they're more accurate than a lot of newspaper
           | reporting.
           | 
            | > People will gradually learn and discover and the cost of
           | keeping a model updated and running won't drastically reduce
           | so we'll most likely see dust settling down.
           | 
           | So, you're betting on no significant cost reduction of
           | compute hardware? Seems implausible to me.
        
           | TeMPOraL wrote:
            | > _People will gradually learn and discover and the cost of
            | keeping a model updated and running won't drastically reduce
           | so we'll most likely see dust settling down._
           | 
            | As mentioned elsewhere, 3 to 5 years is some 3x to 5x as long
            | as GPT-4 has existed; some 2-3x as long as ChatGPT has existed
            | and LLMs suddenly graduated from being obscure research
            | projects to general-purpose tools. Do you really believe the
            | capability limit has already been hit?
           | 
           | Not to mention, there's _lots_ of money and reputation
           | invested in searching for alternatives to current transformer
           | architecture. Are you certain that within the next year or
            | two, one or more of the alternatives won't pan out, bringing
           | e.g. linear scaling in place of quadratic, without loss of
           | capabilities?
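The quadratic-vs-linear trade-off mentioned above is easy to make concrete with a toy count of score computations (a back-of-the-envelope illustration, not a real attention implementation):

```python
# Standard self-attention scores every token against every other token,
# so the number of comparisons per layer grows as n^2; a linear-scaling
# alternative would grow proportionally to n. Doubling the context
# quadruples the former but only doubles the latter.

def quadratic_cost(n_tokens: int) -> int:
    """Pairwise score computations in standard self-attention."""
    return n_tokens * n_tokens

def linear_cost(n_tokens: int) -> int:
    """Cost under a hypothetical linear-attention scheme."""
    return n_tokens

for n in (1_024, 2_048, 4_096):
    print(f"{n:>5} tokens: quadratic={quadratic_cost(n):>12,} linear={linear_cost(n):>6,}")
```

This gap is why longer context windows get disproportionately expensive under the current architecture, and why a working linear-attention alternative would matter so much.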
        
             | wg0 wrote:
              | I'm pretty sure that the statistical foundations of AI,
              | where a thing just 0.004 shy of the threshold value in a
              | million-dimensional space can get miscategorized as
              | something else, will not deliver AGI, or any usable and
              | reliable AI for that matter, beyond sequence-to-sequence
              | mapping (voice to text, text to voice, etc.) applications.
              | 
              | As for money and reputation, there was a lot of that behind
              | gold making in medieval times too, and look where that led.
              | 
              | Scientific optimism is a thinking distortion and a fallacy
              | too.
        
         | MangoCoffee wrote:
          | It's a win for OpenAI and AI. I remember someone on Hacker News
          | commented that OpenAI is a company searching for a market. This
          | move might prove that AI, and OpenAI, have a legitimate path to
          | being useful and profitable. We'll see.
        
           | eddieplan9 wrote:
           | Steve Jobs famously said Dropbox is a feature not a product.
           | This feels very much like it.
        
             | nextworddev wrote:
             | Well, Dropbox is a sub $8bn company now that hasn't really
             | grown in 5 years, so maybe Steve was right?
        
               | pembrook wrote:
               | Yea, I mean...if you're only doing $3.5Bn in annual
               | revenue at 83% gross margins...like, are you even a
               | product bro?
        
               | epolanski wrote:
               | If anything, your words prove he was absolutely wrong.
        
             | dymk wrote:
             | Looking at their stock performance and the amount of work
             | they've put into features that aren't Dropbox file sync, he
             | appears to have been right. iCloud doc syncing is what DB
             | offered at that time.
        
         | kovezd wrote:
        | Disagree. This feels more like the Google partnership with
        | Apple's Safari that has lasted for a long time. Except in this
        | case, I think it is OpenAI who will get the big checks.
        
           | lxgr wrote:
           | Why would Apple want to keep paying big checks while
           | simultaneously weakening their privacy story?
        
           | dereg wrote:
           | If Apple weren't selling privacy, I'd assume it was the other
           | way around. Or if anything, OpenAI would give the service out
           | for free. There's a reason why ChatGPT became free to the
           | public, and GPT-4o even more so. It's obvious that OpenAI
           | needs whatever data it can get its hands on to train GPT-5.
        
             | astrange wrote:
             | ChatGPT was free to the public because it was a toy for a
             | conference. They didn't expect it to be popular because it
             | was basically already available in Playground for months.
             | 
             | I think 4o is free because GPT-3.5 was so comparatively bad
             | that people are constantly claiming LLMs can't do things
             | that 4 does just fine.
        
           | Dunedan wrote:
           | Apple doesn't even bother to highlight their cooperation with
           | OpenAI. Instead they bury the integration of ChatGPT as the
           | last section of their "Apple Intelligence" announcement:
           | https://www.apple.com/newsroom/2024/06/introducing-apple-
           | int...
        
           | blueelephanttea wrote:
           | If Apple were paying to use Google the partnership would not
           | still exist today.
        
         | halotrope wrote:
         | yeah somehow it reminded me of the fb integration too. we'll
         | see how well it works in practice. i was hoping for them to
         | show the sky demo with the new voice mode that openai recently
         | demoed
        
       | mvdtnz wrote:
       | I still don't know a single person who wants this crap. I don't
       | want "AI" in my web browser, I don't want it in my email client,
       | I don't want it on my phone, I just don't want it. And it feels
       | like everyone I speak to agrees! So who is this all for?
        
         | whimsicalism wrote:
         | maybe you come on too strong for people who disagree to voice
         | it
         | 
         | personally i like it and want it, except in places where it is
         | shoehorned
        
         | myheadasplode wrote:
         | I don't really care for AI in google search results or email.
         | It's often wrong and not what I'm looking for. I _would_ like a
         | much better Siri, so hopefully that's part of what we get.
        
         | incognito124 wrote:
         | The holders of the shares?
        
         | trustno2 wrote:
         | It _did_ help me to translate nursery rhymes for my kid from
         | one language to another while they still rhyme and mean
         | approximately the same thing. It sucked in gpt-3 but 4o (or
         | whatever is the latest one) is actually really great for that.
         | 
         | It excels at "transferring style from one thing to another,"
         | basically.
         | 
         | However every time I asked it a factual thing that I couldn't
         | find on Google, it was always hilariously wrong
        
         | sureglymop wrote:
         | I highly agree. And everything it has generated so far has been
         | incredibly mid. Yeah, there may be some legitimate use cases,
         | but as it usually goes, everyone is diving in head first
         | without really thinking it through beforehand.
        
         | LZ_Khan wrote:
         | no one wanted an iphone until it came out either
        
           | ipaddr wrote:
           | Everyone wanted something like an iPhone, and when it came it
           | took over the market. The products we had were made to look
           | third-rate overnight.
        
         | educasean wrote:
         | Now you know a few. I love the idea of being able to ask my
         | phone for things like "the guy who emailed last week about the
         | interview, what was his name?" without having to dig through
         | emails trying not to lose the context in my head.
        
         | c1sc0 wrote:
         | Me! I'm dumping text I write into an LLM all-day to help with
         | editing. And I often start brainstorming / research by opening
         | ChatGPT in voice mode, talk to it and keep a browser open at
         | the same time to fact-check the output.
        
         | rvz wrote:
         | > So who is this all for?
         | 
         | It is for everyone and the rest of us. Like it or not.
         | 
         | "AI" cannot be stopped.
        
           | tymscar wrote:
           | Cringe
        
             | rvz wrote:
             | Cope.
        
         | jbm wrote:
         | Nah, I want it. I use it all the time to do things like
         | translate obscure Kanji and learn more about certain religious
         | texts.
         | 
         | For example:
         | https://chatgpt.com/share/4a31c79b-a380-4fa0-9808-8145e3cfb4...
         | 
         | LLMs are very useful and very helpful, certainly more helpful
         | than only searching the web. Watching people apply the crypto
         | lens to it is unfortunate for them; it's not a waste of
         | electricity like most crypto, and it isn't useless output.
        
           | 101008 wrote:
           | I may be wrong, but the first GPT response says that kanji
           | means "spirit" "soul" or "ghost" but a quick Google search
           | says it means "drops of rain"... do you trust GPT on this
           | matter?
           | 
           | https://hsk.academy/en/characters/%E9%9C%9D
        
             | jbm wrote:
             | Yes, the top radical is for drops of rain, but the
             | inclusion of the bottom part has a meaning that clearly
             | aligns with spirit, especially when you see the rare kanji
             | that use it as a component. I was only curious because it
             | was part of another kanji (Ling) that I was investigating.
        
         | anvuong wrote:
         | I actually want a virtual assistant that can reliably process
         | my simple requests. But so far all these companies look like
         | they are still in the figuring out phase, basically throwing
         | everything at the wall to see what sticks. Hopefully after 2 or
         | 3 years things will settle down and we will get a great virtual
         | assistant.
        
           | layer8 wrote:
           | I believe this is in the same category as a car that will
           | reliably fully self-drive.
        
       | vessenes wrote:
       | Um, wow. The major question in my mind: did Apple pay, or did
       | OpenAI pay? (À la Google for search.)
       | 
       | Apple is not going to lose control of the customer, ever, so on
       | balance I would guess this is either not a forever partnership or
       | that OpenAI won't ultimately get what they want out of it. I'm
       | very curious to see how much will be done on device and how much
       | will go to gpt4o out of the gate.
       | 
       | I'm also curious if they're using Apple Intelligence for function
       | calling and routing on device; I assume so. Extremely solid
       | offerings from Apple today in general.
        
         | wmf wrote:
         | Apple is definitely paying because they don't let OpenAI save
         | anything.
        
           | ec109685 wrote:
           | They're letting OpenAI upsell to a professional version, so
           | there is a lot in it for OpenAI to offer this for free, even
           | without the data.
        
           | layer8 wrote:
           | Yeah, I wonder how many subscribers OpenAI will lose.
        
           | kernal wrote:
           | I don't believe that. Apple is in the driver's seat in this
           | negotiation. I believe OpenAI wanted Apple as a jewel in
           | their crown and bent over backwards to get them to sign. I
           | don't see how OpenAI makes any money off of this, but I do
           | see them losing a lot of money as iOS users slam their
           | service for free while they eat the costs.
        
       | rvz wrote:
       | Quite unsurprising that the prediction of a hybrid solution
       | turned out to be true. [0]
       | 
       | The plan is still the same. _Eventually_ Apple will have Apple
       | Intelligence all in-house and race everyone to $0.
       | 
       | [0] https://news.ycombinator.com/item?id=40630897
        
       | jspaetzel wrote:
       | Siri now powered by OpenAI powered by Microsoft Azure
        
       | daft_pink wrote:
       | From watching it, it seems like just a kit-type integration,
       | since it's super clear that requests are going to a partner, and
       | they said they may allow other partners.
        
         | rgrmrts wrote:
         | I really hope so. I don't trust OpenAI and I'd really rather
         | not have any integrations with them on any of my devices.
        
       | zug_zug wrote:
       | This sounds like exactly what I wanted. There have been a number
       | of times I've been in the car wanting to ask Siri something it
       | couldn't handle, e.g. "What state am I in, how far am I from the
       | border of the next state I'll cross into, and can I pump my own
       | gas in each state I'm driving through?"
       | 
       | Though a bit of that is premised on whether it could extract
       | information from google maps.
        
         | jedberg wrote:
         | Of course that will only work if you're using Apple Maps.
        
         | noahtallen wrote:
         | I think most of what you're talking about is going through
         | Apple Intelligence, not chatGPT. That "Apple Intelligence"
         | stuff is supposed to be more local and personal to you,
         | accounting for where you are, your events, things like that.
         | There's an API for apps to provide "intents," which Siri can
         | use to chain everything together. (Like "cost of gas at the
         | nearest gas station" or something like that.) None of that is
         | OpenAI, according to the keynote.
        
         | okdood64 wrote:
         | Carplay Siri functionality is currently neutered. A lot of
         | times it won't answer more complex questions that would
         | otherwise be answered without Carplay.
        
           | shepherdjerred wrote:
           | I haven't found this to be the case. Does Siri explicitly
           | refuse to answer questions, or does it misunderstand you?
           | Maybe the microphone in your car makes hearing difficult?
        
         | jjulius wrote:
         | >... and can I pump my own gas on each state I'm driving
         | through?
         | 
         | Huh? Seems like an odd thing to feel the need to ask, as up
         | until last year, the answer was always, "Only if you're driving
         | through Oregon or New Jersey".
         | 
         | Now, you're only unable to pump your own in NJ.
        
         | akira2501 wrote:
         | > "What state am I in, and how far am I to the border to the
         | state I'm going to cross next, and can I pump my own gas on
         | each state I'm driving through?"
         | 
         | What kind of trip was this where these were pertinent
         | questions? Couldn't you have just rephrased most of them?
         | 
         | "What is my current location?"
         | 
         | "Show maps."
         | 
         | "Which states don't allow you to pump your own gas?"
        
       | willis936 wrote:
       | Signs of healthy competition and certainly no reason to claim a
       | tech monopoly.
        
       | karaterobot wrote:
       | > Privacy protections are built in when accessing ChatGPT within
       | Siri and Writing Tools--requests are not stored by OpenAI, and
       | users' IP addresses are obscured.
       | 
       | Does anybody believe Apple will not be able to know who sent a
       | given request, and that OpenAI won't be able to use the data in
       | the request for more or less anything they want? I read
       | statements like this and just flat-out don't believe them
       | anymore.
        
         | RedComet wrote:
         | "Obscured" sounds weak and deliberately vague on their part.
        
         | ec109685 wrote:
         | They have tech to obscure IPs in a way that prevents any one
         | entity from being able to de-obfuscate:
         | https://support.apple.com/en-us/102602
        
           | karaterobot wrote:
           | I guess my point is that removing an IP address does not make
           | something anonymous.
        
       | breadwinner wrote:
       | My biggest disappointment was that Apple said nothing about
       | leveraging GPT-4 to improve voice recognition in iMessage. Voice
       | recognition of ChatGPT is incredibly accurate when compared to
       | iOS. ChatGPT almost never gets anything wrong, while iMessage/iOS
       | voice recognition is extremely frustrating.
       | 
       | So much so that I sometimes dictate to ChatGPT then cut & paste
       | into iMessage.
        
         | noahtallen wrote:
         | They did talk about Siri being better at voice recognition
         | using Apple's own on-device models, so I imagine that will
         | eventually apply more broadly.
        
           | breadwinner wrote:
           | On-device models will not be big enough in the near future.
           | What makes ChatGPT so awesome at recognition is that their
           | model is huge, and so no matter how obscure the topic of the
           | dictation, ChatGPT knows what you're talking about.
        
             | noahtallen wrote:
             | Apple also talked about their private compute cloud, which
             | allows larger models and workflows to integrate with local
             | AI models. It sounds like they will figure out which
             | features require bigger models and which don't. So I think
             | there is a lot of room for what you're mentioning in the
             | future of this AI platform.
             | 
             | Plus, they talk about live phone call transcriptions, voice
             | transcription in notes, the ability to correct words as you
             | speak, contextual conversations in siri, etc. It 100%
             | sounds like better voice recognition is coming
        
             | extr wrote:
             | Pretty sure transcription is done locally on Pixel phones
             | and it's pretty good. Not as good as ChatGPT, but most of
             | the way there. If current iOS is like a 50, Pixel is like a
             | 90 and OpenAI is like 98.
        
         | extr wrote:
         | You can set up a shortcut that will record you, hit the Whisper
         | API, then copy to your clipboard. It's not as smooth as native
         | transcription or the SOTA on Google phones but it's pretty
         | good.
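(For context, the Whisper step of such a shortcut could be sketched as below; this is a minimal illustration, not the commenter's actual setup. It assumes the third-party `requests` library, an `OPENAI_API_KEY` environment variable, and hypothetical helper names `build_request` and `transcribe`; the endpoint URL and `whisper-1` model name are OpenAI's documented speech-to-text API.)

```python
import os

# OpenAI's speech-to-text endpoint; expects a multipart upload with
# a "file" part and a "model" form field.
WHISPER_URL = "https://api.openai.com/v1/audio/transcriptions"

def build_request(api_key: str, model: str = "whisper-1") -> dict:
    """Assemble the headers and form fields for a transcription call."""
    return {
        "headers": {"Authorization": f"Bearer {api_key}"},
        "data": {"model": model},
    }

def transcribe(path: str) -> str:
    """Upload a recorded audio file and return the transcribed text."""
    import requests  # imported lazily so the sketch loads without the dependency
    req = build_request(os.environ["OPENAI_API_KEY"])
    with open(path, "rb") as f:
        resp = requests.post(WHISPER_URL, files={"file": f}, **req)
    resp.raise_for_status()
    return resp.json()["text"]

if __name__ == "__main__":
    # e.g. transcribe("memo.m4a"), then hand the text to the clipboard;
    # the file name is illustrative.
    pass
```

(In the Shortcuts app itself, the recording and clipboard steps are built-in actions, and the HTTP call above roughly corresponds to a single "Get Contents of URL" action.)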
        
       | processing wrote:
       | "Siri add an alarm for an appointment for the dentist tomorrow at
       | 10"
       | 
       |  _Sets appointment for 10pm_
       | 
       | Will the Siri team be fired or are they in charge of openAI
       | integration?
        
         | nullwriter wrote:
           | It's a given that a dentist appointment is almost never at
           | 10 PM, so this doesn't seem probable. LLMs are good at
           | generalizing.
        
           | pjerem wrote:
           | And also, that would still be more useful than the current
           | situation where Siri would just answer that it can not give
           | you the weather forecast because there is no city named
           | "Appointment at 10".
           | 
           | Or it may create an appointment at Athens.
        
           | empath75 wrote:
           | Siri without it isn't though. It's so garbage as to be
           | useless.
        
         | matsz wrote:
         | Switching to 24-hour clock solves that problem.
         | 
         | Personally, 12h clock always confused me, so I wouldn't blame
         | Siri.
        
           | layer8 wrote:
           | Siri still uses AM/PM for me when speaking, despite having a
           | 24-hour clock configured.
        
       | philsnow wrote:
       | ~~This is not the direction I was hoping Apple would go with AI.
       | 
       | With all the neural this and that bits baked into apple silicon,
       | it has seemed [0] for a while that Apple wanted to run all these
       | workloads locally, but this partnership seems like a significant
       | privacy backslide.
       | 
       | Another comment in this thread said something about them using
       | Apple silicon for these workloads, but didn't give an
       | indication of whether that silicon lives in Apple datacenters or
       | OpenAI ones.
       | 
       | [0] https://news.ycombinator.com/item?id=38725167~~
       | 
       |  _edit: I should have mentioned that I didn't have a chance to
       | watch the video yet; a reply to my comment mentioned that it's
       | addressed in the video so I'll go watch that later_
        
         | astrange wrote:
         | Watch the keynote, it's all clearly explained and you don't
         | need to learn about it from HN comments.
        
           | philsnow wrote:
           | I should have put a disclaimer saying that I hadn't had a
           | chance to watch the video yet. Thanks for mentioning that
           | it's addressed, I'll take a look later.
        
         | noahtallen wrote:
         | I don't think this is a fair take. It sounds like the vast
         | majority of the new AI features (including the local personal
         | context for Siri, the various text/image editing features,
         | better photo categorization, and the list goes on) are all
         | local, on-device models, which can, if needed, use Apple's
         | private cloud. That requires public researcher verification of
         | server software for iOS to even talk to it. (Allegedly :))
         | 
         | The OpenAI partnership is seemingly _only_ if Siri decides it
         | doesn 't have the full context needed to answer. (E.g. if you
         | ask something very creative/generative.) At that point, Siri
         | says "hey, chatGPT might be better at answering this, do you
         | consent to me sending your prompt to them?" and then you get to
         | choose. Apple's partnership also seems to include the various
         | settings that prevent OpenAI from tracking/training on the
         | prompts sent in.
         | 
         | Honestly, that more creative side of genAI is not as
         | interesting in the full context of Apple Intelligence. The real
         | power is coming from the local, personal context, where Siri
         | can deduce context based on your emails, messages, calendar
         | events, maps, photos, etc. to really deeply integrate with
         | apps. (Allegedly!) And that part is not OpenAI.
        
           | atlex2 wrote:
           | Agreed. Apple pretty clearly focused on building an action-
           | tuned model. Also, notice how in the videos you barely see
            | any "Siri speech". I wonder what they used for pre-training,
            | but they probably did it with much more legit data sources;
            | they're launching with English only.
        
         | Workaccount2 wrote:
         | Apple is in the position where it caters primarily to the tech-
         | ignorant, so coming out and explaining that Apple's LLM is a
         | bit worse (read: far worse) than the cool LLMs on the internet
         | because it is privacy-conscious is a non-starter.
         | 
         | Local LLMs on regular local hardware (i.e. no $500+ dedicated
         | GPUs) are _way_ behind SoTA models right now.
         | 
         | Apple is not going to put itself in a position where you can
         | have a practically real-time, intelligent chat with Android
         | phones while iPhones are churning out 3 tokens/second of near-
         | useless babbling.
        
           | philsnow wrote:
           | (I haven't watched the video yet)
           | 
           | I completely agree about the market positioning and not
           | keeping up with other platforms' abilities being a non-
           | starter. I just hope it will be clear how to keep my external
           | brain (phone) from being scanned by OpenAI.
           | 
           | (I don't want it to seem like I'm just a hater of either
           | Apple or OpenAI; I'm a more-recent adopter of Apple tech and
           | I'm not looking back, and I have an OpenAI subscription and
           | find it invaluable for some uses.)
           | 
            | Another thing I'm going to be looking for in the video is how
            | this initiative jibes with the greenness Apple has been
            | pushing really hard. If they're bringing this kind of
            | generative AI from niches to every iPhone, it seems that
            | would add a fair amount of power consumption.
        
             | noahtallen wrote:
             | > I just hope it will be clear how to keep my external
             | brain (phone) from being scanned by OpenAI.
             | 
             | It's very clear, the keynote demonstrates that Siri passing
             | a prompt to chatGPT is completely opt-in and only happens
             | when Siri thinks the prompt needs the more
             | generative/creative model that OpenAI provides.
        
       | mkoubaa wrote:
       | Yay more shitgpt all over my life
        
       | solardev wrote:
       | I'm confused now... Apple's other announcement today discussed
       | on-device AI.
       | 
       | So what sorts of queries will be on-device and what will be sent
       | to OpenAI? How does this distinction appear in the UI?
        
         | sunnybeetroot wrote:
         | It's in the keynote video.
        
           | solardev wrote:
           | Ah, thanks. Not in a place where I can watch video right now,
           | but will check it out later.
        
         | noahtallen wrote:
         | I think the headlines are REALLY muddying things. From watching
         | the Keynote, most of Apple Intelligence is their own stuff,
         | mostly on-device.
         | 
         | Siri _explicitly_ asks you if you want to use chatGPT to answer
         | a query. It does so when it thinks chatGPT will have a better
         | answer. It sounds like that will be for very creative
         | /generative types of things like "please create a 4 course meal
         | with xyz foods," at which point Siri asks you if you want to
         | use chatGPT. It will be very clear, according to Apple.
        
           | lxgr wrote:
           | That said, the Apple Intelligence vs. OpenAI distinction
           | seems much clearer than the Apple cloud vs. local
           | distinction, which I find somewhat concerning.
           | 
           | Sure, the Apple cloud is ultra-secure and private and all,
           | but I'd still like to know what happens where without having
           | to test it myself by enabling airplane mode and seeing what
           | still works.
        
             | noahtallen wrote:
             | Yeah, that's a great point. At the same time, it only takes
             | a couple YouTubers/researchers to do some testing for us to
             | know the answer
        
         | etchalon wrote:
         | When you ask Siri a question, it will prompt you to ask whether
         | it can send your query/data to ChatGPT.
         | 
         | All other AI features within the OS are powered by Apple's
         | Private Compute Cloud, which is Apple's code running on Apple's
         | chips at Apple's Data Center.
        
           | noahtallen wrote:
           | > All other AI features within the OS are powered by Apple's
           | Private Compute Cloud
           | 
           | Clarification: All other AI features within the OS are
           | powered by on device models which can reach out to the
           | private cloud for larger workflows & models.
        
       | Hippocrates wrote:
       | I was surprised how little they are leaning on OpenAI. Most of
       | the impressive integrations that actually look useful are on-
       | device or in their private cloud. OpenAIs ChatGPT was relegated
       | to a corner of Siri for answering "google queries", if you grant
       | it permission. This seems like an L for OpenAI, not being a
       | bigger part of the architecture (and I'm glad).
        
         | extr wrote:
         | Agreed. The rumors beforehand made it sound like Apple and
         | OpenAI would practically be merging. This felt like a fig leaf
         | so Apple could say you can access SOTA models from your iPhone.
         | But for me personally, the deep integration with the ecosystem
         | + semantic index is way, way more interesting.
        
       | fungiblecog wrote:
       | this will work about as well as the tedious fad for chatbots a
       | few years ago
        
       | 35mm wrote:
       | Anyone know more details about Apple's servers?
       | 
       | "...server-based models that run on dedicated Apple silicon
       | servers."
        
         | wmf wrote:
         | It could be as simple as a 1U version of the Mac Studio.
        
       | bartuu wrote:
       | I wonder, if Apple made a deal with OpenAI, how did they solve
       | the privacy issue?
        
       | resource_waste wrote:
       | People who are getting your data:
       | 
       | >Apple
       | 
       | >OpenAI
       | 
       | >Bill Gates by proxy
       | 
       | >US government
       | 
       | >???
       | 
       | Also, before anyone says "Oh they'd never do that!". Live in
       | reality. They were already caught with PRISM.
        
       | cletus wrote:
       | This is one of those things that seems like a good idea but is
       | really an existential threat to OpenAI.
       | 
       | Having a single extremely large customer gives that customer a
       | disproportionate amount of power over your business. Apple can
       | decide one day to simply stop paying you because, hey, they can
       | afford the years of litigation to resolve it. Can you weather
       | that storm?
       | 
       | Famously, Benjamin Moore (the paint company) maintains its own
       | stores. They have not (and probably will not) sell their products
       | through Home Depot or Lowe's. Why? This exact reason. A large
       | customer can dictate terms and hold you over a barrel if they so
       | choose.
       | 
       | AI/ML is something Apple cares about. They've designed their own
       | chips around speeding up ML processing on the device. A
       | partnership with OpenAI is clearly a stopgap measure. They will
       | absolutely gut OpenAI if they have the opportunity and they will
       | absolutely replace OpenAI when they can.
       | 
       | Apple just doesn't like relying on partners for core
       | functionality. It's why Apple ditched Google Maps for the (still
       | inferior) Apple Maps. The only reason they can't replace Google
       | Search is because Google pays them a boatload of money and
       | they've simply been unable to.
       | 
       | This may seem like a good move for OpenAI but all they've done is
       | let the foxes run the hen house.
        
         | TIPSIO wrote:
         | Eh.
         | 
         | Apple is just racing to integrate AI into its current compute
         | platform as fast as possible.
         | 
         | OpenAI definitely believes a smart enough AI (AGI, ASI) will
         | solve way bigger problems or create essentially a brand new
         | compute platform.
         | 
         | Heck, ChatGPT as a lame LLM is almost its own compute platform
         | already.
         | 
         | Apple is just speeding up the process of people getting used to
         | not needing apps and fancy devices, and instead simply
         | communicating with an agent.
         | 
         | Who will really need Apple in 10-15 years if AI really does get
         | good enough by then?
        
         | philwelch wrote:
         | OpenAI already had a "single extremely large customer":
         | Microsoft. In fact the Apple deal is the first sign they're not
         | just a de facto Microsoft subsidiary.
        
         | FanaHOVA wrote:
         | Is there a single citation for anything you just said?
         | 
         | > Apple can decide one day to simply stop paying you because,
         | hey, they can afford the years of litigation to resolve it.
         | 
         | OpenAI and Microsoft can do the same. Microsoft would be
         | ecstatic to hurt Apple in any way. Also, Apple has no history
         | of doing this with any of the providers it uses.
         | 
         | > Benjamin Moore (the paint company) maintains its own stores.
         | They have not (and probably will not) sell their products
         | through Home Depot or Lowe's. Why?
         | 
         | Because Home Depot has their own brand, Behr. Each Behr color
         | explicitly says what Benjamin Moore color it's copying, and
         | they take 100% of the revenue as a direct alternative. Do you
         | have any sources on this being a Benjamin Moore decision?
         | 
         | > It's why Apple ditched Google Maps for the (still inferior)
         | Apple Maps.
         | 
         | How do you define "still inferior"? How many times a day do you
         | use Apple Maps? Do you have any benchmarks that compare the
         | two?
        
       | hu3 wrote:
       | Microsoft owns 49 percent of OpenAI.
       | 
       | That must be some really detailed 100+ pages contract.
       | 
       | I bet Microsoft is mentioned multiple times with things to the
       | effect of: "Under no condition is Microsoft allowed to access any
       | of the data coming from iPhones."
        
         | astrange wrote:
         | No one is allowed to access any of that data.
         | 
         | Microsoft is mostly a cloud company these days though, and
         | they're already an Apple vendor.
        
       | extr wrote:
       | Honestly I was surprised at how limited the ChatGPT integration
       | seems to be. It felt like they 80/20'd AI with the onboard models
       | + semantic index, but also wanted to cover that last 20% with
       | some kind of SOTA cloud model. But they didn't necessarily NEED
       | to.
        
         | layer8 wrote:
         | They need to in order to not look second-class in terms of chat
         | capabilities. On the other hand, they want to make it clear
         | when you are using ChatGPT, probably not just for privacy
         | reasons, but also so that people blame ChatGPT and not Apple
         | when it gets things wrong.
        
           | extr wrote:
           | This may just be me because I'm a heavy ChatGPT user as-is,
           | but I've had my fill of chat capabilities. What I really want
           | is the context awareness, which is what they seemingly
           | delivered on without OpenAI's help!
        
             | layer8 wrote:
             | Note that this is announced as coming in beta this fall,
             | which means they are currently well pre-beta. I would curb
             | my expectations about how well it will work.
        
       | philodeon wrote:
       | It's an interesting choice to announce a brand-new standard for
       | privacy guarantees regarding AI/ML queries...
       | 
       | ...then announce a partnership with ChatGPT.
        
       | cdme wrote:
       | I hate everything about this. Curious how blocking OpenAI's
       | domains at the DNS level will go.
        
       | ricksunny wrote:
       | So, pick your Apple partnership long-arc: 1. Apple-Google[Search]
       | 2. Apple-PaloAltoSemi 3. Apple-PortalPlayer
        
       | JeremyHerrman wrote:
       | > Privacy protections are built in when accessing ChatGPT within
       | Siri and Writing Tools--requests are not stored by OpenAI, and
       | users' IP addresses are obscured. Users can also choose to
       | connect their ChatGPT account, which means their data preferences
       | will apply under ChatGPT's policies.
       | 
       | So does this mean that by default, a random Apple user won't have
       | their ChatGPT requests used for OpenAI training, but a paying
       | ChatGPT Plus customer will?
       | 
       | Does this also mean that if I connect my ChatGPT Plus account
       | that my data will be used for training?
       | 
       | It just seems strange to have a lower bar for privacy for paying
       | customers vs users acquired via a partnership.
       | 
       | (yes I'm aware that the "Temporary Chat" feature or turning off
       | memory will prevent data being used for training)
        
         | ec109685 wrote:
         | You can permanently disable OpenAI from training with your chat
         | data for your account:
         | 
         | "To disable model training, navigate to your profile icon on
         | the bottom-left of the page and select Settings > Data
         | Controls, and disable "Improve the model for everyone." While
         | this is disabled, new conversations won't be used to train our
         | models"
        
           | blibble wrote:
           | and if you believe that you'll believe anything
        
             | ec109685 wrote:
              | Companies really don't like being sued for hundreds of
              | millions in punitive damages just for the benefit of
              | training on the data of the small percentage of people
              | who opt out.
        
               | blibble wrote:
               | it's "fair use" mate
        
               | kolinko wrote:
               | no it isn't
        
           | JeremyHerrman wrote:
           | Great to know! Looks like they only made this change at the
           | beginning of May. Prior to that you had to turn off chat
           | history which wasn't worth it to me.
           | 
           | April 25, 2024: "To disable chat history and model training,
           | navigate to ChatGPT > Settings > Data Controls and disable
           | Chat history & training. While history is disabled, new
           | conversations won't be used to train and improve our models,
           | and won't appear in the history sidebar. To monitor for
           | abuse, we will retain all conversations for 30 days before
           | permanently deleting." https://web.archive.org/web/2024042519
           | 4703/https://help.open...
           | 
           | May 02, 2024: "To disable model training, navigate to your
           | profile icon on the bottom-left of the page and select
           | Settings > Data Controls, and disable "Improve the model for
           | everyone." While this is disabled, new conversations won't be
           | used to train our models." https://web.archive.org/web/202405
           | 02203525/https://help.open...
        
             | tadala wrote:
              | You could fill out a form and request that they not
              | train on your data; they usually approved it fairly
              | quickly, but didn't advertise it well!
        
       | knodi123 wrote:
       | Oh my god, finally. I can't get over how bad Siri is, compared to
       | Alexa and Google.
        
       | getpost wrote:
       | GPT4o access is a handy feature, but what I was hoping to
       | hear about is an improvement in Siri's language
       | "understanding."
       | 
       | In today's WWDC presentation, there were a few small examples of
       | Siri improvements, such as an ability to maintain context, e.g.,
       | 'Add her flight arrival time to my calendar,' wherein Siri knows
       | who "her" refers to.
       | 
       | In my day-to-day experience with Siri, it's clear Siri doesn't
       | have the kind of ability to understand language that LLMs
       | provide. It still feels like clever son-of-Eliza hacks with stock
       | phrases. If your utterance doesn't match a pre-programmed
       | stock phrase, it doesn't work. The other day I said something
       | like "Play the song you played before the one I asked you to
       | skip," and Siri didn't seem to know what I wanted. OTOH, GPT4o
       | can easily handle statements like that.
       | 
       | Does anyone know to what extent Siri's underlying language models
       | are being upgraded?
        
         | blcknight wrote:
         | > In today's WWDC presentation, there were a few small examples
         | of Siri improvements, such as an ability to maintain context,
         | e.g., 'Add her flight arrival time to my calendar,' wherein
         | Siri knows who "her" refers to.
         | 
         | Didn't Cortana do this? Pretty underwhelming in 2024.
        
         | asdasdsddd wrote:
         | Siri just feels like, tokenize input => run classifier over
         | hardcoded actions.
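A minimal sketch of the pattern the comment describes (tokenize the input, then match it against a fixed set of actions); the intent names and keyword sets here are hypothetical, purely for illustration:

```python
# Keyword-overlap intent classifier: the "classifier over hardcoded
# actions" pattern. Intents and keywords are illustrative only.
INTENT_KEYWORDS = {
    "play_music": {"play", "song", "music"},
    "set_timer": {"timer", "alarm", "remind"},
    "get_weather": {"weather", "forecast", "temperature"},
}

def classify(utterance: str) -> str:
    tokens = set(utterance.lower().split())
    # Pick the intent whose keyword set overlaps the tokens the most.
    best = max(INTENT_KEYWORDS, key=lambda i: len(INTENT_KEYWORDS[i] & tokens))
    # Zero overlap means no match at all: exactly the failure mode
    # described elsewhere in this thread.
    return best if INTENT_KEYWORDS[best] & tokens else "unknown"
```

The brittleness is visible immediately: "play a song by Queen" hits `play_music`, but "put on something upbeat" matches nothing.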
        
           | shepherdjerred wrote:
           | I don't think these actions are hardcoded with the App
           | Intents framework. Even today you can ask Siri to run
           | arbitrary shortcuts via custom keywords.
        
         | shepherdjerred wrote:
         | I agree, this is the biggest annoyance with voice assistants
         | today. The good news is that, as you noted, the technology to
         | interpret complex/unclear requests is definitely already here
         | today with ChatGPT.
         | 
         | I think that Apple demoed this today where the presenter
         | changed her mind mid-sentence during a weather query.
         | 
         | I'm hopeful that means they've added a LLM to interpret the
         | intent of user requests.
        
         | TeMPOraL wrote:
         | That's something that I keep wondering about. The existing
         | voice assistants are all garbage across the board. Whatever you
         | say about Siri, Google's assistant is even worse. Meanwhile,
         | for the past couple of months I've been able to fire up the
         | ChatGPT app and speak to it casually, in noisy environments,
         | and it would both correctly convert my speech to text (with
         | less than 5% errors) _and_ correctly understand what I'm
         | actually saying (even in the presence of transcription
         | errors).
         | 
         |  _All it takes_ to make a qualitatively better voice
         | assistant is to give GPT-4 a spec of functions representing
         | things it can do on your phone, and to integrate that with
         | the OS. So why has none of these companies bothered to do
         | it? For that matter, I wonder why OpenAI didn't extend the
         | ChatGPT app in this direction?
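The "spec of functions" idea above can be sketched as a tool schema the model targets plus an OS-side dispatcher. The schema, the `set_alarm` action, and the hard-coded model output below are all hypothetical stand-ins; in practice the model would emit the tool call after seeing the spec and the user's utterance:

```python
import json

# A "spec of functions" the phone could expose to an LLM, in the
# JSON-schema style used by tool/function-calling APIs.
PHONE_TOOLS = [{
    "name": "set_alarm",
    "description": "Set an alarm on the phone.",
    "parameters": {
        "type": "object",
        "properties": {
            "time": {"type": "string", "description": "HH:MM, 24-hour"},
        },
        "required": ["time"],
    },
}]

# OS-side handlers the assistant layer dispatches to (stubbed here).
def set_alarm(time: str) -> str:
    return f"alarm set for {time}"

HANDLERS = {"set_alarm": set_alarm}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching phone action."""
    args = json.loads(tool_call["arguments"])
    return HANDLERS[tool_call["name"]](**args)

# Hard-coded for illustration: what the model would emit after being
# shown PHONE_TOOLS and an utterance like "wake me at half past seven".
model_output = {"name": "set_alarm", "arguments": '{"time": "07:30"}'}
print(dispatch(model_output))
```

The OS integration is the hard part the sketch elides; the language-understanding half is what the models already handle.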
        
       | zx10rse wrote:
       | Well, I guess it's time to look around for new devices. This
       | is by far the biggest mistake Apple has made.
        
         | jiggawatts wrote:
         | What would you prefer? Less capable products with fewer
         | features? Or a Google product designed in collaboration with
         | their advertising data hoovering team?
        
       | solarkraft wrote:
       | OpenAI is such a controversial company, and good competitors
       | like Anthropic, who arguably align better with Apple's brand,
       | exist. That makes the deal seem so weird to me.
        
         | akira2501 wrote:
         | OpenAI has nothing of particularly high value. They're giving
         | away the store right now just to claim the onboarding. This
         | unsustainable game will end badly and soon.
        
           | 101011 wrote:
           | Nothing of particularly high value, really?
        
             | akira2501 wrote:
             | It's actually a beneficial feature that two people can look
             | at a market and come to two completely different
             | conclusions about it. Yes, I suspect that OpenAI has
             | nothing of lasting competitive value, they're currently
             | overvalued by entities who want their money back, and you
             | can view their recent actions and partnerships through this
             | lens without complication.
        
         | impulser_ wrote:
         | It's also weird because Anthropic models are just better for
         | these tasks. Claude responses are almost always better than
         | GPT4.
         | 
         | I stopped using GPT4 because it would just yap on and on
         | about things I didn't want in the response. Claude 3
         | responses feel far more human because it responds with the
         | information a person would give, not a bunch of unneeded
         | filler.
         | 
         | By the time this rolls out at the end of the year, who knows
         | which models will be best. Why bet on one company's models?
         | We have seen how fast open-source models have caught up to
         | GPT4. Why put all your eggs in one basket?
        
       | tmaly wrote:
       | Imagine Apple being able to search for bad things on your
       | phone using AI at the behest of some state or local
       | government.
        
       | andrewinardeer wrote:
       | Now OpenAI has massive contracts with Microsoft and Apple.
       | Two years ago, hardly anyone had raised an eyebrow at OpenAI.
        
       | hurril wrote:
       | I love Apple products, but I doubt this will turn out well.
        
       ___________________________________________________________________
       (page generated 2024-06-10 23:01 UTC)