Apple's response to AI

Ever since Apple revealed their AI strategy of leaning into on-device models, there has been a tension with the cloud-first approach of companies like OpenAI, Google, and Amazon. Was Apple Intelligence going to work? The on-device approach has real advantages: user privacy, because more data stays on your phone, and scalability, because the load is distributed across millions of phones instead of concentrated in data centers.

Now we know that a more advanced and personal Siri is delayed until iOS 19. From an Apple spokesperson via Daring Fireball:

We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.

Jason Snell blogged a recap of the Siri demo from WWDC last year:

This led to one of the killer demos of WWDC 2024, in which Siri was able to understand when someone’s mom’s flight is landing by cross-referencing an email with real-time flight tracking to get a good answer. From there, the demo pulls a lunch plan with mom out of a text thread and then displays how long the drive is to there from the airport—all from within Siri, rather than individual apps.

I’m not worried about a delay. Software is complicated and we all hit unexpected challenges. I’m worried that Apple can’t pull this off at all. Parker Ortolani is blogging the same kind of questions:

It felt almost vaporware-like when revealed at WWDC and it certainly seems like they are having a great deal of difficulty making it a reality.

And from Federico Viticci:

…one has to wonder why these features were demoed at all at Apple’s biggest software event last year and if those previews – absent a real, in-person event – were actually animated prototypes.

There are two potential problems with Apple’s approach:

  • On-device models are small and limited by hardware like RAM.
  • App Intents for extensibility require support from developers and won’t be available on all devices.
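For context, an App Intent is a small Swift declaration a developer adds to their app so the system (and eventually Siri) can discover and invoke a piece of the app's functionality. A minimal sketch, using Apple's AppIntents framework; the intent name, parameter, and dialog here are hypothetical, echoing the flight-tracking demo:

```swift
import AppIntents

// A hypothetical intent exposing one piece of app functionality to the system.
struct FlightStatusIntent: AppIntent {
    static var title: LocalizedStringResource = "Check Flight Status"

    @Parameter(title: "Flight Number")
    var flightNumber: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would look up its own flight data here.
        return .result(dialog: "Flight \(flightNumber) is on time.")
    }
}
```

Every capability like this has to be declared and shipped by the app's developer, which is exactly why the whole vision depends on third-party buy-in.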

I’ve written about this before with Siri, including in this blog post right before WWDC last year. Because each device has its own version of Siri, it is hard to ever have a universal assistant that works everywhere and is extensible. There are things Siri can do on a phone that it can’t do on a HomePod.

I’m not seeing even a hint of a solution from Apple on this. If anything, what they showed developers with App Intents at WWDC is going to create an even more disjointed Siri across platforms, because third-party apps may not be available everywhere.

Steve Troughton-Smith on Mastodon is skeptical that third-party developers will help make this vision a reality:

Delayed or not, Apple’s proposed Intents-based Apple Intelligence features require a ton of buy-in from developers for it to be of any real use, and the incentives really aren’t there — donating your app’s content and functionality to enrich Apple’s AI alone, bypassing your UI, UX, branding, metrics, et al, to be delivered in a content soup alongside your competitors.

While App Intents don’t exclude the idea of other APIs for developers to use system models directly, I don’t expect we’ll see anything beyond App Intents until the new Siri is ready, and maybe not even after that. Ben Thompson in today’s Stratechery article:

Apple gives lip service to the role developers played in making the iPhone a compelling platform — and in collectively forming a moat for iOS and Android — but its actions suggest that Apple views developers as a commodity: necessary in aggregate, but mostly a pain in the ass individually.

Ben makes a strong case that Apple should be opening up their models to third-party developers, especially given the incredible potential of the M3 Ultra. Siri is designed for an 8 GB RAM world. The M3 Ultra can have 512 GB. Mac developers will have to bring their own models to take advantage of the great hardware in modern Macs.
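To put those RAM numbers in perspective, here is a rough back-of-envelope calculation (my own illustrative figures, not Apple's): at two bytes per parameter (fp16), available RAM roughly caps the size of a model that can stay resident.

```swift
// Back-of-envelope: billions of fp16 parameters that fit in a RAM budget,
// after reserving headroom for the OS and other apps. Figures are illustrative.
func maxParamsBillions(ramGB: Double, headroomGB: Double = 2,
                       bytesPerParam: Double = 2) -> Double {
    // (GB of RAM left over) / (bytes per parameter) = billions of parameters
    return (ramGB - headroomGB) / bytesPerParam
}

print(maxParamsBillions(ramGB: 8))    // iPhone-class: ~3B parameters
print(maxParamsBillions(ramGB: 512))  // M3 Ultra-class: ~255B parameters
```

Even granting generous quantization, the gap between a phone-sized model and what a maxed-out Mac could hold is roughly two orders of magnitude.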

Back to the disconnect between on-device models and cloud-based AI, Alexa Skills have been around for a decade and they will apparently work seamlessly with Alexa+. It’s all in the cloud.

I use ChatGPT a lot, every day, and yet there are some things I’m not comfortable sharing into the cloud. I don’t care if it knows that I’m planning a trip or what code I’m working on, but I’d be very hesitant to talk to a cloud-based assistant about truly private matters. Who knows where that info might accidentally end up.

Apple's competitors could undercut a lot of Apple's strategy by creating their own versions of Private Cloud Compute. Most users do not really think or worry about this. They’ve been storing emails with all sorts of private details on Gmail servers for years. But making cloud-based AI as secure as possible is just a good thing.

I’m not sure Apple knows what a big risk they are taking by letting OpenAI and others lap them in the AI race. It’s a risk that will pay off if they can execute. Just as likely, though, we are seeing such a disruption in computing that Apple is vulnerable for the first time in a decade.

Big companies like Apple do not move quickly. Amazon put everything into rebuilding Alexa and it has taken nearly two years. If there is a truly new AI device, a post-smartphone pod that we keep in our pocket or that’s built into our glasses, Apple’s strategy to entangle AI with phone hardware will have been proven all wrong for this moment, and they will have no response.

Joe Cieplinski

As usual, a well-reasoned take.

I do think it’s a risk for Apple. I also never count them out when they are “late” to anything.

I find it funny all the people who were quite vociferously shouting from the rooftops that Apple “needs to announce something with AI” a year ago are now the same people saying they never should have announced Apple Intelligence if it wasn’t going to be ready.

The mistake was listening to those voices last year. The delay is the course correction they needed.

Like you said, they may very well end up losing this one. But I’m not super worried given the state of the competition right now. Yes, the tech is getting better. But the product, the real killer app for AI, is nowhere to be found yet.

Paul Robert Lloyd

We’ve truly entered Tim Cook’s John Sculley era: too many products, too many promises, and too much trying to please shareholders.

Numeric Citizen

It will be interesting to see how Apple plays it at this year’s WWDC conference. Will they announce a complete reboot of their efforts? I remember seeing that a new high-ranking VP was appointed with a “get shit done” type of profile… I can’t remember her name. This could be the first visible sign of her involvement since being appointed.

Manton Reece

@joec Yeah, the delay is fine. It’s only bad if they’ve made the wrong trade-offs and don’t yet realize it, so they burn another year going down the wrong path. Even then, it’s not “too late”, just need a very clear vision for what to do.

Michael Brown

Eloquently put. Apple’s AI strategy announced at WWDC was breathtaking at the time. No one was remotely showing contextual AI experiences like Apple did. It almost resulted in me purchasing an iPhone 16. Given where we are today, I’m glad I didn’t.

I do think Google is taking a similar approach with their Gemini Nano on-device model on their Pixel devices. Rumors suggest that “Pixel Sense” will be what Apple Intelligence was/is supposed to be.

viddndikq4bqhr.bsky.social

One of the biggest issues here is that Apple doesn’t ship fast enough for their AI releases to matter, and because they’re on a (quasi) annual hardware/software release cycle, this industry just laps them. EVERY new .X update needs to show Apple Intelligence improving or adding new features if they want to be competitive here. Sure, big OS updates can add tentpole features like reasoning or personal context, but half the reason Apple feels (and is) out of this is because their main competitor not only has an AI platform in Gemini, but also releases great open source models at the same time.

Also, Apple Intelligence should be an app on Android’s Play Store just like Apple Music is, and they should market it as the most private AI service possible; if people want the personal context stuff, they can buy an iPhone. Apple needs to compete in this market with a good standalone product to get out of the perception hell they’ve created for themselves. If the word “intelligence” is in your service’s name, people can’t think it’s dumb.

Models are commodities at this point and unless someone creates god out of these things, they always will be.

Manton Reece @manton