WWDC initial takeaway

One thing I didn’t appreciate before this year’s WWDC is how limited Apple’s on-device models could be. Apple is going for easy wins and generally not biting off more than they can chew. Summarizing or rewriting text is something LLMs are great at, with almost no risk of getting derailed by hallucinations. So it shouldn’t have been surprising that Apple is doing so much themselves with their own models, and punting to ChatGPT for what Craig Federighi called “broad world knowledge” that is beyond Apple’s own models.

The only thing that struck me as strange in the WWDC keynote was image generation. I didn’t expect Apple to do that and I still don’t see why they needed to. It opens up a can of worms, something that was discussed well on this week’s episode of Upgrade. See the chapter on “AI feelings”.

The rest of the strategy is really good, though. The on-device models are small, but they can be supplemented with cloud models for more advanced tasks. And because it will be invisible to the user whether a local or cloud model is used, Apple can add bigger models to newer iPhones as RAM increases, for example, and the user won’t know the difference. Tasks will just become faster and more sophisticated.

This does require the user’s buy-in on Apple’s premise: that “private cloud compute” is just as secure and private as on-device data. At first glance this doesn’t seem technically true. As soon as the data leaves the device, you’re in a different world of things that can go wrong. But Apple has built up a lot of trust. If users accept the private cloud — and, importantly, if users even realize that Apple’s cloud is completely different from OpenAI’s cloud — it gives Apple a new strength that others don’t have, even if that strength is propped up mostly by goodwill.

Personally I have no concerns about the cloud approach for my own data. I expect Apple’s solution to be robust, likely bordering on over-engineered for what it actually needs to do, but that builds confidence.

Ben Thompson is optimistic about Apple’s AI strategy too. From a daily update on whether other companies could displace iOS:

I’m unconvinced that large language models are going to be sufficiently powerful enough to displace iOS, and that Apple’s approach to productize LLM capability around data that only they have access to, while Aggregating more powerful models, is going to be successful, but time will tell. Relatedly, the speed with which a wide array of model builders delivered useful models both gives me confidence that Apple’s models will be good enough, and that there isn’t some sort of special sauce that will lead to one model breaking away from the pack.

I’m not sure. There is no telling whether there will be another GPT-level advance in a couple of years. OpenAI already has some technologies, like voice matching, that are so powerful it almost seems scared to release them. If there is a breakthrough, it may be difficult for other companies to replicate it right away, giving a single player a years-long advantage.

At the same time, there is just enough friction in Apple Intelligence that even with the improvements to Siri, it may feel slightly crippled compared to a hypothetical new voice assistant. As I wrote in a blog post before WWDC:

While it’s true that the iPhone will continue to dominate any potential non-phone competition, I think there is a narrow window where a truly new device could be disruptive to the smartphone if Apple doesn’t make Siri more universal and seamless across devices. This universality might sound subtle but I think it’s key.

It’s unlikely that Apple will be displaced. People love their phones. I think there is still an opening for something new — a universal assistant that works everywhere, can do nearly everything, and is a joy to use. But we may never get there, or “good enough” may be fine, in which case Apple is really well-positioned.
