Issue #51: Welcome to Camp Apple Intelligence

Howdy👋🏾,

On Sunday, a close friend and I pitched tents at Shenandoah National Park. As we drove up the mountain, my service dropped from full coverage to zero bars, and all of my notifications, emails, and text messages went silent.

As someone who is usually jacked directly into the Internet, I felt an initial wave of anxiety at the silence that gradually turned into a quiet calm. It reminded me how often our devices buzz, begging for our attention, when the reality is most of it isn’t that important.

You could call my trip into the wilderness a bit of a digital detox, but I made some exceptions, including watching Apple’s Worldwide Developers Conference. It took some planning, and while my Verizon service was nonexistent, a T-Mobile-equipped iPad was just what we needed.

Many have done an amazing job covering WWDC24, and I highly recommend giving it a watch, but I wanted to share my thoughts:

⚡️Apple Vision Pro was front and center, and with so many rumors of device returns, weak sales, and little love from the developer community, that sent a strong message: the platform is under active development, not one that will sit on the shelf and go stale. The updates also directly address many of the issues I shared in my review, and I’m excited to try the beta and see if the device is better after these changes.

⚡️Apple continues to talk about AI without actually saying the word – the closest it came was a slide with the letters AI expanding into Apple Intelligence, its branding for a deep set of AI features. Throughout the presentation, Apple used terms like ML (machine learning), NLP (natural language processing), LLMs (large language models), and diffusion models – all disciplines of AI – while avoiding that one word. In an interview with the Washington Post, Tim Cook pushed a different narrative on Apple’s approach to AI, making it clear the company isn’t being left behind and framing its approach as much more thoughtful, free of the pack’s expectations.

⚡️Apple architected things exactly how I predicted: on-device Small Language Models handle most responses and reach out to the cloud when they can’t (a rough sketch of that pattern follows below). This leans into Apple’s biggest strength – it makes its own silicon and isn’t dependent on expensive Nvidia hardware with its huge backlog. The benchmarks are wild: based on Apple’s own numbers, these models beat most comparable models both on device and on the server, and Apple says they’re less likely to hallucinate. For the moment, Apple is singularly positioned to use its vertical stack of hardware and software to build something few (if any) other companies can.
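
To make that architecture concrete, here’s a rough, purely hypothetical sketch of the on-device-first, cloud-fallback pattern in Swift. None of these types are Apple APIs; they’re my own stand-ins for the idea described in the keynote:

```swift
// Purely illustrative: ModelReply, LocalModel, PrivateCloudModel, and the
// confidence threshold are stand-ins of my own, not Apple APIs.
struct ModelReply {
    let text: String
    let confidence: Double
}

struct LocalModel {
    // A small on-device language model takes the first pass at every request.
    func generate(_ prompt: String) async -> ModelReply {
        ModelReply(text: "on-device answer", confidence: 0.9)   // stub
    }
}

struct PrivateCloudModel {
    // A larger model running on Apple-silicon servers (Private Cloud Compute).
    func generate(_ prompt: String) async -> ModelReply {
        ModelReply(text: "server answer", confidence: 0.99)     // stub
    }
}

func respond(to prompt: String) async -> String {
    let local = await LocalModel().generate(prompt)
    if local.confidence >= 0.7 {
        return local.text                                        // most requests stay on device
    }
    return await PrivateCloudModel().generate(prompt).text       // escalate only when needed
}
```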

⚡️Apple Intelligence is unavailable today, and the rollout will be phased, possibly taking more than a year. I’ll install the betas when I get home to faster Internet, but my understanding is that none of the Apple Intelligence features shipped in the first beta, and that when they arrive, they’ll require a waitlist.

⚡️Privacy and confidentiality played a huge role in how Apple constructed and planned its models, which is incredibly important. When I use ChatGPT or Claude, my biggest consistent complaint is that it lacks so much context that I have to repeat myself over and over, or share details about people and things it simply can’t know. Apple Intelligence is meant to be an AI that’s less about the outside world and more about each of us, trained on tons of our personal data. It will learn locally from your texts, emails, photos, contacts, calendar, maps, app settings, and who knows what else. Anyone who sends text messages knows how deeply personal those exchanges can be. The only other companies in a similar position to build a model this personal are Google, Microsoft, Meta, and maybe Samsung – and frankly, I’m not sure I trust them not to harvest my personal data for advertising or something else. Apple’s privacy stance and its unwillingness to build backdoors or hand access to the police might become one of its biggest advantages in creating a truly personal AI that people trust with so much intimate, personal data.

⚡️A personal AI with so much of our own data opens the door to a true digital assistant like never before. During the keynote, Apple pitched it as a system that can fill in the gaps we forget, like proactively adding a calendar event. Think about how powerful ChatGPT is, then imagine an AI model trained on you, with access to tools that can do things for you.

⚡️App integration on devices is also built with privacy in mind. I need to read more and watch the developer-focused Platform State of the Union when I get home. Still, my understanding is that App Intents let applications expose data to Apple Intelligence or define triggerable actions the AI model can invoke, letting apps talk to the model directly. This approach keeps a protective moat around the trove of your personal data while still letting you query The NY Times Cooking app for recipes or check the status of an EV charge (a minimal App Intent sketch follows below). It seems like a decent compromise, but I’m curious how intuitive it will be, especially if multiple apps can answer a similar question.
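
How Apple Intelligence will actually surface these intents isn’t fully clear to me yet, but for context, here’s roughly what exposing an action through the existing App Intents framework looks like in Swift. The EV charge-status intent and its stub logic are hypothetical; only the framework types are real:

```swift
import AppIntents

// Hypothetical intent exposing an EV charge-status check to the system.
struct CheckChargeStatusIntent: AppIntent {
    static var title: LocalizedStringResource = "Check Charge Status"
    static var description = IntentDescription("Reports the vehicle's current battery level.")

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would query the vehicle's API here; this is a stub value.
        let percent = 80
        return .result(dialog: "Your car is charged to \(percent)%.")
    }
}
```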

⚡️The partnership with ChatGPT is interesting and very different from what I expected. Apple is essentially treating it like the choice of default search engine in Safari: Apple Intelligence is the default, but you have the option to use another AI. It sounds like you either ask Siri to send a prompt to ChatGPT or select it in the interface, and Apple will confirm that you want to do this before sending any data to OpenAI. That sounds good on paper, but it could make for a cumbersome experience. What I like, and see as smart, is that Apple does not need to build the best LLM. Apple Intelligence becomes the AI that knows you, your home, your contacts, and your apps while being good enough for general-purpose things. Other AI models can fill the gap by being more “worldly”; ultimately, you may have access to multiple models you can use alongside Apple’s AI as you see fit. In a world where Google and Android are potentially locked to Gemini, and Microsoft is largely built on Copilot powered by ChatGPT, this builds a sense of choice at the OS level, which is a pretty nifty idea.

⚡️Grammarly’s business model is DEAD. The writing has been on the wall for some time, but today Apple sherlocked the product, integrating those features into the operating system more deeply than Grammarly ever could. Maybe it won’t be as good, but if it’s good enough, it’s going to be tough to keep shelling out money every month for those features.

Apple’s WWDC also solidified something I’ve assumed for a while: AI is never going to be the star product people imagine it will be. The nature of how it works requires it to become so ubiquitous and so tightly integrated into the devices we use that a great implementation of AI will be invisible and fully transparent.

For users, betting on an AI startup is like investing in Grammarly or the best spell-checker on the market. It’s easy for me to imagine OpenAI or Anthropic in 20 years as licensed components, more akin to a computer using Intel or AMD processors – or better yet, to knowing the brand of RAM in your computer. The stuff embedded in our devices will get good enough, and that means people simply won’t care.

This is the battle car manufacturers are fighting with Apple CarPlay and Android Auto – I have a device that knows my contacts, does mapping, already knows where I’m headed, and is my go-to for music, books, podcasts, and so much more. A competing product needs that same context and integration with everything around and about me, and no matter how well-designed it is, without that it’s still a frustrating experience. Today’s AI is exactly that: it knows so much about the world and so little about me. It doesn’t know my style, what music I like, my favorite movies, my kids’ names, my parents’ birthdays, how I text, my writing style, or how I start and end my emails. I can teach it these things, but some of them I don’t know or don’t think to tell it.

AI is not the platform; it’s a killer feature of the platform. The more tightly platforms integrate it into the devices and services we already use, the more invisible it becomes, and the less you, as a customer, will care who powers it, as long as it’s good enough.

The calculus is a bit different for developers and enterprises building platforms or using them. An end user does not care what type of database or programming language I’m using to build out my product, but as an engineer, I know how these choices can impact performance, scalability, security, and user experience.

At WWDC24, Apple reminded everyone that it’s the platform and ecosystem that matter – and those ivory walls are indeed tall, with Apple controlling who gets in and who gets out.

I’ll be back from my excursion next week, having enjoyed the much-needed R&R. If you’re interested in having me speak, deliver one of my great AI-focused workshops, or need a hand from a fractional CTO/CAIO, reach out. Also, don’t forget to share the newsletter if you know anyone who might find it useful. Now I’m turning off my notifications and hopping in the hammock for a nice long nap and a digital detox… OK, maybe not a full detox. See you next week.

-jason 

p.s. So apparently, Elon Musk is pissed that OpenAI is being integrated into Apple’s products and deeply misunderstands how. He went on a tirade claiming Apple didn’t or couldn’t make its own LLMs (incorrect) and implying that the on-device training uses OpenAI (also completely false).

Others have repeatedly tried to correct his very wrong statements – but even crazier, he’s now threatening a total ban on Apple products at all the businesses he owns or operates, suggesting employees and visitors will need to check their Apple devices at the door. I can’t make this stuff up…