Issue #03: WWDC 2023 — Jason Michael Perry

Apple did it! The company announced Vision Pro, a mixed reality headset, with a new operating system called visionOS. Vision Pro checks off almost every box from the rumor mill, including a considerable sticker price of $3,500 and a release date of “early” 2024, whatever “early” means.


Mindgrub, along with a select group of developers, may get its hands on this device sometime this summer to help our clients port applications to visionOS and explore what the platform is capable of. If you’re wondering, early reviews so far describe the Vision Pro as revolutionary, even magical – and that comes from technology journalists with years of experience with top VR headsets. But even if it lives up to being a truly magical device on the level of the original iPhone, most of us are fixated on one big question: what do you use this thing for? After all, $3,500 is a ginormous price for an untested version 1 product, and even for a die-hard Apple fan, that’s a hard ask.


In its argument for why to buy Vision Pro, Apple pushed the idea that it packages the technology of MacBooks, iPhones, iPads, AirPods, and its highest-end studio displays into one device you can strap to your head. By that argument, these goggles are a steal compared to buying each of those products at its asking price. Apple is also very intentional with its naming – and the addition of the word “Pro” to the product name makes you assume that a consumer-focused Vision headset at a lower price point is in our future.

The Apple Watch Series 0 reminds me of the Vision Pro. I purchased that watch within weeks of its release, and it always felt like a beta with great ideas and concepts. I waited two years before upgrading, and when I did, the years of iteration had turned the watch into a product I found indispensable. Apple has a similar path in store for Vision Pro.

Even if you expect it to flounder, the investment Apple has made in so many supporting AR technologies, like Spatial Audio, has me convinced that the company sees this as the path of the future. We sometimes forget, but Apple has been investing in ARKit, Spatial Audio, location tracking with AirTags, and so much else in the AR space for over a decade. The Vision Pro is a culmination of those years of research and product releases. Apple’s use of the term “Spatial” also implies that it sees this as more than a single product – as something that will continue to evolve. With a year to release, I promise you Apple has more than a few super exciting tricks tucked up its sleeve for the actual launch.

Vision Pro is more than just hardware; it’s a new operating system with a new way of interacting. Apple’s UX and UI designers had to question how a user interacts in a virtual world. If successful, they will have defined how we use mixed reality devices, the same way the keyboard and mouse or the touch-screen swipe defined earlier platforms. These interactions need to feel comfortable, easy, and intuitive, and visionOS landed on using your eyes to navigate the display and a tap of thumb and index finger to select.
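On the developer side, my understanding is that this interaction model maps onto existing SwiftUI controls with no extra code – a standard button responds when the user looks at it and pinches. A minimal sketch, with hypothetical names:

```swift
import SwiftUI

// Hypothetical view: nothing here is visionOS-specific.
struct PinchToSelectView: View {
    @State private var selected = false

    var body: some View {
        // On visionOS, this ordinary button fires when the user
        // looks at it and touches thumb and index finger together;
        // on iOS the same code responds to a tap.
        Button(selected ? "Selected" : "Select me") {
            selected.toggle()
        }
    }
}
```

The design choice matters: because gaze-and-pinch is delivered through the same gesture system as taps, existing apps inherit the new input model for free.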

Like Apple’s other iOS-derived operating systems, visionOS shares a lot with iOS. That shared heritage means apps are essentially already baked to work on visionOS, making porting an iPhone or iPad application remarkably easy. If you do nothing, your application will run, just without any optimization for translucency or depth. For example, this is an iOS application running on visionOS.


If you instruct your application that visionOS is a build destination, the OS will adjust your app’s UI to just work. The operating system takes the interface, renders it as translucent, and adapts it to the environment and lighting. Moving into this mode also opens the option to use a new image format that adds depth, allowing assets to move between the 2D and 3D realms with ease.
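As a sketch of how little is required, here is an ordinary SwiftUI iOS app (the app and data here are hypothetical) with nothing visionOS-specific in it – by Apple’s description, adding visionOS as a build destination should be enough for it to run, picking up the translucent treatment automatically:

```swift
import SwiftUI

// A plain iOS-style app. No visionOS-specific code anywhere;
// the same source builds for the new destination unchanged.
@main
struct TodoApp: App {
    var body: some Scene {
        WindowGroup {
            NavigationStack {
                List(["Buy milk", "Ship build", "File radar"], id: \.self) { item in
                    Text(item)
                }
                .navigationTitle("Todos")
            }
        }
    }
}
```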


Applications you build can run in various contexts on the new device. You can run one as a traditional iPhone or iPad app in a window, or you can create a space that transforms into a fully 3D immersive world.

Applications in visionOS take a few forms:

👉Windows – The traditional application is ported and run as is or with the addition of transparency and depth. You can open multiple application windows that sit next to each other in a shared space.

👉Volumes – A volume allows you to create 3D objects that coexist in the shared space with other applications. This approach could allow you to play ping pong or manipulate a 3D element while keeping iMessage or other apps glanceable and accessible.

👉Spaces – For more traditional VR or immersive applications, you can create a space that fully takes control. You can think of this as running an application or a game in full screen, making it a dedicated experience that can transport the user to a new world. 
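In SwiftUI terms, the three forms above map onto the scene types Apple introduced for visionOS. A minimal sketch – the view names (`ContentView`, `ModelView`, `ImmersiveView`) are placeholders:

```swift
import SwiftUI

@main
struct SpatialApp: App {
    var body: some Scene {
        // Window: a traditional 2D surface in the shared space.
        WindowGroup(id: "main") {
            ContentView()
        }

        // Volume: a bounded 3D region that coexists with other apps.
        WindowGroup(id: "model") {
            ModelView()
        }
        .windowStyle(.volumetric)

        // Space: a fully immersive experience that takes over
        // the user's surroundings, like a full-screen game.
        ImmersiveSpace(id: "immersive") {
            ImmersiveView()
        }
    }
}
```

Note how windows and volumes are both `WindowGroup` scenes – a volume is just a window with the volumetric style – while a space is its own `ImmersiveSpace` scene that the app must explicitly open.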

If you have a mobile application built with UIKit, the writing has been on the wall for some time. Now is the time to rebuild or at least rewrite your mobile applications using SwiftUI. SwiftUI allows an app to scale infinitely as new device form factors continue to pop up. It also makes it extremely easy to build one application binary that can run on all of the Apple platforms (iOS, iPad, macOS, watchOS, tvOS, visionOS) with just a handful of tweaks. 
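As a sketch of what “a handful of tweaks” can look like in practice, here is a hypothetical shared view where a single conditional-compilation branch applies a visionOS-only treatment while every other platform gets the stock layout:

```swift
import SwiftUI

// Hypothetical view shared across iOS, iPadOS, macOS, and visionOS
// builds of the same binary target.
struct DashboardView: View {
    var body: some View {
        List {
            Text("Revenue")
            Text("Active users")
        }
        .navigationTitle("Dashboard")
        #if os(visionOS)
        // visionOS-only tweak: render the list on the system's
        // glass material so it blends with the surroundings.
        .glassBackgroundEffect()
        #endif
    }
}
```

Everything else – layout, data flow, navigation – stays identical across platforms, which is the core of the SwiftUI argument here.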

I can’t wait to put Vision Pro on my head and experience the magic others report. While I wait, Mindgrub is looking for folks ready to leap into the unknown and explore what things look like in a new reality. If you want to jump into this new reality, reach out.



The end days of open APIs feel closer and closer. Reddit has followed in Twitter’s footsteps by charging for access to its API, essentially forcing third-party developers to pay huge fees or leave the game altogether. The crazy thing is they’re not entirely wrong – AI is eating all of their data and using it to train models, and if they want a cut of that training revenue, they need rules in place to limit what those models can consume. The difficult part is that rates punitive enough to deter AI scrapers also give a platform the chance to swat away undesired but successful clients. At the same time, it feels like Reddit may use that punitive pricing to kill third-party clients and make the official Reddit app the only way in. I reminisced about the struggle here – but you should also read third-party app developer Apollo’s thoughts and Reddit’s response.


Did you hear the words AI at Apple’s Worldwide Developers Conference (WWDC)? I listened and repeatedly heard adjacent terms – machine learning, even specific AI techniques like transformers for predictive text correction – but never the letters AI. After Microsoft and Google talked about AI at every turn, the absence of those two letters was almost deafening. For better or worse, Apple has decided to keep out of the AI fray, but why? Is Apple so far behind in the AI race? Career listings show it’s interested in generative AI. So, what gives?



Whether you work from the office or from home, concentrating can take real effort. I find that just the right music helps me get in the zone and churn out code, but today one of our designers introduced me to an even better solution for generating white noise.


⚡️More links to tech & things: