Issue #24: Goodbye 👋🏾, keyboards

Howdy 👋🏾, things usually slow down when November hits as we stumble into the Thanksgiving and Christmas holidays, but this year is different. This year, the tech community realized it’s my birthday month and has rewarded me with tons of exciting announcements, including last week’s OpenAI Dev Day and the long-rumored launch announcement of the Humane AI pin. Check out Humane’s launch video in full, but as always, I’m here to give you my thoughts on things.

What makes the new AI pin so different is its lack of a screen or keyboard, which feels weird. Traditional computing convention holds that even when a device supports touch and voice, the keyboard remains the de facto input mechanism of choice. The AI pin diverges from that norm: it’s a lapel-worn device you interact with primarily through voice. Humane is betting that combining external services with various AI models is good enough to make a voice-activated AI chatbot our standard form of computer input.

Instead of a traditional screen, the device includes a laser projector you navigate with hand gestures and a 12MP camera for taking photos and videos or feeding visual input to the AI. Using the camera, the AI pin can track the food you eat or search for a product online to compare pricing. Devices will be available for order starting November 16th, and once reviews are out, we’ll have a better sense of its true bona fides. Either way, this device marks a crucial shift: the beginning of post-keyboard devices.

Just a week ago, I would have thought I was crazy to consider a world without keyboards. However, after spending hours experimenting with OpenAI’s new Assistants API, I’m convinced that how we interact with computers will change significantly.

Yes, Siri, Alexa, Cortana, Google Assistant, and tons of voice-first devices existed before the Humane AI pin, but those voice assistants left everyone wanting more. The bots responding to our voice commands felt dumb, unable to process the kinds of complex sentences and thoughts we humans toss at them. Unlike those assistants, chatbots like ChatGPT handle complex natural language and chained-together questions, all in a conversational manner. Heck, Siri sucks so much compared to ChatGPT that I set up a Siri Shortcut to route my requests to ChatGPT instead.

However, that is still not enough. A voice assistant that replaces my computer needs more than general knowledge or the ability to complete everyday tasks like updating my calendar. A trustworthy, transformative assistant needs to help me do my job by interacting with the tools I use daily, like our Human Resources Information System (HRIS) and our project management and team allocation tools, or by entering contacts into a Customer Relationship Management (CRM) system.

At OpenAI Dev Day, Sam Altman announced an API for AI assistants that can make API or function calls based on a plain-text description of each function, and folks, it works super well. As a test, I wrote a chatbot (I’ll share it in the coming weeks with source code) that can take natural text and transform it into visualizations and graphs, trigger calls to a CRM, or onboard a new employee by adding them to various corporate systems. Crossing that chasm turns a chatbot into a transformative assistant that could become the primary way many knowledge workers interact with computers.
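To make the idea concrete, here’s a minimal sketch of that function-calling loop using the openai Python SDK. The model name, the add_crm_contact helper, and the CRM behavior are stand-ins I made up for illustration; treat this as a sketch of the flow, not the actual source of my chatbot (that’s coming in a future issue).

```python
# Minimal sketch of the Assistants API function-calling loop.
# Assumes the openai Python SDK (v1.x) and OPENAI_API_KEY in the environment.
# add_crm_contact is a hypothetical stand-in for a real CRM integration.
import json
import time

from openai import OpenAI

client = OpenAI()


def add_crm_contact(name: str, email: str) -> dict:
    """Hypothetical placeholder for a real CRM call."""
    print(f"Adding {name} <{email}> to the CRM...")
    return {"status": "created"}


# Describe the function in plain text; the model decides when to call it.
assistant = client.beta.assistants.create(
    name="Ops Assistant",
    instructions="You help with everyday back-office tasks.",
    model="gpt-4-1106-preview",
    tools=[{
        "type": "function",
        "function": {
            "name": "add_crm_contact",
            "description": "Add a new contact to the CRM.",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "email": {"type": "string"},
                },
                "required": ["name", "email"],
            },
        },
    }],
)

# Start a conversation thread and hand the assistant a natural-language request.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Please add Jane Doe, jane@example.com, to our CRM.",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id
)

# Poll until the run finishes or pauses to ask us to execute a function.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

if run.status == "requires_action":
    outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        if call.function.name == "add_crm_contact":
            args = json.loads(call.function.arguments)
            result = add_crm_contact(**args)
            outputs.append(
                {"tool_call_id": call.id, "output": json.dumps(result)}
            )
    # Hand the results back so the assistant can finish its reply.
    client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread.id, run_id=run.id, tool_outputs=outputs
    )
```

The magic is that requires_action state: the assistant pauses, hands your code structured arguments for your own function, and picks the conversation back up once you submit the result. That handoff is the chasm-crossing part.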

My vision of the future, one where augmented reality glasses pair with voice-activated AI chatbots, feels more and more inevitable, and products like the AI pin prove that future could be closer than I once thought. Now, on to some thoughts on tech & things:

⚡️Modern cars are increasingly taking risks and embracing new tech. Polestar hopes to make the rearview mirror a thing of the past, while car designers push for regulations that would let them drop side mirrors.

⚡️I’m surprised it’s taken this long for AI weather models to hit the press, and it looks like Google has a new AI model named GraphCast that beats the socks off meteorologists. Weather forecasting has long depended on predictive analysis, and the precision keeps improving.

⚡️Rumor has it Apple is hard at work on iOS 18 and has pretty grandiose plans for it. With next year’s release of the Vision Pro and Tim Cook implying that much more generative AI is coming to the platform, maybe we’ll see Apple leap into a keyboard-less future with a much better version of Siri.

⚡️Speaking of Apple, the second beta of iOS 17.2 introduced support for spatial (3D) video, allowing an iPhone 15 Pro to record video meant for the Vision Pro. Apple invited select members of the media to record spatial video and watch it back, and the reviews call the feature “astonishing.” I can’t wait to get my hands on this thing when it launches in early 2024!

The problem with voice is that it’s so darn inconvenient in public places, especially when making private requests. I prefer not to air all my business for the world to hear, and if voice becomes the primary way we communicate with our computers, we will increasingly need ways to interact quietly. One possibility is a brain implant like Elon Musk’s Neuralink; another is a soundproofing device like Mutalk, which I tried at CES last year. It looks weird, but it works amazingly well. Make sure to check it out in action.

-jason

P.S. AI chatbots are super powerful, but let’s not forget that these things still hallucinate. During the launch video, Humane founders Imran Chaudhri and Bethany Bongiorno asked two questions that the AI pin confidently answered with factually wrong responses. AI is getting better every day, but make sure to use your human processing with all that AI processing.