Howdy 👋🏾! Sam Altman, CEO of OpenAI, is way too nice. I told him not to worry about my birthday, but he surprised me and released a ton of great new OpenAI features just for me (+ some other developers).
The event, OpenAI Dev Day, is the first developer-focused conference from OpenAI, and the keynote delivered, helping the AI firm further its lead over the competition. You’ll be able to watch the entire keynote on YouTube, which clocks in at under an hour, but read along for my quick thoughts and highlights.
At the center of the AI war is the battle of the foundation models. What is a foundation model, you ask? A foundation model is an AI model trained on vast amounts of general data, like the GPT models behind ChatGPT, allowing a company or individual to skip the immensely expensive cost of training a custom model from scratch. With a foundation model, you can leverage the data it's already trained on and focus on fine-tuning it. I like to imagine a foundation model as a shiny new college graduate, educated with tons of general knowledge and ready to start a career at a company. The model knows the general stuff, so we can focus on training it on the data that is unique to our organization.
OpenAI wants to be everyone's foundation model, and many of the tools it released today make it easier for developers and non-developers alike to train its models on their proprietary information while sandboxing that data from others and keeping safety and security in mind. As OpenAI does this, it creates a moat around its services, making it harder to leave its ecosystem. You can compare it to the stickiness AWS creates within its cloud ecosystem. With that, on to the announcements and highlights:
🚀Microsoft: As Altman unveiled these new AI products, he brought out Microsoft CEO Satya Nadella to reinforce the strength of their partnership and their plan to integrate these AI services ever deeper into all of Microsoft's products. As Nadella put it, Microsoft is not only a strategic partner and reseller of OpenAI's services but also one of its biggest customers.
🚀Copyright Shield: We have a lot to figure out regarding AI and copyright law. Still, OpenAI followed IBM, Microsoft, Amazon, Getty Images, Shutterstock, and Adobe in announcing that it will protect customers sued over copyright claims arising from their use of its foundation models. For companies standing on the sidelines for fear of litigation, this move could be just what was needed to convince the corporate lawyers to open the AI floodgates.
🚀 GPT-4 Turbo: A few months ago, OpenAI released its GPT-4 model to paid and enterprise users, but today it released GPT-4 Turbo, a speedier version of the model that has also been updated with knowledge through April 2023. The new model also supports multimodal calls, providing combined text and image responses from the same prompt.
The model has also expanded its context window to 128K tokens, enough for more than 300 pages of text, putting the supported tokens per call on par with Anthropic's Claude 2.
Last, and possibly most important, OpenAI slashed API pricing for the new Turbo model and older models alike:
"We optimized performance so we're able to offer GPT-4 Turbo at a 3x lower price for input tokens and a 2x lower price for output tokens compared to GPT-4."
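To put that price cut in dollars, here's a back-of-the-envelope sketch. The per-1K-token rates below are the launch prices as I understand them ($0.01 in / $0.03 out for GPT-4 Turbo versus $0.03 / $0.06 for GPT-4); double-check OpenAI's pricing page before budgeting off of them.

```python
# Rough cost comparison for a single API call (USD per 1,000 tokens).
# Rates are my understanding of launch pricing -- verify before relying on them.
PRICES = {
    "gpt-4":       {"input": 0.03, "output": 0.06},
    "gpt-4-turbo": {"input": 0.01, "output": 0.03},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one API call for the given token counts."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# A 10K-token prompt with a 1K-token reply:
old = call_cost("gpt-4", 10_000, 1_000)        # 0.30 + 0.06 = $0.36
new = call_cost("gpt-4-turbo", 10_000, 1_000)  # 0.10 + 0.03 = $0.13
print(f"GPT-4: ${old:.2f} vs. GPT-4 Turbo: ${new:.2f}")
```

For a prompt-heavy workload like the 300-page context window invites, that 3x input discount is where most of the savings land.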
🚀GPTs and the Marketplace: OpenAI is doubling down on making fine-tuning a model dead simple and accessible to developers and non-developers alike. To do this, it released GPTs: trainable AI agents that you can customize with a name, personality, and purpose. A GPT will do what it can with the knowledge in its foundation model, but you can make it smarter by feeding it your own information to further fine-tune it.
Live on stage, Altman created a startup-advisor GPT and trained it by giving it transcripts of his previous speeches from Y Combinator. With this, the GPT, powered by his data and description, could pull from his publicly shared knowledge to provide advice. The demo was impressive, especially considering the app took minutes to make using a web interface, without writing any code.
Later this month, OpenAI will unveil a marketplace that allows GPT creators to share their models for a revenue share (the terms of which have yet to be announced). The possibilities here are endless. Large non-profits and associations could collect information from members to create powerful industry AI engines that could serve as starters for all kinds of corporate functions. Can you imagine a SHRM bot trained on the collective archives of its magazine and industry data? Or a medical bot that pulls from anonymized patient records?
🚀 AI-based Assistants: OpenAI simplified the development needed to create chatbots on its models with an assistant builder. The tool creates a stateful API interaction, letting OpenAI remember context between API calls without a developer needing to code it. Assistants also improve on function calling, enabling your chatbot to trigger other elements of your application's interface.
For example, a utility mobile app could receive a question about an outage or restoration times and trigger a function to render the outage map. The assistant could also take files like a copy of a bill, parse the information, and render pieces of it in the application's interface. These functions or actions can spawn operations in external tools, triggering Zapier, Salesforce, HubSpot, or other enterprise systems.
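Under the hood, function calling works by describing your app's functions to the model as JSON Schema; the model decides when to "call" one and with what arguments, and your application executes the real code. A minimal sketch of what the outage-map example above might look like as a declaration (the name `render_outage_map` and its parameters are hypothetical, invented for illustration):

```python
# Hypothetical tool declaration in the JSON-Schema shape OpenAI's
# function calling expects. The model never runs render_outage_map;
# it only emits a structured call that your app fulfills.
def outage_map_tool() -> dict:
    return {
        "type": "function",
        "function": {
            "name": "render_outage_map",  # hypothetical app function
            "description": "Show the outage map for a customer's service area",
            "parameters": {
                "type": "object",
                "properties": {
                    "zip_code": {
                        "type": "string",
                        "description": "ZIP code to center the map on",
                    },
                },
                "required": ["zip_code"],
            },
        },
    }

print(outage_map_tool()["function"]["name"])  # render_outage_map
```

You'd pass a list of declarations like this when creating the assistant, then watch the run for a requested call, execute it, and return the result.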
This level of integration opens the door for internal bots that speed up employees' work by doing complicated tasks with human language. Imagine a Human Resources employee onboarding a new hire by simply speaking to an assistant that can trigger calls to an HRIS, send onboarding emails, and assign roles in an SSO provider. These assistants can also use OpenAI's code interpreter, allowing a bot to write and execute its own generated code to solve a problem. To repeat that, to make sure you're equally mind-blown: the bot can write code to trigger an API or consume data from a system it was not built to interact with.
🚀New Voices: Interactions with these assistants and GPTs are not limited to text. ChatGPT has supported speech-to-text dictation for a while, but today OpenAI rolled out six synthesized voices that sound human, letting users trigger the power of these fine-tuned AI bots and assistants by voice.
📌 In conclusion, the possibilities here are staggering, and Altman was quick to remind us that they have a lot planned over the coming year. The tech OpenAI laid out today represents the start of a monumental shift in our relationship with computers. Just don’t forget, it’s early days, and we’re in the AOL days of AI. Now, on to some thoughts on tech & things 🔗:
⚡️Have you tried the Apple Touch Bar? I wanted to like the Touch Bar, but it didn't work for me. I value the tactile feel of buttons, which the Touch Bar lacked, but that's the same argument the crackberry-keyboard folks made about the iPhone keyboard.
⚡️🪦Goodbye, Mint! I loved Mint and started using it a year before the Intuit acquisition. I loved the app then, but it failed to keep up with the times and never evolved into the budgeting app I hoped it would become.
⚡️Some cities see AirTags as a solution to car theft. Washington, DC, plans to offer free Apple AirTags to make it easier for you to find that stolen car, and they’re not the only city looking for inventive ways to combat car theft.
It's hard to believe this is issue 23 of the Thoughts on Tech & Things newsletter! The support and growth are amazing, and as of today, we're just shy of 1,000 subscribers🚨. Since this is my birthday week, I get to be a bit extra selfish, so if you're enjoying the read, please share, forward, and recommend this newsletter to co-workers, friends, and family and help me blow past 1,000 subscribers.
P.S. The rumor is that Humane, the stealth startup founded by ex-Apple employees, is nearing a product release this month. Altman is a significant investor in Humane, and it's been suggested that the device makes heavy use of AI, with aspirations of being the best AI assistant on the market. I've been skeptical, but seeing the evolution of OpenAI's GPT and assistant services makes me wonder if they will pull off the first in a new category of portable, voice-controlled AI devices that's stylish enough to shine at Paris Fashion Week.