What spooked OpenAI’s board?
Sam Altman is back as CEO of OpenAI after four board members of the nonprofit that runs the AI company attempted a poorly planned coup. Much of this drama played out over an intense Thanksgiving holiday — and if you missed it, I highly recommend checking out last week’s newsletter.
As the OpenAI coup story ends (or begins), one huge question remains: What did the OpenAI board learn that spooked them enough to nearly destroy one of the most important companies and innovations ever created? Some think it may have been an internal project named Q* (pronounced “q star”), reportedly involving now-former board member and Chief Scientist Ilya Sutskever, that allows AI to solve math problems.
If so, here’s my take: AI as we know it is amazing but falls short of being a true AGI, or artificial general intelligence. Many current models are LLMs, or large language models, which can predict the next word in a string, but they don’t truly think or solve logic problems on their own; they rely on patterns in data that already exist. Solving math, and solving it in novel ways, takes genuine problem-solving ability, which this Q* project may have demonstrated.
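To make “predicting the next word” concrete, here’s a toy sketch of my own — not anything from OpenAI, and vastly simpler than a real LLM. It just counts which word most often follows another in a sample text and predicts that. Real models use neural networks over huge contexts, but the underlying task is the same:

```python
from collections import Counter, defaultdict

# Tiny "training" text for the illustration (a made-up example).
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words followed it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" ("cat" follows "the" twice, "mat" once)
```

Notice the limitation this exposes: the model can only echo patterns it has already seen. Ask it about a word outside its training text and it has nothing to say — which is exactly why novel mathematical reasoning would be such a leap.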
If that’s the case, then a well-funded group could build an AI that can access and interpret more data than a human ever could, with the processing power to tackle unsolved math problems. A system like that might even crack cryptographic and encryption schemes currently thought to be safe until quantum computers arrive.
This is all conjecture and rumor for now, but something clearly spooked this board enough to take potentially career-ending measures to slow down the progress of AI.
Maybe we’ll eventually hear the full OpenAI story and what truly sparked such dissent. But one thing is clear: AI has the potential to be the most important innovation since electricity, a stunningly transformational breakthrough that could change the course of mankind. The question is, should we tap the brakes or push forward at full speed? I say keep the pedal to the metal. What say you?