Latest Thoughts
-
Maps Are Much More Than a Pretty Picture
It’s easy to forget just how divisive and contentious the topic of maps can be. I’m reminded of The West Wing, Season 2, Episode 16, which perfectly captured how something we often take as fact can quickly be turned on its head. If you haven’t seen it, watch this snippet—I’ll wait:
This episode came to mind recently with the executive order to rename the Gulf of Mexico and reinstate the name Mount McKinley. Changes like these, once official, ripple beyond their immediate announcements. Today’s maps aren’t just printed in atlases or books—they live on our phones, computers, cars, and apps. Companies with map platforms, like Google and Apple, follow international, federal, state, and local government sources to define place names and borders. Unsurprisingly, Google has already announced it will update its maps to reflect these changes, and Apple will likely follow.
If this feels like uncharted territory (pun intended), it’s not. After Russia’s annexation of Crimea, many mapping companies faced pressure to update their maps to reflect Crimea as part of Russia. Apple initially displayed Crimea as part of Ukraine globally, then updated its maps to show Crimea as part of Russia—but only for users in Russia.
China has also long lobbied for maps to reflect Taiwan as part of China, sparking ongoing debates about how maps represent geopolitical realities. Even closer to home, cultural shifts are reflected in maps, like when New Orleans renamed Robert E. Lee Blvd to Allen Toussaint Blvd.
Maps are not just representations of geography—they are mirrors of history, politics, and culture. Maps are not just a picture of a territory, they have immense power in shaping how we perceive the world around us.
Update 1/29/2025: Google Maps follows the Geographic Names Information System (GNIS), and under normal circumstances, changes like these would be routine and go unnoticed. However, given the divisiveness of recent name changes, this process has sparked broader debate. It’s likely that Apple and other mapping platforms follow a similar process.
Google has also reclassified the U.S. as a “sensitive country”, adding it to a list that includes China, Russia, Israel, Saudi Arabia, and Iraq. This designation applies to countries with disputed borders or contested place names, similar to Apple’s handling of Crimea.
Update 2/1/2025: John Gruber shared an interesting post on how OpenStreetMap is handling the Gulf of America name change. As a collaborative, community-driven platform, OpenStreetMap has sparked debate on its forums over how to reflect such changes, particularly when they intersect with political decisions. You can follow the community discussion here, where contributors weigh the balance between neutrality and adhering to local or government designations.
-
I Read the DeepSeek Docs So You Don’t Have To
DeepSeek is turning heads in the AI world with two major innovations that flip the usual script for building AI models. Here’s the gist:
Skipping the Study Phase (Supervised Fine-Tuning)
When you train an AI model, the usual first step is something called Supervised Fine-Tuning (SFT). Think of it like studying for a test: you review labeled or annotated data (basically, answers with explanations) to help the model understand the material. After that, the model takes a “quiz” using Reinforcement Learning (RL) to see how much it’s learned.
DeepSeek figured out they could skip most of the study phase. Instead of feeding the model labeled data, they jumped straight to quizzing it over and over with RL. Surprisingly, the model didn’t just keep up—it got better. Without being spoon-fed, it had to “think harder” and reason through questions using the information it already knew.
The “think harder” part is key. Instead of relying on labeled data to tell it what the answers should be, DeepSeek designed a model that had to reason its way to the answers, making it much better at thinking.
This approach relied on a smaller initial dataset for fine-tuning, using only a minimal amount of labeled data to “cold start” the process. As the model answered quizzes during RL, it also generated a “chain of thought,” or reasoning steps, to explain how it arrived at its answers. With continuous cycles of reinforcement learning, the model became smarter and more accurate—faster than with traditional approaches.
By minimizing reliance on SFT, DeepSeek drastically reduced training time and costs while achieving better results.
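To make the quiz-and-reward loop concrete, here’s a toy sketch in Python. The reward functions, tag names, and weights are illustrative assumptions on my part—DeepSeek’s paper describes rule-based accuracy and format rewards, but this is not their actual code:

```python
# Hypothetical sketch of rule-based rewards for RL fine-tuning: each model
# answer is scored automatically, so no large labeled dataset is required.
import re

def format_reward(completion: str) -> float:
    """Reward answers that wrap their reasoning in <think> tags."""
    return 1.0 if re.search(r"<think>.+</think>", completion, re.DOTALL) else 0.0

def accuracy_reward(completion: str, expected: str) -> float:
    """Reward completions whose final answer matches a verifiable target."""
    match = re.search(r"<answer>(.+?)</answer>", completion, re.DOTALL)
    return 1.0 if match and match.group(1).strip() == expected else 0.0

def total_reward(completion: str, expected: str) -> float:
    # Combine both signals; RL then reinforces high-reward reasoning chains.
    return format_reward(completion) + accuracy_reward(completion, expected)

good = "<think>2 + 2 means adding two and two.</think><answer>4</answer>"
bad = "The answer is 4."
```

During RL, completions that earn higher rewards get reinforced, which is how the model learns to produce better chains of thought without being spoon-fed labeled examples.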
Mixture of Experts (MoE)
Instead of relying on one big AI brain to do everything, DeepSeek created “experts” for different topics. Think of it like having a math professor for equations, a historian for ancient facts, and a scientist for climate data.
When DeepSeek trains or answers a question, it only activates the “experts” relevant to the task. This saves a ton of computing power because it’s not using all the brainpower all the time—just what’s needed.
This idea, called Mixture of Experts (MoE), makes DeepSeek much more efficient while keeping its responses just as smart.
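For the curious, the routing idea can be sketched in a few lines of Python. The “experts” here are toy functions and the gating scores are hard-coded; in a real model both are learned, and routing happens per token:

```python
# Minimal sketch of top-k expert routing, the core idea behind MoE.
import math

def top_k_route(gate_scores, k=2):
    """Pick the k highest-scoring experts and softmax-normalize their weights."""
    top = sorted(range(len(gate_scores)), key=lambda i: gate_scores[i])[-k:]
    exps = [math.exp(gate_scores[i]) for i in top]
    total = sum(exps)
    return top, [e / total for e in exps]

def moe_layer(x, experts, gate_scores, k=2):
    # Only the selected experts actually run, so compute scales with k,
    # not with the total number of experts.
    idx, weights = top_k_route(gate_scores, k)
    return sum(w * experts[i](x) for i, w in zip(idx, weights))

# Toy "experts": in a real model these are learned feed-forward networks.
experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x ** 2]
scores = [0.1, 2.0, -1.0, 1.5]  # pretend output of the gating network
y = moe_layer(3.0, experts, scores, k=2)
```

With k=2 and four experts, half the network stays idle on every call—that’s the efficiency win, and it grows as models add more experts.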
What Does It Mean?
Using these methods, DeepSeek built an open-source AI model that reasons just as well as OpenAI’s $200/month product—but at a fraction of the cost.
Now, “fraction of the cost” still means millions of dollars and some heavy compute resources, but this is a big deal. DeepSeek has even shared much of their methodology and their models on Hugging Face, making it accessible for others to explore and build upon.
I’m still digging into what makes DeepSeek tick and experimenting with what it can do. As I learn more, I’ll share updates—so be sure to subscribe to the newsletter to stay in the loop!
Footnote: Further Reading
For those curious to dive deeper into the technical details of DeepSeek’s innovations, here are some resources I found useful:
- DeepSeek R1: Reinforcement Learning in Action – VentureBeat’s take on how DeepSeek is challenging AI norms.
- Mixture of Experts and AI Efficiency – A Medium article breaking down the MoE approach.
- Meta Scrambles After DeepSeek’s Breakthrough – An overview of how DeepSeek’s advancements have shaken competitors like Meta.
- DeepSeek R1 Technical Paper – The official documentation for DeepSeek’s R1 model, detailing its innovations.
-
🧠 DeepSeek is redefining the AI race
The juxtaposition of OpenAI and DeepSeek is striking. OpenAI recently announced a deal worth up to $500 billion to build the compute infrastructure required for the next generation of AI models. Meanwhile, DeepSeek, based in China, has developed a competitive AI model on the cheap, despite a ban that limits its access to the latest and greatest GPUs from Nvidia.
This contrast is a wake-up call. Meta is reportedly scrambling to understand how DeepSeek pulled this off, a feat that could upend the competitive landscape in AI development. Meanwhile, OpenAI launched Operator, a $200-a-month product, while DeepSeek’s model is free and open source.
For investors, these developments raise critical questions: Are AI companies overvalued, or does this level of innovation suggest a faster-than-expected path to commoditization for large language models (LLMs)? The race to define the future of AI is accelerating, and the stakes couldn’t be higher.
DeepSeek R1’s bold bet on reinforcement learning: How it outpaced OpenAI at 3% of the cost
DeepSeek R1’s Monday release has sent shockwaves through the AI community, disrupting assumptions about what’s required to achieve cutting-edge AI performance. This story focuses on exactly how DeepSeek managed this feat, and what it means for the vast number of users of AI models. For enterprises developing AI-driven solutions, DeepSeek’s breakthrough challenges assumptions of OpenAI’s dominance — and offers a blueprint for cost-efficient innovation.
-
🧠 $100 Billion AI Initiative Unveiled at the White House
The White House announced a $100 billion initiative—potentially scaling to $500 billion—led by OpenAI to build data centers across the U.S., starting in Texas. This massive effort establishes a new company named Stargate, a partnership between OpenAI, SoftBank, and Oracle. To mark the announcement, SoftBank Chief Masayoshi Son, OpenAI CEO Sam Altman, and Oracle Co-founder Larry Ellison joined President Trump at the White House.
Just minutes before the announcement, Microsoft revealed it had altered its exclusive agreement with OpenAI, retaining a “right of first refusal” on new cloud computing capacity. This change addresses a challenge for OpenAI, which has faced delays in product releases due to a lack of compute resources for building larger, more powerful AI models.
My guess? This move lets Altman tap into Oracle and other cloud platforms, potentially pushing out ChatGPT-5—or even AGI (Artificial General Intelligence)—sooner than we imagined.
-
📺 🔍 Robots.txt: The Web’s Silent Gatekeeper
Ever wonder how websites control what’s searchable? Enter the robots.txt file—a simple tool now at the center of big changes in the AI world. Curious about its growing influence? Check out this video to learn how it’s shaping the future of the internet.
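To show just how simple the mechanism is, here’s a hypothetical robots.txt that blocks one AI crawler while welcoming everyone else, checked with Python’s standard-library parser. The site and rules are made up for illustration:

```python
# Parse a hypothetical robots.txt with Python's standard library and check
# which user agents are allowed to crawl a page.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Well-behaved crawlers check these rules before fetching any page.
gptbot_allowed = parser.can_fetch("GPTBot", "https://example.com/article")
browser_allowed = parser.can_fetch("Mozilla/5.0", "https://example.com/article")
```

The catch, of course, is that robots.txt is purely voluntary—nothing stops a crawler that chooses to ignore it, which is exactly why it’s at the center of the current AI debate.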
-
🧠 Mastodon and the Future of Open Source Ownership
Mastodon just took a bold step by transitioning to a nonprofit structure to ensure its independence and protect against future risks, like being “ruined by a billionaire CEO.” In doing so, it highlights a critical issue for open-source projects: ownership of domains, intellectual property, and the vision of the platform.
This move feels like a direct response to the ongoing drama in the open-source world, where Automattic’s CEO Matt Mullenweg has aggressively gone after WP Engine while asserting control over WordPress.org. The saga is a reminder that even in open source, centralized ownership of key assets can have massive consequences. Mastodon’s shift serves as a model for other projects looking to prioritize community trust and eliminate the risks of single-leader ownership. It’s a lesson that could shape the future of open-source governance.
-
🧠 If I Only (Robots) Had a Brain
Seeing all those robots at CES—especially the tired but trusty Unitree bots—I couldn’t help but feel like something was missing. These one-task machines, while impressive in their own right, lack the intelligence to truly transform how we live and work. It’s like watching the Tin Man on his journey to Oz—all the potential, but no brain to tie it all together.
That’s where OpenAI could come in. Rumors of OpenAI reopening its robotics division have sparked speculation about the possibilities of pairing generative AI with hardware. Imagine a world where robots aren’t just performing tasks but adapting, learning, and problem-solving in real time. If OpenAI can bring their expertise in language models to robotics, we might finally see machines that aren’t just tools, but partners in our daily lives.
-
🧠 Sonos and the Cost of “Courage” in Tech Missteps
Sonos is still paying the price for a string of bad decisions. The core issue? A rushed new app launched in May 2024 that stripped away much of the old app’s functionality, leaving users with a clunky, incomplete experience. The CEO called it “courage” to release such an app, but the fallout included missed product targets, layoffs, and what has felt like years of usability issues that the company never fully addressed. Even with a new interim CEO who comes from Quibi—a name that doesn’t exactly inspire confidence—Sonos users are left waiting for a real turnaround.
As for me, I’m sticking to my AirPlay-enabled speakers and holding off on that Sonos soundbar for my basement entertainment setup. After all, trust is a hard thing to rebuild.
-
🧠 My Proud Papa Moment in Mobile App Development
It’s not often I get to link to an article about something I helped build, but Daring Fireball recently mentioned the PECO mobile app—a project I had the privilege of overseeing as CTO of Mindgrub for Exelon. This app, along with others we developed for Baltimore Gas and Electric (BGE) and ComEd, serves a huge user base across cities like Baltimore, Chicago, Philadelphia, and DC.
These apps were a labor of love, taking over a year to design and integrate with a patchwork of backend systems—many of them legacy platforms—across utility companies operating under different regulatory environments. From day one, our focus was on reliability and speed, ensuring these tools could handle the pressure during storms, outages, and other critical moments when customers need them most.
Hearing it recognized in an article like this is a proud moment, but it’s also a testament to the incredible teams at Mindgrub and Exelon that made it all happen. These apps are more than just utilities (pun intended); they’re lifelines during tough times, and I’m thrilled to see their impact acknowledged.
-
🧠 WP Engine Wins Preliminary Injunction Against Automattic
WP Engine secured a significant preliminary injunction, restoring its customers’ access to WordPress.org and granting WP Engine control over the ACF plugin. This decision is a pivotal moment in the ongoing legal battle between WP Engine and Automattic, the company behind WordPress.
I have friends with strong opinions on both sides, but this ruling feels like the right decision. WordPress and Automattic’s approach to weaponizing open-source services against WP Engine seemed not only unfair but potentially catastrophic for the company. However, this is just a preliminary ruling, and with the case still pending, this story is far from over.
-
🧠 OpenAI’s 12 Days of Ship-mas Brings New AI Tools
OpenAI is in full holiday mode, unveiling new products daily as part of its “12 Days of Ship-mas.” Among the highlights so far are a new $200-a-month reasoning model designed for pros, alongside Sora, an advanced video AI tool, and expanded features for Canvas, its collaborative AI workspace.
I haven’t yet had the chance to test these updates, and I’m still waiting for access to Sora, but the rapid pace of these releases is exciting. With a few days left in the campaign, I’m curious to see what else OpenAI has up its sleeve. Stay tuned—this might be the most eventful holiday season in AI yet!
-
🧠 GM Cruise Hits the Brakes on Robotaxis
Cruise paused its robotaxi service in 2023 after a tragic accident where a woman was dragged by a self-driving car. That incident led to massive layoffs and the departure of top executives, sparking speculation that Cruise would soon fold into GM entirely.
So, it was a surprise when the company restarted service in April, albeit with safety drivers. My guess? The cost per ride with safety drivers made profitability tough to envision, and GM likely didn’t want to risk another high-profile incident damaging its brand.
Running a self-driving taxi service isn’t as straightforward as it sounds. Vehicles with human supervision don’t need to be nearly as perfect as fully autonomous cars ferrying people through unexpected circumstances.
With Cruise now officially out, the robotaxi market narrows to Waymo—continuing to expand its footprint—and Tesla, promising its fleet as early as 2025.
-
📺 Technology Evolution (in 90 secs)
I’m excited to take the stage at The AI Summit New York next week!
✨ Dec 11: Securing the Future: Balancing Innovation with Protection
💡Dec 12: The AI Talent Crunch: Solutions for Skill Gaps and Training Needs
Let’s explore how to drive AI innovation while safeguarding the future and bridging the talent gap in this rapidly evolving space. See you there! 🙌
🎟️ Join me and save 20% with promo code: SPKRJasonMP20OFF https://lnkd.in/emPpPanX
-
🧠 AWS re:Invent Kicks Off
AWS re:Invent is in full swing in Las Vegas, bringing a flood of product announcements as usual. On the AI front, the standout is Nova, a new family of Amazon-developed AI models that significantly outshine the previous Titan models, alongside enhancements to the Q suite of developer tools.
If you have time, check out the keynote—it’s packed with updates. I’ll keep an eye out for more exciting developments to share next week.