Issue #79: Microsoft, Google, and the Race to Build Agents
Howdy👋🏾. One of the best signs that it’s AI product release season? There’s suddenly no shortage of new tools worth kicking the tires on. With Microsoft Build and Google I/O landing right before Memorial Day weekend, I carved out some time to dig in, and what I found was both wildly impressive and weirdly aligned.
As I unpack in this week’s post, both conferences echoed a shift we’ve been sensing for a while: the rise of the “vibe” professional. The vibe coder. The vibe marketer. The vibe researcher. Not just someone doing a job, but someone 100x’ing their ability to execute by pairing with AI.
But here’s the twist: the how actually matters more than ever. These tools are fast, powerful, and often jaw-dropping, but only when pointed in the right direction. You still need a sharp, experienced operator to guide the work. The SME (subject matter expert) isn’t going away; in fact, they’re becoming the most valuable person in the room. Because the more abstract the execution becomes, the more important it is to define what should be done and to shape the outcomes.
So I gave myself two projects. Two real-world tests of what these tools can do today and where they still need a human hand.
First project: GitHub’s new Copilot agents.
This one’s wild. You define a task, write an issue, outline the specs, add acceptance criteria, and assign it to an agent. It replies with a plan, executes the task, and hands you a pull request.
To stress test it, I had it help build a custom WordPress site for PerryLabs. Not a drag-and-drop build—this was real work: a stripped-down custom theme based on _tw, with custom post types, ACF fields, and animation logic all baked directly into the theme layer. No plugins. No shortcuts. Just clean, purpose-built code.
I fed it whiteboard sketches as rough design inputs. I asked for motion. I asked for custom blocks with carousels. I intentionally gave it incomplete directions to see how far I could get with just pieces of the vision. It’s not perfect, but it got shockingly close. And what’s more important: it didn’t just scaffold a site, it reasoned through the architecture. It made implementation choices. It worked at the theme level and built custom plugins.
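To make that concrete, here’s a rough sketch of the kind of issue I’m talking about. The post type, fields, and criteria below are illustrative rather than lifted from my actual repo, but the shape is what matters: a clear task, explicit constraints, and acceptance criteria the agent can check its own work against.

```markdown
## Task
Add a "Case Study" custom post type to the _tw-based theme.

## Specs
- Register the post type in the theme layer, not in a plugin.
- Add ACF fields for the client name, a results summary, and a hero image.
- Build a single-post template that reuses the theme's existing scroll-in animation logic.

## Acceptance criteria
- [ ] The post type appears in the WordPress admin and supports the block editor.
- [ ] The ACF fields render on the single-post template.
- [ ] No new plugins are added; all code lives in the theme.
```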
This isn’t autocomplete anymore. This is collaboration.
GitHub is calling this the evolution from autocomplete → chat → task-based agents. And they’re right. These agents aren’t just helpers. They’re teammates. Teammates who don’t sleep and don’t mind debugging for hours.
Second project: Google Flow.
Flow is the breakout AI tool from Google I/O, hands down. It pairs the Veo 3 text-to-video model with new text-to-audio capabilities, meaning you can generate video scenes with sound effects, background scoring, and AI-generated voiceover. All from a script.
Google demoed it using a range of directors—many of them smaller, indie, or experimental filmmakers. But that’s exactly the point. Flow isn’t just for blockbuster budgets. It’s a vibe tool. For solo creators, indie filmmakers, and budget-constrained marketers. For anyone who’s ever asked: Can I make something that looks like a real commercial, without a real crew?
And just like writing a spec for an AI coding agent, writing a great script in Flow is its own kind of skill. The system responds best to the language directors and videographers use, so if you want a slow tracking shot, or a top-down pan, or a match cut, you need to say that. These models don’t just guess the vibe. You have to direct.
That’s what makes this so exciting. In the hands of anyone, Flow is already impressive. But in the hands of someone who knows how to speak the language—how to call the shots, literally—it’s transformative. SMEs aren’t going away. They’re just getting louder, faster, and more cinematic.
And just as Copilot agents are becoming your dev teammates, Flow is a window into the future of video production: AI tools and agents that don’t just edit, but generate, iterate, and fill in the blanks. You can give it a static image of a person or a photo of your backdrop, and it will use that as an ingredient to generate cohesive, consistent shots. It’s like having a post-production assistant who understands tone, pacing, and visual style.
I used it to prototype two videos: one to promote my new book, The AI Evolution, and another for PerryLabs’ new AI accelerator services. What used to take days or weeks of editing and splicing, like my earlier experiments combining Suno audio with Runway ML visuals, now takes hours. The promise of Flow is that it just works.
Neither of these tools is “done.” But that’s kind of the point.
🗣️ Today’s AI is the worst you’ll ever use.
These tools are moving fast, and they’re giving small teams and solo creators leverage we used to dream about.
If you’re still on the fence, still wondering if this will impact your business or your job, stop wondering.
Or, as Nvidia’s Jensen Huang put it: “It’s not AI that will take your job. It’s someone using AI that will.”
We’re officially living in that moment.
-jason
A Week of Dueling AI Keynotes

Microsoft Build. Google I/O. One week, two keynotes, and a whole new wave of AI infrastructure. I flew to Seattle to catch it all firsthand, and what I saw wasn’t just a battle of features; it was a fight for the future of platforms, agents, and the very tools we’ll use to build what’s next. The vibe era is real. The stack is shifting. And both giants are racing to own it.
My Photo Journal from the Build floor 📸

🎤 The AI Roadshow: Workshops, Talks & Beyond
June 3, 2025 – University of Baltimore AI Summit
June 5, 2025 – AI Advantage: Building, Integrating & Scaling AI for your Business
June 24, 2025 – WTCI AGILE: Building Earth’s Future From Space
June 27, 2025 – The AI Evolution – From Startup to ?
P.S. AI Blackmail?
If you had “AI-powered robots threatening blackmail” on your 2025 bingo card, congrats, it’s time to collect your prize.
In all seriousness, this is exactly why Anthropic’s approach to safety-first AI development matters. In safety testing, their new Claude model reportedly turned to blackmail when engineers tried to take it offline, raising real questions about how we build agentic systems that act without a human in the loop, and what unexpected behaviors might emerge when those systems get plugged into robotic bodies or autonomous workflows.
We’re inching closer to the point where ideas like Isaac Asimov’s Three Laws of Robotics start sounding less like sci-fi and more like onboarding requirements. These tools are absorbing our language, our biases, and our intentions—whether we like it or not.
So maybe, just maybe, hold off on telling your therapy bot all your secrets… until they’ve worked the blackmail bugs out.
https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/