Latest Thoughts
-
Teaching in an AI World
In my talks with professors at local colleges and universities, I keep hearing the same thing. We're teaching for a world that's changing faster than we can update our syllabi.
The scale of these AI tools is mind-blowing. But here's the catch: subject matter experts, people who truly get it, are the ones who benefit most. When you lack that core understanding, the tool becomes a crutch, and the power dynamic shifts. Instead of the human leading, the tool leads.
I see this all the time with new developers and junior engineers. Many lean on these tools like a lifeline, while the more experienced folks use them to amplify what they already know.
The Jetsons often asked this question in a way only they could, with jokes like George mashing potatoes and calling it slavery before pressing a button to have a robot do it for him.
In the linked blog post, "The Myth of Automated Learning," the author lays it out clearly:
Thanks to human-factors researchers and the mountain of evidence they've compiled on the consequences of automation for workers, we know that one of three things happens when people use a machine to automate a task they would otherwise have done themselves:
- Their skill in the activity grows.
- Their skill in the activity atrophies.
- Their skill in the activity never develops.
Which scenario plays out hinges on the level of mastery a person brings to the job. If a worker has already mastered the activity being automated, the machine can become an aid to further skill development. It takes over a routine but time-consuming task, allowing the person to tackle and master harder challenges. In the hands of an experienced mathematician, for instance, a slide rule or a calculator becomes an intelligence amplifier.
Of course, the bigger question is how much of this is about the present and how much it will matter in the future. Most of us wouldn't survive if we had to hunt and gather our own food or live without modern conveniences. Maybe some foundational knowledge just won't be as important tomorrow as it is today. Could programming become a dying art form like calligraphy?
At the heart of all this is the question of what's actually worth teaching in a world where AI handles the heavy lifting.
-
AI Is Helping Robotics Move Faster
I can't wait to get my hands on a copy of the new dev kit from Hugging Face. But what's most striking here is how AI is finally bridging the gap to bring general-purpose robotics to life.
It's easy to miss the investment, but under the hood, every major AI developer is quietly figuring out how to teach models not just to understand our world but to interact with it. That means moving from generating text or images to transforming what they "see" or "understand" into actions, like a robotic arm that can pick up a box or a humanoid that can fold your laundry.
As AI-powered robotics becomes more common, it's easy to imagine a workplace where robots are as ubiquitous as laptops. Just like a human, you can give them a prompt or an instruction set, or have them watch you do a task once, and they'll repeat it effortlessly, at a cost humans simply can't match. These systems can work 24/7, needing only electricity to keep them moving.
The doors that AI opens here are tremendous, and much closer than you might think.
-
Is Vibe Coding a Security Risk?
Vibe coding. Vibe marketing. Vibe everything.
It's not just a fad, it's a transformation. We're talking about a 100x boost in individual capability, but here's the kicker: subject matter expertise still matters. This article about Lovable, one of the hottest new vibe coding startups, makes that crystal clear.
In development, simple mistakes like where you store your API keys or how you filter input can make or break your security. It's common sense to most developers that these steps are essential to writing secure code, but at least today, tools like Lovable or Windsurf gloss over this, leaving a production code base open to attack.
I've noticed the same thing when working with other AI tools or writing prompts: you have to be explicit about writing code securely. The vibe can be great and scale human potential by 100x, but until we build in the guardrails, subject matter knowledge will be irreplaceable.
-
A Week of Dueling AI Keynotes
Microsoft Build. Google I/O. One week, two keynotes, and a surprise plot twist from OpenAI. I flew to Seattle for Build, but the week quickly became about something bigger than just tool demos; it was a moment that clarified how fast the landscape is moving and how much is on the line.
For Microsoft, the mood behind the scenes is… complicated. Their internal AI division hasn't had the impact some expected. And the OpenAI partnership, the crown jewel of their AI strategy, feels increasingly uneasy. OpenAI has gone from sidekick to wildcard. Faster releases, bolder moves, and a growing sense that Microsoft is no longer in the driver's seat.
Google has its own tension. It still prints money through ads, but it just lost two major antitrust cases and is deep in the remedies stage, which could change the company forever. Meanwhile, the company is trying to reinvent itself around AI, even as its core business model (search + ads) starts to look shaky in a world where answers come from chat, not clicks.
Let's start with Microsoft.
The Build keynote focused squarely on developers and, more specifically, how AI can make them exponentially more powerful. This idea, AI as a multiplier for small, agile teams, is core to how I think about Vibe Teams. It's not about replacing engineers. It's about amplifying them. And this year, Microsoft leaned in hard.
One of the most exciting announcements was GitHub Copilot Agents. If you've played with tools like Claude Code or Lovable, you know how quickly AI is changing the way we write software. We're moving from line-by-line coding to spec-driven development, where you define what the system should do, and agentic AI figures out how.
Copilot Agents takes that further. You can now assign an issue or bug ticket in GitHub to an AI agent. That agent will create a new branch, tackle the task, and submit a pull request when it's done. You review the PR, suggest edits if needed, and decide whether to merge. No risk to your main codebase. No rogue commits. Just a smart collaborator who knows the rules of the repo.
This isn't just task automation; it's the blueprint for how teams might work moving forward. Imagine a lead engineer writing specs and reviewing pull requests, not typing out every line of code but conducting an orchestra of agentic contributors. These agents aren't sidekicks. They're teammates. And they don't need coffee breaks.
Sam Altman joined Satya Nadella remotely, another telling sign that their relationship is collaborative but increasingly arm's-length. Satya reiterated Microsoft's long view, and Sam echoed something I've said for a while now: "Today's AI is the worst AI you'll ever use." That's both a promise and a warning.
The next wave of announcements went deeper into the Microsoft stack. Copilot is being deeply embedded into Microsoft 365, supported by a new set of Copilot APIs and an Agent Toolkit. The goal? Create a marketplace of plug-and-play tools that expand what Copilot Studio agents can access. It's not just about making Teams smarter; it's about turning every Microsoft app into an environment agents can operate inside and build upon.
Microsoft also announced Copilot Tuning inside Copilot Studio, a major upgrade that lets companies bring in their own data, refine agent behavior, and customize AI tools for specific use cases. But the catch? These benefits are mostly for companies that are all-in on Microsoft. If your team uses Google Workspace or a bunch of best-in-breed tools, the ecosystem friction shows.
Azure AI Studio is also broadening its model support. While OpenAI remains the centerpiece, Microsoft is hedging its bets. They're now adding support for LLaMA, Hugging Face models, Grok, and more. Azure is being positioned as the neutral ground: a place where you can bring your model and plug it into the Microsoft stack.
Now for the real standout: MCP.
The Model Context Protocol, originally developed by Anthropic, is the breakout standard of the year. It's like USB-C for AI: a simple, universal way for agents to talk to tools, APIs, and even hardware. Microsoft is embedding MCP into Windows itself, turning the OS into an agent-aware system. Any app that registers with the Windows MCP registry becomes discoverable. An agent can see what's installed, what actions are possible, and trigger tasks, from launching a design in Figma to removing a background in Paint.
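To make that concrete, here is a minimal sketch of what exposing a tool over MCP looks like using the official Python SDK's FastMCP helper. The server name and the remove_background tool are my own illustrative stand-ins, not the actual Windows registry API:

```python
# A minimal MCP server sketch: any MCP-aware agent that connects can
# discover and call the tool defined below.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("image-utils")  # hypothetical server name

@mcp.tool()
def remove_background(image_path: str) -> str:
    """Stand-in tool: a real server would call into an image editor here."""
    return f"Background removed from {image_path}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio so a host agent can discover and invoke tools
```

Once a server like this is registered, an agent can read the tool's name, signature, and description, which is exactly the discoverability story Microsoft is building into Windows.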
This is more than RPA 2.0. It's infrastructure for agentic computing.
Microsoft also showed how this works with local development. With tools like Ollama and Windows Foundry, you can run local models, expose them to actions using MCP, and allow agents to reason in real time. It's a huge shift, one that positions Windows as an ideal foundation for building agentic applications for business.
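Local inference is already easy to script. Here's a minimal sketch using Ollama's Python client; the model name is an assumption, so substitute whichever model you've pulled locally:

```python
# Chat with a locally hosted model via Ollama.
# Assumes the Ollama daemon is running and `ollama pull llama3` has been done.
import ollama

response = ollama.chat(
    model="llama3",  # placeholder; any locally pulled model works
    messages=[{"role": "user", "content": "Summarize what MCP is in one sentence."}],
)
print(response["message"]["content"])
```

Pair a local model like this with MCP-exposed tools and you have an agent that reasons and acts entirely on your own machine.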
The implication is clear: Microsoft wants to be the default environment for agent-enabled workflows. Not by owning every model, but by owning the operating system they live inside.
Build 2025 made one thing obvious: vibe coding is here to stay. And Microsoft is betting on developers, not just to keep pace with AI, but to define what working with AI looks like next.
Now Google
Where Build was developer-focused, Google I/O spoke to many audiences, sometimes pitching directly to end users and sometimes to developers. Google I/O offered a peek at what an AI-powered future could look like inside the Google ecosystem. It was a broader, flashier stage, but still packed with signals about where they're headed.
The show opened with cinematic flair: a vignette generated entirely by Flow, the new AI-powered video tool built on top of Veo 3. But this wasn't just a demo of visual generation. Flow pairs Veo 3's video modeling with native audio capabilities, meaning it can generate voiceovers, sound effects, and ambient noise, all with AI. And more importantly, it understands film language. Want a dolly zoom? A smash cut? A wide establishing shot with emotional music? If you can say it, Flow can probably generate it.
But Google's bigger focus was context and utility.
Gemini 2.5 was the headliner, a major upgrade to Google's flagship model, now positioned as their most advanced to date. This version is multimodal, supports longer context windows, and powers the majority of what was shown across demos and product launches. Google made it clear: Gemini 2.5 isn't just powering experiments; it's now the model behind Gmail, Docs, Calendar, Drive, and Android.
Gemini 2.5 and the new Google AI Studio offer a powerful development stack that rivals GitHub Copilot and Lovable. Developers can use prompts, code, and multi-modal inputs to build apps, with native support for MCP, enabling seamless interactions with third-party tools and services. This makes AI Studio a serious contender for building real-world, agentic software inside the Google ecosystem.
Google confirmed full MCP support in the Gemini SDK, aligning with Microsoft's adoption and accelerating momentum behind the protocol. With both tech giants backing it, MCP is well on its way to becoming the USB-C of the agentic era.
And then there's search.
Google is quietly testing an AI-first search experience that looks a lot like Perplexity: summarized answers, contextual follow-ups, and real-time data. But it's not the default yet. That hesitation is telling: Google still makes most of its revenue from traditional search-based ads. They're dipping their toes into disruption while trying not to tip the boat. That said, their advantage, access to deep, real-time data from Maps, Shopping, Flights, and more, is hard to match.
Project Astra offered one of the most compelling demos of the week. It's Google's vision for what an AI assistant can truly become: voice-native, video-aware, memory-enabled. In the clip, an agent helps someone repair a bike, looks up receipts in Gmail, makes phone calls unassisted to check inventory at a store, reads instructions from PDFs, and even pauses naturally when interrupted. Was it real? Hard to say. But Google claims the same underlying tech will power upcoming features in Android and Gemini apps. Their goal is to graduate features from Astra as they evolve from showcase to shippable, moving beyond demos into the day-to-day.
Gemini Robotics hinted at what's next: training AI to understand physical environments, manipulate objects, and act in the real world. It's early, but it's a step toward embodied robotic agents.
And then came Google's XR glasses.
Not just the long-rumored VR headset with Samsung, but a surprise reveal: lightweight glasses built with Warby Parker. These aren't just a reboot of Google Glass. They feature a heads-up display, live translation, and deep Gemini integration. That display can silently serve up directions, messages, or contextual cues, pushing them beyond Meta's Ray-Bans, which remain audio-only. These are ambient, spatial, and persistent. You wear them, and the assistant moves with you.
Between Apple's Vision Pro, Meta's Orion prototypes, and now Google XR, one thing is clear: we're heading into a post-keyboard world. The next interface isn't a screen, it's an environment. And Google's betting that Gemini, which they say now leads the field in model performance, will be the AI to power it all.
And XR glasses seemed like the perfect moment for Sam Altman to steal the show…
OpenAI and IO sitting in a tree…
Just as Microsoft and Google finished their keynotes, Sam Altman and Jony Ive dropped the week's final curveball: OpenAI has acquired Ive's AI hardware-focused startup, IO, for a reported $6.5 billion.
There were no specs, no images, and no product name. Just a vision. Altman said he took home a prototype, and it was enough to convince him this was the next step. Ive has described the device as something designed to "fix the faults of the iPhone": less screen time, more ambient interaction. Rumors suggest it's screenless, portable, and part of a family of devices built around voice, presence, and smart coordination.
In a week filled with agents, protocols, and assistant upgrades, the IO announcement raises the question:
What is the future of computing? Are Apple, Google, Meta, and so many other companies right to bet on glasses?
And what if it's not glasses, not headsets, not the wearables we've already seen, but something entirely new? What might the new interface to computing look like?
And with Ive on board, design won't be an afterthought. This won't be a dev kit in a clamshell. It'll be beautiful. Personal. Probably weird in all the right ways.
So where does that leave us?
AI isn't just getting smarter; it's getting physical.
Agents are learning to talk to software through MCP. Assistants are learning your context across calendars, emails, and docs. Models are learning to see and act in the world around them. And now hardware is joining the party.
We're entering an era where the tools won't just be on your desktop; they'll surround you. Support you. Sometimes, speak before you do. That's exciting. It's also unsettling. Because as much as this future feels inevitable, it's still up for grabs.
The question isn't whether agentic AI is coming. It's who you'll trust to build the agent that stands beside you.
Next up: WWDC on June 10. Apple has some catching up to do. And then re:Invent later this year.
-
AI is sprinting, and this week's pace was dizzying!
At Microsoft Build and Google I/O, we saw a flood of dev-focused announcements, new models, better tooling, and smarter assistants. OpenAI shook things up with its surprise acquisition of IO, the design-forward startup from Jony Ive, having already picked up Windsurf and its "vibe coding" platform.
But Anthropic quietly dropped what might be the most impressive update of the week: Claude 4. Analysts are calling it one of the best coding models released to date. And here's where things get really interesting: rumor has it Apple is prepping a Claude 4 integration directly into Xcode. WWDC is around the corner, and if true, that could mark a major shift in how Apple plans to close the AI gap.
Every player is pushing forward. The race isn't just about general intelligence anymore; it's about who can make AI feel seamless, useful, and built-in for developers.
-
The AI Evolution: Approaching Data and Integration
“I’ve seen things you people wouldn’t believe.”
– Roy Batty, Blade Runner
Working in consulting gives you a kind of X-ray vision. You walk into a room with a new client and they start listing all the reasons they're unique: how no one understands their business, how their systems are one-of-a-kind, how the complexity of what they do defies replication. And sure, some of that is true. Every organization has its oddities and things that make it unique. But once you get past the surface, you usually find something that feels familiar: a recognizable business structure layered with years of adaptations, workarounds, and mismatched systems that were never quite built to talk to each other.
When it comes to AI, this same story plays out over and over again. We start talking about the opportunities, where it could go, what it might unlock, and then we hit the same wall: the data. Or more accurately, the data they think they have.
Here are some common refrains I've heard across industries:
• "Those two systems don't talk to each other."
• "That data is stored in PDFs we print and file away."
• "We purge that information every few months because of compliance."
• "It's in SharePoint. Somewhere. Maybe."
• "Our marketing and sales platforms use different ID systems, so we can't link anything."
None of these answers are surprising. What's surprising is how often people are still shocked when their AI project struggles to get off the ground.
In our survey, 44% of business leaders said that their companies are planning to implement data modernization efforts in 2024 to take better advantage of Gen AI.
– PwC, 2024 AI Business Predictions
This chapter is about getting real about your data. Before you can build intelligent systems, you have to integrate them. And before you can integrate them, you have to understand what data you have, where it lives, what shape it's in, and whether it's even useful in the first place.
Most companies assume their data is more usable than it actually is, which creates the Illusion of Readiness.
They picture their systems like neat rows of filing cabinets, all labeled and accessible. The reality is more like a junk drawer: some useful stuff, some random receipts, and a bunch of keys no one remembers the purpose of.
And here's the kicker: AI doesn't just use data. It relies on it. Feeds off it. Becomes it. If you give it bad data, it doesn't know any better. It won't tell you it's confused. It will confidently give you the wrong answer, and that can have consequences.
Before we get into the mechanics of how AI consumes data, we need to talk about what kind of AI we're actually working with.
The term you'll hear a lot is foundation model.
These are large, general-purpose AI models trained on vast swaths of data; think billions upon billions of pieces of information. They've read the internet. Absorbed the classics. Ingested code repositories, encyclopedias, manuals, blogs, customer reviews, Reddit threads, medical journals, and everything in between. Foundation models like ChatGPT, Claude, Gemini, and Llama are built by major AI labs with enormous compute budgets and access to vast training sets. The result? Models with broad, flexible knowledge and the ability to respond to all sorts of queries, even ones they've never explicitly seen before.
To understand how these models work, and how you'll be charged for them, you need to know about tokens.
A token is a unit of language. It's not quite a word, and not quite a character. Most AI models split up text into these tokens to process input and generate output. For example, the phrase "foundation models are smart" becomes something like: "foundation," "models," "are," "smart." Each token costs money to process, both in and out. That means longer prompts, longer documents, and longer replies increase your cost.
But it's not just about billing. Tokens define the model's short-term memory, called the context window. Each model has a limited number of tokens it can "see" at any given time. Once you exceed that limit, earlier parts of the conversation start to fall out of memory. This is why long chats start to lose focus, and why prompts or instruction sets, RAG results, and injected context have to be compact and relevant. The more efficient your language, the smarter your AI becomes.
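If you want to see tokenization for yourself, here's a quick sketch using OpenAI's tiktoken library (an assumption: this shows OpenAI-style tokenization; other vendors split text differently):

```python
# Inspect how a sentence is split into tokens with tiktoken.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding
tokens = enc.encode("foundation models are smart")

print(tokens)                             # the integer token IDs
print([enc.decode([t]) for t in tokens])  # the text fragment behind each token
print(len(tokens), "tokens")              # roughly what you'd be billed for as input
```

Run the same check on a long prompt and you can estimate both your cost and how much of the context window it eats.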
But not every task needs a giant model.
If you're running a chatbot that answers routine FAQs, sorting support tickets, or parsing form submissions, a smaller and faster model will likely serve you better, and at a much lower cost. Foundation models are impressive, but they're not always the most efficient tool in the toolbox. The art of modern AI isn't about grabbing the biggest brain in the room. It's about choosing the right model for the right job, and knowing when to escalate to something more powerful only when the problem truly demands it.
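In practice, that escalation logic can be as simple as a router in front of your models. A minimal sketch, where the model names and keyword heuristic are illustrative assumptions (production routers often use a classifier or a confidence score instead):

```python
# Route routine tasks to a small, cheap model; escalate everything else.
ROUTINE_HINTS = ["faq", "support ticket", "form submission"]

def pick_model(task: str) -> str:
    if any(hint in task.lower() for hint in ROUTINE_HINTS):
        return "small-fast-model"       # placeholder name: cheap and low latency
    return "large-foundation-model"     # placeholder name: escalate when needed

for task in ["Categorize this support ticket", "Draft our five-year data strategy"]:
    print(f"{task} -> {pick_model(task)}")
```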
They're called "foundation" models for a reason: they serve as the base layer on which other, more specialized AI systems are built.
But here's the catch: these models know a lot about everything, but nothing about you.
They can answer general questions, draft emails, and summarize the history of jazz, but they don't know how your company operates, what your customers expect, or how your internal systems are structured. That's your business's knowledge, its edge. And that's what they're missing.
So when I talk to clients about working with foundation models, I often use a simple analogy:
Think of a foundation model like a shrink-wrapped college grad.
They've spent years absorbing general knowledge: history, math, language, computer science, maybe even a few philosophy electives. They're smart. Broadly informed. But they don't yet know how you do things. They've never been inside your business, they don't know your workflows, and they haven't lived through your weird industry quirks.
They're ready to learn. But the quality of that learning depends entirely on how you teach them.
Some of the best-performing companies in the world are known for their onboarding: how they train employees on day one to not just do the job, but to do it their way. With AI, the same principle applies. But instead of crafting training programs, you're curating datasets. Instead of a week-long orientation, you're creating repeatable processes that teach the model how to think and respond like someone inside your organization.
The tools are powerful. But they're blank on the most important stuff: your data, your culture, your expectations.
That's where integration comes in. That's where the real work starts.
So now, with that in mind, let's pause and break down the major ways these foundation models actually consume and interact with your data:
• Fine-Tuning: Adjusting a general model with domain-specific data. It's powerful, but expensive and slow.
• Prompt Injection: Feeding data into the model at runtime, via a prompt. Quick, flexible, great for prototypes.
• RAG (Retrieval-Augmented Generation): Dynamically pulling in relevant documents or facts to answer a question. This is where a lot of real-world business AI is headed, and where integration becomes make-or-break.
Let's clarify something right out of the gate: you're not picking and choosing one method from a menu. You're using all of them, maybe not all at once, but certainly over time, across use cases, or layered within a single product. Each of these approaches (fine-tuning, prompt injection, and RAG) has its strengths and, more importantly, its purpose. Prompt injection can be a great place to prototype or test assumptions. RAG lets you pull in fresh, contextual data in real time. Fine-tuning adds deeper understanding over time. Each method puts different pressure on your data infrastructure, your team, and your expectations. But they all share one common requirement: accessible, well-governed data.
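To ground the RAG idea, here is a deliberately tiny sketch of the pattern: retrieve the most relevant document, then inject it into the prompt. The bag-of-words "embedding" is a toy stand-in; real systems use an embedding model and a vector database:

```python
# Toy RAG: retrieve the best-matching document, then build a grounded prompt.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Refund policy: customers may return items within 30 days of delivery.",
    "Shipping: orders leave the warehouse within two business days.",
]

question = "How long do customers have to return an item?"
best = max(docs, key=lambda d: cosine(embed(question), embed(d)))  # retrieval step
prompt = f"Answer using only this context:\n{best}\n\nQuestion: {question}"
print(prompt)  # this augmented prompt is what actually goes to the model
```

Swap the toy pieces for a real embedding model and a document store and the shape of the system stays the same, which is exactly why accessible, well-governed data is the prerequisite.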
And that's the part where most companies start to sweat.
But before we get deep into integration strategies or data lake architectures, we need to rewind a bit, because the way we talk about prompting itself is already limiting how we think…
That's just a slice of the chapter, and a small window into the work ahead.
The AI Evolution isn't about theory or hype. It's a real-world guide for leaders who want to build smarter orgs, prep their teams, and actually use AI without the hand-waving.
If this hit home, the full book goes deeper with practical frameworks, strategy shifts, and the patterns I've seen across startups, enterprises, and everything in between.
Grab your copy of The AI Evolution here.
And if you do leave a review, it means a lot.
-
Bye SEO, and Hello AEO
If you caught my recent LinkedIn post, I've been sounding the alarm on SEO and search's fading dominance. Not because it's irrelevant, but because the game is changing fast.
For years, SEO (Search Engine Optimization) has been the foundation of digital discovery. But we're entering the age of Google Zero: a world where fewer clicks make it past the search results page. Google's tools (Maps, embedded widgets, AI Overviews) are now hogging the spotlight. And here's the latest signal: in April, Apple's Eddy Cue said that Safari saw its first-ever drop in search queries via the URL bar. That's huge. Safari is the default browser for iPhones and commands over half of U.S. mobile browser traffic. A dip here means a real shift in how people are asking questions.
I've felt it in my own habits. I still use Google, but I've started using Perplexity, ChatGPT, or Claude to ask my questions. It's not about keywords anymore, it's about answers. That brings us to a rising idea: AEO, or Answer Engine Optimization.
Just like SEO helped businesses get found by Google, AEO is about getting found by AI. Tools like Perplexity and ChatGPT now crawl the open web to synthesize responses. If your content isn't surfacing in that layer, you're invisible to the next generation of search.
It's not perfect yet. For something like a recipe, the AI might not cite you at all. But for anything involving a recommendation or purchase decision, it matters a lot.
Take this example: I was recently looking for alternatives to QuickBooks. In the past, I'd Google it and skim through some SEO-packed roundup articles. Now? I start with Perplexity or ChatGPT. Both gave me actual product suggestions, citing sources from review sites, Reddit threads, and open web content. The experience felt more tailored. More direct.
If you sell anything, whether it's a SaaS product, a service, or a physical item, this is the new front door. It's not just about ranking on Google anymore. It's about being visible to the large language models that shape what users see when they ask.
So, you're probably asking: how do you optimize for an answer engine? The truth is, the rules are still emerging. But here's what we know so far:
• Perplexity leans on Bing. It uses Microsoft's search infrastructure in the background. So your Bing SEO might matter more than you think.
• Sources are visible. Perplexity shows where it pulled info from: Reddit, Clutch, Bench, review sites, etc. If your product is listed or mentioned there, you've got a shot.
• Wikipedia still rules. Most AI models treat it as a trusted source. If your business isn't listed, or your page is thin, you're missing an easy credibility signal.
But the biggest move you can make? Start asking AI tools what they know about you.
Try it. Ask ChatGPT or Perplexity: "What are the top alternatives to [your product]?" or "What is [your business] known for?" See what surfaces. That answer tells you what the AI thinks is true. And just like with Google, you can shape that reality by shaping the sources it learns from.
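If you want to make that check repeatable, here's a minimal sketch using the OpenAI Python client; the model name, product, and business in the questions are placeholders to swap for your own:

```python
# Periodically ask a model what it "knows" about your brand and its rivals.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

questions = [
    "What are the top alternatives to QuickBooks?",  # placeholder product
    "What is Acme Accounting known for?",            # placeholder business
]

for q in questions:
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": q}],
    )
    print(q, "\n", reply.choices[0].message.content, "\n")
```

Log the answers over time and you have a crude but useful dashboard of how visible you are to the answer engines.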
This shift won't happen overnight. But it's already happening.
Don't just optimize for search. Optimize for answers.
-
Welcome to the Vibe Era
Early in the AI revolution, I sat across from a founder pitching a low-code solution that claimed to eliminate the need for developers. I was skeptical; after all, I'd heard this pitch before. As an engineer who's spent a career building digital products, I figured it was another passing trend.
I was wrong. And worse, I underestimated just how fast this change would come.
Today, we're in a new era. The skills and careers many of us have spent years refining may no longer be the most valuable thing we bring to the table. Not because they've lost value, but because the tools have shifted what's possible.
We're in an era where one person, equipped with the right AI stack, can match the output of ten. Vibe coding. Vibe marketing. Vibe product development. Small teams (and sometimes solo operators) are launching polished prototypes, creative campaigns, and full-on businesses fast.
For marketers, the traditional team structure is collapsing.
- Need product photos? Generate them with ChatGPT or Meta Imagine.
- Need a product launch video? Runway or Sora has you covered.
- Need a voiceover? Use ElevenLabs.
- Need custom music? Suno AI.
- Need someone to bounce ideas off of? Make an AI agent that thinks with you.
What used to take a full team now takes… vibes and tools.
The same applies to developers. Tools like Lovable let you spec and ship an MVP in minutes. I recently used it to build a simple app from scratch, and it took me less than an hour. It's not perfect, but it's good enough to rethink how we define "development."
As I often say in my talks, we are still in the AOL dial-up phase of this revolution. This version of AI you're using today is the worst it will ever be.
Even if you think, "I could write better code" or "that copy isn't quite there," remember: these tools get better with every click and every release. Critiquing their limits is fair, but betting against their progress? That's dangerous.
Shopify's CEO recently said, "Before hiring someone, I ask: Is this a job AI can do?" That's not just a hiring philosophy; it's a survival strategy. It's catching on fast.
That leads to a deeper question: If AI can handle the tactical and mechanical parts of your work, then whatâs left that only you or I can do?
For marketers, it's the story behind the product.
For developers, it's solving human problems, not just writing code.
For writers, it's the reporting, not the sentences.
(Just read The Information's deep dive on Apple's AI stumbles. AI could've written it, but it couldn't have reported it.)
This is the heart of the vibe era. It's not about replacing humans; it's about refocusing them. On feel. On instinct. On taste.
AI does the repetitive parts. You bring the spark.
In essence, vibe marketing (and vibe everything) is a shift in what matters most: you focus on crafting emotional resonance, the vibe, while AI handles execution.
It's tailor-made for teams that want to scale fast and connect authentically in a world moving faster than ever.
To borrow a metaphor:
Stephen King isn't great because of just the words on the page.
He's great because of the ideas he puts there.
And that's where the human magic still lives.
-
The Worst It Will Ever Be
One thing I often say in my talks is that this version of AI you're using today is the worst it will ever be.
It's not a knock; it's a reminder. The pace of progress in AI is staggering. Features that were laughably bad just a year or two ago have quietly evolved into shockingly capable tools. Nowhere is this more obvious than with image generation.
Designers used to love dunking on AI-generated images. We'd share screenshots of twisted hands, off-kilter eyes, and text that looked like a keyboard sneezed. And for good reason: it was bad. But release by release, the edges have been smoothed. The hands make sense. The faces feel grounded. And the text? It finally looks like, well, text.
Miyazaki's Legacy Meets AI
This all came to mind again recently when an old clip of Hayao Miyazaki started circulating. If you're not familiar, Miyazaki is the legendary co-founder of Studio Ghibli, the anime studio behind Spirited Away, My Neighbor Totoro, and Princess Mononoke. His art style is iconic: whimsical, delicate, and instantly recognizable. Ghibli's work isn't just beautiful; it's emotional. It feels human.
So when Miyazaki was shown an early AI-generated video years ago, his response was brutal:
"I strongly feel that this is an insult to life itself."
Oof. But here we are in 2025, and now people are using ChatGPT's new image generation feature to recreate scenes in Studio Ghibli's style with eerie accuracy.
Of course, I had to try it.
And I have to admit, it's impressive. Not just the style replication, but the fact that the entire composition gets pulled into that world. The lighting, the mood, the characters… the tool doesn't just apply a filter. It understands the vibe.
Muppets, Comics, and Infographics, Oh My
Inspired by the experiment, I went down the rabbit hole.
First: Muppets. I blame my older brother James for this idea, but I started generating Muppet versions of our family and a few friends. The results were weirdly good: cheery felt faces, button eyes, and backgrounds that still somehow made sense. It even preserved details from the original photos, just muppet-ified.
(The Muppet version of one of my favorite photos; you can see it on my about page.)
Then I wondered: could this work for layout-driven design? What about infographics?
This was the prompt: "I need an infographic that shows the sales funnel process I suggest companies use – use this as inspiration."
Again, it nailed it. The AI could not only generate visuals but correctly layer and position readable, realistic text onto the images, a feat that was basically impossible in the early days of AI art.
So I pushed further: comics.
Could I recreate the clean simplicity of XKCD or the style of something like the popular Far Side comic strip?
(The original XKCD comic is much, much better… ChatGPT and I made a version of my favorite Far Side comic… I hear this is where the brightest minds work.)
From Toy to Tool
You can't snap your fingers and expect instant results. But it's no longer just a toy. It's a creative partner, and if you're a designer, marketer, or content creator, it's something you should be exploring now.
And here's the big takeaway: even if the images don't quite reach your final vision, they're now good enough to prototype, storyboard, or inspire a full design process. The creative bar keeps rising, and so does the floor.
So if you haven't played with ChatGPT's image generation yet, try it out. Generate something weird. Make a comic. Turn yourself into a Muppet. Just remember: this is the worst version of the tool you'll ever use.
-
Rise of the Reasoning Models
Last week, I sat on a panel at the Maryland Technology Council's Technology Transformation Conference to discuss Data Governance in the Age of AI alongside an incredible group of experts. During the Q&A, someone asked about DeepSeek and how it changes how we think about data usage, a question that speaks to a fundamental shift happening in AI.
When I give talks on AI, I often compare foundation models, AI models trained on vast datasets, to a high school or college graduate entering the workforce. These models are loaded with general knowledge, and just like a college graduate or a master's degree holder, they may be specialized for particular industries.
If this analogy holds, models like ChatGPT and Claude are strong generalists, but what makes a company special is its secret sauce: the unique knowledge, processes, and experience that businesses invest heavily in teaching their employees. That's why large proprietary datasets have been key to training AI, ensuring models understand an organization's way of doing things.
DeepSeek changes this approach. Unlike traditional AI models trained on massive datasets, DeepSeek was built on a much smaller dataset, partly by distilling knowledge from other AI models (essentially asking OpenAI and others questions). Lacking billions of training examples, it had to adapt, which led to a breakthrough in reasoning. Instead of relying solely on preloaded knowledge, DeepSeek used reinforcement learning: a process of quizzing itself, reasoning through problems, and improving iteratively. The result? It became smarter without needing all the data upfront.
If we go back to that college graduate analogy, we've all worked with that one person who just gets it. Someone who figures things out quickly, even if they don't have the same background knowledge as others. That's what's happening with AI right now.
Over the last few weeks, every major AI company seems to be launching "reasoning models," possibly following DeepSeek's blueprint. These models use a process called Chain of Thought (CoT), which allows them to analyze problems step by step, effectively "showing their work" as they reason through complex tasks. Think of it like a math teacher asking students to show their work, except now, AI can do the same, giving transparency into its decision-making process.
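You can elicit this step-by-step behavior from most chat models with nothing more than an instruction. A minimal sketch using the OpenAI Python client (the model name is a placeholder; dedicated reasoning models do this natively):

```python
# Ask a model to show its work before answering.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "Reason step by step, then give the final answer on its own line."},
        {"role": "user", "content": "A store sells pens in packs of 12 for $3. What do 60 pens cost?"},
    ],
)
print(response.choices[0].message.content)  # the intermediate steps are the "shown work"
```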
Don't get me wrong: data is still insanely valuable. Now, the question is: can a highly capable reasoning model using Chain of Thought deliver answers as effectively as a model pre-trained on billions of data points?
My guess? Yes.
This changes how companies may train AI models in the future. Instead of building massive proprietary datasets, businesses may be able to pull pre-built reasoning models off the shelf, just like hiring the best intern, and put them to work with far less effort.
-
Writing an AI-Optimized Resume
Earlier this week, Meta began a round of job cuts and has signaled that 2025 will be a tough year. But they're far from alone: Microsoft, Workday, Sonos, Salesforce, and several other tech companies have also announced layoffs, leaving thousands of professionals searching for new roles.
In the DMV (DC-Maryland-Virginia), the federal government is also facing unprecedented headwinds, with DOGE taking the lead on buyout packages and the shutdown of entire agencies, including USAID.
Like many of you, I have friends and family who were impacted, and one thing I hear over and over again? The job application process has become a nightmare.
Why Job Searching Feels Broken
For many, job hunting now means submitting tons of applications per week, navigating AI-powered screening tools, and attempting to "game" Applicant Tracking Systems (ATS) just to get noticed. If you've ever optimized a website for search engines (SEO), you already understand the challenge: your resume now needs to be written for AI just as much as for human reviewers.
As someone who has been a hiring manager, I know why these AI-powered filters exist. Companies receive an overwhelming number of applications, making AI screening tools a necessary first layer of evaluation, but they also mean that perfectly qualified candidates might never make it past the system.
To get past these filters, job seekers need to think like SEO strategists, using resume optimization techniques to increase their chances of reaching an actual hiring manager.
AI Resume Optimization Tips
To level the playing field, resume-scoring tools have been developed to help applicants evaluate their resumes against job descriptions and ATS filters. These tools offer insights such as:
• Include the job title in a prominent header.
• Match listed skills exactly as they appear in the job description.
• Avoid image-heavy or complex formats; ATS systems are bots parsing text, not designers.
• Optimize keyword density to align with job descriptions while keeping it readable.
• Ensure your resume meets the minimum qualifications; AI won't infer missing experience.
Once you've optimized your resume with these strategies, AI-powered tools can help you analyze your resume against job descriptions to see how well it matches and provide targeted improvement suggestions.
Testing AI Resume Scoring with Jobscan
To put this into practice, I submitted my resume to Jobscan to see how well it aligned with a Chief Technology Officer (CTO) job posting in Baltimore that I found on ZipRecruiter.
I'll admit, Jobscan was a bit finicky at first and pushed hard for an upgrade, but once I got my resume and job description uploaded, it generated a report analyzing my match score and offering several helpful suggestions to improve my resume for the job description I provided.
The results provided a rating based on my resume's content and offered useful insights, including:
- Hard and soft skills mentioned in the job description that I should add.
- Missing sections or details that could improve my resume's match.
- Formatting adjustments (like date formats) to improve ATS readability.
It also produced a very detailed report with suggestions to improve readability and keyword density. For example, the words "collaboration" and "innovation" were each used three times in the job description, but my resume mentioned collaboration once and innovation six times.
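That keyword-gap check is easy to approximate yourself. A minimal sketch (the file paths are placeholders for wherever your documents live as plain text):

```python
# Compare how often target keywords appear in a job description vs. a resume.
import re
from collections import Counter

def word_counts(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

job_description = open("job_description.txt").read()  # placeholder path
resume = open("resume.txt").read()                    # placeholder path

jd_counts, resume_counts = word_counts(job_description), word_counts(resume)

for term in ["collaboration", "innovation"]:
    print(f"{term}: {jd_counts[term]} in job description, {resume_counts[term]} in resume")
```

A big mismatch in either direction, whether a missing term or one keyword stuffed six times, is exactly what tools like Jobscan flag.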
The tool also offers an option to provide a URL to the job listing; it will identify the ATS being used and provide additional suggestions specific to what it knows about that tool.
ChatGPT for Resume Optimization
These days, many of us have access to a free or paid version of AI tools like ChatGPT or Claude, so I decided to create a prompt and see how well it could help me. I crafted a prompt that spoke to my needs and provided it with the same resume and job description. For reference, here is the prompt I used:
I need to optimize my resume for an AI-powered Applicant Tracking System (ATS) to improve my chances of passing the initial screening process. Below is the job description for the role I'm applying for, followed by my current resume.
Please analyze my resume against the job description and provide the following:
1. A match score or summary of how well my resume aligns with the job description.
2. Key skills, keywords, or qualifications from the job posting that are missing or need to be emphasized.
3. Suggestions for improving formatting and structure to ensure compatibility with ATS filters.
4. Any red flags or areas where my resume could be better tailored to the role.
Jobscan rated my resume at 49%, pointing out missing skills, formatting issues, and keyword gaps. ChatGPT, on the other hand, rated it between 80-85%, focusing more on content alignment than on rigid formatting rules. It also had great suggestions and naturally picked up on skills missing from my resume that exist in the job description.
While the rankings differed, the recommendations and issues ChatGPT pointed out were similar to Jobscan's results, just not laid out as simply in a dashboard. The final recommendations section gives a pretty good overview of ChatGPT's suggestions.
Beating the ATS Game
Most resumes now pass through an ATS before reaching a human hiring manager. Understanding how to optimize for these filters is critical in a competitive job market.
In conclusion, AI and resume-scanning tools have the potential to level the playing field for job seekers, provided they know how to leverage them effectively. And if traditional methods fall short, why not turn the tables? Use AI to go on the offensive, automating your job applications and maximizing your opportunities. Tools like Lazy Apply let AI handle the applications for you, so you can focus on landing the right role.