Latest Thoughts
-
📻 What’s Really New with OpenAI?
Host Jason Michael Perry sits down with Ben Slavin, an AI entrepreneur and researcher, to unpack what OpenAI’s DevDay 2025 conference really means. From Sora 2, the next-generation AI video tool, to ChatGPT 5, and connectors, they explore how OpenAI is shifting from product to platform and what that means for developers, creators, and the […]
-
📻 What is a Photo?
Host Jason Michael Perry sits down with Joel Benge, a communications strategist and author, to ask what a photo even means in an age where AI can rewrite reality. From fake principal voicemails to AI-generated films, Perry and Benge explore how synthetic media is reshaping trust and what that means for security, family, and everyday […]
-
🧠 Why Workslop Misses the Point on AI at Work
AI-generated workslop might be an issue, but the real performance gains are being drastically understated.
AI is the great equalizer. It takes F- or D-level work and pushes it up to a C or even a B. For recruiters and managers, that changes the signals they used to rely on. Spelling mistakes, awkward phrasing, or obvious gaps in formatting once made it easy to weed out weak candidates. AI erases those clues. Just like phishing training that teaches us to look for typos and clunky wording, the cues we’ve built our BS detectors around no longer apply. Slop is moving further up the pipeline than it once did.
But the productivity gains from AI are still understated. As Ethan Mollick has pointed out, there is a growing stigma around admitting how much experts use AI. Spot unusual phrasing or a telltale punctuation mark and some people instantly dismiss the work as machine-made. That pushes AI use underground. People draft in personal tools or resort to shadow IT so they can get the benefit without the stigma. The final product looks like a polished draft, but few admit how much of it came from working alongside AI.
The reality is more people are using AI than want to own it. These tools do not replace critical thinking or fill in gaps of real experience. They are exponentially more valuable in the hands of someone who knows their domain than someone who does not. Training people to use AI to expand their value, not as a magical crutch, is the difference between slop and real output.
-
🧠 H-1B, Remote Work, and the RTO Paradox
Reading the news about a $100K fee on H-1B visas, I kept seeing the same question pop up: why hire someone on an H-1B at all instead of just building an offshore team?
Early in my career, the answer was obvious: H-1B hires let you expand the expertise of your local team and grow culture right where you sit. Outsourcing chips away at that. Building a team in another country means learning a new market, a new culture, and a whole new operating model.
For decades, offices enforced geographic restrictions. If you wanted to compete for the best jobs, you moved to the meccas like San Francisco or New York City. For some roles, that may never change. But when we push back on RTO, we also remove those restrictions. Suddenly, the best person might live anywhere, as long as they can work golden hours or travel when needed.
But here’s the twist: remote work changed everything.
I run my own business now, and while it is nice when people are local, it does not stop me from working with team members in different states or countries. I am usually looking for the best person I can afford for the role. Local is lagniappe (a little something extra), not the requirement.
That is where RTO gets interesting. For some companies and roles, being in-person may feel safer, or may even reduce the competition for jobs. For others, it might limit access to talent in ways that hurt more than it helps.
So maybe the real question is not whether RTO is good or bad, but whether the geographic restrictions it enforces are worth the tradeoff.
-
Savings Unlock Calculator
The Savings Unlock Calculator looks at AI through a different lens: time, efficiency, and “salary not spent.” It shows how much capacity your team can unlock without adding headcount by freeing up FTEs, saving hours, and raising efficiency. The point isn’t just cost-cutting; it’s about finding new room to grow with the team you already have. Try it out!
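The math behind a calculator like this is simple enough to sketch. The version below is my own back-of-the-envelope reconstruction, not the calculator’s actual implementation: hours saved per adopter roll up into freed FTEs and “salary not spent.”

```python
# Back-of-the-envelope sketch (my own, not the calculator's real code):
# hours saved per week become freed FTE capacity, valued at average salary.

def savings_unlocked(team_size, avg_salary, hours_saved_per_week,
                     adoption_rate=1.0, hours_per_week=40):
    """Estimate annual capacity unlocked without adding headcount."""
    adopters = team_size * adoption_rate
    ftes_freed = adopters * hours_saved_per_week / hours_per_week
    salary_not_spent = ftes_freed * avg_salary  # value of freed capacity
    return ftes_freed, salary_not_spent

# 50 people, $90K average salary, 60% adoption, 4 hours saved per week each.
ftes, value = savings_unlocked(team_size=50, avg_salary=90_000,
                               hours_saved_per_week=4, adoption_rate=0.6)
print(f"{ftes:.1f} FTEs freed, ~${value:,.0f} in capacity unlocked")
```

The point of framing it this way is that the output is capacity, not layoffs: the freed hours are room to grow with the team you already have.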
-
Growth Unlock Calculator
I built this Growth Unlock Calculator to test how AI-driven productivity gains could flow directly into top-line revenue. By plugging in team size, average revenue per employee, and adoption rates, you can see how different impact levels translate into potential growth. Try it out!
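As with the savings version, the underlying arithmetic is easy to sketch. This is a hypothetical reconstruction, not the calculator’s actual code: the adopters’ productivity impact is applied to revenue per employee.

```python
# Hypothetical reconstruction of the growth math (not the calculator's
# actual code): adopters' productivity impact applied to revenue per head.

def growth_unlocked(team_size, revenue_per_employee, adoption_rate, impact):
    """Estimate potential top-line revenue unlocked by AI productivity.

    `impact` is the fractional productivity gain for adopters, e.g. 0.10.
    """
    baseline = team_size * revenue_per_employee
    unlocked = team_size * adoption_rate * revenue_per_employee * impact
    return baseline, unlocked

# 100 people at $200K revenue each, half adopting, at a 10% impact level.
baseline, extra = growth_unlocked(100, 200_000, adoption_rate=0.5, impact=0.10)
print(f"baseline ${baseline:,.0f}, potential growth ${extra:,.0f}")
```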
-
🧠 AI Is Making Questionable Food Look Delicious
Some of the best AI use cases aren’t flashy; they’re just quietly helpful.
If you’ve ever ordered from DoorDash or Uber Eats, you’ve probably seen some truly questionable food photos. Now, Uber’s using AI to re-plate dishes, enhance low-quality images, and summarize reviews into clear, useful descriptions.
-
🧠 AI Pricing Isn’t the Problem
AI use cases aren’t always about novelty. Sometimes the power is simple: process more information, make better decisions, and act immediately. That’s exactly what sparked controversy last week when Delta announced plans to use AI to personalize airfare pricing. After public pushback, Delta clarified that it was using a partner to dynamically adjust prices based on demand and competitors, something airlines have done for decades.
What’s changed is the speed.
Before AI, we saw the same pattern in retail, with stores like Best Buy and Walmart rolling out e-ink price labels to make price changes cheaper, faster, and less error-prone. What used to take days now takes minutes. These systems weren’t about AI. They were about enabling action at scale.

Today, companies are building AI-powered pricing systems that go even further, integrating with ERP and supplier data to adjust prices in real time. Working with groups like PerryLabs, they’re pushing updates across hundreds of products or stores multiple times a day. When margins shift due to something like a tariff change or supplier shortage, the system responds. Fast. Strategically. Without waiting for a human in the loop.
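To make that concrete, here is an illustrative repricing rule in Python. It is my own sketch, not any vendor’s or PerryLabs’ actual system: it restores a target margin after a cost shock, with a guardrail on how far any single update can move the price.

```python
# Illustrative repricing rule (my own sketch, not any real system):
# restore a target margin after a supplier cost change, but cap how far
# one update can move the price so swings stay controlled.

def reprice(current_price, new_cost, target_margin=0.30, max_move=0.05):
    """Return a new price aiming at `target_margin`, capped at `max_move`."""
    ideal = new_cost / (1 - target_margin)   # price that hits the margin
    cap = current_price * max_move           # largest allowed single move
    delta = max(-cap, min(cap, ideal - current_price))
    return round(current_price + delta, 2)

# A tariff pushes cost from $14.00 to $15.40; the price climbs toward the
# margin-restoring $22.00, but only by the 5% per-update cap.
print(reprice(current_price=20.00, new_cost=15.40))
```

Run this on every cost or demand signal and you get exactly the pattern described above: many small, fast, rule-bound adjustments instead of a human batching price reviews.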
That’s the pattern: AI isn’t changing how business is done or how pricing has worked for centuries; it’s just enabling those decisions to happen faster than ever before.
-
🧠 We’re Still in the AOL Days of AI
AOL’s roots go back to 1983; Amazon didn’t show up until a decade later, and Google nearly two. That’s the kind of timeline we’re on with AI: not just early, but early enough that we still haven’t figured out how to use it at work.
According to a new AP poll, 60% of U.S. adults have used AI to search for information, but only 37% have used it at work. The gap isn’t about capability, it’s about confusion. Companies are rolling out vague governance policies that say, “don’t use ChatGPT with company data,” but then fail to offer secure, internal tools connected to their systems. The result? No context, no value, and no adoption.
When my team at PerryLabs talks with companies, we see it again and again: well-meaning governance that blocks data access without a real plan to replace it. That creates hallucinations, frustration, and a quiet surge in shadow IT as employees turn to whatever tools they can find. It’s like choosing not to give your team a performance boost and then acting surprised when you fall behind.
-
🧠 The hardest part of AI right now? Making the promise possible.
Walmart’s move toward super agents is one of the clearest examples of where this space is heading. Agents that don’t just answer questions, but take action. These aren’t JUST chatbots. They’re orchestrators: agents that talk to other agents, trigger workflows, and pull the right data at the right time to get real work done.
But you’ll notice something missing: details on how they’re actually doing it.

Everyone’s using the buzzwords (super agents, orchestration, real-time, action layers), but the tooling to make it all work takes real work to build. It’s not a data lake, and it’s definitely not plug-and-play.
In The AI Evolution, I point to data lakes as a foundational layer, and they are. But they’re built for reporting, not action. What agentic AI needs is a layer that’s both readable and executable, with access to real-time context and permissions.
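Here is a minimal Python sketch of what such a layer could look like; all the names (`Tool`, `ActionLayer`, the scope strings) are hypothetical. The tool catalog is the “readable” half an agent plans against, and the permission-gated `execute` is the “executable” half.

```python
# Hypothetical sketch of a readable-and-executable action layer.
# Names and scopes are invented for illustration, not a real framework.
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    description: str        # what the agent reads when planning
    scopes: set             # permissions required to execute
    fn: callable            # the real work behind the tool

@dataclass
class ActionLayer:
    tools: dict = field(default_factory=dict)

    def register(self, tool):
        self.tools[tool.name] = tool

    def describe(self):
        """The 'readable' half: a catalog the agent can plan against."""
        return {t.name: t.description for t in self.tools.values()}

    def execute(self, name, caller_scopes, **kwargs):
        """The 'executable' half: run only if the caller holds the scopes."""
        tool = self.tools[name]
        if not tool.scopes <= caller_scopes:
            raise PermissionError(f"{name} requires {tool.scopes}")
        return tool.fn(**kwargs)

layer = ActionLayer()
layer.register(Tool("refund_order", "Refund an order by id",
                    {"orders:write"}, lambda order_id: f"refunded {order_id}"))
print(layer.execute("refund_order", {"orders:write"}, order_id="A123"))
```

The design choice that matters is the pairing: every tool carries both a description for planning and a permission boundary for acting, so an agent can see everything but only touch what its caller is allowed to touch.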
If you’re talking with companies that aren’t saying this, you’re building a huge data swamp, one that won’t unlock the things Walmart says it has. The reality is that most teams are duct-taping workflows together with brittle APIs or pushing dashboards behind a chat interface and calling it an agent.
That’s the space where I often find our work at PerryLabs: not just demoing agents, but building the underlying layers to actually deploy them. For lots of companies, the scaffolding just is not there yet.
-
Development Needs an AI-First Rewrite

I’ve spent the better part of the last two decades running or being part of development teams, and as a developer I’m experimenting heavily with AI tools. I’m not alone; plenty of people in the tech space are doing the same. So it comes as little surprise that these teams are seeing the biggest impact from the ongoing shift to AI-native development.
A few weeks ago, I caught up with a dev manager friend, and without prompting, we jumped straight into a conversation about how fast everything is changing. For years, we’ve focused on building processes and procedures to help human teams collaborate and build complex software applications. But in the world of AI, the model is changing, and sometimes dramatically. What used to be a multi-person effort can now be a single human orchestrating multiple AI tools.
A perfect example: microservices.
The trend of breaking up complex applications into smaller pieces has been hot for a while now. It lets organizations build specialized teams around each part of the application and gives those teams more autonomy in how they operate. That made sense when your team was all human.
But in an AI-first world, it can actually make things harder.
A single repository might only represent a slice of the full application. And if you’re using an AI tool to review code, it can’t easily load up the full context of how everything connects. Sure, you can help the AI out with great documentation or give it access to multiple repos, but at the core, microservices are optimized for distributed human teams. They’re not necessarily optimized for AI tools that rely on full-graph context.
Thatâs why, in some cases, building a more complex monolithic application may actually be a better approach for AI-native teams building with (and for) AI tools.
Another place I’ve been experimenting is in seeding AI with persistent context, so I don’t have to be overly verbose every time I assign it a dev task. I caught myself over-explaining things in prompts after the AI would do something unexpected or take an approach I didn’t want. So I started adding AI-specific README files at the root of my projects. Not for humans, but for AI.
These files include architecture decisions, key concepts, and even logs of recent changes or commits. When I’m using Claude inside VS Code, for example, I sometimes hit rate limits. If I need to switch to another model like Gemini or ChatGPT to pick up the work, I don’t want to start from scratch. Those AI README files give them the full context without having to re-prompt from zero.
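As a concrete (and entirely hypothetical) illustration of what goes into one of these files, here is a small Python helper that renders architecture notes, key decisions, and recent changes into a single AI-facing context file. The file name, headings, and example content are my own invention, not a standard.

```python
# Hypothetical helper that renders an AI-facing context file of the kind
# described above. File name, headings, and contents are invented examples.

def render_ai_readme(architecture, decisions, recent_changes):
    """Render a context file an AI assistant can ingest mid-project."""
    lines = ["# AI-CONTEXT.md (for AI assistants, not humans)", ""]
    lines += ["## Architecture", architecture, ""]
    lines += ["## Key decisions"] + [f"- {d}" for d in decisions] + [""]
    lines += ["## Recent changes"] + [f"- {c}" for c in recent_changes]
    return "\n".join(lines)

text = render_ai_readme(
    architecture="Monolithic Django app; Postgres; Celery for async jobs.",
    decisions=["Prefer server-rendered templates over a SPA",
               "All money amounts stored as integer cents"],
    recent_changes=["2025-10-02: moved auth checks into middleware"],
)
print(text)
```

Generating the file from structured data rather than hand-editing it means a pre-commit hook or script can keep the “recent changes” section current automatically.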
This is the kind of data that might be overwhelming for a human reviewer, but trivial for an AI model. And that’s part of the mindset shift. Developers are used to writing comments or notes that are for humans. But AI models benefit from verbose, detailed explanations. Concise isn’t always better.
AI also gives us an opportunity to finally shed some of the legacy technical debt that’s haunted development teams for decades. When Amazon first announced its AI assistant Q back in 2023, one of the headline features was its ability to upgrade old Java applications, moving from decades-old versions to the latest in minutes. Since then, I’ve seen multiple companies use AI to fully rewrite legacy systems into modern languages in weeks instead of years.
For some of the best programmers I know, this is a little distressing. They see code as craft, as poetry. And just like writers grappling with ChatGPT, the idea that AI can churn out working code with no elegance can feel like a loss. But beauty doesn’t always move the bottom line. Sure, I could pay top dollar for a handcrafted implementation. But the okay code produced by an AI tool that works, ships faster, and gets us to revenue or cost savings sooner? That’s usually the better trade.
And it’s not just the dev process; entire products are on the line. My friend put it bluntly: they used to rely on a range of SaaS tools because it wasn’t worth the effort to build them in-house. But now? A decent engineer with AI support can quickly build tools that used to require a paid subscription. That’s an existential threat for SaaS platforms that haven’t built a deep enough moat.
Take WordPress, for example. I know developers who used to rely on dozens of plugins, anything from SEO tweaks to form builders to table generators. Today, with AI support, they’re writing that functionality directly into themes or making their own plugins with AI. It’s faster, lighter, and more tailored to what they need. The value of external tools, or those one-line NPM packages we used to reach for just to save time, starts to diminish when you’ve got an army of AI bots doing the work for you. The calculus changes.
I always say in my talks that this is still the AOL phase of AI: it’s early. But what’s happening inside development teams is a preview of what’s coming for every other function in the business. We’ve spent decades refining human-first processes that work across teams, tools, and orgs. Now we’re going to have to rethink those processes, one by one, for an AI-first future.




