Thoughts
-
Agents as a Service is not it.
If you’ve heard of OpenClaw, Perplexity’s Personal Computer, or NVIDIA’s new Agent Toolkit with NemoClaw, you may also know that the real power behind these tools comes from skills: modular capabilities that tell an agent how to do a specific job.
The original bet was that enterprises would build agents, integrate them into their software, and we would simply use them. What we are seeing instead is that people want to create their own agents, with their own personality, memory, and context, and then connect them to skills from the systems they already use.
For enterprises, this is bigger than it sounds. These agents can sit on top of your ERP, finance platform, CRM, and project tools and finally do what we have been doing manually for years: connecting data across platforms that were never designed to talk to each other. Correlating things, producing reports, surfacing answers that used to require three people and a spreadsheet.
Some of my own agents connect HubSpot, Google Workspace, Ramp, and Jira, as well as our custom MCP servers, so I can approve an expense report from a Slack message or a voice command.
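To make the idea concrete, here is a minimal sketch of what a "skill" can look like: a named capability with a description the agent can read and a function it can invoke. The names and the `approve_expense` helper are hypothetical illustrations, not a real platform API.

```python
from dataclasses import dataclass
from typing import Callable

# A skill pairs a machine-readable description with an executable action.
@dataclass
class Skill:
    name: str
    description: str
    run: Callable[..., str]

def approve_expense(report_id: str, approver: str) -> str:
    # In a real deployment this would call the finance platform's API.
    return f"Expense {report_id} approved by {approver}"

# The agent's skill registry: it reads descriptions to decide what to call.
skills = {
    "approve_expense": Skill(
        name="approve_expense",
        description="Approve a pending expense report by ID.",
        run=approve_expense,
    )
}

# An agent handling "approve expense RPT-42" looks up and executes the skill.
result = skills["approve_expense"].run("RPT-42", "chris")
print(result)
```

The point of the pattern is that the agent stays yours (personality, memory, context) while the skills come from the systems you already use.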
Skills as a service may be the new SaaS!
Agents-as-a-service are poised to rewire the software industry and corporate structures. Enterprise software vendors can see the agentic train coming and are quickly investing to stay ahead of it. For CIOs, the evolution of AaaS will change not just how the tools are used, but how they’re priced, integrated, and secured.
-
The AI gas meter is spinning.
AI adoption in businesses is still low, but as it scales, so does token consumption. Think of it like a gas meter that starts spinning the moment your team fires up these tools.
Sitting across from people rolling out AI lately, I keep hearing the same concern: mounting token costs with little visible ROI. Every prompt, every response, every automated workflow burns tokens. So companies are starting to ask: does everyone need access to the most powerful models? Should we really be burning frontier compute just to fix the grammar in an email?
Not every task needs the latest shiny model. Sorting email, summarizing a doc, answering routine questions, you simply don’t need Claude Opus 4.6 or ChatGPT 5.4 for that.
One solution is to use tools like Ollama that let you run open-weight models locally for everyday tasks at near zero cost, saving your frontier budget for work that actually needs it.
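A tiered setup can be as simple as a router that sends routine task types to a cheap local model and reserves the frontier tier for everything else. This is a minimal sketch; the model names and task categories are placeholders, not recommendations.

```python
# Map task categories to model tiers so routine work goes to a cheap
# local model and only complex work hits a paid frontier model.
LOCAL_MODEL = "llama3:8b"       # e.g., an open-weight model served via Ollama
FRONTIER_MODEL = "frontier-xl"  # hypothetical hosted frontier model

ROUTINE_TASKS = {"grammar_fix", "email_sort", "summarize_doc", "faq"}

def pick_model(task_type: str) -> str:
    """Return the cheapest model tier that can handle the task."""
    return LOCAL_MODEL if task_type in ROUTINE_TASKS else FRONTIER_MODEL

print(pick_model("grammar_fix"))     # routine work stays local
print(pick_model("legal_analysis"))  # complex work goes to the frontier tier
```

In practice the routing signal could be the task type, the user's role, or a cheap classifier, but the cost logic is the same: the gas meter only spins fast for the work that deserves it.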
Tiered AI, matching the model to the task or job role, might be the most underrated cost strategy in the room right now. It’s what allows businesses to expand AI access across the org instead of cutting usage just to save at the pump.
-
Why Workslop Misses the Point on AI at Work
AI-generated workslop might be an issue, but the real performance gains are being drastically understated.
AI is the great equalizer. It takes F or D level work and pushes it up to a C or even B. For recruiters and managers, that changes the signals they used to rely on. Spelling mistakes, awkward phrasing, or obvious gaps in formatting once made it easy to weed out weak candidates. AI erases those clues. Just like phishing training that teaches us to look for typos and clunky wording, the cues we’ve built BS detectors around no longer apply. Slop is moving further up the pipeline than it once did.
But the productivity gains from AI are still understated. As Ethan Mollick has pointed out, there is a growing stigma around admitting how much experts use AI. Spot an unusual phrasing or a certain punctuation mark and some people instantly dismiss the work as machine-made. That pushes AI use underground. People draft in personal tools or resort to shadow IT so they can get the benefit without the stigma. The final product looks like a polished draft, but few admit how much of it came from working alongside AI.
The reality is more people are using AI than want to own it. These tools do not replace critical thinking or fill in gaps of real experience. They are exponentially more valuable in the hands of someone who knows their domain than someone who does not. Training people to use AI to expand their value, not as a magical crutch, is the difference between slop and real output.
-
H-1B, Remote Work, and the RTO Paradox
Reading the news about a $100K fee on H-1B visas, I kept seeing the same question pop up: why hire someone on an H-1B at all instead of just building an offshore team?
Early in my career, the answer was obvious: H-1B hires let you expand the expertise of your local team and grow culture right where you sit. Outsourcing chips away at that. Building a team in another country means learning a new market, a new culture, and a whole new operating model.
For decades, offices enforced geographic restrictions. If you wanted to compete for the best jobs, you moved to the meccas like San Francisco or New York City. For some roles, that may never change. But when we push back on RTO, we also remove those restrictions. Suddenly, the best person might live anywhere, as long as they can work golden hours or travel when needed.
But here’s the twist: remote work changed everything.
I run my own business now, and while it is nice when people are local, it does not stop me from working with team members in different states or countries. I am usually looking for the best person I can afford for the role. Local is lagniappe (a little something extra), not the requirement.
That is where RTO gets interesting. For some companies and roles, being in-person may feel safer, or may even reduce the competition for jobs. For others, it might limit access to talent in ways that hurt more than it helps.
So maybe the real question is not whether RTO is good or bad, but whether the geographic restrictions it enforces are worth the tradeoff.
-
Savings Unlock Calculator
The Savings Unlock Calculator looks at AI through a different lens: time, efficiency, and “salary not spent.” It shows how much capacity your team can unlock without adding headcount by freeing up FTEs, saving hours, and raising efficiency. The point isn’t just cost-cutting, it’s about finding new room to grow with the team you already […]
-
Growth Unlock Calculator
I built this Growth Unlock Calculator to test how AI-driven productivity gains could flow directly into top-line revenue. By plugging in team size, average revenue per employee, and adoption rates, you can see how different impact levels translate into potential growth. Try it out!
-
AI Is Making Questionable Food Look Delicious
Some of the best AI use cases aren’t flashy, they’re just quietly helpful.
If you’ve ever ordered from DoorDash or Uber Eats, you’ve probably seen some truly questionable food photos. Now, Uber’s using AI to re-plate dishes, enhance low-quality images, and summarize reviews into clear, useful descriptions.
-
AI Pricing Isn’t the Problem
AI use cases aren’t always about novelty. Sometimes the power is simple: process more information, make better decisions, and act immediately. That’s exactly what sparked controversy last week when Delta announced plans to use AI to personalize airfare pricing. After public pushback, Delta clarified that it was using a partner to dynamically adjust prices based on demand and competitors, something airlines have done for decades.
What’s changed is the speed.
Before AI, we saw the same pattern in retail stores like Best Buy and Walmart rolling out e-ink price labels to make price changes cheaper, faster, and less error-prone. What used to take days now takes minutes. These systems weren’t about AI. They were about enabling action at scale.

Today, companies are building AI-powered pricing systems that go even further, integrating with ERP and supplier data to adjust prices in real time. Working with groups like PerryLabs, they’re pushing updates across hundreds of products or stores multiple times a day. When margins shift due to something like a tariff change or supplier shortage, the system responds. Fast. Strategically. Without waiting for a human in the loop.
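The core repricing logic behind a system like that can be surprisingly small. Here is an illustrative sketch (the numbers and target margin are made up): when landed cost changes, recompute the price that preserves the target gross margin.

```python
# Hold a target gross margin as costs move: margin = (price - cost) / price,
# so price = cost / (1 - margin). Figures below are illustrative only.
def reprice(unit_cost: float, target_margin: float) -> float:
    """Return the shelf price that yields the target gross margin."""
    return round(unit_cost / (1 - target_margin), 2)

old_price = reprice(unit_cost=8.00, target_margin=0.30)
# A 10% tariff raises the landed cost from $8.00 to $8.80;
# the system can push the new price the same day.
new_price = reprice(unit_cost=8.80, target_margin=0.30)
print(old_price, new_price)
```

The hard part isn't the arithmetic, it's wiring this to live ERP and supplier feeds and to the label or listing that displays the result, which is exactly where the e-ink and integration work comes in.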
That’s the pattern: AI isn’t changing how business is done or how pricing has worked for centuries, it’s just enabling those decisions to happen faster than ever before.
-
We’re Still in the AOL Days of AI
AOL launched in 1983, Amazon didn’t show up until a decade later, and Google nearly two decades. That’s the kind of timeline we’re on with AI, not just early, but early enough that we still haven’t figured out how to use it at work.
According to a new AP poll, 60% of U.S. adults have used AI to search for information, but only 37% have used it at work. The gap isn’t about capability, it’s about confusion. Companies are rolling out vague governance policies that say, “don’t use ChatGPT with company data,” but then fail to offer secure, internal tools connected to their systems. The result? No context, no value, and no adoption.
When my team at PerryLabs talks with companies, we see it again and again: well-meaning governance that blocks data access, without a real plan to replace it. That creates hallucinations, frustration, and a quiet surge in shadow IT as employees turn to whatever tools they can find. It’s like choosing not to give your team a performance boost, and acting surprised when you fall behind.
-
The hardest part of AI right now? Making the promise possible.
Walmart’s move toward super agents is one of the clearest examples of where this space is heading. Agents that don’t just answer questions, but take action. These aren’t JUST chatbots. They’re orchestrators: agents that talk to other agents, trigger workflows, and pull the right data at the right time to get real work done.
But you’ll notice something missing, details on how they’re actually doing it.

Everyone’s using the buzzwords: super agents, orchestration, real-time, action layers. But the tooling to make it all work takes work to build. It’s not a data lake, and it’s definitely not plug-and-play.
In The AI Evolution, I point to data lakes as a foundational layer, and they are. But they’re built for reporting, not action. What agentic AI needs is a layer that’s both readable and executable, with access to real-time context and permissions.
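What "readable and executable with permissions" might look like, in miniature: a registry where every action carries a description an agent can read and a permission that gets checked before anything runs. All of the names here (`refund_order`, `finance:write`) are hypothetical.

```python
from typing import Callable

# Registry of agent-callable actions: readable (descriptions can be listed)
# and executable (calls are gated by a required permission).
REGISTRY: dict[str, dict] = {}

def action(name: str, requires: str):
    """Decorator that registers a function as an action with a required permission."""
    def wrap(fn: Callable):
        REGISTRY[name] = {"fn": fn, "requires": requires, "doc": fn.__doc__}
        return fn
    return wrap

@action("refund_order", requires="finance:write")
def refund_order(order_id: str) -> str:
    """Issue a refund for an order."""
    return f"refunded {order_id}"

def execute(name: str, caller_perms: set[str], *args) -> str:
    entry = REGISTRY[name]
    if entry["requires"] not in caller_perms:
        raise PermissionError(f"{name} requires {entry['requires']}")
    return entry["fn"](*args)

# An agent holding the right permission can act; one without it is blocked.
print(execute("refund_order", {"finance:write"}, "ORD-7"))
```

A data lake answers the "readable" half; the "executable" half, with real-time context and permission checks like this one, is the layer most teams haven't built yet.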
If the companies you’re talking with aren’t saying this, you’re building a huge data swamp that won’t unlock the things Walmart says it has. The reality is that most teams are duct-taping workflows together with brittle APIs or pushing dashboards behind a chat interface and calling it an agent.
That’s where I often find our work at PerryLabs: not just demoing agents, but building the underlying layers to actually deploy them. For lots of companies, the scaffolding just is not there yet.
-
Development Needs an AI-First Rewrite
I’ve spent the better part of the last two decades running or being a part of development teams, and as a developer I’m experimenting a lot with using AI tools, and I’m not alone, lots of people in the tech space are. So it comes as little surprise that these same teams are seeing the […]
-
AI is the Great Equalizer and the Ultimate Multiplier
OpenAI just released its first productivity report, based on real-world deployments of AI tools across consulting firms, government agencies, legal teams, and more. The results? Proof that AI isn’t just speeding up tasks, it’s fundamentally shifting how work gets done.
- Consulting: AI made consultants 25% faster, completing 12% more tasks with 40% higher quality. The biggest gains? Lower performers, up 43%.
- Legal services: Productivity jumped 34% to 140%, especially in complex work like persuasive writing and legal analysis.
- Government: Pennsylvania state workers saved 95 minutes per day, a full workday back every week, by using AI tools.
- Education: U.S. K–12 teachers using AI saved nearly 6 hours per week, the equivalent of six extra teaching weeks a year.
- Customer service: Call center agents became 14% more productive, with junior staff seeing the biggest gains.
- Marketing: Content creators using AI saved 11+ hours per week on copy, ideas, and assets.
These aren’t just stats. They’re signals that AI doesn’t just help people work faster. We’re entering a moment where tools do more than help, they amplify.
AI can take a D player and make them a B. What does it do for your A and B players? It turns them into 10x or 100x powerhouses. Not just faster, but more scalable and more consistent.
In my book The AI Evolution, I talk about this movement as a shift to Vibe Teams. Small, AI-augmented groups that pair human strategy with agentic execution. They’re cross-functional. They move fast. They scale without headcount. And they don’t just adopt AI, they build with it.
-
The Fraud Risk No One’s Ready For
I agree with Sam Altman, AI-powered deepfakes and voice cloning are one of the most alarming risks we’re facing right now.
Here in Baltimore, we’ve already seen how real this can get. Last year, a Pikesville high school principal was accused of making racist comments—until it came out that the entire recording was a voice clone. After that, I experimented with cloning my own voice and doctoring a video of myself. It wasn’t perfect, but it was close enough to be unsettling.
Systems that rely on voice to ID customers just aren’t safe anymore. Tools like ElevenLabs have made cloning voices frighteningly easy, and if you’ve ever said a few words online or been recorded at an event, chances are you’ve given someone enough data to copy you.
Scammers are already using these tools in the wild, and combined with the mountain of personal info already floating around online, it’s easy to imagine how these attacks are getting harder to detect. Security training used to teach us that phishing emails were easy to spot. Look for misspellings, weird tone, bad grammar. But now? The grammar’s perfect. The details are specific. And the sender sounds like your boss, your partner, or your kid.
So what can you do? Start with the basics. And remember, it’s not just about locking down your own account. Train your team. Talk to your family. Assume your voice and your data are already out there, and focus on making it harder for attackers to weaponize them.
Use multi-factor authentication. Slow down before responding to anything unusual. And always, always, verify through a second channel if something feels off. No one’s going to get upset if you hang up and call them back. Or shoot them a text to double-check. That extra step might be the thing that saves you from all this mess.
-
Not All Citations Can Be Trusted
If you’re not familiar with the term “hallucination,” it’s what happens when an AI confidently gives you an answer… that’s completely made up. It might sound convincing. It might even cite a source. But sometimes, a quick Google search is all it takes to realize that study, case, or quote never existed.
That’s the scary part. And it’s not going away.
In fact, hallucinations are getting worse, not better. And the bigger issue is that AI is now quietly woven into the background of nearly everything we read: articles, presentations, reports, sometimes even court filings.
That’s why I think we’ll start seeing a shift where citations aren’t just taken at face value anymore. Judges, editors, and program officers need to start asking: “Did you actually check this? Is this citation real?”
The good news? There’s a growing wave of tools designed to help with exactly that:
- Sourcely scans your writing and suggests credible, traceable sources, or flags weak ones that don’t hold up.
- Scite Assistant reviews your citations and shows whether later research supports or contradicts the claim.
- Perplexity isn’t perfect, but I often use it as a quick smell test when an AI-generated response feels a little too slick.
- LegalEase, Westlaw, and other legal tools are already helping firms and courts catch citation errors before they make it into the record.
Just because a model says it, and even if it drops a polished citation, doesn’t mean it’s true. We used to say, “Just take a look, it’s in a book.” Now? You better take a second look, and make sure that book actually exists.
If your work is making claims – especially ones tied to legal precedent, public health, or science – then checking those claims has to become part of the workflow.
Truth still matters. And in the age of generative content, fact-checking is the new spellcheck.
-
The Problem with Data
Everyone has the same problem, and its name is data. Nearly every business functions in one of two core data models: In reality, most businesses operate somewhere in between. One system becomes the “system of truth,” while others orbit it, each with its own partial view of the business. That setup is manageable until AI […]
-
Funny Timing, Right?
You know what’s a strange coincidence?
This week, both OpenAI and Perplexity announced new web browsers, bold moves that could reshape how we interface with the internet. On the exact same day, Ars Technica published a story about browser extensions quietly turning nearly a million people’s browsers into scraping bots.
And it got me thinking…
Owning the browser is a convenient way to bypass a lot of the safeguards platforms like Cloudflare have put up to stop scraping and bot traffic. Maybe that’s not the intention, but it sure is a funny coincidence.
-
What Even Is AGI?
In my AI talks, I often reference OpenAI CTO Mira Murati’s 5 steps toward AGI. I like her framing, not just because it emphasizes thinking and reasoning, but because it anchors AGI in action.
AGI isn’t just about answering questions. It’s AI that can manage tasks, run systems, and ultimately operate and orchestrate an organization, made up of both humans and machines.
That’s the vision. But defining what counts as AGI? That’s still a moving target.
Even the leaders at the top AI labs can’t agree on what the finish line looks like. Is it general reasoning across any domain? Is it autonomy? Is it the ability to pursue goals without human prompting?
Ars Technica has a great deep dive on that exact question: What actually is this thing we’re racing toward, and how will we know when we’ve built it?
-
Follow the Money. AI Is Already Paying Off.
I mentioned in a recent post that more and more CEOs are starting to warn of smaller teams and reduced hiring needs as they see the success of AI. It’s early in the transition, and that means use cases and hard metrics are still trickling in.
But Microsoft’s Chief Commercial Officer just stepped out to offer some proof:
“Microsoft saved over $500 million in its call centers this past year by using AI.”
The real power of AI, especially agentic AI, isn’t just chatbots or flashy demos. It’s giving software the ability to act on your data. To do things your experts would normally do. At scale.
This space is still early, but the direction is clear. Microsoft, Meta, Google, and Salesforce are baking agents into everything, from support tools to software development to customer outreach. And it’s working.
The upside? Massive efficiency, better performance, and tools that extend your best people.
-
CEOs Are Finally Saying the Quiet Part Out Loud: AI Means Smaller Teams
It started with Andy Jassy warning investors that Amazon would become more efficient by using AI to reduce manual effort. But now, more CEOs are saying the quiet part out loud: AI is enabling smaller teams and leaner companies.
This isn’t theoretical. It’s already underway.
In the past year, we’ve seen a clear trend: rising layoffs across the tech sector, especially in management, operations, and recruiting roles. The message across these moves has been consistent: cut layers, flatten orgs, and use AI to close the gap.
From Microsoft:
“We continue to implement organizational changes necessary to best position the company and teams for success in a dynamic marketplace… focused on reducing layers with fewer managers and streamlining processes, products, procedures, and roles to become more efficient.”
From Meta:
“Zuckerberg has stated that Meta is developing AI systems to replace mid-level engineers… By 2025, he expects Meta to have AI that can function as a ‘midlevel engineer,’ writing code, handling software development, and replacing human roles.”
Google has “thinned out” recruiters and admins, explicitly citing AI tools. Duolingo laid off portions of its translation and language staff in early 2024 after aggressively shifting to AI for core product features.
This trend is especially visible in tech because these companies are building the very tools driving the shift. They see the impact first, and are adjusting accordingly. But this won’t stop at software firms. AI is reshaping workflows and org design across every sector.
In my book, I call this the rise of “vibe teams”: small, empowered units supported by AI agents that amplify productivity far beyond traditional headcount. This model isn’t aspirational. It’s becoming operational reality.
For anyone outside the tech industry, this should read as a warning. We’re watching the early adopters recalibrate, and what follows will be a broader redefinition of roles, team structures, and management itself.
Harvard Business Review recently published a powerful piece that underscores the urgency: the manager’s role is changing. Traditional org structures no longer make sense when AI can scale a team, and organizations become flatter.
Nvidia’s CEO summed it up well:
“It’s not AI that will take your job, but someone using AI that will.”
And the best time to start adapting is now.
-
Cloudflare just entered the AI monetization chat.
One of the core tensions with SEO and now AEO (AI Engine Optimization) is access. If you block bots from crawling your content, you lose visibility. But if you don’t block them, your content gets scraped, summarized, and served elsewhere without credit, clicks, or revenue.
For publishers like Reddit, recipe sites, and newsrooms, that’s not just a tech issue, it’s an existential one. Tools like Perplexity and ChatGPT summarize entire pages, cutting publishers out of the traffic (and ad revenue) loop.
Now Cloudflare’s testing a new play: charge the bots. Their private beta lets sites meter and monetize how AI tools crawl their content. It’s early, but it signals a bigger shift. The market’s looking for a middle ground between “open” and “owned.” And the real question is—who gets paid when AI learns from your work?
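Metering crawlers starts with something simple: identifying AI bot traffic by user agent and counting it per crawler, so a site could bill or throttle accordingly. This is a bare-bones sketch; real bot detection also relies on IP ranges and verified signatures, and the user-agent strings below are just examples.

```python
from collections import Counter

# Known AI crawler tokens to look for in the User-Agent header
# (illustrative list; production systems verify far more carefully).
AI_CRAWLERS = ("GPTBot", "PerplexityBot", "ClaudeBot")
usage: Counter = Counter()

def meter(user_agent: str) -> bool:
    """Count the request against its crawler; return True if it was an AI bot."""
    for bot in AI_CRAWLERS:
        if bot in user_agent:
            usage[bot] += 1
            return True
    return False

meter("Mozilla/5.0 (compatible; GPTBot/1.0)")   # metered
meter("Mozilla/5.0 (X11; Linux x86_64)")        # ordinary visitor, not metered
print(dict(usage))
```

Once you have per-crawler counts, the policy layer on top (charge, throttle, or block) becomes a business decision instead of a technical one, which is exactly the middle ground between “open” and “owned.”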