Jason Michael Perry — Thoughts on Tech & Things

Latest Thoughts

  1. 🧠 AI is the Great Equalizer and the Ultimate Multiplier

    OpenAI just released its first productivity report, based on real-world deployments of AI tools across consulting firms, government agencies, legal teams, and more. The results? Proof that AI isn’t just speeding up tasks, it’s fundamentally shifting how work gets done.

    • Consulting: AI made consultants 25% faster, completing 12% more tasks with 40% higher quality. The biggest gains? Lower performers, up 43%.
    • Legal services: Productivity jumped 34% to 140%, especially in complex work like persuasive writing and legal analysis.
    • Government: Pennsylvania state workers saved 95 minutes per day, a full workday back every week, by using AI tools.
    • Education: U.S. K–12 teachers using AI saved nearly 6 hours per week, the equivalent of six extra teaching weeks a year.
    • Customer service: Call center agents became 14% more productive, with junior staff seeing the biggest gains.
    • Marketing: Content creators using AI saved 11+ hours per week on copy, ideas, and assets.

    These aren’t just stats. They’re signals. We’re entering a moment where AI tools don’t just help people work faster, they amplify them.

    AI can take a D player and make them a B player. But what it does for your A and B players is even more dramatic: it turns them into 10x or 100x powerhouses. Not just faster, but more scalable and more consistent.

    In my book The AI Evolution, I talk about this movement as a shift to Vibe Teams. Small, AI-augmented groups that pair human strategy with agentic execution. They’re cross-functional. They move fast. They scale without headcount. And they don’t just adopt AI, they build with it.

  2. 🧠 The Fraud Risk No One’s Ready For

    I agree with Sam Altman: AI-powered deepfakes and voice cloning are among the most alarming risks we’re facing right now.

    Here in Baltimore, we’ve already seen how real this can get. Last year, a Pikesville high school principal was accused of making racist comments—until it came out that the entire recording was a voice clone. After that, I experimented with cloning my own voice and doctoring a video of myself. It wasn’t perfect, but it was close enough to be unsettling.

    Systems that rely on voice to ID customers just aren’t safe anymore. Tools like ElevenLabs have made cloning voices frighteningly easy, and if you’ve ever said a few words online or been recorded at an event, chances are you’ve given someone enough data to copy you.

    Scammers are already using these tools in the wild, and combined with the mountain of personal info already floating around online, it’s easy to see why these attacks are getting harder to detect. Security training used to teach us that phishing emails were easy to spot: look for misspellings, weird tone, bad grammar. But now? The grammar’s perfect. The details are specific. And the sender sounds like your boss, your partner, or your kid.

    So what can you do? Start with the basics. And remember, it’s not just about locking down your own account. Train your team. Talk to your family. Assume your voice and your data are already out there, and focus on making it harder for attackers to weaponize them.

    Use multi-factor authentication. Slow down before responding to anything unusual. And always, always, verify through a second channel if something feels off. No one’s going to get upset if you hang up and call them back. Or shoot them a text to double-check. That extra step might be the thing that saves you from all this mess.

  3. 🧠 Not All Citations Can Be Trusted

    If you’re not familiar with the term “hallucination,” it’s what happens when an AI confidently gives you an answer that’s completely made up. It might sound convincing. It might even cite a source. But sometimes, a quick Google search is all it takes to realize that study, case, or quote never existed.

    That’s the scary part. And it’s not going away.

    In fact, hallucinations are getting worse, not better. And the bigger issue is that AI is now quietly woven into the background of nearly everything we read: articles, presentations, reports, sometimes even court filings.

    That’s why I think we’ll start seeing a shift where citations aren’t just taken at face value anymore. Judges, editors, and program officers need to start asking: “Did you actually check this, is this citation real?”

    The good news? There’s a growing wave of tools designed to help with exactly that:

    • Sourcely scans your writing and suggests credible, traceable sources, or flags weak ones that don’t hold up.
    • Scite Assistant reviews your citations and shows whether later research supports or contradicts the claim.
    • Perplexity isn’t perfect, but I often use it as a quick smell test when an AI-generated response feels a little too slick.
    • LegalEase, Westlaw, and other legal tools are already helping firms and courts catch citation errors before they make it into the record.

    Just because a model says it, and even if it drops a polished citation, doesn’t mean it’s true. We used to say, “Just take a look, it’s in a book.” Now? You better take a second look, and make sure that book actually exists.

    If your work is making claims – especially ones tied to legal precedent, public health, or science – then checking those claims has to become part of the workflow.
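
    If you want to fold that checking into the workflow today, even a few lines of code can catch the worst offenders. Here’s a minimal sketch that uses Crossref’s public REST API to confirm a cited DOI actually resolves to a real record; the example DOIs below are just illustrative.

    ```python
    import requests

    def doi_exists(doi: str) -> bool:
        """Return True if Crossref has a record for this DOI.

        Crossref's public API serves the work's metadata when a DOI is
        registered, and a 404 when it is not.
        """
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    # Flag any DOI in a draft's reference list that doesn't resolve.
    references = ["10.1038/nature14539", "10.9999/this-was-hallucinated"]
    for doi in references:
        status = "ok" if doi_exists(doi) else "NOT FOUND - check by hand"
        print(f"{doi}: {status}")
    ```

    A check like this only proves the source exists; tools like Scite Assistant go further and test whether the source actually supports the claim. But even this level of automation weeds out citations invented from whole cloth.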

    Truth still matters. And in the age of generative content, fact-checking is the new spellcheck.

  4. The Problem with Data

    Everyone has the same problem, and its name is data. Nearly every business functions in one of two core data models:

    1. ERP-centric: One large enterprise system (like SAP, NetSuite, or Microsoft Dynamics) acts as the hub for inventory, customers, finance, and operations. It’s monolithic, but everything is in one place.
    2. Best-of-breed: A constellation of specialized tools – Salesforce or HubSpot for CRM, Zendesk for support, Shopify or WooCommerce for commerce, QuickBooks for finance – all loosely stitched together, if at all.

    In reality, most businesses operate somewhere in between. One system becomes the “system of truth,” while others orbit it, each with its own partial view of the business. That setup is manageable until AI enters the picture.

    AI is data-hungry. It works best when it can see across your operations. But ERP vendors often make interoperability difficult by design. Their strategy has been to lock you in and make exporting or connecting data expensive or complex.

    That’s why more organizations are turning to data lakes or lakehouses, central repositories that aggregate information from across systems and make it queryable. Platforms like Snowflake and Databricks have grown quickly by helping enterprises unify fragmented data into one searchable hub.

    When done well, a data lake gives your AI tools visibility across departments: product, inventory, sales, finance, customer support. It’s the foundation for better analytics and better decisions.

    But building a good data lake isn’t easy. I joke in my book The AI Evolution that a bad data lake is just a data swamp: a messy, unstructured dump that’s more confusing than helpful. Without a clear data model and strategy for linking information, you’re just hoarding bytes.

    Worse, the concept of data lakes was designed pre-AI. They’re great at storing and querying data, but not great at acting on it. If your AI figures out that you’re low on Product X from Supplier Y, your data lake can’t place the order; it can only tell you.

    This is where a new approach is gaining traction: API orchestration. Instead of just storing data, you build connective tissue between systems using APIs, letting AI both see and do across tools. Think of it like a universal translator (or Babelfish): systems speak different languages, but orchestration helps them understand each other.

    For example, say HubSpot has your customer data and Shopify has your purchase history. By linking them via API, you can match users by email and give AI a unified view. Better yet, if those APIs allow actions, the AI can update records or trigger workflows directly.
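
    As a sketch of what that connective tissue looks like, here’s a simplified Python version of the HubSpot-plus-Shopify example. The endpoints mirror each platform’s public REST APIs, but treat the exact URLs, auth, and field names as assumptions; a real integration needs pagination, retries, and proper credential handling.

    ```python
    import requests

    HUBSPOT_TOKEN = "..."    # placeholder credentials
    SHOPIFY_TOKEN = "..."
    SHOP = "example-store"   # hypothetical Shopify store name

    def hubspot_contacts_by_email() -> dict:
        """Fetch CRM contacts from HubSpot and index them by email."""
        resp = requests.get(
            "https://api.hubapi.com/crm/v3/objects/contacts",
            headers={"Authorization": f"Bearer {HUBSPOT_TOKEN}"},
            params={"properties": "email,firstname,lastname"},
            timeout=10,
        )
        contacts = resp.json().get("results", [])
        return {c["properties"]["email"].lower(): c
                for c in contacts if c["properties"].get("email")}

    def shopify_orders_by_email() -> dict:
        """Fetch recent Shopify orders and group them by customer email."""
        resp = requests.get(
            f"https://{SHOP}.myshopify.com/admin/api/2024-10/orders.json",
            headers={"X-Shopify-Access-Token": SHOPIFY_TOKEN},
            timeout=10,
        )
        grouped = {}
        for order in resp.json().get("orders", []):
            email = (order.get("email") or "").lower()
            if email:
                grouped.setdefault(email, []).append(order)
        return grouped

    # Join the two systems on email: one record per customer that an AI
    # agent (or an analyst) can reason over without hopping between tools.
    contacts = hubspot_contacts_by_email()
    orders = shopify_orders_by_email()
    unified = {email: {"contact": c, "orders": orders.get(email, [])}
               for email, c in contacts.items()}
    ```

    The design choice that matters is the join key. Email works because both systems share it; in messier stacks you may need a shared customer ID or a matching service before orchestration pays off.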

    Big players like Mulesoft are building enterprise-grade orchestration platforms. But for smaller orgs, tools like Zapier and n8n are becoming popular ways to connect their best-of-breed stacks and make data more actionable.

    The bottom line: if your data lives in disconnected systems, you’re not alone. This is the reality for nearly every business we work with. But investing in data cleanup and orchestration now isn’t just prep, it’s the first step needed to truly unlock the power of AI.

    That’s exactly why we built the AI Accelerator at PerryLabs. It’s designed for companies stuck in this in-between state where the data is fragmented, the systems don’t talk, and the AI potential feels just out of reach. Through the Accelerator, we help you identify those key data gaps, unify and activate your systems, and build the orchestration layer that sets the stage for real AI performance. Because the future of AI isn’t just about having the data—it’s about making it usable.

  5. 🧠 Funny Timing, Right?

    You know what’s a strange coincidence?

    This week, both OpenAI and Perplexity announced new web browsers, bold moves that could reshape how we interface with the internet. On the exact same day, Ars Technica published a story about browser extensions quietly turning nearly a million people’s browsers into scraping bots.

    And it got me thinking…

    Owning the browser is a convenient way to bypass a lot of the safeguards platforms like Cloudflare have put up to stop scraping and bot traffic. Maybe that’s not the intention, but it sure is a funny coincidence.

  6. 🧠 What Even Is AGI?

    In my AI talks, I often reference former OpenAI CTO Mira Murati’s 5 steps toward AGI. I like her framing, not just because it emphasizes thinking and reasoning, but because it anchors AGI in action.

    AGI isn’t just about answering questions. It’s AI that can manage tasks, run systems, and ultimately operate and orchestrate an organization, made up of both humans and machines.

    That’s the vision. But defining what counts as AGI? That’s still a moving target.

    Even the leaders at the top AI labs can’t agree on what the finish line looks like. Is it general reasoning across any domain? Is it autonomy? Is it the ability to pursue goals without human prompting?

    Ars Technica has a great deep dive on that exact question: What actually is this thing we’re racing toward, and how will we know when we’ve built it?

  7. 🧠 Follow the Money. AI Is Already Paying Off.

    I mentioned in a recent post that more and more CEOs are starting to warn of smaller teams and reduced hiring needs as they see the success of AI. It’s early in the transition, and that means use cases and hard metrics are still trickling in.

    But Microsoft’s Chief Commercial Officer just stepped out to offer some proof:

    “Microsoft saved over $500 million in its call centers this past year by using AI.”

    The real power of AI, especially agentic AI, isn’t just chatbots or flashy demos. It’s giving software the ability to act on your data. To do things your experts would normally do. At scale.

    This space is still early, but the direction is clear. Microsoft, Meta, Google, and Salesforce are baking agents into everything, from support tools to software development to customer outreach. And it’s working.

    The upside? Massive efficiency, better performance, and tools that extend your best people.

  8. 🧠 CEOs Are Finally Saying the Quiet Part Out Loud: AI Means Smaller Teams

    It started with Andy Jassy warning investors that Amazon would become more efficient by using AI to reduce manual effort. But now, more CEOs are saying the quiet part out loud: AI is enabling smaller teams and leaner companies.

    This isn’t theoretical. It’s already underway.

    In the past year, we’ve seen a clear trend: rising layoffs across the tech sector, especially in management, operations, and recruiting roles. The message across these moves has been consistent: cut layers, flatten orgs, and use AI to close the gap.

    From Microsoft:

    “We continue to implement organizational changes necessary to best position the company and teams for success in a dynamic marketplace… focused on reducing layers with fewer managers and streamlining processes, products, procedures, and roles to become more efficient.”

    GeekWire

    From Meta:

    “Zuckerberg has stated that Meta is developing AI systems to replace mid-level engineers… By 2025, he expects Meta to have AI that can function as a ‘midlevel engineer,’ writing code, handling software development, and replacing human roles.”

    The HR Digest

    Google has “thinned out” recruiters and admins, explicitly citing AI tools. Duolingo laid off portions of its translation and language staff in early 2024 after aggressively shifting to AI for core product features.

    This trend is especially visible in tech because these companies are building the very tools driving the shift. They see the impact first, and are adjusting accordingly. But this won’t stop at software firms. AI is reshaping workflows and org design across every sector.

    In my book, I call this the rise of “vibe teams”: small, empowered units supported by AI agents that amplify productivity far beyond traditional headcount. This model isn’t aspirational. It’s becoming operational reality.

    For anyone outside the tech industry, this should read as a warning. We’re watching the early adopters recalibrate, and what follows will be a broader redefinition of roles, team structures, and management itself.

    Harvard Business Review recently published a powerful piece that underscores the urgency: the manager’s role is changing. Traditional org structures no longer make sense when AI can scale a team, and organizations become flatter.

    Nvidia’s CEO summed it up well:

    “It’s not AI that will take your job, but someone using AI that will.”

    And the best time to start adapting is now.

  9. 🧠 Cloudflare just entered the AI monetization chat.

    One of the core tensions with SEO and now AEO (AI Engine Optimization) is access. If you block bots from crawling your content, you lose visibility. But if you don’t block them, your content gets scraped, summarized, and served elsewhere without credit, clicks, or revenue.

    For publishers like Reddit, recipe sites, and newsrooms, that’s not just a tech issue, it’s an existential one. Tools like Perplexity and ChatGPT summarize entire pages, cutting publishers out of the traffic (and ad revenue) loop.

    Now Cloudflare’s testing a new play: charge the bots. Their private beta lets sites meter and monetize how AI tools crawl their content. It’s early, but it signals a bigger shift. The market’s looking for a middle ground between “open” and “owned.” And the real question is—who gets paid when AI learns from your work?

  10. 🧠 Claude Tried to Run a Business. It Got Weird.

    Anthropic and Andon Labs ran an experiment with an AI agent named Claudius. Could Claudius run a snack shop inside a company break room?

    The store was modest, a fridge, baskets, and an iPad for self-checkout, but the business was real, with actual cash at stake. Claudius was also given real tools: notepads to manage inventory and finances, access to email to talk with suppliers, a web browser to do research, and the company’s Slack to interact with employees. For tasks it couldn’t do itself, like restocking, the agent relied on human employees.

    On the path to AGI, this is an early test of Level 5 on OpenAI’s AGI roadmap, the point where AI becomes an organizer, capable of managing people, tools, and systems like a CEO. As a refresher, OpenAI’s former CTO laid out five levels on the road to AGI:

    1. Recall
    2. Reasoning
    3. Acting (agents/tools)
    4. Teaching
    5. Organizing (aka boss-mode)

    Right now, most models live between Levels 2 and 3: they can recall information, reason through problems, and complete some tasks with tools.

    So, how did it go?

    Anthropic concedes it “would not hire Claudius.” So shop owners can breathe easy for now.

    To be fair, Claudius was not a complete failure. It found suppliers, but as the great writeup explores, it hallucinated conversations, often failed to negotiate profit margins, and was easily talked into handing out deep discount codes or free products.

    Check out the full article; it’s a worthy read.

  11. 🧠 Vibe Teams Are the Future

    There’s a chapter in The AI Evolution that I keep coming back to, Vibe Teams. It’s the idea that small, high-trust teams can do big things when paired with AI and the right tools. And lately, it feels less like a prediction and more like a playbook for what’s already happening.

    Salesforce says up to 50% of their team’s work is now handled by AI agents and tools. Amazon’s CEO Andy Jassy predicts the company will only get smaller as AI becomes a force multiplier. The message? Big companies are reorganizing around smaller teams that move faster, think smarter, and leverage AI to punch way above their weight.

    I call AI the great equalizer for a reason. In my workshops, I’ve seen firsthand how a small business with the right AI setup can compete with a team 10x its size.

    We’re entering a new era where small teams don’t just survive, they thrive. They launch faster, personalize better, and operate with precision because they let AI handle the grunt work while they focus on the magic. That’s what a Vibe Team is: focused, fluid, and augmented.

  12. 🧠 Catch me on WYPR Midday

    Thank you to the Midday team at WYPR for inviting me to talk.

    I joined Dr. Anupam Joshi to talk with guest host Farai Chideya about how AI is reshaping the workplace, not just in tech, but across every industry. We covered what skills matter most now, how AI is changing the job search and hiring process, and what Maryland is doing on the policy front.

    We also talked about how to get started with AI, even if you’re not technical, and how people at every stage of their career can adapt and grow. Check out the full episode at the link below.

  13. What the Heck Is MCP?

    AI models are built on data. All that data, meticulously scraped and refined, fuels their capacity to handle a staggering spectrum of questions. But as many of you know, their knowledge is locked to the moment they were trained. ChatGPT 3.5, for instance, famously couldn’t tell you about anything that happened after its training cutoff. Not because it was dumb, but because it wasn’t trained on anything post-2021.

    That limitation hasn’t disappeared. Even the newest models don’t magically know what happened yesterday, unless they’re connected to live data. And that’s where techniques like RAG (Retrieval-Augmented Generation) come in. RAG allows an AI to pause mid-response, reach out to external sources like today’s weather report or last night’s playoff score, and bring that data back into the conversation. It’s like giving the model a search engine it can use on the fly.
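
    Here’s a minimal sketch of that loop. The weather URL and the llm_complete function are placeholders, not any real provider’s API:

    ```python
    import requests

    def llm_complete(prompt: str) -> str:
        """Placeholder for a call to whichever LLM provider you use."""
        raise NotImplementedError

    def answer_with_retrieval(question: str) -> str:
        # 1. Retrieve: pull live data the model couldn't have seen in training.
        weather = requests.get(
            "https://api.example.com/weather?city=Baltimore", timeout=10
        ).json()

        # 2. Augment: splice the retrieved facts into the prompt as context.
        prompt = (
            "Answer the question using ONLY the context below.\n"
            f"Context: {weather}\n"
            f"Question: {question}"
        )

        # 3. Generate: the model answers grounded in fresh data.
        return llm_complete(prompt)
    ```

    Note that the model itself never gets smarter; it’s simply handed better context at the moment it answers.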

    But RAG has limits. It’s focused on data capture, not doing things. It can help you find an answer, but it can’t carry out a task. And its usefulness is gated by whatever systems your team has wired up behind the scenes. If there’s no integration, there’s no retrieval. It’s useful, but it’s not agentic.

    Enter MCP

    MCP stands for Model Context Protocol, and it’s an open protocol developed by Anthropic, the team behind Claude. It’s not yet the de facto standard, but it’s gaining real momentum. Microsoft and Google are all in, and OpenAI seems on board. Anthropic hopes that MCP could become the “USB-C” of AI agents, a universal interface for how models connect to tools, data, and services.

    What makes MCP powerful isn’t just that it can fetch information. It’s that it can also perform actions. Think of it like this: RAG might retrieve the name of a file. MCP can open that file, edit it, and return a modified version, all without you lifting a finger.

    It’s also stateful, meaning it can remember context across multiple requests. For developers, this solves a long-standing web problem. Traditional web requests are like goldfish; they forget everything after each interaction. Web apps have spent years duct-taping state management around that limitation. But MCP is designed to remember. It lets an AI agent maintain a thread of interaction, which means it can build on past knowledge, respond more intelligently, and chain tasks together with nuance.

    At Microsoft Build, one demo showed an AI agent using MCP to remove a background from an image. The agent didn’t just describe how to do it or explain how a user might remove a background; it called Microsoft Paint, passed in the image, triggered the action, and received back a new file with the background removed.

    MCP enables agents to access the headless interfaces of applications, with platforms like Figma and Slack now exposing their functionality through standardized MCP servers. So, instead of relying on fragile screen-scraping or rigid APIs, agents can now dynamically discover available tools, interpret their functions, and use them in real time.
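
    To make that concrete, here’s what a tiny MCP server can look like using Anthropic’s Python SDK. The remove_background tool is a hypothetical stub inspired by the Paint demo above, with the actual image processing elided:

    ```python
    from mcp.server.fastmcp import FastMCP

    # Declare a server; MCP clients discover its tools at runtime, and
    # each tool's docstring doubles as the description agents read.
    mcp = FastMCP("image-tools")

    @mcp.tool()
    def remove_background(image_path: str) -> str:
        """Remove the background from an image and return the new file's path."""
        output_path = image_path.replace(".png", ".cutout.png")
        # ... actual image processing elided ...
        return output_path

    if __name__ == "__main__":
        mcp.run()  # any MCP client, such as Claude Desktop, can now call this
    ```

    Because the tool is discoverable, an agent doesn’t need to be told it exists; it can find remove_background, read its description, and decide on its own when to call it.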

    That’s the holy grail for agentic AI: tools that are discoverable, executable, and composable. You’re not just talking to a chatbot. You’re building a workforce of autonomous agents capable of navigating complex workflows with minimal oversight.

    Imagine asking an agent to let a friend know you’re running late – with MCP, the agent can identify apps like email or WhatsApp that support the protocol, and communicate with them directly to get the job done. More complex examples could involve an agent creating design assets in an application such as Figma and then exporting assets into a developer application like Visual Studio Code to implement a website. The possibilities are endless.

    The other win? Security. MCP includes built-in authentication and access control. That means you can decide who gets to use what, and under what conditions. Unlike custom tool integrations or API gateways, MCP is designed with enterprise-grade safeguards from the start. That makes it viable not just for tinkerers but for businesses that need guardrails, audit logs, and role-based permissions.

    Right now, most MCP interfaces run locally. That’s partly by design; local agents can interact with desktop tools in ways cloud models can’t. But we’re already seeing movement toward the web. Microsoft is embedding MCP deeper into Windows, and other companies are exploring ways to expose cloud services using the same model. If you’ve built RPA (Robotic Process Automation) systems before, this is like giving your bots superpowers and letting them coordinate with AI agents that actually understand what they’re doing.

    If you download Claude Desktop and have a paid Anthropic account, you can start experimenting with MCP right now. Many developers have shared example projects that talk to apps like Slack, Notion, and Figma. As long as an application exposes an MCP server, your agent can query it, automate tasks, and chain actions together with ease.

    At PerryLabs, we’re going a step further. We’re building custom MCP servers that connect to a company’s ERP or internal APIs, so agents can pull live deal data from HubSpot, update notes and tasks from a conversation, or generate a report and submit it through your business’s proprietary platform. It’s not just automation. It’s intelligent orchestration across systems that weren’t designed to talk to each other.

    What’s wild is that this won’t always require a prompt or a conversation. Agentic AI means the agent just knows what to do next. You won’t ask it to resize 10,000 images—it will do that on its own. You’ll get the final folder back, with backgrounds removed, perfectly cropped, and brand elements adjusted—things we once assumed only humans could handle.

    MCP makes that future real. As the protocol matures, the power of agentic AI will only grow.

    If you’re interested in testing out how MCP can help you build smarter agents or want to start embedding MCP layers into your applications, reach out. We’d love to show you what’s possible.

  14. 🧠 Finding a job sucks and it’s turning into AI warfare

    Like many of you, I’ve got friends on both sides of the job battle. Recruiters and hiring managers are getting flooded with more resumes than ever. And let’s be honest, no one has time to manually comb through thousands of applications while also doing their other job. So hiring teams turn to AI tools to help screen.

    On the other side, job seekers are exhausted. You spend hours tailoring your resume, researching the company, writing a thoughtful cover letter only to send it into the void. No response. No feedback. Not even a polite rejection. It’s soul-crushing.

    AI was bound to enter the picture, but now it’s become a battleground. Applicants use AI to apply faster and look better. Hiring teams respond by using more AI to filter even harder. The result? Everyone’s stuck. It’s time for a better approach. Resumes alone won’t cut it anymore. I think AI should help interview, not just screen. Conversational tools, avatars, first-round screeners, anything that gives more people an honest at-bat. The system’s already broken. Doing the same thing over and over is just automation-driven insanity.

  15. 🧠 AI Is Eating the Internet, And Your Traffic

    I know I’ve said this before, but it’s worth repeating: If your business relies heavily on Google search traffic, you need to prepare for a reality where that hose turns from a stream to a drip.

    Search traffic is declining. AI is accelerating the shift. And more evidence keeps piling up.

    Cloudflare, a company best known for keeping websites fast, secure, and online, also offers tools to block AI bots from scraping public web content. If you’ve ever hit a “verify you’re not a robot” check before reading an article, that’s Cloudflare or similar services doing their job to protect publishers’ content from being quietly hoovered up.

    In a recent interview, Cloudflare’s CEO laid it out clearly: publisher traffic is down hard. Worse, the new wave of AI search, Perplexity, ChatGPT, Gemini, doesn’t send readers back to your site the way old-school Google blue links did.

    Some say this signals the death of the open web. Maybe they’re right.

    But I think we’re witnessing a transition.

    The open web, once the front door to everything, is fading. Today, most people experience the internet through closed ecosystems: YouTube, Amazon, TikTok, Instagram, Reddit, Facebook, Twitter. Each of these platforms is a sticky sandbox designed to keep users (and their content) locked inside.

    If you’re not building brand gravity outside of SEO, if your whole model depends on inbound clicks from Google, things are going to get difficult quickly.

  16. 🧠 Is Apple About to Buy an Answer Engine?

    What do you do when $20 billion in revenue might vanish thanks to Google’s looming antitrust fallout?

    You buy the best damn answer engine on the block.

    Perplexity is already my favorite AI search tool—fast, smart, and actually useful. Imagine it embedded deep into Apple’s many operating systems. A real-time answer engine that could make Siri useful and launch a day-one Google Search competitor. If this happens, it might be Apple’s smartest acquisition in years.

  17. 🧠 Teaching in an AI World

    In my talks with professors at local colleges and universities, I keep hearing the same thing. We’re teaching for a world that’s changing faster than we can update our syllabi.

    The scale of these AI tools is mind-blowing. But here’s the catch: subject matter experts, people who truly get it, are the ones who benefit most. When you lack that core understanding, the tool becomes a crutch, and the power dynamic shifts. Instead of the human leading, the tool leads.

    I see this all the time with new developers and junior engineers. Many lean on these tools like a lifeline, while the more experienced folks use them to amplify what they already know.

    The Jetsons often asked this question in a way only they could, with jokes like George mashing potatoes and calling it slavery before pressing a button to have a robot do it for him.

    In the linked blog post, “The Myth of Automated Learning,” the author lays it out clearly:

    Thanks to human-factors researchers and the mountain of evidence they’ve compiled on the consequences of automation for workers, we know that one of three things happens when people use a machine to automate a task they would otherwise have done themselves:

    1. Their skill in the activity grows.
    2. Their skill in the activity atrophies.
    3. Their skill in the activity never develops.

    Which scenario plays out hinges on the level of mastery a person brings to the job. If a worker has already mastered the activity being automated, the machine can become an aid to further skill development. It takes over a routine but time-consuming task, allowing the person to tackle and master harder challenges. In the hands of an experienced mathematician, for instance, a slide rule or a calculator becomes an intelligence amplifier.

    Of course, the bigger question is how much of this is about the present and how much it will matter in the future. Most of us wouldn’t survive if we had to hunt and gather our own food or live without modern conveniences. Maybe some foundational knowledge just won’t be as important tomorrow as it is today. Could programming become a dying art form like calligraphy?

    At the heart of all this is the question of what’s actually worth teaching in a world where AI handles the heavy lifting.