What the Heck Is MCP? — Jason Michael Perry

AI models are built on data. All that data, meticulously scraped and refined, fuels their capacity to handle a staggering spectrum of questions. But as many of you know, their knowledge is locked to the moment they were trained. GPT-3.5, for instance, famously drew a blank on anything that happened after its 2021 training cutoff. Not because it was dumb, but because it wasn’t trained on anything more recent.

That limitation hasn’t disappeared. Even the newest models don’t magically know what happened yesterday, unless they’re connected to live data. And that’s where techniques like RAG (Retrieval-Augmented Generation) come in. RAG allows an AI to pause mid-response, reach out to external sources like today’s weather report or last night’s playoff score, and bring that data back into the conversation. It’s like giving the model a search engine it can use on the fly.
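To make that concrete, here’s a rough sketch of the retrieve-then-generate loop in Python. The helpers are placeholders, not real APIs; the point is the shape of the flow: grab outside data first, then fold it into the prompt.

```python
# Minimal RAG-style sketch: retrieve fresh data, then generate with it.
# fetch_weather() and generate() are hypothetical placeholders, not real APIs.

def fetch_weather(city: str) -> str:
    # In practice this would call a live weather service.
    return "62°F, light rain"

def generate(prompt: str) -> str:
    # In practice this would call your LLM of choice.
    return f"[model answer based on: {prompt!r}]"

def answer_with_rag(question: str, city: str) -> str:
    # 1. Retrieve: pull in data the model was never trained on.
    context = fetch_weather(city)
    # 2. Augment: fold the retrieved facts into the prompt.
    prompt = f"Using this current data: {context}\n\nAnswer: {question}"
    # 3. Generate: the model answers grounded in fresh context.
    return generate(prompt)

print(answer_with_rag("Do I need an umbrella today?", "Baltimore"))
```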

But RAG has limits. It’s focused on retrieving information, not doing things. It can help you find an answer, but it can’t carry out a task. And its usefulness is gated by whatever systems your team has wired up behind the scenes. If there’s no integration, there’s no retrieval. It’s useful, but it’s not agentic.

Enter MCP

MCP stands for Model Context Protocol, and it’s an open protocol developed by Anthropic, the team behind Claude. It’s not yet the de facto standard, but it’s gaining real momentum. Microsoft and Google are all in, and OpenAI seems on board. Anthropic hopes that MCP could become the “USB-C” of AI agents, a universal interface for how models connect to tools, data, and services.

What makes MCP powerful isn’t just that it can fetch information. It’s that it can also perform actions. Think of it like this: RAG might retrieve the name of a file. MCP can open that file, edit it, and return a modified version, all without you lifting a finger.
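Here’s roughly what that looks like on the server side, as a sketch built on the FastMCP helper from Anthropic’s official Python SDK (the mcp package). The tool name and the file-cleanup logic are my own illustration, not a standard tool:

```python
# A tiny MCP server exposing one tool that edits a file and returns the result.
# Requires the official Python SDK: pip install "mcp[cli]"
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("file-tools")

@mcp.tool()
def strip_trailing_whitespace(path: str) -> str:
    """Remove trailing whitespace from every line of a text file and save it."""
    file = Path(path)
    cleaned = "\n".join(line.rstrip() for line in file.read_text().splitlines())
    file.write_text(cleaned + "\n")
    return f"Cleaned {path} ({len(cleaned.splitlines())} lines)"

if __name__ == "__main__":
    mcp.run()  # defaults to stdio, which is how local clients like Claude Desktop connect
```

The type hints and docstring are what the SDK turns into the tool’s schema, which is how a model knows what the tool does and what arguments it expects.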

It’s also stateful, meaning it can remember context across multiple requests. For developers, this solves a long-standing web problem. Traditional web requests are like goldfish; they forget everything after each interaction. Web apps have spent years duct-taping state management around that limitation. But MCP is designed to remember. It lets an AI agent maintain a thread of interaction, which means it can build on past knowledge, respond more intelligently, and chain tasks together with nuance.
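From the client side, that statefulness looks something like this sketch with the same Python SDK: you open a session once, initialize it, and then make as many calls as you like over that one connection. The server.py file and tool name are assumptions carried over from the earlier sketch:

```python
# Sketch of a long-lived MCP client session (Python SDK), assuming a local
# stdio server in server.py that exposes a "strip_trailing_whitespace" tool.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()  # handshake happens once per session
            # Multiple requests ride the same session rather than starting cold each time.
            for path in ("notes.md", "todo.md"):
                result = await session.call_tool(
                    "strip_trailing_whitespace", arguments={"path": path}
                )
                print(result.content)

asyncio.run(main())
```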

At Microsoft Build, one demo showed an AI agent using MCP to remove a background from an image. The agent didn’t just explain how a user might remove a background; it called Microsoft Paint, passed in the image, triggered the action, and received back a new file with the background removed.

MCP enables agents to access the headless interfaces of applications, with platforms like Figma and Slack now exposing their functionality through standardized MCP servers. So, instead of relying on fragile screen-scraping or rigid APIs, agents can now dynamically discover available tools, interpret their functions, and use them in real time.
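Under the hood, that discovery is plain JSON-RPC. Here’s a sketch of the exchange an agent might have with a server, with the message shapes following the MCP spec as I read it and the remove_background tool invented for illustration:

```python
# The discovery/call exchange, shown as the JSON-RPC messages MCP uses.
# The "remove_background" tool and its schema are invented for illustration.

list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A server's reply describes each tool and the arguments it accepts,
# which is what lets an agent interpret tools it has never seen before.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "remove_background",
                "description": "Remove the background from an image file",
                "inputSchema": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            }
        ]
    },
}

# The agent then invokes the tool it just discovered.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "remove_background", "arguments": {"path": "photo.png"}},
}
```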

That’s the holy grail for agentic AI: tools that are discoverable, executable, and composable. You’re not just talking to a chatbot. You’re building a workforce of autonomous agents capable of navigating complex workflows with minimal oversight.

Imagine asking an agent to let a friend know you’re running late – with MCP, the agent can identify apps like email or WhatsApp that support the protocol, and communicate with them directly to get the job done. More complex examples could involve an agent creating design assets in an application such as Figma and then exporting assets into a developer application like Visual Studio Code to implement a website. The possibilities are endless.

The other win? Security. MCP’s spec includes an authorization framework (OAuth-based for remote servers), so you can decide who gets to use what, and under what conditions. Unlike custom tool integrations or API gateways, MCP is designed with these safeguards from the start. That makes it viable not just for tinkerers but for businesses that need guardrails, audit logs, and role-based permissions.

Right now, most MCP interfaces run locally. That’s partly by design; local agents can interact with desktop tools in ways cloud models can’t. But we’re already seeing movement toward the web. Microsoft is embedding MCP deeper into Windows, and other companies are exploring ways to expose cloud services using the same model. If you’ve built RPA (Robotic Process Automation) systems before, this is like giving your bots superpowers and letting them coordinate with AI agents that actually understand what they’re doing.

If you download Claude Desktop and have a paid Anthropic account, you can start experimenting with MCP right now. Many developers have shared example projects that talk to apps like Slack, Notion, and Figma. As long as an application exposes an MCP server, your agent can query it, automate tasks, and chain actions together with ease.
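Hooking a local server into Claude Desktop is mostly a config entry. The sketch below adds one to the claude_desktop_config.json file the app reads; the macOS path and the server command are assumptions, so check Anthropic’s docs for your platform:

```python
# Register a local MCP server with Claude Desktop by adding it to the app's
# config file. Path shown is the macOS default; Windows keeps it under %APPDATA%\Claude.
import json
from pathlib import Path

config_path = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"

config = json.loads(config_path.read_text()) if config_path.exists() else {}
config.setdefault("mcpServers", {})["file-tools"] = {
    "command": "python",
    "args": ["/absolute/path/to/server.py"],  # the FastMCP server sketched above
}

config_path.write_text(json.dumps(config, indent=2))
print("Restart Claude Desktop to pick up the new server.")
```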

At PerryLabs, we’re going a step further. We’re building custom MCP servers that connect to a company’s ERP or internal APIs, so agents can pull live deal data from HubSpot, update notes and tasks from a conversation, or generate a report and submit it through your business’s proprietary platform. It’s not just automation. It’s intelligent orchestration across systems that weren’t designed to talk to each other.
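To give a flavor of what that looks like, here’s a stripped-down sketch: an MCP tool that fronts an internal deals API so an agent can pull live data. The endpoint, token, and response fields are hypothetical placeholders, not a real HubSpot or ERP integration:

```python
# Sketch: a custom MCP server that fronts an internal deals API so agents can
# query live data. The URL, token, and response fields are hypothetical.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("deal-data")

@mcp.tool()
def get_open_deals(owner_email: str) -> list[dict]:
    """Return open deals for a given owner from the internal deals API."""
    response = httpx.get(
        "https://internal.example.com/api/deals",  # placeholder endpoint
        params={"owner": owner_email, "status": "open"},
        headers={"Authorization": f"Bearer {os.environ['DEALS_API_TOKEN']}"},
    )
    response.raise_for_status()
    return response.json()["deals"]

if __name__ == "__main__":
    mcp.run()
```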

What’s wild is that this won’t always require a prompt or a conversation. Agentic AI means the agent just knows what to do next. You won’t ask it to resize 10,000 images—it will do that on its own. You’ll get the final folder back, with backgrounds removed, perfectly cropped, and brand elements adjusted—things we once assumed only humans could handle.

MCP makes that future real. As the protocol matures, the power of agentic AI will only grow.

If you’re interested in testing out how MCP can help you build smarter agents or want to start embedding MCP layers into your applications, reach out. We’d love to show you what’s possible.
