Issue #83: Governing AI Without Starving It
Howdy 👋🏾. Last week, I had the honor of speaking to the Maryland Certified Public Manager cohort at the University of Baltimore. The session focused on AI’s role in public sector leadership, and in prepping for it, I revisited my chapter on governance in The AI Evolution and reviewed AI policies from state, county, and city governments across Maryland.
Unsurprisingly, what I found was a bit scattershot. Everyone is writing their own rules. But there are some clear through-lines that show up in most government guidance:
- Human oversight – You’re still responsible for the final product. Don’t blindly trust the output.
- Data privacy – Don’t paste confidential info into an AI tool.
- Tool approval – Use systems that have been vetted and cleared.
- Bias awareness – Be mindful of baked-in or generated bias.
- No automated decision-making – AI shouldn’t be making decisions for people.
At PerryLabs, we spend a lot of time helping agencies and businesses define these rules, and more importantly, figure out how to keep them up to date. Because one of the hardest parts of AI governance is that the rules change faster than most institutions can react. The ground shifts while you’re still getting settled.
One of the biggest rules I see clients wrestling with is the one about data: don’t put sensitive or private data into AI systems. It’s a good rule. A necessary one. But also kind of a trap.
These models are starving for data. The better your prompt and the more context you give it, the better your answer. And when you don’t give it that context? It hallucinates. It fills in the blanks with guesses. Now you’re looking at an answer that’s confident, wrong, and potentially damaging.
That’s the real dilemma: if you don’t give the AI context, it can’t help you.
If you do, and you’re not careful, you’re putting private or protected data at risk.
If you’re not empowering your team to use these tools, they’ll find a way anyway, just without your oversight. That’s how you end up with shadow IT, unsanctioned tools, and a governance mess that will be tough to clean up.
The side door here is that companies and public agencies can deploy instances of these models to their teams, with controls that restrict how data is handled. You can configure them so data inputs aren’t used to train the model, while still giving your team the benefit of modern AI.
In an organization with a well-crafted AI adoption plan, one of the most important moves is to prevent shadow IT by deploying your own AI. You can use existing foundation models, but wrap them in your policies, with access to your data and protections that respect confidentiality and compliance.
For some teams, that means rolling out OpenAI’s Business or Enterprise tiers. For Microsoft shops, Copilot is a natural extension of the 365 suite. At PerryLabs, we often recommend a best-of-model approach, one that lets you deploy agents across platforms and plug them into your internal data behind the scenes.
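To make that wrapper idea concrete, here’s a minimal sketch of a policy gateway, assuming the official OpenAI Python SDK: it scrubs obvious sensitive patterns from a prompt before anything leaves your walls. The redaction rules and the `policy_chat` helper are hypothetical illustrations, not a production setup; in practice you’d lean on your approved platform’s data-handling controls (no-training flags, retention settings, tenant isolation) and treat a filter like this as one extra layer.

```python
import re
from openai import OpenAI  # assumes the official OpenAI Python SDK

# Hypothetical redaction rules -- extend these to match your own data policies.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),          # US SSNs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),  # emails
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),        # card-like numbers
]

def scrub(text: str) -> str:
    """Strip obvious sensitive patterns before the prompt leaves the building."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

def policy_chat(prompt: str, model: str = "gpt-4o") -> str:
    """Hypothetical gateway: scrub the prompt, then call the approved model."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": scrub(prompt)}],
    )
    return response.choices[0].message.content
```

The point isn’t this particular filter. It’s that the controls live in a layer you own, so when the rules change, the fix ships as a code update instead of another memo.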
We’ve been here before. When messaging platforms hit the enterprise, people found ways to use them before retention rules caught up. When cloud storage took off, companies feared the sprawl of data in Dropbox. But then they deployed their solutions and brought structure to the chaos. First fear, then shadow usage, and then formal adoption.
We’re in that same phase now with AI.
The answer isn’t locking it down or letting it run wild. The answer is intentional deployment:
- Tools that keep your data protected
- Policies that are clear, living, and practical
- Training that gives your team the confidence to use AI well, and the judgment to know when not to
Governance matters. But so does access. Today’s AI is the worst AI your team will ever use, and it’s improving fast. That means reviewing policies regularly and making sure your teams have the tools they need before they go looking for unsanctioned ones.
-jason
👉🏾 At PerryLabs, we advise, implement, and support AI from idea to impact.
Advise. Implement. Support. Real AI, real results with PerryLabs.
The Developer Shift No One’s Talking About

I’ve been building software for a long time, and I’ve never seen a shift hit dev teams this fast. AI isn’t just speeding things up; it’s changing how we work at the core. I’m sharing some real-world changes I’m seeing (and testing): why microservices might not integrate well with AI, how I’m using AI-specific README files to convey context across tools, and why some teams are quietly replacing SaaS tools with custom builds supported by AI.
If you’re leading a tech team or working with one, you’ll want to read this.
👉 Read the full post
📼 Talking Tech: Watch & Learn
AI voice cloning isn’t a future threat—it’s happening now. After a real deepfake incident in Baltimore, I tried it myself and was stunned by how realistic it sounded. Voice is no longer a reliable form of identity, and the risks are growing fast. Stay alert, use multi-factor authentication, and always confirm anything suspicious through another channel.
In this video, OpenAI CEO Sam Altman discusses the growing fraud crisis during his talk last week in D.C.
🔗 The Best in Tech This Week
🤕 Claude Gets Clipped – Anthropic rolled out new rate limits that quietly throttle power users. What’s worse? There’s still no clear answer on what “paying more” actually buys you. Claude’s great, but opaque dev throttling isn’t. This pricing post captures the frustration perfectly.
🎮 Cloud Vader Strikes Again – Echelon’s smart gym gear now needs an internet connection to work, thanks to a firmware update. From Belkin Wemo dropping HomeKit to surprise subscriptions from Futurehome, the smart home is starting to feel like a dumb deal. Darth Vader agrees.
🧠 GPT-5 is (Maybe) Coming – OpenAI’s most advanced model yet might drop in August. Rumors are swirling. Capabilities sound wild. I’m ready to break it and see what it breaks in return.
🎤 The AI Roadshow: Workshops, Talks & Beyond
August 11, 2025 – Black Is Tech Conference
September 16, 2025 – AI Tech Summit w/ Central Maryland Chamber
September 25, 2025 – BannerX: Where Cybersecurity and AI Shape Business
December 10-11, 2025 – The AI Summit New York
P.S. Before you go…
Unitree, out of China, just dropped a humanoid robot for under $7K. It walks, talks, and charges via USB-C.
Pair that with Hugging Face’s $100 robotic arm and their new humanoid prototypes, and the gap between AI and real-world robotics is closing fast. With open-source tooling like MCP and small language models entering the scene, we’re getting closer and closer to Jetsons-level assistants. As I like to say in my talks: these are the AOL days of AI, my friends. We have a fun and exciting journey ahead.
