Issue #83: Governing AI Without Starving It
Howdy 👋🏾. Last week, I had the honor of speaking to the Maryland Certified Public Manager cohort at the University of Baltimore. The session focused on AI's role in public sector leadership, and in prepping for it, I revisited my chapter on governance in The AI Evolution and reviewed AI policies from state, county, and city governments across Maryland.
Unsurprisingly, what I found was a bit scattershot. Everyone is writing their own rules. But there are some clear through-lines that most government guidance includes:
- Human oversight – You're still responsible for the final product. Don't blindly trust the output.
- Data privacy – Don't paste confidential info into an AI tool.
- Tool approval – Use systems that have been vetted and cleared.
- Bias awareness – Be mindful of baked-in or generated bias.
- No automated decision-making – AI shouldn't be making decisions for people.
At PerryLabs, we spend a lot of time helping agencies and businesses define these rules, and more importantly, figure out how to keep them up to date. Because one of the hardest parts of AI governance is that the rules change faster than most institutions can react. The ground shifts while youโre still getting settled.
One of the biggest rules I see clients wrestling with is the one about data: don't put sensitive or private data into AI systems. It's a good rule. A necessary one. But also kind of a trap.
These models are starving for data. The more context you give a prompt, the better your answer. And when you don't give it that context? It hallucinates. It fills in the blanks with guesses. Now you're looking at an answer that's confident, wrong, and potentially damaging.
That's the real dilemma: if you don't give the AI context, it can't help you.
If you do, and you're not careful, you're putting private or protected data at risk.
If you're not empowering your team to use these tools, they'll find a way anyway, just without your oversight. That's how you end up with shadow IT, unsanctioned tools, and a governance mess that will be tough to clean up.
The side door here is that companies and public agencies can deploy instances of these models to their teams, with controls that restrict how data is handled. You can configure them so data inputs aren't used to train the model, while still giving your team the benefit of modern AI.
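One common control in these managed deployments is a thin proxy layer that scrubs obvious sensitive patterns before a prompt ever leaves your network. Here's a minimal sketch in Python; the `redact` helper and its patterns are purely illustrative, and a real deployment would rely on a vetted DLP/PII library tuned to your own data:

```python
import re

# Illustrative patterns only -- not a complete PII filter.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious sensitive tokens with placeholders
    before the prompt is forwarded to an external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Draft a letter to jane.doe@example.gov re: 410-555-1234."))
# -> Draft a letter to [EMAIL REDACTED] re: [PHONE REDACTED].
```

A scrub step like this doesn't replace policy, but it turns "don't paste confidential info" from a memo into a guardrail your team can't accidentally walk past.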
In an organization with a well-crafted AI adoption plan, one of the most important moves is to prevent shadow IT by deploying your own AI. You can use existing foundation models, but wrap them in your policies, with access to your data and protections that respect confidentiality and compliance.
For some teams, that means rolling out OpenAI's Business or Enterprise tiers. For Microsoft shops, Copilot is a natural extension of the 365 suite. At PerryLabs, we often recommend a best-of-model approach, one that lets you deploy agents across platforms and plug them into your internal data behind the scenes.
Weโve been here before. When messaging platforms hit the enterprise, people found ways to use them before retention rules caught up. When cloud storage took off, companies feared the sprawl of data in Dropbox. But then they deployed their solutions and brought structure to the chaos. First fear, then shadow usage, and then formal adoption.
Weโre in that same phase now with AI.
The answer isn't locking it down or letting it run wild. The answer is intentional deployment:
- Tools that keep your data protected
- Policies that are clear, living, and practical
- Training that gives your team the confidence to use AI well, and the judgment to know when not to
Governance matters. But so does access. Today's AI is the worst AI your team will ever use, and it's improving fast. That means reviewing policies regularly and ensuring your teams have the necessary tools before they begin searching for unsanctioned ones.
-jason
👋🏾 At PerryLabs, we advise, implement, and support AI from idea to impact.
Advise. Implement. Support. Real AI, real results with PerryLabs.
The Developer Shift No Oneโs Talking About

I've been building software for a long time, and I've never seen a shift hit dev teams this fast. AI isn't just speeding things up; it's changing how we work at the core. I'm sharing some real-world changes I'm seeing (and testing): why microservices might not integrate well with AI, how I'm using AI-specific README files to convey context across tools, and why some teams are quietly replacing SaaS tools with custom builds supported by AI.
If you're leading a tech team or working with one, you'll want to read this.
Read the full post
Talking Tech: Watch & Learn
AI voice cloning isn't a future threat; it's happening now. After a real deepfake incident in Baltimore, I tried it myself and was stunned by how realistic it sounded. Voice is no longer a reliable form of identity, and the risks are growing fast. Stay alert, use multi-factor authentication, and always confirm anything suspicious through another channel.
In this video, OpenAI CEO Sam Altman discusses the growing fraud crisis during his talk last week in D.C.
The Best in Tech This Week
Claude Gets Clipped – Anthropic rolled out new rate limits that quietly throttle power users. What's worse? There's still no clear answer on what "paying more" actually buys you. Claude's great, but opaque dev throttling isn't. This pricing post captures the frustration perfectly.
Cloud Vader Strikes Again – Echelon's smart gym gear now needs an internet connection to work, thanks to a firmware update. From Belkin Wemo dropping HomeKit to surprise subscriptions from Futurehome, the smart home is starting to feel like a dumb deal. Darth Vader agrees.
GPT-5 Is (Maybe) Coming – OpenAI's most advanced model yet might drop in August. Rumors are swirling. Capabilities sound wild. I'm ready to break it and see what it breaks in return.
The AI Roadshow: Workshops, Talks & Beyond
August 11, 2025 – Black Is Tech Conference
September 16, 2025 – AI Tech Summit w/ Central Maryland Chamber
September 25, 2025 – BannerX: Where Cybersecurity and AI Shape Business
December 10-11 – The AI Summit New York
P.S. Before you go…
Unitree just dropped a humanoid robot for under $7K, out of China. It walks, talks, and charges via USB-C.
Pair that with Hugging Face's $100 robotic arm and their new humanoid prototypes, and the gap between AI and real-world robotics is closing fast. With open-source tooling like MCP and small language models entering the scene, we're getting closer and closer to Jetsons-level assistants. As I like to say in my talks: these are the AOL days of AI, my friends. We have a fun and exciting journey ahead.
