Category: Application Programming Interfaces (APIs)
-
Development Needs an AI-First Rewrite
I’ve spent the better part of the last two decades running or being part of development teams, and as a developer I’m experimenting heavily with AI tools. I’m not alone; lots of people in the tech space are. So it comes as little surprise that these same teams are seeing the biggest impact from the ongoing shift to AI-native development.
A few weeks ago, I caught up with a dev manager friend, and without prompting, we jumped straight into a conversation about how fast everything is changing. For years, we’ve focused on building processes and procedures to help human teams collaborate and build complex software applications. But in the world of AI, the model is changing, and sometimes dramatically. What used to be a multi-person effort can now be a single human orchestrating multiple AI tools.
A perfect example: microservices.
The trend of breaking up complex applications into smaller pieces has been hot for a while now. It lets organizations build specialized teams around each part of the application and gives those teams more autonomy in how they operate. That made sense when your team was all human.
But in an AI-first world, it can actually make things harder.
A single repository might only represent a slice of the full application. And if you’re using an AI tool to review code, it can’t easily load up the full context of how everything connects. Sure, you can help the AI out with great documentation or give it access to multiple repos, but at the core, microservices are optimized for distributed human teams. They’re not necessarily optimized for AI tools that rely on full-graph context.
That’s why, in some cases, building a more complex monolithic application may actually be a better approach for AI-native teams building with (and for) AI tools.
Another place I’ve been experimenting is in seeding AI with persistent context, so I don’t have to be overly verbose every time I assign it a dev task. I caught myself over-explaining things in prompts after the AI would do something unexpected or take an approach I didn’t want. So I started adding AI-specific README files at the root of my projects. Not for humans—for AI.
These files include architecture decisions, key concepts, and even logs of recent changes or commits. When I’m using Claude inside VS Code, for example, I sometimes hit rate limits. If I need to switch to another model like Gemini or ChatGPT to pick up the work, I don’t want to start from scratch. Those AI README files give them the full context without having to re-prompt from zero.
This is the kind of data that might be overwhelming for a human reviewer, but trivial for an AI model. And that’s part of the mindset shift. Developers are used to writing comments or notes that are for humans. But AI models benefit from verbose, detailed explanations. Concise isn’t always better.
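For illustration, here’s a minimal sketch of what one of these AI-facing README files might contain. The filename, section names, and every detail below are my own invented example, not a standard any tool requires:

```markdown
# AI Context (read this first)

## Architecture decisions
- Monorepo; the API (/api) and web client (/web) deploy together.
- Postgres is the source of truth; Redis is cache only and safe to flush.

## Key concepts
- "Orders" are immutable once placed; corrections create new records.

## Recent changes
- 2025-01-12: Replaced REST polling with webhooks for inventory sync.

## Conventions
- TypeScript strict mode everywhere; no default exports.
```

The point isn’t the specific sections; it’s that decisions and recent history live in one file any model can load at the start of a session.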
AI also gives us an opportunity to finally shed some of the legacy technical debt that’s haunted development teams for decades. When Amazon first announced its AI assistant Q back in 2023, one of the headline features was its ability to upgrade old Java applications, moving from decades-old versions to the latest in minutes. Since then, I’ve seen multiple companies use AI to fully rewrite legacy systems into modern languages in weeks instead of years.
For some of the best programmers I know, this is a little distressing. They see code as craft, as poetry. And just like writers grappling with ChatGPT, the idea that AI can churn out working code with no elegance can feel like a loss. But beauty doesn’t always move the bottom line. Sure, I could pay top dollar for a handcrafted implementation. But the okay code produced by an AI tool that works, ships faster, and gets us to revenue or cost savings sooner? That’s usually the better trade.
And it’s not just the dev process, it’s entire products on the line. My friend put it bluntly: they used to rely on a range of SaaS tools because it wasn’t worth the effort to build them in-house. But now? A decent engineer with AI support can quickly build tools that used to require a paid subscription. That’s an existential threat for SaaS platforms that haven’t built a deep enough moat.
Take WordPress, for example. I know developers who used to rely on dozens of plugins, anything from SEO tweaks to form builders to table generators. Today, with AI support, they’re writing that functionality directly into themes or making their own plugins with AI. It’s faster, lighter, and more tailored to what they need. The value of external tools or those one-line NPM packages we used to reach for just to save time starts to diminish when you’ve got an army of AI bots doing the work for you. The calculus changes.
I always say in my talks that this is still the AOL phase of AI, it’s early. But what’s happening inside development teams is a preview of what’s coming for every other function in the business. We’ve spent decades refining human-first processes that work across teams, tools, and orgs. Now we’re going to have to rethink those processes, one by one, for an AI-first future.
-
The Problem with Data
Everyone has the same problem, and its name is data. Nearly every business functions in one of two core data models:
- ERP-centric: One large enterprise system (like SAP, NetSuite, or Microsoft Dynamics) acts as the hub for inventory, customers, finance, and operations. It’s monolithic, but everything is in one place.
- Best-of-breed: A constellation of specialized tools – Salesforce or HubSpot for CRM, Zendesk for support, Shopify or WooCommerce for commerce, QuickBooks for finance – all loosely stitched together, if at all.
In reality, most businesses operate somewhere in between. One system becomes the “system of truth,” while others orbit it, each with its own partial view of the business. That setup is manageable until AI enters the picture.
AI is data-hungry. It works best when it can see across your operations. But ERP vendors often make interoperability difficult by design. Their strategy has been to lock you in and make exporting or connecting data expensive or complex.
That’s why more organizations are turning to data lakes or lakehouses, central repositories that aggregate information from across systems and make it queryable. Platforms like Snowflake and Databricks have grown quickly by helping enterprises unify fragmented data into one searchable hub.
When done well, a data lake gives your AI tools visibility across departments: product, inventory, sales, finance, customer support. It’s the foundation for better analytics and better decisions.
But building a good data lake isn’t easy. As I joke in my book The AI Evolution, a bad data lake is just a data swamp: a messy, unstructured dump that’s more confusing than helpful. Without a clear data model and a strategy for linking information, you’re just hoarding bytes.
Worse, the concept of data lakes was designed pre-AI. They’re great at storing and querying data, but not great at acting on it. If your AI figures out that you’re low on Product X from Supplier Y, your data lake can’t place the order; it can only tell you.
This is where a new approach is gaining traction: API orchestration. Instead of just storing data, you build connective tissue between systems using APIs, letting AI both see and do across tools. Think of it like a universal translator (or Babelfish): systems speak different languages, but orchestration helps them understand each other.
For example, say HubSpot has your customer data and Shopify has your purchase history. By linking them via API, you can match users by email and give AI a unified view. Better yet, if those APIs allow actions, the AI can update records or trigger workflows directly.
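As a rough sketch of that matching step, here’s what the join might look like in Python. The record shapes are invented stand-ins for API responses, not real HubSpot or Shopify payloads:

```python
# Sketch: unify customer views across two systems by matching on email.
# Record shapes are hypothetical; real CRM/commerce payloads differ.

def merge_by_email(crm_contacts, orders):
    """Join CRM contacts with commerce orders on normalized email."""
    unified = {}
    for contact in crm_contacts:
        key = contact["email"].strip().lower()
        unified[key] = {"contact": contact, "orders": []}
    for order in orders:
        key = order["email"].strip().lower()
        if key in unified:
            unified[key]["orders"].append(order)
        else:
            # A purchase with no CRM match: worth flagging for cleanup.
            unified[key] = {"contact": None, "orders": [order]}
    return unified

# Mock data standing in for API responses
crm = [{"email": "Ada@example.com", "name": "Ada"}]
shop = [{"email": "ada@example.com", "total": 42.50},
        {"email": "new@example.com", "total": 9.99}]

view = merge_by_email(crm, shop)
```

Normalizing the email before matching matters more than it looks; mismatched casing between systems is one of the most common reasons a “unified” customer view silently splits in two.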
Big players like Mulesoft are building enterprise-grade orchestration platforms. But for smaller orgs, tools like Zapier and n8n are becoming popular ways to connect their best-of-breed stacks and make data more actionable.
The bottom line: if your data lives in disconnected systems, you’re not alone. This is the reality for nearly every business we work with. But investing in data cleanup and orchestration now isn’t just prep, it’s the first step needed to truly unlock the power of AI.
That’s exactly why we built the AI Accelerator at PerryLabs. It’s designed for companies stuck in this in-between state where the data is fragmented, the systems don’t talk, and the AI potential feels just out of reach. Through the Accelerator, we help you identify those key data gaps, unify and activate your systems, and build the orchestration layer that sets the stage for real AI performance. Because the future of AI isn’t just about having the data—it’s about making it usable.
-
Jonathan Turkey
Looking to chat with Jonathan Turkey, a conversational AI agent? You should see a widget floating at the bottom right of this web page with a button that says “Gobble Gobble.” Click it and enjoy!
-
Introducing my AI Playground and Lab
I’m excited to open up a little corner of the web I’ve been tinkering with: an AI sandbox for comparing and playing with various conversational assistants and generative AI models. The web app, located at labs.jasonmperry.com, provides a simple interface that wraps API calls to different systems and keeps experimentation tidy in one place.
Meet the AI Assistants
Last year, OpenAI released AI Assistants: bots you can train to access uploaded files through Retrieval-Augmented Generation (RAG) and to call functions. To test the capabilities, I created a set of personalities to see how well these features work for customer service and business needs.
Each of these assistants works at the fictional firm Acme Consulting, and I uploaded to each bot a company primer detailing the firm’s history, leadership, services, and values as a reference. The bots include:
- IT Manager Zack “Debugger” Simmons is here to help with helpdesk inquiries, suggest best practices, troubleshoot issues, and explain configurations.
- HR Coordinator Tina “Sunbeam” Phillips is armed with general HR knowledge and a fictional employee handbook with policy details she can cite or reference. Ask her about the holiday schedule and core hours, or for benefits advice.
- Support Coordinator Samantha “Smiles” Miles is part of the Managed Services team and helps maintain support tickets in the Jira Service Desk for all of our corporate clients. Through function calls, you can ask for ticket updates with phrases like “Tell me what tickets I have open for Microsoft” or “Get me the status of ticket MS-1234,” which hit mock endpoints.
In addition to the Acme workers, I wanted to experiment with how an assistant powering something like Humane’s upcoming AI Pin might function; after all, we know the product makes heavy use of OpenAI’s models.
- The witty assistant Mavis “Ace” Jarvis is trained with a helpful instruction set and function calls that let her get the weather, check stock prices, and show locations on a map based on a query. Try asking her, “Will the weather in Las Vegas be warm enough for me to swim outside?” or “Nvidia is on a tear, how’s the stock doing today?”
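For the curious, here’s a sketch of the kind of tool definition and dispatch behind an assistant like Mavis. The schema follows OpenAI’s function-calling format, but the weather function itself is a mock I invented for illustration, not a real API:

```python
# A tool definition in OpenAI's function-calling schema (JSON Schema params).
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

def get_weather(city: str) -> dict:
    # Mock implementation; a real assistant would call a weather API here.
    return {"city": city, "temp_f": 98, "conditions": "sunny"}

# When the model emits a tool call, the app dispatches it by name:
TOOLS = {"get_weather": get_weather}

def dispatch(name: str, arguments: dict) -> dict:
    return TOOLS[name](**arguments)

result = dispatch("get_weather", {"city": "Las Vegas"})
```

The model never runs the function itself; it emits the name and arguments, your code executes the call, and you feed the result back into the conversation.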
Finally, I used Anthropic’s Claude to create three fictional US political commentators with different backstories. You can get political insight, debate, or views on current issues from Darren the conservative, progressive Tyler, and moderate Wesley. In the wake of a push to create AI that bends to different philosophies, I figured these assistants could offer a view into how three distinct personalities might respond to similar prompts while trained on the same core data.
Text Generation
Compare multiple models’ outputs side by side; the playground currently supports Cohere, Jurassic, Claude, and ChatGPT. You can specify max length, temperature, top-p sampling, and more for tailored responses. I plan to continually add the latest models to test how phrasing, accuracy, creativity, and more differ across the same prompt.
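Under the hood, the comparison boils down to fanning one prompt out to several backends with shared generation settings. Here’s a minimal sketch; the backends are stand-in functions, whereas the real app wraps each provider’s SDK:

```python
# Sketch: fan one prompt out to several model backends with shared settings.
# fake_model stands in for real provider SDK wrappers (Cohere, Anthropic, ...).

def fake_model(name):
    def generate(prompt, max_tokens=256, temperature=0.7, top_p=0.9):
        # A real backend would call the provider's API with these params.
        return f"[{name}] response to: {prompt}"
    return generate

BACKENDS = {"cohere": fake_model("cohere"), "claude": fake_model("claude")}

def compare(prompt, **params):
    """Run the same prompt and settings against every backend."""
    return {name: gen(prompt, **params) for name, gen in BACKENDS.items()}

results = compare("Write a haiku about APIs", temperature=0.2)
```

Keeping the settings identical across backends is the whole point: any difference in output then reflects the model, not the knobs.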
Image Generation
Similarly, you can visually compare image results from DALL-E and Stable Diffusion by entering identical prompts. The variance in interpretation, driven by the art and datasets used to train each model, is intriguing.
Of course, as a playground and lab, I’m continually adding features and experiments, and I plan to add video generation, summarizers, voice cloning, etc. So check back for the latest or suggest additions.
-
Reddit and the End of Open APIs
This sucks. Apollo has been my go-to Reddit reader, and I don’t want that to change, but come the end of the month, it’s happening whether I like it or not. If you’re not in the loop on Reddit’s API drama, the TL;DR is that Reddit moved from open, free APIs to a fee-based system that charges by the number of API calls you make. If this sounds familiar, it’s because Twitter went down a similar path, and many other open platforms have decided to shut the doors on open API access. The argument for why? OpenAI and other AI models are being trained on hordes of open Internet data and, of course, there’s the possibility of eking some revenue out of all the folks hooked on Reddit’s content.
As you might imagine, that approach makes the cost to run something like Apollo unsustainable.
Is Reddit wrong? Apps built for platforms like Twitter and Reddit are like symbiotic bacteria, but one organism is much more dependent on the other. As a platform, Reddit is about user-generated content, and as with Twitter or LinkedIn, it makes us feel like investors or partners in this whole social sharing experiment. But let’s be honest: revenue and control of the platform are what this is really about. If you don’t control the last mile, you can’t control how your consumers interact with you. You’re constantly limited in how you can advertise, how you personalize, and the ways you can generate revenue from your users.
Hey Reddit, when you fix the mobile and iPad apps, call Mindgrub. We make great mobile apps.
-
Open APIs
The idea of open APIs and access to platforms has become a surprisingly divisive thing. Like most stories in 2023, our story of APIs starts with Elon Musk and Twitter and the decision to shut down third-party app access.
Many, many, many folks were upset that Twitter would shut off access to Tweetbot or Twitterrific. These apps have been part of Twitter from the start, and one even inspired Twitter’s logo. To add insult to injury, this made us all collectively realize that Twitter’s mobile app is not great (call me, Mindgrub builds excellent apps). But Twitter didn’t just ban third-party apps; instead, it rate-limited API calls and implemented a new system that charges based on the number of API calls per month. The price tag was so hefty that, bit by bit, folks said nope.
Unrelated to Twitter, OpenAI blew through the doors of technology like the Kool-Aid Man. Whoever had AI on their 2023 bingo card deserves all the money. Products like DALL-E 2 and ChatGPT continue to blow our socks off, but then the deep, dark secrets of OpenAI and other AI platforms began to drip out.
These LLM (Large Language Model) systems need data, and when I say data, I mean all the data. The more you can feed the dang thing, the better; it’s like the plant in Little Shop of Horrors crying, “Feed me, Seymour!” Some of the best information came from the most open of sources: places like Twitter, Reddit, and Stack Overflow. These platforms are unique in having tons of experts who share advice or answer questions in the most open of forums.
Elon Musk and Twitter responded that this was exactly why they needed to lock down APIs and tweets: so eager AI training pipelines couldn’t consume this valuable data without paying the troll toll. Reddit and other sources of training data followed suit, and now we find ourselves come full circle.
Apollo, my preferred Reddit reader (and the only one with a pixel pet), finds itself facing the same issue as Tweetbot and Twitterrific: the Reddit APIs it relies on now cost too much.
I get it. I understand it. But sometimes I think back to the founding of our great Internet and a time when information was free and people linked for the love of linking. I guess that was the Internet’s ’70s, and today is a different time. Still, I can’t help but wonder if the hordes of people training models on the open Internet might find the reins getting pulled a little tighter. I also wonder if this just continues the trend of paywalls popping up everywhere.