Cloudflare just entered the AI monetization chat.
One of the core tensions with SEO and now AEO (AI Engine Optimization) is access. If you block bots from crawling your content, you lose visibility. But if you don’t block them, your content gets scraped, summarized, and served elsewhere without credit, clicks, or revenue.
For publishers like Reddit, recipe sites, and newsrooms, that’s not just a tech issue, it’s an existential one. Tools like Perplexity and ChatGPT summarize entire pages, cutting publishers out of the traffic (and ad revenue) loop.
Now Cloudflare’s testing a new play: charge the bots. Their private beta lets sites meter and monetize how AI tools crawl their content. It’s early, but it signals a bigger shift. The market’s looking for a middle ground between “open” and “owned.” And the real question is—who gets paid when AI learns from your work?
-
AI Workshop: Are You Ready for Answer Engine Optimization?
How to Make Your Content Discoverable by AI, Not Just Google
Session Description:
This interactive, 90-minute session is designed for marketers, content creators, and business professionals who want to future-proof their visibility in a world where generative AI—not Google—is answering the questions.
As tools like ChatGPT, Perplexity, and Claude become the first stop for users seeking answers, the way content is found, cited, and used has fundamentally changed. Traditional SEO is no longer enough. In this session, you’ll learn how answer engines work, what AI models prioritize when retrieving and generating content, and how to structure your website, documents, and knowledge bases to stay relevant in an AI-powered ecosystem.
We’ll walk through the shift from search to synthesis, show you how AI “reads” your content, and help you take the first steps in adopting Answer Engine Optimization (AEO) for your business or team.
What Attendees Are Saying:
- Excellent information in a simple format.
- It was the perfect beginner’s look at AI.
- It was great insight into the use of AI from a perspective beyond simply asking and answering questions.
- A masterclass by Jason Perry on customization of ChatGPT. The forward-thinking strategies discussed have opened new avenues for me to explore.
- The workshop was very informative.
- Very much enjoyed the presentation! Interesting to see the variety of AI products available.
What You’ll Learn:
- What AEO is and how it’s different from traditional SEO
- Why generative AI is reducing search traffic—and what that means for your content
- How AI models retrieve and synthesize answers using trained data or real-time tools
- The impact of robots.txt, training permissions, and data visibility on your brand
- What Claude, Perplexity, and ChatGPT actually prioritize when citing content
- How to structure your website, documents, and FAQs for AI comprehension
- Why now is the time to invest in AI-optimized content—and how to start
Instructor
Jason Michael Perry
Founder & Chief AI Officer, PerryLabs
Jason Michael Perry brings over 20 years of experience at the intersection of technology, innovation, and strategy. As the Founder and Chief AI Officer of PerryLabs, he helps teams and organizations confidently integrate AI into their everyday work without the fluff or overwhelm.
He’s the author of The AI Evolution and writes the weekly newsletter Thoughts on Tech & Things, where he breaks down complex tech topics with clarity, wit, and a practical lens.
Jason has advised Fortune 500s, government agencies, and nonprofits, and has taught 1,000+ professionals how to use AI to boost productivity, improve communication, and streamline decision-making.
He also serves as:
- Entrepreneur in Residence at the University of Baltimore
- Senior Advisor at the World Trade Center Institute
- Board Member at the Baltimore Symphony Orchestra
- Creator of AI in A Minor, an award-winning AI/music collaboration with AWS
Whether he’s speaking on stage or guiding a hands-on workshop, Jason is known for making AI accessible, engaging, and immediately useful.
-
A Week of Dueling AI Keynotes
Microsoft Build. Google I/O. One week, two keynotes, and a surprise plot twist from OpenAI. I flew to Seattle for Build, but the week quickly became about something bigger than just tool demos; it was a moment that clarified how fast the landscape is moving and how much is on the line.
For Microsoft, the mood behind the scenes is… complicated. Their internal AI division hasn’t had the impact some expected. And the OpenAI partnership—the crown jewel of their AI strategy—feels increasingly uneasy. OpenAI has gone from sidekick to wildcard. Faster releases, bolder moves, and a growing sense that Microsoft is no longer in the driver’s seat.
Google has its own tension. It still prints money through ads, but it just lost two major antitrust cases and is deep in the remedies stage, which could change the company forever. Meanwhile, the company is trying to reinvent itself around AI, even as its core business model (search + ads) starts to look shaky in a world where answers come from chat, not clicks.
Let’s start with Microsoft
The Build keynote focused squarely on developers and, more specifically, how AI can make them exponentially more powerful. This idea—AI as a multiplier for small, agile teams—is core to how I think about Vibe Teams. It’s not about replacing engineers. It’s about amplifying them. And this year, Microsoft leaned in hard.
One of the most exciting announcements was GitHub Copilot Agents. If you’ve played with tools like Claude Code or Lovable, you know how quickly AI is changing the way we write software. We’re moving from line-by-line coding to spec-driven development, where you define what the system should do, and agentic AI figures out how.
Copilot Agents takes that further. You can now assign an issue or bug ticket in GitHub to an AI agent. That agent will create a new branch, tackle the task, and submit a pull request when it’s done. You review the PR, suggest edits if needed, and decide whether to merge. No risk to your main codebase. No rogue commits. Just a smart collaborator who knows the rules of the repo.
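For the curious, the plumbing here is surprisingly mundane. Here’s a minimal sketch of handing work to an agent through GitHub’s standard issues API; the repo, issue number, and agent assignee name below are all placeholders I made up for illustration, not Microsoft’s documented values.

```python
# Hedged sketch: hand a GitHub issue to an AI agent by assigning it,
# using GitHub's standard REST assignees endpoint. AGENT_LOGIN is a
# placeholder -- check your repo's agent integration for the real name.
import os
import requests

OWNER, REPO, ISSUE = "acme", "storefront", 42  # hypothetical repo and issue
AGENT_LOGIN = "copilot-agent"                  # placeholder assignee login

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{ISSUE}/assignees",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={"assignees": [AGENT_LOGIN]},
    timeout=30,
)
resp.raise_for_status()
print(f"Issue #{ISSUE} assigned; the agent branches, works, and opens a PR.")
```

From there, the review loop is the familiar one: the agent’s pull request lands like any teammate’s, and merge rights stay with you.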
This isn’t just task automation—it’s the blueprint for how teams might work moving forward. Imagine a lead engineer writing specs and reviewing pull requests—not typing out every line of code but conducting an orchestra of agentic contributors. These agents aren’t sidekicks. They’re teammates. And they don’t need coffee breaks.
Sam Altman joined Satya Nadella remotely – another telling sign that their relationship is collaborative but increasingly arm’s-length. Satya reiterated Microsoft’s long view, and Sam echoed something I’ve said for a while now: “Today’s AI is the worst AI you’ll ever use.” That’s both a promise and a warning.
The next wave of announcements went deeper into the Microsoft stack. Copilot is being deeply embedded into Microsoft 365, supported by a new set of Copilot APIs and an Agent Toolkit. The goal? Create a marketplace of plug-and-play tools that expand what Copilot Studio agents can access. It’s not just about making Teams smarter – it’s about turning every Microsoft app into an environment agents can operate inside and build upon.
Microsoft also announced Copilot Tuning inside Copilot Studio – a major upgrade that lets companies bring in their own data, refine agent behavior, and customize AI tools for specific use cases. But the catch? These benefits are mostly for companies that are all-in on Microsoft. If your team uses Google Workspace or a bunch of best-in-breed tools, the ecosystem friction shows.
Azure AI Studio is also broadening its model support. While OpenAI remains the centerpiece, Microsoft is hedging its bets, adding support for Meta’s Llama, Hugging Face models, xAI’s Grok, and more. Azure is being positioned as the neutral ground—a place where you can bring your model and plug it into the Microsoft stack.
Now for the real standout: MCP.
The Model Context Protocol—originally developed by Anthropic—is the breakout standard of the year. It’s like USB-C for AI. A simple, universal way for agents to talk to tools, APIs, and even hardware. Microsoft is embedding MCP into Windows itself, turning the OS into an agent-aware system. Any app that registers with the Windows MCP registry becomes discoverable. An agent can see what’s installed, what actions are possible, and trigger tasks, from launching a design in Figma to removing a background in Paint.
This is more than RPA 2.0. It’s infrastructure for agentic computing.
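If you want a feel for how small the protocol’s surface area is, here’s a minimal MCP server sketch using the official Python SDK (`pip install mcp`). The `remove_background` tool is invented for illustration, riffing on the Paint example above; any MCP-aware agent could discover and call it.

```python
# Hedged sketch: a tiny MCP server exposing one tool via the official
# Python SDK's FastMCP helper. The tool body is a stub for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("image-utils")  # hypothetical server name

@mcp.tool()
def remove_background(image_path: str) -> str:
    """Pretend to strip the background from an image and return the new path."""
    # Real logic would call an image library; this stub just renames.
    return image_path.replace(".png", "-nobg.png")

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so a local agent can connect
```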
Microsoft also showed how this works with local development. With tools like Ollama and Windows Foundry, you can run local models, expose them to actions using MCP, and allow agents to reason in real-time. It’s a huge shift—one that positions Windows as an ideal foundation for building agentic applications for business.
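You don’t need Foundry to play with this today. Here’s a minimal sketch of talking to a local model through the Ollama Python client; the model tag is an assumption, so use whatever you’ve pulled with `ollama pull`.

```python
# Hedged sketch: query a locally running model through the Ollama
# Python client. The model tag is an assumption; swap in your own.
import ollama

response = ollama.chat(
    model="llama3.2",  # assumed local model tag
    messages=[{"role": "user", "content": "Summarize what MCP does in one line."}],
)
print(response["message"]["content"])
```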
The implication is clear: Microsoft wants to be the default environment for agent-enabled workflows. Not by owning every model, but by owning the operating system they live inside.
Build 2025 made one thing obvious: vibe coding is here to stay. And Microsoft is betting on developers, not just to keep pace with AI, but to define what working with AI looks like next.
Now Google
Where Build was developer-focused, Google I/O spoke to many audiences, sometimes pitching directly to end users and sometimes to developers, all while offering a peek at what an AI-powered future could look like inside the Google ecosystem. It was a broader, flashier stage, but still packed with signals about where they’re headed.
The show opened with cinematic flair: a vignette generated entirely by Flow, the new AI-powered video tool built on top of Veo 3. But this wasn’t just a demo of visual generation. Flow pairs Veo 3’s video modeling with native audio capabilities, meaning it can generate voiceovers, sound effects, and ambient noise, all with AI. And more importantly, it understands film language. Want a dolly zoom? A smash cut? A wide establishing shot with emotional music? If you can say it, Flow can probably generate it.
But Google’s bigger focus was context and utility.
Gemini 2.5 was the headliner, a major upgrade to Google’s flagship model, now positioned as their most advanced to date. This version is multimodal, supports longer context windows, and powers the majority of what was shown across demos and product launches. Google made it clear: Gemini 2.5 isn’t just powering experiments—it’s now the model behind Gmail, Docs, Calendar, Drive, and Android.
Gemini 2.5 and the new Google AI Studio offer a powerful development stack that rivals GitHub Copilot and Lovable. Developers can use prompts, code, and multi-modal inputs to build apps, with native support for MCP, enabling seamless interactions with third-party tools and services. This makes AI Studio a serious contender for building real-world, agentic software inside the Google ecosystem.
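To give a sense of that stack, here’s a minimal sketch of a Gemini call using Google’s `google-genai` Python SDK; the model id is an assumption, and agentic features like MCP tool use would layer on top of calls like this.

```python
# Hedged sketch: a first call against Gemini using Google's google-genai
# SDK (pip install google-genai). The model id is an assumption --
# substitute whichever Gemini 2.5 variant your account exposes.
from google import genai

client = genai.Client()  # reads the API key from the environment
response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model id
    contents="In two sentences, what is the Model Context Protocol?",
)
print(response.text)
```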
Google confirmed full MCP support in the Gemini SDK, aligning with Microsoft’s adoption and accelerating momentum behind the protocol. With both tech giants backing it, MCP is well on its way to becoming the USB-C of the agentic era.
And then there’s search.
Google is quietly testing an AI-first search experience that looks a lot like Perplexity – summarized answers, contextual follow-ups, and real-time data. But it’s not the default yet. That hesitation is telling: Google still makes most of its revenue from traditional search-based ads. They’re dipping their toes into disruption while trying not to tip the boat. That said, their advantage—access to deep, real-time data from Maps, Shopping, Flights, and more—is hard to match.
Project Astra offered one of the most compelling demos of the week. It’s Google’s vision for what an AI assistant can truly become – voice-native, video-aware, memory-enabled. In the clip, an agent helps someone repair a bike, looks up receipts in Gmail, makes unassisted phone calls to check a store’s inventory, reads instructions from PDFs, and even pauses naturally when interrupted. Was it real? Hard to say. But Google claims the same underlying tech will power upcoming features in Android and Gemini apps. Their goal is to graduate features from Astra as they evolve from showcase to shippable, moving beyond demos into the day-to-day.
Gemini Robotics hinted at what’s next, training AI to understand physical environments, manipulate objects, and act in the real world. It’s early, but it’s a step toward embodied robotic agents.
And then came Google’s XR glasses.
Not just the long-rumored VR headset with Samsung, but a surprise reveal: lightweight glasses built with Warby Parker. These aren’t just a reboot of Google Glass. They feature a heads-up display, live translation, and deep Gemini integration. That display can silently serve up directions, messages, or contextual cues, pushing them beyond Meta’s Ray-Bans, which remain audio-only. These are ambient, spatial, and persistent. You wear them, and the assistant moves with you.
Between Apple’s Vision Pro, Meta’s Orion prototypes, and now Google XR, one thing is clear: we’re heading into a post-keyboard world. The next interface isn’t a screen, it’s an environment. And Google’s betting that Gemini, which they say now leads the field in model performance, will be the AI to power it all.
And with XR glasses in the air, what better moment for Sam Altman to steal the show…
OpenAI and IO sitting in a tree…
Just as Microsoft and Google finished their keynotes, Sam Altman and Jony Ive dropped the week’s final curveball: OpenAI has acquired Ive’s AI hardware-focused startup, IO, for a reported $6.5 billion.
There were no specs, no images, and no product name. Just a vision. Altman said he took home a prototype, and it was enough to convince him this was the next step. The device has been described as something designed to “fix the faults of the iPhone”: less screen time, more ambient interaction. Rumors suggest it’s screenless, portable, and part of a family of devices built around voice, presence, and smart coordination.
In a week filled with agents, protocols, and assistant upgrades, the IO announcement begs the question:
What is the future of computing? Are Apple, Google, Meta, and so many other companies right to bet on glasses?
And if the answer isn’t the glasses, headsets, or wearables we’ve already seen, but something entirely new, what might the next interface to computing look like?
And with Ive on board, design won’t be an afterthought. This won’t be a dev kit in a clamshell. It’ll be beautiful. Personal. Probably weird in all the right ways.
So where does that leave us?
AI isn’t just getting smarter—it’s getting physical.
Agents are learning to talk to software through MCP. Assistants are learning your context across calendars, emails, and docs. Models are learning to see and act in the world around them. And now hardware is joining the party.
We’re entering an era where the tools won’t just be on your desktop—they’ll surround you. Support you. Sometimes, speak before you do. That’s exciting. It’s also unsettling. Because as much as this future feels inevitable, it’s still up for grabs.
The question isn’t whether agentic AI is coming. It’s who you’ll trust to build the agent that stands beside you.
Next up: WWDC on June 10. Apple has some catching up to do. And then re:Invent later this year.
-
Cloudflare Gives Creators Control Over AI Crawlers
Let’s face it—robots.txt wasn’t designed for the age of AI crawlers, which are ravenously consuming content across the web. For creators, it’s tough to swallow that their hard work is being used, often for free, to train AI models.
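For context, the honor-system approach we’ve had until now looks like this: a robots.txt sketch listing a few publicly documented AI crawler user agents (OpenAI’s GPTBot, Anthropic’s ClaudeBot, Perplexity’s PerplexityBot). Nothing about it is enforced; a crawler can simply ignore it, which is exactly the gap Cloudflare is stepping into.

```
# robots.txt -- a voluntary request, not an enforcement mechanism
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

# Everyone else, including traditional search engines, stays welcome
User-agent: *
Allow: /
```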
Cloudflare’s latest feature now allows websites to block AI models or bots with a simple click. If you’ve ever had to prove you’re human before accessing a site, you’ve already met part of the toolkit Cloudflare is offering publishers, sparing them the constant battle of restricting access by hand.
While this might be a win for creators in the short term, there’s a lingering question: Will limiting access to AI crawlers make it harder for your content to be found in AI-powered answer engines like Perplexity AI? Only time will tell, but for now, the choice is yours.
-
OpenAI Just Released Search!
I’m surprised it took so long. After all, OpenAI’s models already help power Microsoft’s Bing, so in some ways, the company has been in the search game from nearly the start.
What’s interesting is that OpenAI’s approach is less like Bing and Google’s AI Overviews and more like Perplexity AI—my favorite new search tool in years. This is a good thing, changing our relationship with search from a list of results that may hold the answer to our questions, to actual responses that you can drill into with additional questions.
For access, you need to join a waitlist, and I’m on it, so I can’t kick the tires just yet. OpenAI expects to integrate search into ChatGPT in the long term rather than maintaining them as separate products.
This means the competition in search is heating up for Google—and so far, their attempts to add AI to search have been lacking.
-
Yelp Seizes the Moment After Google’s Antitrust Defeat
In the wake of Google’s recent antitrust loss, it’s clear that Yelp smells blood in the water. Jeremy Stoppelman, Yelp’s CEO, recently penned a blog post announcing that Yelp is suing Google, accusing it of being a monopoly that unfairly suppresses local search results.
Stoppelman makes a compelling case, arguing that Google has been propping up what Yelp calls an inferior local search product to capture more search traffic within its own ecosystem—something widely known as “zero-click search.”
As I’ve pointed out in my newsletter, this couldn’t come at a worse time for Google. For the first time, competitors like OpenAI and Perplexity AI see a path to challenge Google’s dominance in search. But AI-driven search is a different beast, something I’ve referred to as “answer engines.” Unlike traditional search, these tools don’t provide a list of links or drive traffic to the sources they pull from; instead, they deliver direct answers, posing a new kind of threat to Google’s search empire.
-
Is Apple About to Buy an Answer Engine?
What do you do when $20 billion in revenue might vanish thanks to Google’s looming antitrust fallout?
You buy the best damn answer engine on the block.
Perplexity is already my favorite AI search tool—fast, smart, and actually useful. Imagine it embedded deep into Apple’s many operating systems. A real-time answer engine that could make Siri useful and launch a day-one Google Search competitor. If this happens, it might be Apple’s smartest acquisition in years.
-
Bye SEO, and Hello AEO
If you caught my recent LinkedIn post, I’ve been sounding the alarm on SEO and search’s fading dominance. Not because it’s irrelevant, but because the game is changing fast.
For years, SEO (Search Engine Optimization) has been the foundation of digital discovery. But we’re entering the age of Google Zero—a world where fewer clicks make it past the search results page. Google’s tools (Maps, embedded widgets, AI Overviews) are now hogging the spotlight. And here’s the latest signal: In April, Apple’s Eddy Cue said that Safari saw its first-ever drop in search queries via the URL bar. That’s huge. Safari is the default browser for iPhones and commands over half of U.S. mobile browser traffic. A dip here means a real shift in how people are asking questions.
I’ve felt it in my habits. I still use Google, but I’ve started using Perplexity, ChatGPT, or Claude to ask my questions. It’s not about keywords anymore, it’s about answers. That brings us to a rising idea: AEO — Answer Engine Optimization.
Just like SEO helped businesses get found by Google, AEO is about getting found by AI. Tools like Perplexity and ChatGPT now crawl the open web to synthesize responses. If your content isn’t surfacing in that layer, you’re invisible to the next generation of search.
It’s not perfect—yet. For something like a recipe, the AI might not cite you at all. But for anything involving a recommendation or purchase decision, it matters a lot.
Take this example: I was recently looking for alternatives to QuickBooks. In the past, I’d Google it and skim through some SEO-packed roundup articles. Now? I start with Perplexity or ChatGPT. Both gave me actual product suggestions, citing sources from review sites, Reddit threads, and open web content. The experience felt more tailored. More direct.
If you sell anything, whether it’s a SaaS product, a service, or a physical item, this is the new front door. It’s not just about ranking on Google anymore. It’s about being visible to the large language models that shape what users see when they ask.
So, you’re probably asking: how do you optimize for an answer engine? The truth is, the rules are still emerging. But here’s what we know so far:
• Perplexity leans on Bing. It uses Microsoft’s search infrastructure in the background. So your Bing SEO might matter more than you think.
• Sources are visible. Perplexity shows where it pulled info from—Reddit, Clutch, Bench, review sites, etc. If your product is listed or mentioned there, you’ve got a shot.
• Wikipedia still rules. Most AI models treat it as a trusted source. If your business isn’t listed—or your page is thin—you’re missing an easy credibility signal.
But the biggest move you can make?
Start asking AI tools what they know about you. Try it. Ask ChatGPT or Perplexity: “What are the top alternatives to [your product]?” or “What is [your business] known for?” See what surfaces. That answer tells you what the AI thinks is true. And just like with Google, you can shape that reality by shaping the sources it learns from.
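If you want to make that audit repeatable, here’s a minimal sketch using OpenAI’s Python SDK; the model id and product name are placeholders. Run the same prompts monthly and you’ll see your answer-engine footprint shift as your sources improve.

```python
# Hedged sketch: periodically ask a model what it "knows" about your brand.
# Model id and product name are placeholders; swap in your own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PRODUCT = "Acme Ledger"  # hypothetical product name

prompts = [
    f"What are the top alternatives to {PRODUCT}?",
    f"What is {PRODUCT} known for?",
]

for prompt in prompts:
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model id
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Q: {prompt}\nA: {reply.choices[0].message.content}\n")
```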
This shift won’t happen overnight. But it’s already happening.
Don’t just optimize for search. Optimize for answers.