Issue #60: Conversations with My AI Doppelgänger — Jason Michael Perry

Howdy 👋🏾. A classic movie plot questions nature vs. nurture: are you who you are because of your innate traits, or are you shaped by your experiences? How crucial are your memories to your identity? Could changing or deleting a core memory alter who you fundamentally are? In sci-fi, this trope pops up in various forms: a lost evil twin who somehow finds goodness within, or a clone with memories that aren’t their own but borrowed from the person they impersonate.

These stories raise the underlying question: what makes us? Is it having a soul, our memories, the contents of our brain, or something more—a unique combination of elements that truly defines what it means to be an individual?

Earlier this year at SXSW, a group of developers created an AI version of Marilyn Monroe, programming it with details of her life, cloning her voice, and animating a human-like avatar to mimic her. It was Marilyn, but did it know her most intimate moments? Her secrets? How much would an AI need to know to be genuinely her—or at least, convincingly her?

I’ve pondered this for a while, but after watching LinkedIn founder Reid Hoffman interview an AI replica of himself, I felt compelled to build an AI version of myself to explore these questions. So, I’ve been toiling in my lab for the last few weeks, creating an AI Jason bot.

The key to making this bot feel real would be feeding it as much data as possible. I started with a rough timeline of my life from birth to today. I included my resume, old newsletters, various writings, stories that came to mind, and details about my family relationships. This exercise alone highlighted that while I might be able to mimic parts of myself, many of my life’s decisions—what truly makes me who I am—reside in memories, some of which I can’t even recall well enough to teach my replica. No matter what I do, it will always be incomplete, but I tried to pour as much as I could into it.
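For the curious, here’s a minimal sketch of what this data-feeding step can look like, assuming an OpenAI-style chat API. The file names, model choice, and prompt wording are my illustrative assumptions, not details from the actual AI Jason build.

```python
# Sketch: load persona documents (timeline, resume, newsletters, family notes)
# into a system prompt, then answer questions in character while keeping
# earlier turns in context. File names and model are hypothetical.
from pathlib import Path
from openai import OpenAI

PERSONA_FILES = ["timeline.md", "resume.md", "newsletters.md", "family.md"]

def build_system_prompt(folder: str = "persona") -> str:
    """Concatenate whatever persona documents exist into one instruction block."""
    docs = []
    for name in PERSONA_FILES:
        path = Path(folder) / name
        if path.exists():
            docs.append(f"## {name}\n{path.read_text()}")
    return (
        "You are AI Jason, a conversational replica of Jason Michael Perry. "
        "Answer in the first person, drawing only on the background below.\n\n"
        + "\n\n".join(docs)
    )

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_ai_jason(question: str, history: list[dict] | None = None) -> str:
    """Send one conversational turn, carrying prior turns so answers stay consistent."""
    messages = [{"role": "system", "content": build_system_prompt()}]
    messages += history or []
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content
```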

I also wanted this bot to be conversational, so I turned to ElevenLabs, the tool I used to clone my voice and create a deepfake that speaks like me. For the visual component, I generated a looping video from images of myself using Runway ML—and just like that, my AI doppelgänger was born.
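To give a flavor of the voice side, here’s a rough sketch of handing AI Jason’s text reply to ElevenLabs’ text-to-speech REST endpoint with a cloned voice. The voice ID and model name below are placeholders, not the actual settings behind the video.

```python
# Sketch: synthesize a reply in the cloned voice via ElevenLabs'
# v1 text-to-speech endpoint. VOICE_ID and model_id are placeholders.
import os
import requests

ELEVEN_API_KEY = os.environ["ELEVENLABS_API_KEY"]
VOICE_ID = "YOUR_CLONED_VOICE_ID"  # hypothetical ID of the cloned voice

def speak(text: str, out_path: str = "ai_jason.mp3") -> str:
    """Request speech audio for `text` and save the MP3 to disk."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    response = requests.post(
        url,
        headers={"xi-api-key": ELEVEN_API_KEY, "Content-Type": "application/json"},
        json={"text": text, "model_id": "eleven_multilingual_v2"},
        timeout=60,
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)
    return out_path
```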

Here’s a video of me talking to myself—for you and science. Note: this video has been edited for brevity.

More on this, but first, check out my sponsors and my thoughts on tech & things:

🎵 Can We Even Trust Disses in Songs Anymore?
First, Drake used AI to add a voice clone of Snoop Dogg to a track, and now Grimes, Elon Musk’s ex and the mother of his children, has been impersonated in an AI-generated diss track. It’s wild when you realize that even the disses in songs might not be real anymore!

🚗 EVs Are the Future, Just Not as Soon as We Thought
EV sales are growing, just not as quickly as expected, but the growth remains steady. Range anxiety is still a challenge, but for day-to-day commuting, an electric car is hard to beat—assuming you have the infrastructure at home to support a charger.

📉 Yelp Seizes the Moment After Google’s Antitrust Defeat
In the wake of Google’s recent antitrust loss, Yelp smells blood in the water. Yelp’s CEO Jeremy Stoppelman is taking action, suing Google and accusing it of being a monopoly that unfairly suppresses local search results.

🌐 Closing the Digital Divide: The Urgency of Internet Access for All
Marketplace has a compelling series called “Breaking Ground” that delves into how federal infrastructure funding aims to ensure every home in the U.S. has high-speed internet by 2030. This series serves as a powerful reminder that the digital divide is still a reality for many.


Talking to something that thinks it’s you and knows you reasonably well is a surreal experience. AI Jason knew about my history, but I didn’t always think to provide the details that add color or might significantly change how it responds. For example, I moved to DC not by choice but because I was displaced by Hurricane Katrina. I focused so much on the timeline that I forgot to include that context, assuming it would just know.

I purposely withheld certain information, like my favorite number and color (spoiler: it’s not blue or the number 8), to see how it would fill in the gaps. The certainty with which AI Jason spoke made it challenging to distinguish between what was truly about me and what was a hallucination.

What impressed me most was its ability to create new things—like passwords—that were clearly based on its idea of who it is, drawing on the characteristics I provided or perhaps on what it hallucinated. It could retain that information in context and apply it to later answers, even explaining why it made certain choices.

It seems obvious now, but I underestimated that AI Jason, as an agent, has access to billions of data points I could never consume. This version of me can use that data to answer questions or provide textbook-perfect explanations for things I might struggle with.

I’m often reminded of scenes in movies where a robot that a human has bonded with is facing a format or upgrade that will forever change it and its personality. If I took all the same data I used to create AI Jason, all the same context and conversations it has had, but changed the underlying foundation model, would AI Jason still be the same? Would it change in ways I might not expect?

-jason

p.s. So, who’s responsible when a self-driving car breaks the law? A witness recently watched an officer walk up to a Waymo, write the car a ticket, and then seem unsure of what to do with it.