The Fraud Risk No One’s Ready For
I agree with Sam Altman: AI-powered deepfakes and voice cloning are among the most alarming risks we face right now.
Here in Baltimore, we’ve already seen how real this can get. Last year, a Pikesville high school principal was accused of making racist comments—until it came out that the entire recording was a voice clone. After that, I experimented with cloning my own voice and doctoring a video of myself. It wasn’t perfect, but it was close enough to be unsettling.
Systems that rely on voice to verify a customer's identity just aren't safe anymore. Tools like ElevenLabs have made cloning voices frighteningly easy, and if you've ever said a few words online or been recorded at an event, chances are you've given someone enough data to copy you.
Scammers are already using these tools in the wild, and combined with the mountain of personal info already floating around online, these attacks are only getting harder to detect. Security training used to teach us that phishing emails were easy to spot: look for misspellings, weird tone, bad grammar. But now? The grammar's perfect. The details are specific. And the sender sounds like your boss, your partner, or your kid.
So what can you do? Start with the basics. And remember, it’s not just about locking down your own account. Train your team. Talk to your family. Assume your voice and your data are already out there, and focus on making it harder for attackers to weaponize them.
Use multi-factor authentication. Slow down before responding to anything unusual. And always, always verify through a second channel if something feels off. No one's going to get upset if you hang up and call them back, or shoot them a text to double-check. That extra step might be the thing that saves you from all this mess.