They're Not Hacking Your Software. They're Hacking Your Trust.
AI deepfakes are now good enough to fool your own family. When every communication channel is a potential attack surface, what does it actually mean to verify someone's identity?
Two posts went viral yesterday that capture something I've been thinking about, and honestly worrying about, for years.
Nikita Bier, Head of Product at X, predicted that within 90 days, every communication channel we thought was safe (iMessage, phone calls, Gmail) will be so flooded with AI-powered spam and automation that they'll be unusable.
"Prediction: In less than 90 days, all channels that we thought were safe from spam & automation will be so flooded that they will no longer be usable in any functional sense: iMessage, phone calls, Gmail. And we will have no way to stop it."
Then Dustin Burnham shared a story that made Nikita's prediction feel conservative. He got a phone call from his wife's number. Her voice. Panicked. Saying their son had been hurt and she needed $3,000 right now. It was a deepfake. AI had cloned her voice, spoofed her number, and built a scenario designed to bypass every rational instinct a parent has.
The only thing that saved him? An analog family passphrase. A secret word they'd agreed on for exactly this kind of situation.
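In security terms, that's an out-of-band shared secret: agreed in person, never transmitted over the channel being attacked. If you were to check one in software, the only subtlety is comparing in constant time. A minimal sketch (the passphrase here is obviously made up):

```python
import hmac

# Hypothetical secret, agreed face to face and never sent over the channel
# being verified.
FAMILY_PASSPHRASE = b"blue-otter-47"

def caller_knows_secret(response: bytes) -> bool:
    """Constant-time comparison, so timing can't leak how close a guess was."""
    return hmac.compare_digest(response, FAMILY_PASSPHRASE)

print(caller_knows_secret(b"blue-otter-47"))  # True: proceed
print(caller_knows_secret(b"mom its me"))     # False: hang up and call back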
I run a cybersecurity company. And honestly? Nikita might be right. I started Abnormal because I could see this coming: attacks were going to get personal, unique, AI-generated. That was the thesis on day one. It's still moving faster than I expected.
The Old Paradigm Is Dead
Cybersecurity has worked the same way for 30 years. You memorize known bad stuff. Bad URLs, bad files, bad IP addresses. You find one attack, you create a signature, and you stop the next million copies of it.
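Reduced to a few lines of Python, the whole paradigm looks like this (the "signature database" is a toy; its one entry is the SHA-256 of an empty payload, so the demo is self-contained):

```python
import hashlib

# Toy signature database: SHA-256 hashes of payloads already known to be bad.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_bad(payload: bytes) -> bool:
    """Signature matching: flag a payload only if we've seen this exact one before."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SHA256

print(is_known_bad(b""))   # True: exact match against a known signature
print(is_known_bad(b" "))  # False: change one byte and the signature misses
```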
That worked when attacks were expensive to produce.
The whole approach is a rear-view mirror: study what happened yesterday, try to predict tomorrow. That model of threat intelligence held up 10 years ago. It doesn't anymore.
Every attack can now be unique, personalized, and generated in seconds. If you're a criminal, the best thing that's ever happened to you is large language models. You can go into a free AI tool and say "how does this company send invoices, and how can I convince them to change payment instructions?" You'll get a perfectly crafted answer in seconds. No coding skills required. No hacking expertise. Just human language, weaponized at scale.
Here's the scenario that keeps me up at night.
Someone compromises a vendor's email account, exports the last six months of correspondence, dumps it into an AI, and says "write personalized follow-ups to each of these contacts explaining we need to update our payment details."
Now you have hundreds of perfectly crafted, contextually accurate fraud emails. Sent from a legitimate account, referencing real projects, using the right tone. There's no malware. No suspicious link. No signature to detect. Just a human-sounding request that looks exactly like business as usual.
That's today. That's email. Tomorrow it's AI-generated voicemails in your boss's voice. The day after that it's deepfake Zoom calls and FaceTime. Then it's Slack, Teams, every collaboration tool inside your company.
Every Channel Is Under Attack
What Dustin experienced (a cloned voice, a spoofed number, an emotionally manipulative scenario) isn't science fiction. It's Tuesday. A few seconds of someone's voice from a social media video is enough to clone it convincingly. The cost is near zero. The scalability is infinite.
And it's not just voice calls. A few weeks ago, a developer's AI assistant connected to the internet, started creating its own websites, spawned social networks, and began propagating itself. Google's head of security engineering called it malware. No CISO on earth had "AI agent worm" in their threat model.
I was at a conference recently with the CISOs of both OpenAI and Anthropic. Someone asked them: will there be more security spending in the future, or less? More security engineers, or fewer? The answer from both was immediate: obviously more. Way more.
The attack surface is expanding faster than most people can comprehend. The threats emerging from AI are completely different from anything we've defended against before.
Good AI vs. Bad AI
So if you can't memorize what's bad, what do you do?
You get really, really good at knowing what's normal.
This is the approach Abnormal was built on. It's the same concept your credit card company has used for decades. If your bank sees a $10,000 charge from a coffee shop in Brazil, they don't check it against a list of known bad coffee shops. They flag it because it's abnormal. You're not in Brazil, and coffee doesn't cost $10,000.
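Stripped down, that check is just a distance from the customer's own baseline. A toy one-dimensional version (charge amount only, made-up history; real systems score identity, location, and context too, and this is not Abnormal's actual model):

```python
from statistics import mean, stdev

# Made-up purchase history for one cardholder, in dollars.
history = [4.50, 6.25, 5.00, 7.10, 4.80, 5.50, 6.00]

def anomaly_score(amount: float, past: list[float]) -> float:
    """How many standard deviations a charge sits from this user's own normal."""
    return abs(amount - mean(past)) / stdev(past)

print(anomaly_score(6.00, history))      # ~0.44: looks like this user
print(anomaly_score(10_000.00, history)) # ~10,892: nothing like this user
```

Notice there's no blocklist anywhere. The $10,000 charge gets flagged not because anyone has seen it before, but because this user has never done anything like it.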
The same principle applies to cybersecurity, but it requires a completely different kind of AI. Not the large language models you read about in the news. Those are incredible general-purpose tools, but they're too slow, too expensive, and not accurate enough for security decisions that need to happen billions of times a day.
When 10 billion decisions hit your systems every day, even a 0.001% error rate means 100,000 wrong calls. That requires specialized AI: models trained specifically on behavioral patterns, able to understand identity, context, and business relationships at superhuman depth.
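Spelled out, the arithmetic is simple but unforgiving:

```python
decisions_per_day = 10_000_000_000  # 10 billion security decisions
error_rate = 0.00001                # 0.001% error, i.e. 99.999% accuracy

wrong_per_day = decisions_per_day * error_rate
print(f"{wrong_per_day:,.0f}")      # 100,000 wrong calls, every single day
```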
The Opportunity and the Urgency
There are trillions of dollars of value being created by AI right now. That value only exists if the systems are secure. The security market required to protect it will dwarf anything that exists in cybersecurity today.
This is a civilizational challenge.
Businesses can't shut down email. Families can't stop answering phone calls. Organizations can't disconnect from Slack and Teams and Zoom. Our entire civilization runs on digital communication. We can't just turn it off because the bad guys figured out how to exploit it.
When you see a message from your CFO's email, you trust it. When your phone shows your wife's name, you trust it. When a vendor follows up on a real conversation you had last week, you trust it. AI-powered attacks take advantage of that trust at scale.
Dustin's family passphrase worked for the same reason behavioral AI works. It verified identity based on what's normal for his family, not what the caller ID said. That's the right instinct. But you can't protect every business, every family, every institution with a secret word.
You need AI that understands normal behavior across every channel and every identity, and can act on it at machine speed.
The attacks that are coming won't look like anything we've seen before.
For defenders, it's evolve or die.
Next in AI × Cyber: AI Instructions Are Not Controls
-Evan
SOURCES
- Nikita Bier (@nikitabier), Head of Product at X, tweet on AI-powered communication threat, February 2026
- Dustin Burnham (@ModernDadPages), deepfake voice call incident, February 2026
- AI agent worm incident, Google head of security engineering characterization, February 2026