
The FBI Says AI Scams Cost Americans $16 Billion Last Year. Here's How Not to Be Next.

a-gnt Community · 11 min read

Voice cloning, deepfake video calls, hyper-personalized phishing — the FBI's April 2026 IC3 report is alarming. Concrete steps real people can take right now.


Your mom picks up the phone on a Tuesday afternoon. Your voice — your actual voice, the one she's known for thirty-seven years — says you've been in a car accident, you're at the hospital, and you need her to wire $4,000 to cover the deductible before they'll treat you. She can hear the panic. She can hear the tremor. She sends the money within nine minutes.

You were at your desk the entire time, eating a sandwich.

That call cost three seconds of audio scraped from a birthday video on Instagram and about forty cents of computing power. It is the most common shape of the new AI scam economy, and according to the FBI's Internet Crime Complaint Center report released this April, Americans lost $16.6 billion to online fraud in 2025 — more than they lost to burglary, motor vehicle theft, and shoplifting combined. A staggering portion of that figure traces back to schemes powered or enhanced by artificial intelligence.

This isn't a story about some distant future. This is what happened last year, to real people, in every state. And the uncomfortable truth is that the old advice — "just look for typos" or "don't click suspicious links" — doesn't work anymore. The scams got smarter faster than the advice did.

So here's the new advice. Five habits. None of them require technical skill. All of them work.

But first, you need to understand what you're actually up against.

The voice on the phone is not a voice anymore

Voice cloning used to require hours of recorded speech and a dedicated sound engineer. Now it needs a clip shorter than a TikTok. Three seconds is enough. Ten seconds produces something nearly indistinguishable from the original, including breathing patterns and the little vocal fry at the end of sentences.

The grandparent scam — where someone calls a senior citizen pretending to be a grandchild in distress — has existed for decades. What's changed is that it no longer sounds like a stranger reading a script. It sounds like family. The FBI report notes that Americans over 60 lost more than $4.8 billion to fraud in 2025, and elder fraud complaints rose 46% in a single year. Voice cloning didn't cause all of that, but it turned a clumsy grift into something precise and devastating.

Here's what the call typically looks like: the cloned voice opens with urgency ("I've been arrested," "I'm in the hospital," "My car went off the road"), asks for money through a channel that's hard to reverse (wire transfer, gift cards, cryptocurrency), and pressures immediate action ("please don't tell Dad," "I only have five minutes"). The emotional architecture is simple — panic suppresses skepticism.

And it's not just targeting grandparents. Business executives have received calls from cloned voices of their CEOs authorizing emergency wire transfers. One case in the IC3 report involved a CFO who transferred $900,000 after a cloned voice call was followed up with a spoofed email. The voice confirmed the email. The email confirmed the voice. Both were fake.

The face on the screen might not be a face either

If the voice clone is the scalpel, the deepfake video call is the sledgehammer.

In February 2024, a finance worker at a multinational firm joined a video call with what appeared to be the company's chief financial officer and several other colleagues. Everyone looked real. Everyone sounded real. The worker authorized $25 million in transfers. Every person on that call except the worker was an AI-generated deepfake, running in real time.

That case, reported by Hong Kong police, felt like science fiction a year ago. By 2025, the tools to produce real-time deepfake video had dropped from research-lab prototypes to consumer-grade software costing less than a streaming subscription. The IC3 report flags a sharp increase in "business impersonation via synthetic media" — bureaucratic language for someone wearing your boss's face on a Zoom call.

Romance scams have adopted the same technology. Where catfishers once relied on stolen photos and careful typing, some now conduct weeks of video calls using AI-generated faces synced to their own voice (or a cloned one). Victims describe being certain they were looking at a real person. They were certain because the technology is, frankly, very good.

The email that doesn't smell wrong

Remember when phishing emails had obvious tells? Bad grammar. Weird formatting. "Dear Valued Customer" from a bank you've never used. A return address like security-alert-paypa1@mail.ru.

AI killed the obvious tell.

Large language models write flawless English (or flawless Spanish, or flawless Mandarin — the scam scales across languages now). They match the tone and formatting of whatever institution they're impersonating. They pull in details from data breaches — your real address, the last four digits of your card, the name of your actual bank — and weave them into messages that pass every smell test a normal person can run.

The IC3 report describes a phishing campaign that impersonated a major health insurance provider during open enrollment season. The emails included the recipient's correct plan tier, their employer's name, and a link to a site that was a pixel-perfect copy of the real portal. Thousands of people entered their Social Security numbers. The emails had been generated and personalized by AI at a rate of roughly 80,000 per hour.

This is the part that's hard to accept: you cannot reliably spot an AI-generated phishing email by reading it carefully. The signals that used to work — awkward phrasing, generic greetings, urgency without specifics — have been patched out. The emails are not written by humans making mistakes. They're written by machines that don't make those kinds of mistakes.

If you've ever wanted to test whether a piece of text was likely AI-generated, tools like AI Writing Detection exist for exactly that purpose — they cross-check text against multiple detection engines at once. Not foolproof, but a useful second opinion when something in your inbox feels off despite looking right.

What doesn't work anymore

Before the five habits, a brief funeral for the advice that's expired:

"Look for typos and bad grammar." Dead. AI writes better prose than most humans. The scam email from 2025 reads better than the legitimate email from your actual bank.

"Check the sender's email address." Dying. Email spoofing is trivial, and many phishing campaigns now use compromised legitimate accounts. The email comes from sarah.chen@realcompany.com because Sarah's account was breached two weeks ago.

"Don't click links from strangers." Insufficient. Many attacks now come from people you know — or appear to — via compromised accounts, cloned identities, or social media impersonation.

"If it sounds too good to be true, it probably is." Still decent advice, but the new scams don't promise anything too good. They promise something plausible and urgent. A tax refund of $847. A package that couldn't be delivered. A security alert about your account. Nothing outlandish. Everything normal-sized.

"Just use common sense." The cruelest advice of all, because it implies the victims lacked sense. They didn't. They encountered technology specifically designed to defeat common sense. Telling someone to use common sense against a real-time deepfake is like telling them to use common sense against a counterfeit bill that's chemically identical to the real thing.

Five habits that actually work

These are not complicated. They don't require software, subscriptions, or a degree in anything. They work because they route around the parts of your brain that AI scams are designed to exploit.

1. Establish a family code word

Pick a word or short phrase that only your family knows. Something unusual enough that no one would guess it but easy enough to remember. "Purple Thursday." "Cinnamon lighthouse." Whatever you want.

The rule: if anyone calls claiming to be a family member and asking for money, help, or sensitive information, they must say the code word. No code word, no action — no matter how real the voice sounds, no matter how urgent the story.

This single habit defeats the entire voice-cloning attack chain. The AI can clone a voice. It cannot clone a secret.

Have this conversation at the next family dinner. Include the grandparents. Include the teenagers. Write the code word on a card and put it in their wallets. It's the single most protective habit in this entire article.

2. Verify through a different channel

If your bank emails you about suspicious activity, don't click the link in the email. Open a new browser tab and type your bank's URL yourself. Or call the number on the back of your card. Not the number in the email. The number on the physical card in your hand.

If your boss calls and asks you to wire money urgently, hang up and call them back on the number you already have saved. If a friend texts asking for an emergency loan, call them. If the IRS sends you a letter, call the IRS at the number on irs.gov, not the number in the letter.

The principle is simple: never verify a message using information contained in that message. Always verify through a channel you control.

This breaks the reinforcement loop — the pattern where a scammer sends a fake email and then follows up with a fake phone call that "confirms" it. Each fake message validates the other. Switching to a channel the attacker doesn't control collapses the whole structure.

3. Slow down on purpose

Every AI scam has a timer. "Act within 24 hours." "Your account will be locked." "They're about to take me to booking." The urgency is the weapon. Not the fake voice, not the fake email — the urgency. Everything else is just set dressing to make the urgency feel real.

So build a personal policy: any request involving money, passwords, or personal information gets a mandatory pause. Fifteen minutes. Long enough to call someone. Long enough to think. Long enough for the adrenaline to recede and the frontal cortex to come back online.

No legitimate institution will punish you for taking fifteen minutes to verify. Your bank will not close your account because you waited. The IRS will not arrest you because you paused. If the person on the phone says you can't wait — that you must act right now — that's not a red flag. That's the whole scam.

4. Lock down your voiceprint and your face

The raw material for voice clones comes from publicly available audio: social media videos, podcast appearances, conference talks, voicemail greetings. You can't scrub the internet, but you can reduce the supply.

Set your social media accounts to private, or at minimum, restrict who can see videos where you're speaking. Audit your voicemail greeting — a generic "please leave a message" is harder to clone than one where you say your full name and fifteen words of natural speech. If you record podcasts or give talks, know that you're expanding your cloneable surface. That's not a reason to stop — it's a reason to have a family code word.

For video: the same deepfake tools that can animate a face from a photo get dramatically better with more footage. A dozen public photos is enough for a basic fake. Hundreds of tagged photos across social media is enough for a convincing one.

If you're curious about what AI can and can't do with digital media, TinaMind AI is a privacy-focused browser assistant that can help you understand what you're exposing as you browse — without sending your data somewhere else in the process.

5. Turn on two-factor authentication everywhere

This is the only habit on this list that involves touching a setting on your phone, and it takes about ninety seconds per account.

Two-factor authentication (2FA) means that even if a scammer gets your password — through phishing, a data breach, or brute force — they still can't log into your account without the second factor, which is usually a six-digit code from an app on your phone.

Turn it on for email first. Your email account is the skeleton key — if someone controls your email, they can reset the password on almost everything else. Then your bank. Then social media.

Use an authenticator app (Google Authenticator, Authy, or the one built into your phone), not SMS codes. SIM-swapping — where a scammer convinces your carrier to transfer your phone number to their device — makes SMS codes unreliable. An authenticator app lives on your physical phone and can't be swapped.
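For the curious, here's roughly what an authenticator app does under the hood. This is a minimal sketch of the standard time-based one-time password (TOTP) scheme (RFC 6238), written with Python's standard library and a made-up example secret. The point is simply that the six-digit code is computed from a secret stored on your device plus the current time, with no text message, phone number, or network involved.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_base32: str, digits: int = 6, period: int = 30) -> str:
    """Compute the current TOTP code from a shared secret (RFC 6238)."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // period          # which 30-second window we're in
    msg = struct.pack(">Q", counter)              # counter as an 8-byte big-endian integer
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical example secret -- in practice this is the QR code you scan when enabling 2FA.
print(totp("JBSWY3DPEHPK3PXP"))   # prints something like "492039", valid for 30 seconds
```

That's also why SIM-swapping doesn't touch it: there's nothing traveling over the phone network for an attacker to intercept.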

If you need help walking through any of this, 🫖 The Patient Tech Guide on a-gnt exists specifically for people who want technology explained without jargon, without condescension, and without hurry. It's built for exactly this kind of setup.

The conversation you need to have this weekend

The people most vulnerable to AI scams are not the people reading this article. They're the people you'll forward it to — a parent who still answers calls from unknown numbers, an uncle who clicks every link, a grandparent who would do anything for a grandchild in distress and has the savings to prove it.

The FBI's report makes clear that Americans over 60 bear a wildly disproportionate share of these losses. Not because they're naive. Because the scams are specifically calibrated to exploit the instincts that make them good people — generosity, trust, love for their families.

So have the conversation. Bring up the code word at dinner. Help them set up 2FA on their email — do it together, on the couch, with their phone in their hands. Explain that a voice on the phone can be faked now, the way a photo can be faked, and that this isn't science fiction but the reason the FBI wrote a report about it.

Don't lecture. Don't scare. Just give them the five habits, and make the code word something that'll make them laugh.

This is the thing about AI scams: the technology is sophisticated, but the defense is not. A code word. A callback. A pause. A locked-down profile. A six-digit code on your phone. None of it requires understanding how neural networks work or what a large language model is. It requires the same thing it's always required — the willingness to slow down when everything is screaming at you to speed up.

The FBI says $16.6 billion. That's the number that made the headline. But every dollar of that was a person. A retired teacher who lost her savings to a cloned grandchild's voice. A small-business owner who wired money to a deepfaked partner. A college student who entered their Social Security number on a site that looked exactly right.

They weren't careless. They were targeted by the most sophisticated social engineering tools ever built, wielded by people who do this professionally, at scale.

The five habits won't make you invulnerable. Nothing will. But they'll put you in the large majority who don't get caught — because scammers, like all predators, move on to easier targets. Make yourself a hard target. Then make your family hard targets too.

The code word is the place to start. Pick one tonight.
