AI-Powered Scams Targeting Seniors: How Artificial Intelligence Is Making Fraud Harder to Detect

  1. How AI Is Changing the Scam Landscape
  2. AI Scam Techniques Targeting Seniors
  3. US Heat Map — AI-Related Fraud Targeting Seniors (2025)
  4. AI-Related Fraud Losses by State
  5. How To Protect Yourself From AI-Powered Scams
  6. AI “Digital Arrest” Scams
  7. AI-Enhanced “Pig Butchering” (Crypto Investment Fraud)
  8. AI Recovery Scams (Second-Strike Fraud)
  9. If You Suspect an AI-Powered Scam

Already been scammed? Read our First 24 Hours Emergency Guide for critical steps to take immediately.

2025 FBI IC3 Data: AI-related fraud cost American seniors $352 million in 2025, with over 3,100 victims aged 60+. AI is now a factor in investment scams, romance fraud, voice cloning attacks, and more. These numbers represent only reported cases — the true scale is likely far higher, as many AI-enhanced scams go unrecognized by victims.

How AI Is Changing the Scam Landscape

For decades, online scams followed familiar patterns: poorly written emails, obvious fake websites, and phone calls from strangers with suspicious accents. Seniors were taught to look for red flags like bad grammar, generic greetings, and too-good-to-be-true offers.

Artificial intelligence has erased nearly all of those warning signs.

Today’s AI tools allow criminals to clone a loved one’s voice from a few seconds of audio, generate photorealistic images of people who don’t exist, write flawless personalized emails, and even conduct real-time video calls using deepfake technology. The result: scams that are harder to detect than ever, even for tech-savvy individuals.

Seniors are disproportionately targeted because many are less familiar with AI capabilities and more likely to trust what they see and hear. A phone call that sounds exactly like a grandchild, a video call from a “bank manager,” or a perfectly worded email from “Medicare” can bypass the instincts that once protected people from fraud.

AI Scam Techniques Targeting Seniors

Below are the most common ways criminals are using artificial intelligence to defraud older adults. Click each link for a detailed guide on how the technique works, how to spot it, and how to protect yourself.

  • Voice Cloning Scams — AI can clone anyone’s voice from just a few seconds of audio found on social media, voicemail, or video. Criminals use cloned voices to impersonate family members in fake emergency calls (“Grandma, I’m in jail, I need bail money”), making the classic grandparent scam virtually undetectable by ear alone.
  • Deepfake Video Scams — Real-time deepfake technology lets scammers appear as someone else on a video call. Victims may believe they are speaking face-to-face with a bank official, financial advisor, or romantic interest — when the person on screen is entirely fabricated. This technique is rapidly growing in romance and investment fraud.
  • AI-Generated Phishing & Smishing — AI writes phishing emails and text messages with perfect grammar, personalized details, and convincing branding. Gone are the days when typos and generic greetings were reliable warning signs. AI can craft thousands of unique, targeted messages that reference your real name, bank, or recent purchases.
  • AI Romance Scams — Scammers use AI to generate photorealistic profile photos that don’t appear in any reverse image search, and deploy chatbots capable of maintaining emotionally convincing conversations around the clock. Some even use deepfake video for “dates,” making it nearly impossible for victims to detect the deception.
  • AI Investment Fraud — Criminals build entirely fake trading platforms with AI-generated dashboards showing fabricated returns, create deepfake videos of celebrities endorsing investments, and use AI chatbots as “personal financial advisors.” The sophistication makes these schemes far harder to distinguish from legitimate platforms.
  • AI Voice Phishing (Vishing) — AI-powered phone systems can conduct natural-sounding conversations in real time, adapting to what the victim says. These systems can impersonate bank fraud departments, government agencies, or tech support, staying on the line for extended conversations that would be impractical for human scammers to scale.
  • AI “Digital Arrest” Scams — Criminals use AI to generate fake arrest warrants, court orders, and legal documents with the victim’s real personal information, then conduct deepfake video calls posing as judges or federal agents to demand immediate payment. Victims are told they face imminent arrest and are forbidden from contacting family or attorneys — a terrifying scam that is growing rapidly in 2025.
  • AI-Enhanced “Pig Butchering” — The single largest fraud category by dollar loss. Criminals use AI chatbots to build trusted relationships with victims over weeks or months, then steer them into fake cryptocurrency trading platforms with AI-generated dashboards showing fabricated returns. AI has industrialized these operations: one criminal network can now manage hundreds of simultaneous victim relationships. Seniors lose the most of any age group — $4.4 billion in crypto fraud in 2025.
  • AI Recovery Scams (Second-Strike Fraud) — After a victim has already been scammed, criminals come back posing as law enforcement, attorneys, or recovery firms, promising to retrieve the stolen money — for a fee. AI enables them to clone the voices of real FBI agents, generate fake official documents and court orders, and build convincing fake recovery firm websites. This is one of the cruelest forms of fraud because it targets people who are already devastated.

US Heat Map — AI-Related Fraud Targeting Seniors (2025)


AI-Related Fraud Losses by State (2025 FBI IC3 Data)

Source: FBI IC3 2025 Annual Report. The “AI Related” descriptor tracks crimes where artificial intelligence was used as a tool in the commission of the fraud. National total: $352,496,231 in losses from 3,143 senior victims. View all crime types on the national hub page.

Rank | State / Territory | AI-Related Loss | Victims
1 | California | $63,756,748 | 438
2 | Texas | $43,761,799 | 318
3 | Florida | $39,909,303 | 350
4 | New York | $18,873,395 | 149
5 | Maryland | $14,920,410 | 69
6 | New Jersey | $14,661,280 | 80
7 | Arizona | $12,330,602 | 97
8 | Michigan | $10,573,170 | 61
9 | Georgia | $10,472,324 | 71
10 | Colorado | $8,839,714 | 71
11 | Virginia | $7,817,992 | 94
12 | Connecticut | $7,758,718 | 23
13 | North Carolina | $7,433,724 | 97
14 | Washington | $7,430,022 | 95
15 | Illinois | $6,921,395 | 66
16 | Alabama | $5,543,555 | 38
17 | Massachusetts | $5,342,268 | 54
18 | Wisconsin | $5,209,036 | 25
19 | Tennessee | $4,777,382 | 57
20 | Ohio | $4,577,837 | 67
21 | Oregon | $4,134,965 | 79
22 | Nevada | $3,558,425 | 61
23 | South Carolina | $2,963,896 | 43
24 | Oklahoma | $2,868,742 | 26
25 | Kentucky | $2,598,602 | 30
26 | Missouri | $2,265,508 | 57
27 | Pennsylvania | $2,256,035 | 81
28 | Utah | $2,180,149 | 37
29 | Maine | $1,951,474 | 9
30 | Arkansas | $1,908,569 | 22
31 | Minnesota | $1,646,354 | 27
32 | Louisiana | $1,614,679 | 24
33 | Iowa | $1,578,967 | 22
34 | New Mexico | $1,518,562 | 16
35 | Puerto Rico | $1,418,968 | 7
36 | Indiana | $1,164,075 | 33
37 | Nebraska | $1,068,561 | 11
38 | New Hampshire | $992,587 | 11
39 | Mississippi | $868,559 | 15
40 | Hawaii | $725,974 | 23
41 | Wyoming | $642,935 | 3
42 | West Virginia | $335,000 | 10
43 | Idaho | $328,004 | 30
44 | Kansas | $296,123 | 16
45 | Alaska | $290,620 | 8
46 | Montana | $232,519 | 50
47 | North Dakota | $135,000 | 1
48 | Vermont | $87,290 | 5
49 | Rhode Island | $48,061 | 1
50 | District of Columbia | $7,612 | 2
51 | South Dakota | $7,000 | 5
52 | Delaware | $35 | 3

How To Protect Yourself From AI-Powered Scams

  • Establish a family code word. Agree on a secret word or phrase with close family members that only you would know. If someone calls claiming to be a relative in distress, ask for the code word before sending any money.
  • Verify by calling back. If you receive a suspicious call, hang up and call the person directly using a phone number you already have — not the number that called you. This defeats both voice cloning and caller ID spoofing.
  • Be skeptical of video calls from strangers. Deepfake video technology is now accessible to criminals. A “bank official” or “romantic interest” who only communicates via video may not be who they appear to be. Ask to meet in person or verify through official channels.
  • Don’t trust perfect communication. AI has eliminated the grammar mistakes and awkward phrasing that once helped identify scams. A flawlessly written email or text is no longer proof it’s legitimate.
  • Reverse image search isn’t enough. AI can generate entirely new faces that won’t appear in any search results. If an online contact’s photo returns zero matches, it doesn’t mean the person is real — it may mean the photo was AI-generated.
  • Never act under time pressure. Regardless of how convincing the contact appears — voice, video, or text — legitimate organizations and family members will give you time to verify. Urgency is the scammer’s most powerful tool.
  • Limit personal information online. AI tools scrape social media for voice samples, photos, personal details, and relationship information. Consider making accounts private and limiting what you share publicly.
  • Talk to someone you trust. Before acting on any unexpected request for money or information, discuss it with a family member, friend, or advisor. Scammers deliberately isolate their victims.

If You Suspect an AI-Powered Scam

  • Report it to the FBI’s Internet Crime Complaint Center (ic3.gov) — specifically mention if AI, voice cloning, or deepfakes were involved
  • Contact your local police and your state Attorney General’s office
  • Notify your bank or financial institution immediately if money was transferred
  • Report to the FTC at reportfraud.ftc.gov
  • Save all evidence: screenshots, call logs, emails, recordings if possible

Remember: AI has made it possible for scammers to sound like your family, look like a trusted professional, and write like a legitimate organization. The best defense is no longer spotting obvious fakes — it’s building verification habits that work even when the deception is perfect.


View the 2025 FBI Elder Fraud national data | Find your state Attorney General | Emergency: First 24 Hours Guide