AI-Powered Scams Targeting Seniors: How Artificial Intelligence Is Making Fraud Harder to Detect
- How AI Is Changing the Scam Landscape
- AI Scam Techniques Targeting Seniors
- AI “Digital Arrest” Scams
- AI-Enhanced “Pig Butchering” (Crypto Investment Fraud)
- AI Recovery Scams (Second-Strike Fraud)
- US Heat Map — AI-Related Fraud Targeting Seniors (2025)
- AI-Related Fraud Losses by State
- How To Protect Yourself From AI-Powered Scams
- If You Suspect an AI-Powered Scam
Already been scammed? Read our First 24 Hours Emergency Guide for critical steps to take immediately.
2025 FBI IC3 Data: AI-related fraud cost American seniors $352 million in 2025, with over 3,100 victims aged 60+. AI is now a factor in investment scams, romance fraud, voice cloning attacks, and more. These numbers represent only reported cases — the true scale is likely far higher, as many AI-enhanced scams go unrecognized by victims.
How AI Is Changing the Scam Landscape
For decades, online scams followed familiar patterns: poorly written emails, obvious fake websites, and clumsily scripted phone calls from strangers. Seniors were taught to look for red flags like bad grammar, generic greetings, and too-good-to-be-true offers.
Artificial intelligence has erased nearly all of those warning signs.
Today’s AI tools allow criminals to clone a loved one’s voice from a few seconds of audio, generate photorealistic images of people who don’t exist, write flawless personalized emails, and even conduct real-time video calls using deepfake technology. The result: scams that are harder to detect than ever, even for tech-savvy individuals.
Seniors are disproportionately targeted because many are less familiar with AI capabilities and more likely to trust what they see and hear. A phone call that sounds exactly like a grandchild, a video call from a “bank manager,” or a perfectly worded email from “Medicare” can bypass the instincts that once protected people from fraud.
AI Scam Techniques Targeting Seniors
Below are the most common ways criminals are using artificial intelligence to defraud older adults. Click each link for a detailed guide on how the technique works, how to spot it, and how to protect yourself.
- Voice Cloning Scams — AI can clone anyone’s voice from just a few seconds of audio found on social media, voicemail, or video. Criminals use cloned voices to impersonate family members in fake emergency calls (“Grandma, I’m in jail, I need bail money”), making the classic grandparent scam virtually undetectable by ear alone.
- Deepfake Video Scams — Real-time deepfake technology lets scammers appear as someone else on a video call. Victims may believe they are speaking face-to-face with a bank official, financial advisor, or romantic interest — when the person on screen is entirely fabricated. This technique is rapidly growing in romance and investment fraud.
- AI-Generated Phishing & Smishing — AI writes phishing emails and text messages with perfect grammar, personalized details, and convincing branding. Gone are the days when typos and generic greetings were reliable warning signs. AI can craft thousands of unique, targeted messages that reference your real name, bank, or recent purchases.
- AI Romance Scams — Scammers use AI to generate photorealistic profile photos that don’t appear in any reverse image search, and deploy chatbots capable of maintaining emotionally convincing conversations around the clock. Some even use deepfake video for “dates,” making it nearly impossible for victims to detect the deception.
- AI Investment Fraud — Criminals build entirely fake trading platforms with AI-generated dashboards showing fabricated returns, create deepfake videos of celebrities endorsing investments, and use AI chatbots as “personal financial advisors.” The sophistication makes these schemes far harder to distinguish from legitimate platforms.
- AI Voice Phishing (Vishing) — AI-powered phone systems can conduct natural-sounding conversations in real time, adapting to what the victim says. These systems can impersonate bank fraud departments, government agencies, or tech support, staying on the line for extended conversations that would be impractical for human scammers to scale.
- AI “Digital Arrest” Scams — Criminals use AI to generate fake arrest warrants, court orders, and legal documents with the victim’s real personal information, then conduct deepfake video calls posing as judges or federal agents to demand immediate payment. Victims are told they face imminent arrest and are forbidden from contacting family or attorneys — a terrifying scam that is growing rapidly in 2025.
- AI-Enhanced “Pig Butchering” — The single largest fraud category by dollar loss. Criminals use AI chatbots to build trusted relationships with victims over weeks or months, then steer them into fake cryptocurrency trading platforms with AI-generated dashboards showing fabricated returns. AI has industrialized these operations: one criminal network can now manage hundreds of simultaneous victim relationships. Seniors lose the most of any age group — $4.4 billion in crypto fraud in 2025.
- AI Recovery Scams (Second-Strike Fraud) — After a victim has already been scammed, criminals come back posing as law enforcement, attorneys, or recovery firms, promising to retrieve the stolen money — for a fee. AI enables them to clone the voices of real FBI agents, generate fake official documents and court orders, and build convincing fake recovery firm websites. This is one of the cruelest forms of fraud because it targets people who are already devastated.
US Heat Map — AI-Related Fraud Targeting Seniors (2025)

AI-Related Fraud Losses by State (2025 FBI IC3 Data)
Source: FBI IC3 2025 Annual Report. The “AI Related” descriptor tracks crimes where artificial intelligence was used as a tool in the commission of the fraud. National total: $352,496,231 in losses from 3,143 senior victims. View all crime types on the national hub page.
| Rank | State / Territory | AI-Related Loss | Victims |
| --- | --- | --- | --- |
| 1 | California | $63,756,748 | 438 |
| 2 | Texas | $43,761,799 | 318 |
| 3 | Florida | $39,909,303 | 350 |
| 4 | New York | $18,873,395 | 149 |
| 5 | Maryland | $14,920,410 | 69 |
| 6 | New Jersey | $14,661,280 | 80 |
| 7 | Arizona | $12,330,602 | 97 |
| 8 | Michigan | $10,573,170 | 61 |
| 9 | Georgia | $10,472,324 | 71 |
| 10 | Colorado | $8,839,714 | 71 |
| 11 | Virginia | $7,817,992 | 94 |
| 12 | Connecticut | $7,758,718 | 23 |
| 13 | North Carolina | $7,433,724 | 97 |
| 14 | Washington | $7,430,022 | 95 |
| 15 | Illinois | $6,921,395 | 66 |
| 16 | Alabama | $5,543,555 | 38 |
| 17 | Massachusetts | $5,342,268 | 54 |
| 18 | Wisconsin | $5,209,036 | 25 |
| 19 | Tennessee | $4,777,382 | 57 |
| 20 | Ohio | $4,577,837 | 67 |
| 21 | Oregon | $4,134,965 | 79 |
| 22 | Nevada | $3,558,425 | 61 |
| 23 | South Carolina | $2,963,896 | 43 |
| 24 | Oklahoma | $2,868,742 | 26 |
| 25 | Kentucky | $2,598,602 | 30 |
| 26 | Missouri | $2,265,508 | 57 |
| 27 | Pennsylvania | $2,256,035 | 81 |
| 28 | Utah | $2,180,149 | 37 |
| 29 | Maine | $1,951,474 | 9 |
| 30 | Arkansas | $1,908,569 | 22 |
| 31 | Minnesota | $1,646,354 | 27 |
| 32 | Louisiana | $1,614,679 | 24 |
| 33 | Iowa | $1,578,967 | 22 |
| 34 | New Mexico | $1,518,562 | 16 |
| 35 | Puerto Rico | $1,418,968 | 7 |
| 36 | Indiana | $1,164,075 | 33 |
| 37 | Nebraska | $1,068,561 | 11 |
| 38 | New Hampshire | $992,587 | 11 |
| 39 | Mississippi | $868,559 | 15 |
| 40 | Hawaii | $725,974 | 23 |
| 41 | Wyoming | $642,935 | 3 |
| 42 | West Virginia | $335,000 | 10 |
| 43 | Idaho | $328,004 | 30 |
| 44 | Kansas | $296,123 | 16 |
| 45 | Alaska | $290,620 | 8 |
| 46 | Montana | $232,519 | 50 |
| 47 | North Dakota | $135,000 | 1 |
| 48 | Vermont | $87,290 | 5 |
| 49 | Rhode Island | $48,061 | 1 |
| 50 | District of Columbia | $7,612 | 2 |
| 51 | South Dakota | $7,000 | 5 |
| 52 | Delaware | $35 | 3 |
How To Protect Yourself From AI-Powered Scams
- Establish a family code word. Agree on a secret word or phrase with close family members that only you would know. If someone calls claiming to be a relative in distress, ask for the code word before sending any money.
- Verify by calling back. If you receive a suspicious call, hang up and call the person directly using a phone number you already have — not the number that called you. This defeats both voice cloning and caller ID spoofing.
- Be skeptical of video calls from strangers. Deepfake video technology is now accessible to criminals. A “bank official” or “romantic interest” who only communicates via video may not be who they appear to be. Ask to meet in person or verify through official channels.
- Don’t trust perfect communication. AI has eliminated the grammar mistakes and awkward phrasing that once helped identify scams. A flawlessly written email or text is no longer proof it’s legitimate.
- Reverse image search isn’t enough. AI can generate entirely new faces that won’t appear in any search results. If an online contact’s photo returns zero matches, it doesn’t mean the person is real — it may mean the photo was AI-generated.
- Never act under time pressure. Regardless of how convincing the contact appears — voice, video, or text — legitimate organizations and family members will give you time to verify. Urgency is the scammer’s most powerful tool.
- Limit personal information online. AI tools scrape social media for voice samples, photos, personal details, and relationship information. Consider making accounts private and limiting what you share publicly.
- Talk to someone you trust. Before acting on any unexpected request for money or information, discuss it with a family member, friend, or advisor. Scammers deliberately isolate their victims.
If You Suspect an AI-Powered Scam:
- Report it to the FBI’s Internet Crime Complaint Center (ic3.gov) — specifically mention if AI, voice cloning, or deepfakes were involved
- Contact your local police and your state Attorney General’s office
- Notify your bank or financial institution immediately if money was transferred
- Report to the FTC at reportfraud.ftc.gov
- Save all evidence: screenshots, call logs, emails, recordings if possible
Remember: AI has made it possible for scammers to sound like your family, look like a trusted professional, and write like a legitimate organization. The best defense is no longer spotting obvious fakes — it’s building verification habits that work even when the deception is perfect.
View the 2025 FBI Elder Fraud national data | Find your state Attorney General | Emergency: First 24 Hours Guide
