AI scams and fraud: 5 trends to look out for as 2025 ends
Artificial intelligence has seen widespread adoption in industries across the globe. While the hope is that these tools will be used only for good, the reality is that many people will use them for harm. AI-powered scams and fraud are becoming increasingly common, and the FBI has reported that internet crime losses rose 33% from 2023 to 2024.
Consumers need to know how to protect themselves, and that begins with awareness. For this story, Lifeguard compiled information on five consumer scam trends that have emerged in 2025.
1. Billions of dollars are being lost to cybercrime
Cybercrime is a massive industry, with billions of dollars lost every year, and the total is only growing. In 2024, the FBI Internet Crime Complaint Center reported $16.6 billion in losses, drawn from 859,532 complaints filed that year.
This is a 33% increase from the previous year, indicating a rise in cybersecurity threats. Of that $16.6 billion, 83% was attributed to cyber-enabled fraud, which includes schemes such as phishing, identity theft, and ransomware.
What this means for consumers: The substantial rise in online fraud and the severity of losses highlight the importance of consumer vigilance.
Consumer protection tips: The Federal Deposit Insurance Corporation offers a variety of cybersecurity tips for consumers to follow to keep themselves protected.
- Never open emails from people you don’t know.
- Be wary of links and website addresses that appear to have slightly altered spellings.
- Utilize preventative software for threat detection and removal.
2. AI-generated scams are swindling victims more easily
Central Bank and Experian expect AI to make fraud easier than ever to commit. Deepfake technology, AI-generated phishing emails, and other attacks are all becoming commonplace. Because AI systems can learn from past attempts, each new round of fraud can be more sophisticated than the last.
Meriwest Credit Union states that in 2023, 20% of the people targeted by imposter scams lost money. This percentage will only rise as the threats become increasingly convincing.
The generational divide: Interestingly, generations that have more exposure to this type of technology are not better at detecting scams. Time, in an analysis of a Deloitte study from 2023, found that Gen Z individuals are three times more likely than baby boomers to fall for an online scam.
Consumer protection tips: NYC.gov offers insights on a few clear signs that you may be a target of an AI scam. These include being contacted out of the blue, being pressured to take action immediately, having personal and private information requested, and being asked to keep the conversation private.
3. AI companies are taking action to prevent misuse
One clear sign that AI is evolving and being used in scams is the industry's reaction. For example, Reuters reported that Anthropic, a leader in the AI space, had successfully blocked hackers who were trying to use Claude, its AI assistant, to write phishing emails and hacking-related code. Actions taken included banning certain accounts and improving filters.
How AI is being weaponized: The University of Wisconsin-Madison’s IT department outlines a few ways AI is being used to scam people. Deepfakes (AI-generated lookalike videos), AI-powered voice fraud, phishing, and spear phishing are the most common.
Consumer protection tips: Educate yourself on the tools used to create deepfakes, voice clones, and phishing emails. Use resources like the MIT Media Lab.
4. Scammers are exploiting Google AI to display fake business numbers
Google’s AI Overview is a powerful tool that quickly summarizes the most relevant responses to a user’s query. However, some scammers have learned to manipulate the content that search engines crawl so that their fraudulent results appear at the top of the search page.
How the scam works: Yahoo News gave an example of how this scam works. A businessman needed help with a cruise he had booked. He searched Google for the cruise line’s customer support number and called the one shown in the AI Overview. The number was fraudulent, and he was charged $768, which was later reversed.
Consumer protection tips: Always verify the correct customer support number by going directly to the company’s website first. Do not trust AI overviews to give you the right number.
5. OpenAI worries about AI voice fraud and its effect on banking
Voice print authentication is a security technology that is used to verify access. It uses the unique subtleties of your voice to prove that you’re the one accessing the information. It’s similar to how fingerprints are used to access personal smartphones instead of passwords.
Sam Altman, CEO of OpenAI, recently expressed grave concern over AI’s ability to exploit this. He explained that AI can already convincingly replicate an individual’s voice, and that banking systems in particular need to adopt much stronger security measures.
The voice fraud threat: Scammers can use AI to create clones of your voice and use it to access sensitive financial accounts that use voice as a means of verification.
Consumer protection tips: Do not rely on voice authentication alone to log in to your bank accounts. Use a combination of secure, complex passwords and mobile or email verification.
Now is the time to protect yourself
AI-powered scams and fraud are on the rise, and their capabilities will only get more sophisticated. Deepfakes, phishing attempts, and voice fraud can result in billions of dollars of damage, and even the most internet-savvy consumers struggle to identify these advanced scams.
The best way to keep these scams from happening to you is to be prepared. Take time to learn how the current threats work and what their warning signs are, and stay vigilant when communicating by phone or email.
This story was produced by Lifeguard and reviewed and distributed by Stacker.