The rise of AI-powered scams: what to watch for

Impact
28 October 2025

One evening earlier this year, a man in Canada received a frantic call from what sounded like his teenage son. The boy’s voice was trembling, panicked, and begged for help. Within moments, the supposed “son” handed the phone to someone else who demanded money. The caller ID looked local. The voice sounded real. Only later did the man learn his son was safe at school, and the call had been nothing more than a sophisticated AI voice cloning scam.

Stories like this are no longer rare. Advances in generative AI have opened a new chapter in fraud. Where scams once relied on misspelled emails or clumsy robocalls, today they can involve eerily convincing voices, flawless text, and synthetic images or videos. Technology that seemed experimental just a few years ago is now cheap, easy to use, and widely available.

The question is not whether artificial intelligence scams will happen, but how often, in what forms, and how we can prepare to recognize and resist them.

The new shape of an old problem

Scams are not new. Humans have always found ways to trick one another, from street cons to phishing emails. What has changed is the scale and polish. AI fraud techniques act as amplifiers. They allow scammers to generate more attempts, in more believable formats, at a fraction of the cost.

Think back to email scams from a decade ago. Many were easy to spot because of odd phrasing or suspicious formatting. But with AI-generated phishing emails, grammar mistakes vanish. Messages now sound natural, professional, and personal. A single scammer can produce hundreds of polished messages in minutes.

At the same time, AI-powered phone scams using voice cloning have turned simple fraud into something deeply persuasive. A short clip of someone’s voice, scraped from a video or voicemail, can be enough to reproduce their speech with eerie accuracy.

Phishing in the age of AI fraud

Phishing has always been a popular entry point for digital crimes. Traditionally, attackers sent fake messages designed to trick people into revealing passwords or clicking malicious links. What makes AI phishing scams different is personalization.

AI can scrape social media, public records, and past communications to create messages tailored to an individual. Instead of a generic “Dear customer,” you might receive an email referencing your vacation photos or workplace. The tone feels right, the details line up, and suspicion drops.

Attackers can even use AI text generators to mimic someone’s exact writing style. Imagine an email that looks and reads exactly like one from your boss or colleague. That sense of familiarity makes AI phishing emails particularly dangerous.

Voice cloning and deepfake scams

Perhaps the most unsettling development is AI voice cloning fraud. It targets instinct. We trust voices we recognize, especially those of family and colleagues. When you hear a familiar voice in distress, your first impulse is to respond emotionally, not logically.

AI has dramatically lowered the barrier to entry. With just a few seconds of audio, online tools can generate realistic voices. Scammers then use them in real time during phone calls or pre-recorded messages.

So-called AI kidnapping scams have already been reported. A victim hears what sounds like their loved one in distress and is pressured into sending money before verifying the situation.

Businesses are also vulnerable. A UK company lost hundreds of thousands of dollars after an executive was tricked by a phone call mimicking his boss’s voice. The caller instructed him to transfer funds for a “legitimate” transaction. Only later was it revealed the voice had been artificially generated.

The rise of synthetic media in fraud

AI’s reach goes beyond voices. It can now generate images and videos that are difficult to distinguish from real ones. Fake profile pictures, once stolen from others, can now be generated from scratch by AI image tools. This makes fraudulent accounts harder to track and remove.

Even more concerning are AI deepfake video scams. Imagine joining a video call with someone who looks and sounds exactly like your coworker. In reality, it’s a scammer using AI to overlay a digital mask in real time. The technology still has flaws, but it is improving rapidly.

Synthetic media can be weaponized in many ways: manipulating investors with fake CEO statements, spreading misinformation through fabricated news clips, or impersonating individuals for AI social engineering attacks.

Everyday tools and subtle risks

The same tools that make everyday tasks easier can be weaponized. AI text generators draft professional emails, but they also power AI phishing attempts. Voice synthesizers enable accessibility, but they also create convincing AI robocalls.

Generative AI chatbots are another risk. A scam website may host a chatbot that mimics a customer service representative. The smooth, natural interaction makes it easier for victims to share sensitive information.

Even browsers and search engines need to adapt. Some, like the Opera browser, have a history of pioneering protective features. Opera was one of the first to include pop-up blockers and private browsing. More recently, it has integrated AI security features directly into the browser. This history of user-focused innovation shows how defenses evolve alongside threats.

Why AI scams are harder to detect

What sets AI-powered scams apart from traditional ones is adaptability. Scammers can test dozens of variations and let AI optimize the most successful versions. It’s similar to digital ad targeting, except the “product” is fraud.

Traditional red flags — misspellings, awkward phrasing, suspicious formatting — are fading. Scams now look flawless, shifting the burden of detection onto human judgment.

Speed also matters. A single scammer with AI tools can launch operations that once required teams. The sheer volume of AI-generated scams means even a small success rate has a large impact.

How to stay safe from AI-powered scams

The good news is that awareness and precautions can reduce risks. Recognizing that voices, emails, and images can be fabricated is the first step.

Practical steps include:

  • Pause before reacting emotionally to urgent calls. Verify through a trusted channel.
  • Double-check links before clicking. Hover to reveal the true destination.
  • Confirm sensitive requests, like money transfers, through a different communication method.
  • Use multifactor authentication and hardware security keys for stronger protection.
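The link-checking advice above can be made concrete. The sketch below, a minimal illustration using only Python's standard library, shows why hovering matters: a deceptive link can contain a familiar name while actually pointing somewhere else entirely (the domain names here are hypothetical examples, and this simple check does not handle punycode homoglyphs or URL shorteners):

```python
from urllib.parse import urlparse

def hostname_of(url: str) -> str:
    """Return the hostname a link actually points to."""
    return (urlparse(url).hostname or "").lower()

def is_trusted_host(url: str, trusted_domain: str) -> bool:
    """True only if the link's host is the trusted domain or a subdomain of it."""
    host = hostname_of(url)
    return host == trusted_domain or host.endswith("." + trusted_domain)

# A lookalike link: "opera.com" appears in it, but the real domain differs.
print(is_trusted_host("https://opera.com.account-verify.example/login", "opera.com"))  # False
print(is_trusted_host("https://www.opera.com/security", "opera.com"))  # True
```

The key point is that only the rightmost part of the hostname identifies who actually controls the site, which is exactly what scammers exploit when they pad a URL with a trusted brand name.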

Businesses and cybersecurity teams are also turning to AI detection tools. Just as attackers use AI to generate scams, defenders use it to identify patterns and anomalies. The contest between offense and defense is ongoing.

The psychology of AI scam tactics

AI scams succeed not only through technology but also through psychology. They exploit urgency, trust, and fear. A panicked voice overrides rational thought. A polished email from a “boss” pressures immediate action.

Understanding these levers helps build resilience. If a message feels unusually urgent or emotional, pause and verify. Scams thrive on reaction. Deliberation is their enemy.

Where regulation and policy fit in

Governments are beginning to address AI fraud and scams. Proposals include requiring AI companies to watermark outputs or improve detection methods. Others emphasize consumer awareness and reporting systems.

But regulation has limits. Tools are widely accessible, and scammers often operate across borders. A realistic path forward will combine technical safeguards, education, and legal enforcement.

Looking ahead

As AI scams continue to evolve, fraud will grow more sophisticated. But this is part of a broader pattern. Every communication technology — the telephone, email, social media — has been exploited for scams. The difference lies in how quickly we adapt.

It may help to view artificial intelligence scams as a new dialect of an old language. The grammar has changed, the vocabulary has expanded, but the intent is the same. By learning to “speak” this new dialect, we become harder to fool.

Conclusion

AI-powered scams are already here. They unfold quietly in inboxes, phone calls, and video chats. Their power lies not just in the technology but in exploiting human instinct.

Staying safe doesn’t require paranoia, but it does require vigilance. The next time you receive an unusual email, urgent voice call, or suspicious online interaction, remember that appearances can be manufactured. Verification and patience are your strongest defenses.

Technology will keep evolving, as will scams — and so will our ability to resist them. The goal is not to eliminate all risk but to navigate it with clarity and care.
