Deepfake AI scams and why you need a safeword

AI generated image of sad child with teddy bear

Would you be able to spot your child versus a deepfake if they called you for help?

Nobody who has been paying attention for the past 20 years is likely to reply to an email from a Nigerian prince offering untold riches. But even some of the smartest have fallen for the latest generation of highly advanced deepfake AI scams. Which is why you should consider a safeword. Sexy, right?

Fake calls and texts

Fake texts purporting to be from your offspring saying they’re in trouble or need money are old news. But AI has enabled scammers to send you a convincing voice note or even set up a video call that looks and sounds exactly like your child. It could be a deepfake with your child’s face and voice planted onto another person, who is actually talking to you live. Like wearing a living mask. It’s insanely believable.

… you could have a live conversation with someone who looks and sounds exactly like your son or daughter

AI technology can make scarily accurate videos and audio recordings using images, audio data or videos already published. Creators take existing images or videos and feed the data to an AI generator, creating a believable replica. The voice and face can be planted on another person. So you could have a live conversation with someone who looks and sounds exactly like your son or daughter. It is emotionally manipulative and designed to tug on the heartstrings. After all, what kind of monstrous parent isn’t going to help their child if they call with an emergency?

So far, these types of deepfakes have mostly been used to replicate famous faces. In 2022, a deepfake surfaced of Ukrainian president Volodymyr Zelenskiy calling on his soldiers to lay down their weapons and return home. Videos of Donald Trump being arrested have also been widely circulated. Depressingly, some celebrities have discovered pornography made using deepfakes of their likenesses.

But this doesn’t mean you are immune to receiving a deepfake impersonation scam. Especially if you have family members with a visible online presence. It is important to verify the authenticity of unexpected requests, especially if they’re asking for money or sensitive information.

Phishing using ChatGPT

Phishing is a cyber-attack where criminals impersonate bona fide individuals, businesses or organisations to deceive people into sharing sensitive information, such as usernames, passwords and credit card details.

Gone are the days of looking out for grammar and spelling mistakes to work out whether an email is a phishing expedition. Scammers are now using ChatGPT and similar programs to create very believable fake emails and messages that look like they have come from legitimate sources.

ChatGPT and similar AI applications can even copy the tone of a company’s communications and automatically correct poor spelling and grammar. Which means looking for the old telltale signs is becoming harder.

Fake verification

On to banks. Online banks, such as Starling and Monzo, have grown in popularity, but their rise has opened the door to fake verification scams. These banks don’t have bricks-and-mortar branches where humans can verify your identity. Instead, they rely heavily on video and photographic verification. Hello, AI corruption and identity theft.

Fake accounts with fake identities and even fake videos are being made to secure loans, credit cards…

When signing up to these services, the banks will generally ask for a photo of your ID and a video of yourself to verify your documents. But what happens when these are AI-generated deepfakes? Fake accounts with fake identities and even fake videos are being made to secure loans, credit cards, and make transfers, leaving unsuspecting people out of pocket.

These sorts of scams can happen with banks that have physical branches too. But traditional banks tend to be harder targets for opportunistic scammers, thanks to precautions such as two-factor authentication with a fingerprint, verification codes sent by text for online services, and stringent fraud protection protocols, such as automatically blocking suspicious-looking transactions and alerting customers immediately.

With online-only banks being especially popular with younger customers, it’s worth having a bank security conversation with your children, even if they’re already adults.

How to protect yourself from deepfakes

To mitigate the risks of deepfake impersonation scams, it’s crucial first to be aware of the games being played, so you can at least try to spot them. It’s also vital to verify the authenticity of unexpected requests, especially those involving financial transactions or sensitive information.

Smell a deepfake rat, such as ‘your child’ asking for money? Try to directly contact the person who is supposedly calling you. Or seek confirmation from a reliable source before taking any action.

Choose a safeword or phrase

image shows extract from dictionary showing the word deepfakes

A good place to start is a safeword or code phrase to protect yourself and those close to you from fakes, such as the frantic phone call that starts with “Mum, I’ve crashed the car and need money!” Choose a word or phrase that you can all use and share to identify that you’re real.

It is worth sitting down with family, and maybe even close friends, to agree on this together. Make sure it’s unique enough that scammers can’t guess it, but simple enough that everyone will remember it. And, obviously, tell everyone in the safeword circle of trust not to share it. Otherwise this will quickly become a pointless exercise.

If you get a call, text, email or anything else supposedly from a friend or family member and you don’t trust it, make sure to ask them for the secret code so you know it’s really them.

Extra steps to foil fraudsters

Even the most memorable code can slip someone’s mind. But there are additional steps you can take to make sure it’s a genuine call, message or email. If the caller is getting in touch to tell you they have a new number or email address, get in touch with them via their previous details to check before engaging.

Never fall for the “Don’t tell anyone” line

Checking like this is harder if it’s a distressed call. But if it seems dodgy, hang up and call the number back. If you have another device handy, such as a laptop or landline, stay on the phone but get in touch with the caller via another method at the same time. A quick Facebook or desktop WhatsApp message to ask if the person is OK could save the day.

Never fall for the “Don’t tell anyone” line. It can be a way to minimise the chance of you finding out that the call is a blag and the person being impersonated is actually fine. Even if it feels wrong and a bit intrusive, consider asking other people if they’ve heard about the caller’s supposed emergency.

Spotting fake emails and texts

While fake emails and texts are less likely to be riddled with spelling and grammar mistakes, there are other telltale signs that you’ve received a scummy AI-generated fake. The obvious precaution is not to click on a link or open an attachment from a sender you don’t trust. This still happens with alarming frequency, drawing unsuspecting people onto malware or phishing sites.

Check the email address carefully. It might look legitimate but have a slight variation on the real thing. For example, it could end in .com when the real company uses a different domain, or the company name might be presented differently from how it appears in genuine emails.

Generic greetings, such as “Dear Valued Customer”, can be another giveaway. Even mass mail-outs from legitimate companies personalise every email, so it’s addressed to you.

If the email or text claims to be urgent or makes a threat, such as saying your account will be suspended unless you provide information immediately, it is designed to pressure the receiver into acting quickly. And when someone acts in a state of panic, they might not do their due diligence and check whether the message is real.

Protect your information and identity

Always be wary of requests for personal information, such as passwords or bank account details. Legitimate companies and organisations do not ask for such data by email or text. Messages that claim changes have been made to your account that you didn’t authorise or expect should be checked out too. Verify the claim via the company’s official website or customer service channels.

If you have any doubts at all, it is wise to check with the company directly rather than engaging with the email or text. Investing in antivirus software and email filters can help detect and block suspicious messages, including those with deepfake content.

Staying informed about emerging deepfake technologies can help protect against manipulation and fraud. And, as the old saying goes, if something seems too good to be true, it probably is.

If you’ve been snared by an AI deepfake or any kind of scam, start HERE


About Lili Lowe
Lili works across all the channels; writing articles, taking photographs, creating content, and designing eye-capturing imagery. She's an animal-lover who cries just seeing a picture of a baby sloth.
