
Deepfaking it: What to know about deepfake-based sextortion schemes
Criminals are increasingly creating deepfake nudes from people's benign public photos in order to extort money from them, the FBI warns
The US Federal Bureau of Investigation (FBI) is warning of an increase in extortion campaigns in which criminals leverage readily available artificial intelligence (AI) tools to create sexually explicit deepfakes out of photos of innocent people and then harass or blackmail them.
According to a recent public service announcement, the Bureau has received a growing number of reports from victims “whose photos or videos were turned into explicit content”. The videos, featuring both adults and minors, are then circulated on social media or pornographic websites.
Worryingly, this rapidly evolving technology enables almost anyone to create fake explicit content that appears to feature non-consenting adults and even children. The content is then used for harassment, blackmail and, in particular, sextortion.
Sometimes victims find the content themselves, sometimes they are alerted to it by others, and sometimes they are contacted directly by the bad actors. What happens next is typically one of two things:
- The bad actors demand payment, threatening to otherwise share the content with the victim's friends and family
- They demand real sexually themed images or videos instead
Another driver for sextortion
The latter amounts to sextortion, a form of blackmail in which threat actors trick or coerce victims into sharing sexually explicit content of themselves, and then threaten to release it unless the victims pay up or send more images or videos. It is another rapidly growing trend that has forced the FBI to issue public warnings over the past year.
In typical sextortion cases, the victim is befriended online by someone pretending to be someone they're not, who then strings them along until they hand over explicit images or videos. In deepfake-powered extortion, the fake images themselves are the leverage used to hold victims to ransom – no befriending required.
On a related note, some crooks run email-based sextortion scams in which they claim to have installed malware on the victim's computer that supposedly allowed them to record the victim watching porn. They pepper the emails with personal details, such as old email passwords obtained from historical data breaches, to make the threats – which are almost always empty – appear more realistic. These email scams piggyback on the growing public awareness of sextortion itself.
The problem with deepfakes
Deepfakes are built using neural networks, which allow users to convincingly fake a person's appearance or voice. For visual content, the networks are trained to take video input, compress it via an encoder and then rebuild it with a decoder. This can be used to effectively transplant the target's face onto another person's body and make it mimic that person's facial movements.
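For the technically curious, here is a minimal sketch of that classic face-swap idea: a single shared encoder paired with one decoder per identity, so that encoding person A's face and decoding it with person B's decoder yields B's appearance with A's pose and expression. All class names, layer sizes and the toy training step below are illustrative assumptions, not any specific tool's implementation.

```python
# Minimal sketch of the shared-encoder / two-decoder face-swap idea.
# Layer sizes and the training step are illustrative, not a real tool's code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Compress a 64x64 RGB face crop into a small latent vector
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Expand the latent vector back into a 64x64 face image
        self.fc = nn.Linear(256, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)
        return self.net(x)

# One encoder is shared; each identity gets its own decoder.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (sketch): each autoencoder learns to reconstruct its own identity,
# so the shared encoder learns identity-agnostic features such as pose and expression.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
recon_a = decoder_a(encoder(faces_a))
loss_a = nn.functional.mse_loss(recon_a, faces_a)

# The "swap": encode person A's face but decode with person B's decoder,
# producing B's appearance driven by A's pose and expression.
swapped = decoder_b(encoder(faces_a))
```

Real face-swap tools typically wrap this core idea with face detection, alignment and blending steps, but the shared encoder with per-identity decoders is what makes the swap itself work.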
The technology has been around for a while. One example that went viral was a series of deepfake videos in which Tom Cruise appeared to play golf, perform magic tricks and eat lollipops; they were watched a million times before being deleted. The technology is, of course, also used to insert the faces of celebrities and other people into obscene videos.
The bad news is that the technology is becoming ever more widely available and is maturing to the point where even tech novices can use it to pretty convincing effect. That's why the FBI (and not only the FBI) is concerned.
How to beat the deepfakers
Once such synthetic content is released, victims may face significant challenges in preventing its continued sharing or getting it removed from the internet. That may be harder in the US than in the EU, where GDPR rules on the “right to erasure” require service providers to remove certain content at the request of the individual concerned. Even so, the experience would be distressing for victims, whether they are children or their parents.
In an always-on, share-everything digital world, many of us hit publish without a second thought, and mountains of personal photos and videos pile up on the internet. Most of it is harmless, but sadly many of these images and videos are available for anyone to view, and people with malicious intent always seem to find ways to abuse these visual assets and the available technology. That's where many deepfakes come in, because nowadays almost anyone can create synthetic yet convincing content.
It’s better to get ahead of this trend now and minimize the potential damage to you and your family. Consider these steps to reduce the risk of becoming a deepfake victim in the first place, and to limit the fallout should a worst-case scenario occur:
For you:
- Always think twice before posting pictures, videos and other personal content. Even the most innocuous material could theoretically be used by bad actors, without your consent, to create a deepfake.
- Learn about the privacy settings on your social media accounts. It makes sense to keep your profile and friends list private, so pictures and videos will only be shared with people you know.
- Always be careful when accepting friend requests from people you don’t know.
- Never send content to people you don’t know. Be particularly wary of individuals pressing to view certain content.
- Watch out for “friends” who start acting out of the ordinary online. Their accounts may have been hacked and used to obtain content and other information.
- Always use complex and unique passwords and multi-factor authentication (MFA) to secure your social media accounts.
- Run regular searches for yourself online to identify personal information or publicly available video/image content.
- Consider a reverse image search to find photos or videos that have been published online without your knowledge.
- Never send money or any graphic content to people you don’t know. They will only ask for more.
- Report any sextortion activity to the police and relevant social media platforms.
- Report deepfake content to the platforms where it was published.
For parents:
- Run regular online searches on your children to identify how much private info and content is publicly available online.
- Monitor your children’s online activity, within reason, and discuss with them the risks associated with sharing private content.
- Think twice before posting content that features your child’s face.
Cheap deepfake technology will continue to improve, democratizing blackmail and harassment. Maybe that’s the price we pay for an open internet. But by being more careful online, we can reduce the chances of something bad happening.