Opinion

Are Deepfakes the New Identity Theft Aiding Blackmail?

Deepfakes have been in the news lately, and more often than not it is bad news. A deepfake uses AI to digitally alter faces, typically placing one individual's face onto someone else's body.

Recently, a TikTok user pretended to be Tom Cruise in a deepfake that looked strikingly accurate. A mom also deepfaked a video of a rival cheerleading team to make it look like teens were naked, smoking, or drinking in an attempt to get the girls off the team. That situation ended with the mom being charged with three misdemeanor counts of cyber-harassment of a child and three misdemeanor counts of harassment.

Good or evil, deepfakes are becoming a cybersecurity risk. There have been numerous cases online of people being blackmailed by scammers who threaten to send fabricated explicit pictures of them to friends and relatives if they don't send the scammer money. Some say it can feel crippling to have an image of you out there that simply isn't you.

Deepfakes started as porn, and they can ruin the lives of the women and men involved. Helen Mort, a poet and broadcaster, was terrified when she was told explicit images of her were online, even though she had never sent any intimate photos. She discovered the images were based on pictures taken from her private social media between 2017 and 2019.

“It really makes you feel powerless, like you’re being put in your place. Punished for being a woman with a public voice of any kind. That’s the best way I can describe it. It’s saying, ‘Look: we can always do this to you,’” she told technologyreview.com.

Sensity AI is a research company that has tracked online deepfake videos since December 2018. It found that between 90% and 95% of them are nonconsensual porn, and about 90% of that is nonconsensual porn of women specifically. Domestic violence victims could be the most at risk from this technology, especially since deepfake apps are available in the App Store.

The repercussions and uses of deepfakes reach into political spheres, as the spread of misinformation has become a major topic amid talk of “fake news.” Deepfakes can make it seem like politicians are saying things they aren’t and further spread distrust. Former President Donald Trump repeatedly retweeted manipulated videos of then-candidate Joe Biden, making him look bad during the election. Another example is a video that went viral in 2019 that made House Speaker Nancy Pelosi appear ill or intoxicated.

Some developers are trying different systems to limit the spread of deepfakes, but beyond preselected backgrounds and watermarks, there is no technology built into social media at the moment that can fully vet out deepfakes. More research is going into deepfake technology and detection, and some states have signed laws that allow victims to sue.

The difficulty, legally speaking, is everyone’s guaranteed First Amendment rights. An article in the University of Illinois Law Review said of deepfake laws: “In the U.S., injunctions against deepfakes are likely to face First Amendment challenges. Even if such injunctions survive a First Amendment challenge, lack of jurisdiction over extraterritorial creators of deepfakes would inhibit their effectiveness. Therefore, injunctions against deepfakes may only be granted under few specific circumstances, including obscenity and copyright infringement.”

As more news comes out, the law may need to catch up with the technology available.
