InvestigateTV - Technology used to create lifelike videos of an internationally famous actor, a former U.S. President, and even a world leader in the middle of a violent conflict is being used on everyday Americans.
The reason: a familiar, friendly face makes it easy to lure people you know into a scam.
These eerie fakes are created with cutting-edge computer science designed to mimic the human brain.
In the artificial intelligence (AI) community, the videos are called “deepfakes.” The term is used for audio, images or videos that have been manipulated to appear real.
Deepfakes use a form of AI called “deep learning,” a technology that tries to copy how humans think and learn. It’s also where the “deep” in “deepfake” comes from.
Recently, TikTok user Chris Ume went viral with his deepfake of actor Tom Cruise. In 2018, director Jordan Peele and BuzzFeed circulated a deepfake of former President Barack Obama to warn viewers how advances in the technology could be misused to spread misinformation.
When it comes to tracking these potentially problematic videos, one of the few organizations with data on deepfakes is Sensity, an Amsterdam-based company that uses deep learning to detect deepfakes.
According to Sensity, deepfakes emerged in late 2017 and the numbers online have rapidly increased. In 2018, the company tracked more than 7,000 deepfake videos online. By December of 2020, their report shows the number skyrocketed to more than 85,000 online deepfakes.
Sensity’s data only tracks incidents involving public figures. It doesn’t include incidents involving private individuals. However, hackers are faking more than just celebrities and politicians.
Hacked and deepfaked
Kyle Hawkins knows that all too well. He unwittingly entered the world of deepfakes when his social media accounts were hacked in February 2022.
Hawkins is an insurance agent specializing in Medicare and retirement planning in Richmond, Virginia.
One day, he opened Instagram and said he saw a message from an old friend. Hawkins thought the friend was reaching out about his services and seeking help.
“I got a message through Instagram from somebody who I was friends with on there who I assumed the same thing had happened to them, but I didn’t know,” Hawkins said.
It turns out, that friend had been hacked. When Hawkins clicked on a link in the message, he said he quickly lost control of his account.
“I wasn’t thinking anything about it,” Hawkins said. “And then I was able to sort of get [on] Instagram that morning, and then by the time I checked it was lunchtime, everything was off there.”
Hawkins said both his Instagram and his linked Facebook account were hacked, opening his followers to similar attacks.
That’s where he said the deepfake began. Hawkins said a 16-second deepfake video was sent to his friends and followers encouraging them to invest in Bitcoin mining. He confirmed the video looks and sounds just like him.
“It looks real, but they are sending it to people. They have made other ones, I think,” Hawkins said.
He said the video has been posted on Instagram stories every day since the initial hack. In it, he said the “fake Hawkins” shares how much money he’s made through Bitcoin. The thing is, Hawkins said he has never invested in cryptocurrency.
“I don’t have any Bitcoin, so I haven’t done that,” Hawkins said.
Hawkins said he’s reached out to both social media platforms in hopes of shutting down his account, but both his Instagram and Facebook accounts are still active.
Deepfake expansion and regulations
Ben Coleman, CEO of Reality Defender, works with organizations and government agencies to scan audio, images and video, protecting individual privacy, combating fraud and inappropriate content, and searching for solutions to the rise of deepfakes.
“Face swapping are deep fakes,” Coleman said. “Some of them are funny, and some of them are used for fraud.”
He said the videos can also be potentially dangerous.
On March 16, during Russia’s military action in Ukraine, a deepfake of Ukrainian President Volodymyr Zelensky surfaced on social media. The video depicted Zelensky giving a speech, but the figure was pixelated and spoke in a deeper-than-usual voice. Once the video was labeled a deepfake, Meta, Facebook’s parent company, quickly removed it from all its platforms and said in a statement that the company “quickly reviewed and removed this video for violating our policy against misleading manipulated media, and notified our peers at other platforms.”
This wasn’t the first time Meta had addressed deepfakes. Ahead of the 2020 presidential election, the company banned deepfakes and other manipulated videos citing dangerous tactics that could mislead the public.
In a 2020 press release, Facebook said it was “strengthening [its] policy toward misleading manipulated videos.” The company’s manipulated media policy states that videos edited to mislead people, or videos that use AI to appear authentic, will be removed, with exceptions for parody and satire.
There are no public numbers on how many deepfake videos Facebook has removed, but in a statement, the company said it is “working with others in this area to find solutions with real impact.”
In September 2019, the company launched a “Deepfake Detection Challenge” that asked experts in the field to help create open-source tools to detect deepfakes.
Meta also partnered with media outlets like Reuters to help identify deepfakes and provide free online training on how to identify manipulated visuals.
Coleman said that while social media companies and other organizations are trying to combat the problem, significant hurdles remain.
“A lot of times these companies have big challenges because they have human moderators and human moderators just can’t tell the difference between real and fake anymore,” Coleman said.
Senator Rob Portman (R-OH) introduced a bill in Congress last year that would require the Department of Homeland Security and the White House Office of Science and Technology Policy to establish a temporary National Deepfake Provenance Task Force. The bill has been referred to the Committee on Homeland Security and Governmental Affairs and was “ordered to be reported without amendment favorably.”
Coleman said there are no current policies in the U.S. that require companies to flag synthetic and fake media the way they currently flag nudity and underage violence.
“For the most part, [companies are] asking users to flag things,” Coleman said. “They are expecting users to be experts, and if they see something, they should say something and then it gets sent to a human moderator team.”
Public and private deepfake solutions
According to Coleman, Reality Defender is currently working on creating a browser extension and website to help consumers spot deepfakes from their personal computers.
But Reality Defender isn’t alone in the fight against deepfakes.
At the University of Virginia, a team of third-year students is developing a website for the public, where one day consumers could upload questionable videos and photos to check if they are fake.
Two of those students, Ahmed Hussain and Sam Buxbaum, are studying computer science and physics. The pair won the top prize at the Innovative Discovery Science Platform (iDISPLA) competition with a proposal for combating deepfakes using AI, an idea that came about after the duo saw an increase in deepfake videos surfacing online.
“It’s definitely possible that deepfakes within the next five years will be nearly indistinguishable from real people in some cases,” Hussain said. “They’re getting to the point where it’s pretty difficult to do so.”
Hussain said he believes the solution is not to fight fire with fire, but to use blockchain, a system that records information in a way that makes it difficult to hack or cheat.
Buxbaum said their website would allow people to upload a video, and an algorithm would indicate whether the video is fake.
“Some of the things that are different between a deepfake and real video are only detectable to a computer, but they still make you feel weird when you watch it,” Buxbaum said.
Protecting your account and detecting deepfakes
As online solutions and lawmakers catch up to technology, Coleman suggested several steps to help prevent a hacker from using your photos and videos to create deepfakes:
- Secure all your social accounts and have a different password for each one
- Turn on two-factor authentication
- If a video seems off, flag it to the platform you are using, and call the person directly to verify
When it comes to spotting deepfakes, researchers at the Massachusetts Institute of Technology suggest looking at the facial features of the video:
- Watch for how the eyes and lips move
- Look to see if the skin is too smooth or too wrinkly
- Look for abnormal shadows in the video or photo
- Do not click on any links that are associated with a video you feel uneasy about
Kyle Hawkins said his experience made him wary of social media and this new style of cyber-scamming.
“Just be extra cautious nowadays about anything you put on there, post on there, or respond to or click on.”
Copyright 2022 Gray Media Group, Inc. All rights reserved.