Deepfakes first emerged in 2017, when an anonymous Reddit user gave Internet users access to the artificial intelligence (AI)-powered tools needed to make their own. Since then there have been a number of high-profile deepfake videos: movie director Jordan Peele’s Barack Obama public service announcement (PSA), intended as a warning about how convincing and dangerous deepfake videos can be, and the fake video of Facebook CEO Mark Zuckerberg telling CBS News “the truth of Facebook and who really owns the future” both showcased the technology’s power.
Deepfakes can cause significant problems for commercial organizations. In one recent case, an employee of a UK-based energy company was tricked into believing he was speaking to the CEO of its German parent company, who convinced him to transfer $243,000 to a Hungarian supplier. The employee was in fact talking to a scam artist impersonating the CEO with a voice-altering AI tool.
A new threat? Or a new wrinkle on an old threat?
Cybersecurity experts have been predicting the rise of AI in cybercrime for several years, with the threat of automated cyberattacks central to that prediction. The German CEO voice scam appears to be the first of its kind to use AI, or at least the first to become public knowledge. But AI is already being used to mount more sophisticated phishing attacks and to fool biometric ID scanners with things like fake fingerprints.
What is different about the potential impact of deepfakes is that the mere idea of them can disrupt enterprises: once people know convincing fakes are possible, genuine recordings can be dismissed as fabricated. According to a report by Deeptrace Labs, there have so far been no confirmed occurrences of deepfakes used in disinformation campaigns or in ways that could affect enterprises, but that isn’t to say there won’t be in time. Deeptrace’s Henry Ajder commented, “Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake. The hype and rather sensational coverage speculating on deepfakes’ political impact has overshadowed the real cases where deepfakes have had an impact.”
Forrester predicts that deepfakes could end up costing businesses as much as $250 million this year.
How could deepfakes damage your business? At a basic level, hacktivists could use deepfake technology to make false claims and statements about your company in order to undermine and destabilize it. At a more sinister level, malicious agents could target senior executives in your company and put together a deepfake video in which an exec confesses to financial crimes or other offenses. Incidents like these could have major consequences for your company’s brand, reputation and share price. They can also be difficult to disprove, and attempting to do so consumes time and money. These threats could come from individuals, cybercriminal gangs or state-sponsored hackers looking to create disruption in financial markets, argues Experian.
How to combat lies
Scams using deepfake technology and AI present a new challenge for companies, since traditional cybersecurity tools designed to keep hackers and malicious agents out of corporate networks are not really designed to spot spoofed voices or doctored videos.
Cybersecurity companies are in the process of developing products to detect deepfake recordings, while big organizations like Facebook and Microsoft have begun to take deepfakes very seriously: the two companies reported that they are working with leading U.S. universities to build a large database of fake videos for research. Google, too, has put together a database of 3,000 deepfakes designed to help researchers and cybersecurity professionals develop tools to combat the fake videos.
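To make concrete how such labeled databases of fake videos are used, here is a minimal, illustrative sketch of training a frame-level real-vs-fake classifier. It is not a real detector: the “features” are synthetic placeholders standing in for the visual artifact statistics a production system would extract from video frames, and the simple logistic-regression learner is just one of many possible approaches.

```python
import math
import random

random.seed(0)

def extract_features(is_fake):
    # Placeholder feature extractor: we pretend fake frames show
    # slightly stronger artifact scores on average. A real system
    # would compute these statistics from actual video frames.
    base = 0.7 if is_fake else 0.3
    return [random.gauss(base, 0.15) for _ in range(4)]

# Synthetic labeled "dataset" of (features, label) pairs,
# standing in for a corpus of real (0) and fake (1) videos.
dataset = [(extract_features(lbl), lbl) for lbl in [0, 1] * 200]

# Tiny logistic regression trained with per-sample gradient descent.
weights = [0.0] * 4
bias = 0.0
lr = 0.5
for _ in range(200):
    for x, y in dataset:
        z = sum(w * xi for w, xi in zip(weights, x)) + bias
        p = 1 / (1 + math.exp(-z))       # predicted probability of "fake"
        err = p - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def predict(x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if z > 0 else 0

# Accuracy on the (synthetic) training set.
correct = sum(predict(x) == y for x, y in dataset)
print(f"accuracy: {correct / len(dataset):.2f}")
```

The point of the sketch is the workflow the databases enable: labeled examples in, a learned decision boundary out. The hard research problem is the feature-extraction step that is faked here.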
What else can you do?
As an enterprise organization, there are other steps you can take:
1. Train your employees: As with standard cybersecurity measures, employee training and vigilance are often at the forefront. Make your workers aware of deepfakes during cybersecurity training: explain, for example, that they might receive an unexpected call from the company CEO asking them to perform an unusual task, and consider putting internal security questions in place to help employees confirm a caller’s identity when they need to.
2. Review your company’s online brand and presence: You likely already monitor and measure your brand’s online output but, in the deepfake era, ensure your designated employees are aware of the existence of fake content and know to keep an eye out for it. If they spot something suspicious, they should seek to have it removed immediately to mitigate potential damage to your organization.
3. Be transparent: It might sound counterintuitive, but if you are a victim of a deepfake attack, it can be worth your while to publicize it. Some of the best PR tactics involve getting out in front of an issue, and if you make your audience aware of the existence of a deepfake attack, they might appreciate it rather than consider it a negative. Ignoring an attack or assuming your audience doesn’t know about or hasn’t seen the deepfake video can backfire on you. Highlight that someone from your company has been the target of a deepfake attack, own the issue and you can mitigate the damage.
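One technical complement to this kind of transparency, not mentioned in the advice above but worth noting, is publishing cryptographic checksums of your official media, so journalists and audiences can verify that a circulating clip is byte-for-byte identical to what you released. A minimal sketch using Python’s standard library (the video bytes are inlined placeholders; in practice you would hash the published file itself):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a media file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Placeholder for the bytes of an official published video file,
# e.g. open("ceo_statement.mp4", "rb").read() (hypothetical name).
official_clip = b"official CEO statement video bytes"
published_digest = sha256_of(official_clip)

# Circulating copies can be checked against the published digest.
suspect_copy = b"official CEO statement video bytes"
tampered_copy = b"doctored CEO statement video bytes"

print(sha256_of(suspect_copy) == published_digest)   # identical copy matches
print(sha256_of(tampered_copy) == published_digest)  # altered copy does not
```

Note the limits of this approach: a matching digest proves a copy is unaltered, but it cannot flag a deepfake that was never derived from your file, so it supplements rather than replaces detection tools.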
The dangers involved in deepfakes are very real, and you shouldn’t underestimate them: one malicious rumor can have a major negative and lasting impact on your business. It’s time to be aware of the threat and factor it into your cybersecurity thinking and strategy.
In today’s ever-expanding threat landscape you need to adopt holistic threat management to protect against the reality of continuous advanced threats coming your way. Read our six steps to effective threat management.