Will Deepfakes Threaten Corporate Security?

AI-powered deepfake technology is rapidly advancing, and it's only a matter of time before cybercriminals find a business model they can exploit, some security experts say. Deepfakes, which have already troubled celebrities and politicians, are poised to infiltrate the corporate world, offering cybercriminals a new avenue for profit. CIOs, CISOs, and other corporate leaders must brace for AI-assisted attacks involving realistic but fake voice calls, video clips, and live video conference calls.

The Evolution of Deepfakes in Cybercrime

Deepfakes involving voice calls are not a recent development. Michael Hasse, a longtime cybersecurity and IT consultant, recalls presenting on the topic to asset management firms as early as 2015, after some companies in the industry fell victim to voice-based scams. Since then, however, the AI technologies behind deepfakes have improved significantly and become far more accessible.

Hasse notes that the primary factor preventing widespread use of deepfakes by cybercriminals is the lack of a packaged, easy-to-use tool for creating fake audio and video. However, he predicts that such a tool is imminent, likely to appear in the criminal underground before the US elections in November, targeting political campaigns initially. “Every single piece that’s needed is there,” Hasse says. “The only thing that has kept us from seeing it just flooding everything is that it takes time for the bad guys to incorporate stuff like this.”

Deepfakes as a Corporate Threat

It’s not just cybersecurity experts who are sounding the alarm about the corporate risk from deepfakes. In May, credit ratings firm Moody’s issued a warning about deepfakes, highlighting the new credit risks they create. The report detailed several attempted deepfake scams, including fake video calls that have targeted the financial sector over the past two years.

“Financial losses attributed to deepfake frauds are rapidly emerging as a prominent threat from this advancing technology,” the report states. “Deepfakes can be used to create fraudulent videos of bank officials, company executives, or government functionaries to direct financial transactions or carry out payment frauds.”

Jake Williams, a faculty member at IANS Research, a cybersecurity research and advisory firm, says deepfake scams are already happening, though the extent of the problem is hard to estimate. Scams often go unreported to protect the victim’s reputation, and in other cases, victims of unrelated types of fraud may blame deepfakes as a convenient cover for their own actions.

The Challenge of Detecting Deepfakes

Technological defenses against deepfakes are cumbersome and may have a limited shelf life due to rapidly advancing AI technologies. “It’s hard to measure because we don’t have effective detection tools, nor will we,” Williams, a former hacker at the US National Security Agency, explains. “It’s going to be difficult for us to keep track of over time.”

While some hackers may not yet have access to high-quality deepfake technology, faking voices or images on low-bandwidth video calls has become trivial. Unless a Zoom meeting is HD quality or better, a face swap may be convincing enough to fool most people.

Real-world Deepfake Incidents

Kevin Surace, chairman of multifactor authentication vendor Token, shares a personal encounter with voice-based deepfakes. He received an email from the administrative assistant of one of Token’s investors, which he quickly identified as a phishing scam. When he called the administrative assistant to warn her, the voice on the other end sounded exactly like the employee but responded oddly to questions. It turned out that the phone number in the phishing email was one digit off from the real number, and the fake number stopped working shortly after Surace detected the problem.

“People are going to say, ‘Oh, this can’t be happening,’” Surace says. “It has now happened to a few people, and if it happened to three people, it’s going to be 300, it’s going to be 3,000, and so on.”

Potential Uses of Deepfakes in Corporate Crime

So far, deepfakes targeting the corporate world have primarily focused on tricking employees into transferring money to criminals. However, Surace envisions deepfakes being used for blackmail schemes or stock manipulation. If the blackmail demand is low enough, CEOs or other targeted individuals might opt to pay rather than explain that the person in the compromising video isn’t really them.

Both Hasse and Surace foresee a wave of deepfake scams coming soon, and they suspect that many scam attempts, like the one targeting Surace, are already underway. “People don’t want to tell anyone it’s happening,” Surace says. “You pay 10 grand, and you just write it off and say, ‘It’s the last thing I want to tell the press about.’”

Obstacles and Solutions

While the widespread use of deepfakes may be close, some impediments remain beyond the lack of an easy-to-use deepfake package. Convincing deepfakes can require significant computing power, which some cybercriminals may lack. In addition, deepfake scams tend to be targeted attacks, such as whale phishing, that require time to research the victim.

Potential victims are inadvertently aiding cybercriminals by sharing extensive information about their lives on social media. “The bad guys really don’t have a super-streamlined way to collect victim data and generate the deepfakes in a sufficiently automated fashion yet, but it’s coming,” Hasse warns.

Strategies for Mitigating Deepfake Threats

With more deepfake scams likely to target the corporate world, the question is how to address this growing threat. Given the continuous improvement of deepfake technology, there are no easy answers. Hasse believes awareness and employee training are crucial: employees and executives need to be alert to potential deepfake scams and verify any suspicious request, even if it comes via video call. Making an additional phone call or confirming the request face-to-face is an old-school but effective form of multifactor authentication.

When the asset management industry first began falling victim to voice scams nearly a decade ago, advisors enhanced their know-your-customer approaches, starting conversations with clients about their families, hobbies, and other personal details to help verify identities.

Another potential defense is for company executives and other critical employees to intentionally lie on social media to throw off deepfake attacks. “My guess is at some point there will be certain roles within companies where that is actually required,” Hasse suggests. “If you’re in a sufficiently sensitive role in a sufficiently large corporation, there may be some kind of a level of scrutiny on the social media where a social media czar watches all the accounts.”

Technological and Procedural Measures

Surace’s company sells a wearable, fingerprint-based multifactor authentication device, which he believes can help defend against deepfake scams. Next-generation MFA products, he argues, need to verify identities quickly and securely, for example each time an employee joins a Zoom meeting.

Williams, however, is skeptical about the effectiveness of new technologies or employee training. Some people may resist using new authentication devices, and cybersecurity training has had limited success over time. Instead, he advocates for procedural changes, such as using secure applications for transferring large sums of money instead of email or voice calls.

The End of Voice and Image-Based Authentication

For centuries, people have relied on voices and images to authenticate each other, but that era is ending. “The reality is that using somebody’s voice or image likeness to authenticate that person has always been, if you look at it through a security perspective, inadequate,” Williams concludes. “Technology is catching up with our substandard or ineffective processes.”

As deepfake technology continues to advance, corporations must adapt and implement robust security measures to protect themselves from this emerging threat. Awareness, training, and procedural changes will be crucial in mitigating the risks associated with deepfakes. By staying vigilant and proactive, companies can better defend against the sophisticated tactics employed by cybercriminals leveraging deepfake technology.
