Losses from deepfake attacks are expected to reach $40 billion by 2027

MONews



Losses from deepfakes, one of the fastest-growing forms of adversarial AI today, are expected to surge from $12.3 billion in 2023 to $40 billion by 2027, an average annual growth rate of 32%, according to Deloitte. Deepfakes are expected to proliferate in the coming years, with banking and financial services a prime target.
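As a quick sanity check (not part of the original reporting), the growth rate implied by those two figures can be computed directly from the compound-growth formula:

```python
def cagr(start, end, years):
    # Compound annual growth rate implied by a start value, an end value,
    # and the number of years between them: (end/start)^(1/years) - 1.
    return (end / start) ** (1.0 / years) - 1.0

# $12.3B in 2023 growing to $40B by 2027 spans four years.
implied = cagr(12.3, 40.0, 4)
print(f"Implied CAGR: {implied:.1%}")
```

The implied rate works out to roughly 34% per year, consistent with the approximately 32% average annual growth Deloitte cites.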

Deepfakes represent the cutting edge of adversarial AI attacks, with incidents rising 3,000% last year alone. Deepfake incidents are projected to increase by another 50% to 60% in 2024, with 140,000-150,000 cases predicted worldwide this year.

The latest generation of generative AI apps, tools, and platforms gives attackers what they need to create deepfake videos, spoofed audio, and fraudulent documents quickly and at extremely low cost. Pindrop's 2024 Voice Intelligence and Security Report estimates that deepfake fraud targeting contact centers costs around $5 billion a year. Its report underscores how serious a threat deepfake technology poses to banking and financial services.

Bloomberg reported last year that "there is already an entire cottage industry on the dark web selling fraudulent software for anywhere from $20 to thousands of dollars." Sumsub's Identity Fraud Report 2023 provides a global perspective on the rapid growth of AI-based fraud.




Source: Statista, How Dangerous Are Deepfakes and Other AI-Based Fraud? March 13, 2024

Companies are unprepared for deepfakes and adversarial AI.

Adversarial AI creates new attack vectors that no one expects, creating a more complex and nuanced threat environment that prioritizes identity-based attacks.

Unsurprisingly, one in three companies has no strategy for addressing the risk of an adversarial AI attack, which would most likely begin with deepfake videos of their senior executives. Ivanti's latest research finds that 30% of enterprises have no plan for identifying and defending against adversarial AI attacks.

Ivanti's 2024 State of Cybersecurity Report found that 74% of organizations surveyed are already seeing evidence of AI-based threats. A majority, 89%, believe AI-based threats are just getting started. Of the CISOs, CIOs, and IT leaders Ivanti interviewed, 60% fear their organizations are not prepared to defend against AI-based threats and attacks. Using deepfakes as part of an orchestrated strategy that includes phishing, software vulnerabilities, ransomware, and API-related vulnerabilities is becoming more common, in line with the threats security professionals expect to become more dangerous with gen AI.

Source: Ivanti 2024 State of Cybersecurity Report

Attackers focus their deepfake efforts on CEOs.

VentureBeat regularly hears from enterprise software cybersecurity CEOs, who prefer to remain anonymous, about how deepfakes have evolved from easily identifiable fakes to recent videos that look legitimate. Voice and video deepfakes appear to be a favorite attack strategy against industry executives, aimed at scamming millions of dollars from their companies. Adding to the threat is how aggressively nation-states and large cybercrime organizations are working to develop, hire, and grow their expertise with generative adversarial network (GAN) technology. Of the thousands of CEO deepfake attempts that have occurred this year alone, the one targeting the CEO of the world's largest advertising company shows how sophisticated attackers are becoming.

In a recent Tech News Briefing with The Wall Street Journal, CrowdStrike CEO George Kurtz explained how advances in AI help cybersecurity practitioners defend their systems, while also touching on how attackers are using AI. Kurtz spoke with WSJ reporter Dustin Volz about AI, the 2024 U.S. presidential election, and the threats posed by China and Russia.

“Deepfake technology today is really good. I think that’s one of the areas that’s really concerning. Back in 2016, we were tracking this, and we saw people actually having conversations with bots. That was in 2016. They were literally having arguments or promoting their own causes in what seemed like conversational interactions, and there was no one behind it. So I think it’s pretty easy for people to get caught up in the idea that it’s real, or that there’s a narrative that we want to support, but a lot of it can be and has been driven by other countries,” Kurtz said.

CrowdStrike’s intelligence team has invested significant time into understanding the nuances of what makes a convincing deepfake and figuring out where technology is headed to maximize impact for viewers.

“What we’ve seen in the past, and we’ve spent a lot of time with CrowdStrike’s intelligence team looking into this issue, is it’s like a pebble in a pond. You take a topic or you hear a topic, and everything that’s relevant to the geopolitical environment, you drop a pebble in the pond and all the ripples go out. And there’s this amplification,” Kurtz added.

CrowdStrike is known for its deep expertise in AI and machine learning (ML) and its unique single-agent model that has proven effective in driving its platform strategy. Given the company’s deep expertise, it’s understandable why the team is experimenting with deepfake technology.

“And now, in 2024, with the ability to create deepfakes, some of our internal team have made funny parody videos of me just to show how scary it can be, and you wouldn’t know it wasn’t me. So I think that’s one of the things that really worries me,” Kurtz said. “There’s always concern about infrastructure and that kind of thing. In that area, a lot of it is still things like paper ballots. Some of it isn’t. But how do you create false narratives that get people to do what the state wants them to do? That’s what I’m really worried about.”

Companies must rise to the challenge.

Companies run the risk of losing the AI war unless they keep pace with attackers' rapid weaponization of AI for deepfake attacks and all other forms of adversarial AI. Deepfakes have become so commonplace that the Department of Homeland Security has published a guide, Increasing Threats of Deepfake Identities.
