66% of IT Teams Are Ignoring the Biggest Cyber Threat Yet: AI

Attackers are already using generative AI to bypass trust and compromise systems, turning what once felt like distant threats into present-day realities. From deepfake impersonation to rapid password cracking, these tools are making cyberattacks faster, smarter, and harder to detect. For example, in one recent case flagged by the FBI, malicious actors used AI-generated audio and texts to impersonate senior U.S. officials, attempting to lure targets into phishing schemes designed to steal login credentials.

As AI-powered attacks grow more sophisticated, security teams are under increasing pressure to adapt. To gain an insider’s view of the situation, we surveyed 1,019 IT professionals about the AI cyber threats they face, the authentication systems they rely on, and how their teams are handling it all. The results reveal a critical gap: The old rules of security no longer apply, and organizations need to catch up quickly.

Key Takeaways

  • 35% of IT professionals report a rise in cyberattacks over the past year, with 51% of them saying it’s due to AI.
  • Over two in five IT professionals say that generative AI has enabled deepfake impersonation (44%) and accelerated password cracking attempts (42%).
  • Over one in five IT workers faced more than 10 AI-driven cyberattacks in the past year.
  • 61% have encountered new types of cyberattacks that didn’t exist two years ago.
  • 51% view passwords as the weakest part of their authentication system against AI-powered threats.
  • 51% believe their authentication systems couldn’t withstand AI-powered attacks today.

AI-powered cyberattacks are here

AI is enhancing productivity, but it’s also accelerating cybercrime. IT professionals across industries are witnessing a clear shift in attack patterns, powered by generative adversarial networks (GANs), multimodal AI, and large language models (LLMs). These technologies are driving a rise in synthetic media — realistic fake voices, videos, and messages designed to deceive at scale.

More than one in three IT professionals (35%) said their organization experienced an increase in cyberattacks over the past 12 months, and 51% of those respondents attributed the surge to AI-driven capabilities. Cyberattack spikes were most often reported by IT professionals in government (52%), finance (48%), and healthcare (45%).

When asked how generative AI has changed the nature of cyberattacks, IT professionals most often said it has enabled convincing deepfake voice and video impersonations (44%) and accelerated password cracking attempts (42%). Many IT professionals (61%) also reported that their organization has seen new types of attacks that did not exist just two years ago.

In the past year, over one in five IT professionals experienced more than 10 AI-driven attacks, and 34% reported phishing attempts that used their CEO’s voice or likeness. Impersonating business leaders has become disturbingly effective.

One multinational company in Hong Kong recently lost over $25 million after a finance employee joined a video call with what looked like their CEO and CFO. The entire meeting was fake. Scammers used AI to clone the executives’ identities and staged a realistic video call to trick the employee into authorizing the transfer. This wasn’t just a phishing attempt. It was a full-scale deepfake operation.

This is what the new era of cyberattacks looks like. The old playbook — check for typos, inspect links, verify the sender — no longer works when attackers can mimic your CEO on a video call. AI has rewritten the rules, and security teams need to respond with smarter defenses. That means using authentication methods that can’t be phished, adding real-time context checks to login flows, and protecting user identity as closely as you would your infrastructure. Yesterday’s red flags won’t stop today’s deepfakes. But the right controls can.
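To make “real-time context checks” concrete, here is a minimal sketch of a risk-based login check. Everything in it is illustrative: the names (assessLoginRisk, LoginContext), the signals, and the scoring thresholds are assumptions for this example, not any particular vendor’s API, and a production system would draw on far richer signals than these.

```typescript
// Illustrative sketch of a real-time context check during login.
// All names and thresholds here are hypothetical, for explanation only.

interface LoginContext {
  userId: string;
  ipAddress: string;
  deviceId: string;
  timestamp: Date;
}

interface UserHistory {
  knownDeviceIds: Set<string>;
  knownIpPrefixes: Set<string>; // e.g., /24 prefixes seen on past logins
}

type RiskDecision = "allow" | "step-up" | "deny";

function assessLoginRisk(ctx: LoginContext, history: UserHistory): RiskDecision {
  let score = 0;

  // Unknown device: a strong signal that stolen credentials are being replayed.
  if (!history.knownDeviceIds.has(ctx.deviceId)) score += 2;

  // Unfamiliar network: compare the /24 prefix against previously seen ranges.
  const prefix = ctx.ipAddress.split(".").slice(0, 3).join(".");
  if (!history.knownIpPrefixes.has(prefix)) score += 1;

  // Unusual hours (a crude placeholder for behavioral baselining).
  if (ctx.timestamp.getUTCHours() < 5) score += 1;

  // Thresholds are arbitrary here; real systems tune them against
  // observed attack rates and false-positive tolerance.
  if (score >= 3) return "deny";
  if (score >= 1) return "step-up"; // e.g., require a passkey or TOTP challenge
  return "allow";
}

// Example: a familiar device on an unfamiliar network triggers step-up auth.
const decision = assessLoginRisk(
  { userId: "u1", ipAddress: "203.0.113.7", deviceId: "dev-laptop", timestamp: new Date() },
  { knownDeviceIds: new Set(["dev-laptop"]), knownIpPrefixes: new Set(["198.51.100"]) }
);
console.log(decision); // "step-up"
```

The design point is that the check runs on every login, not just suspicious ones, so a deepfake-equipped attacker who has the right password still faces a challenge they cannot satisfy.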

Authentication systems are scrambling to adapt

As AI-powered threats evolve, authentication systems are struggling to keep pace. Some IT teams are taking action, but outdated methods and internal resistance are slowing progress.

More than half of IT professionals (53%) said they’ve already made changes to their authentication flows due to AI threats. But a core weakness remains: 51% identified passwords as the most vulnerable part of their login stack. Despite years of awareness, passwords are still the Achilles’ heel of modern authentication.

Even so, many organizations haven’t made the leap to stronger alternatives. A majority of respondents (57%) reported delays in adopting passwordless authentication, citing complexity (34%), budget constraints (27%), and lack of internal buy-in (19%) as the biggest blockers. For many security teams, the future is clear, but the path forward is cluttered with legacy systems and stakeholder hesitancy.
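For teams weighing that leap, the browser side of passwordless login is smaller than the complexity concerns might suggest. Below is a minimal sketch of passkey registration using the standard WebAuthn API (navigator.credentials.create). The rp, user, and challenge values are placeholders; a real deployment would generate the challenge server-side and verify the returned credential there.

```typescript
// Minimal browser-side sketch of passkey (WebAuthn) registration.
// The rp, user, and challenge values below are placeholders.

async function registerPasskey(): Promise<void> {
  const options: PublicKeyCredentialCreationOptions = {
    // In production, the challenge is random bytes generated by the server.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example App", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-123"), // stable user handle from your backend
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },   // ES256
      { type: "public-key", alg: -257 }, // RS256
    ],
    authenticatorSelection: {
      residentKey: "required",      // discoverable credential, i.e., a passkey
      userVerification: "required", // biometric or device PIN
    },
  };

  const credential = await navigator.credentials.create({ publicKey: options });
  // Send `credential` to the server for attestation verification and storage.
  console.log("Created credential:", credential);
}
```

Because the resulting key pair is bound to the site’s origin, a convincing phishing page on a lookalike domain cannot obtain a usable assertion. That origin binding, not user vigilance, is what makes passkeys resistant to the deepfake-driven lures described above.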

Stress is high, and preparation is lagging

Despite growing awareness, many IT professionals openly admit their systems aren’t ready for what’s coming. Worse, most aren’t even preparing for it.

While some organizations are beginning to adapt, many are still stuck in a reactive mode. Only one in three IT professionals said their company has created new “red-team exercises” to simulate AI-enabled threats.

Even more concerning, 66% said their team doesn’t dedicate any time each month to reviewing or updating internal protocols or security practices related to AI-driven threats. And about half of IT professionals believe their current authentication systems wouldn’t hold up against an AI-powered cyberattack.

This lack of structured planning reveals a gap between awareness and action. Half of the respondents said AI-driven threats deserve their own category within cybersecurity frameworks, but most teams aren’t approaching them that way. Without a clear strategy tailored to these emerging risks, even well-resourced security systems may fall short.

And it’s not just systems under pressure. People are feeling it, too. Half of IT professionals said the effort to track and respond to AI threats is increasing stress across their teams.

Rethinking authentication in the age of AI

Generative AI is changing the rules of engagement in cybersecurity, and IT professionals are already seeing the impact. They’re encountering more attacks, facing smarter phishing, and contending with new tactics like real-time deepfake impersonation. Passwords, long the default defense, are no match for adversaries who can scale attacks with machine precision. While some organizations are adapting, many remain underprepared for the pace of change.

It’s time to move beyond reactive security. Organizations that modernize their login experiences and align security with usability will be better positioned to secure their users, protect sensitive data, and maintain trust in an AI-driven future.

Methodology

We surveyed 1,019 IT professionals to explore how artificial intelligence (AI) is changing cybersecurity threats and defenses. The data was collected in May 2025.

About Frontegg

Frontegg makes customer identity and access management effortless by extending controls beyond engineering. Developers are freed from routine authentication tasks, while teams like Customer Success, Product, and Infosec can manage user access, security policies, and compliance settings without relying on engineering.

By distributing ownership of identity, Frontegg reduces developer toil, strengthens security and compliance, and enhances the customer experience. Developers focus on innovation, teams move faster without bottlenecks, and businesses scale securely. The result is a win-win.

Fair Use Statement

You’re welcome to share or reference this data for noncommercial purposes. Just be sure to include proper attribution and a link back to Frontegg.
