
GenAI Used in MFA Attacks

What is GenAI and How It's Weaponized

Generative AI refers to artificial intelligence models capable of generating human-like text, speech, code, images, and even behavior patterns. While GenAI has transformed productivity tools and automation, it has simultaneously opened new attack vectors for hackers.

When used maliciously, GenAI can:

  • Generate phishing content that convincingly mimics real users and corporate communications
  • Simulate biometric data, such as cloned voices, facial features, or full deepfake videos
  • Replicate login behaviors to deceive anomaly detection systems
  • Craft real-time responses to challenge questions and other dynamic authentication prompts

Multi-Factor Authentication (MFA) Under Siege

MFA has been a cornerstone of digital security, adding layers beyond just usernames and passwords. However, GenAI’s ability to emulate human interaction is dismantling this once-reliable security measure.

Types of MFA Being Targeted

  • SMS-Based MFA: GenAI-powered bots are used in real-time phishing kits to trick users into providing OTPs (One-Time Passwords).
  • Push Notification MFA: Attackers deploy MFA fatigue techniques, overwhelming users with push requests until one is approved by mistake or out of frustration (a simple detection sketch follows this list).
  • Biometric MFA: Voice cloning and face-swapping tools driven by GenAI are used to fool systems relying on speech or facial verification.
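
As a defensive illustration of the push-fatigue pattern above, the sketch below counts push requests per account within a sliding window and flags bursts. It is a minimal sketch: the `PushEvent` record, threshold, and window length are assumptions chosen for the example, not any particular vendor's logic.

```python
from collections import deque
from dataclasses import dataclass
import time

# Hypothetical record of an outgoing MFA push request.
@dataclass
class PushEvent:
    account_id: str
    timestamp: float  # Unix seconds

class PushFatigueDetector:
    """Flags accounts receiving an unusual burst of MFA push requests.

    The threshold and window are illustrative; a real deployment would tune
    them against its own baseline push volume.
    """

    def __init__(self, max_pushes: int = 5, window_seconds: int = 300):
        self.max_pushes = max_pushes
        self.window_seconds = window_seconds
        self._events: dict[str, deque] = {}

    def record_push(self, event: PushEvent) -> bool:
        """Returns True if the recent push pattern looks like an MFA-fatigue attack."""
        history = self._events.setdefault(event.account_id, deque())
        history.append(event.timestamp)
        # Drop events that fall outside the sliding window.
        cutoff = event.timestamp - self.window_seconds
        while history and history[0] < cutoff:
            history.popleft()
        return len(history) > self.max_pushes

detector = PushFatigueDetector()
if detector.record_push(PushEvent("alice", time.time())):
    print("Possible MFA fatigue: pause push approvals and alert the user")
```

Once such a burst is detected, a reasonable response is to suspend further pushes for the account and require a phishing-resistant factor instead.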

The Role of GenAI in Social Engineering

Social engineering is evolving. GenAI makes phishing campaigns scalable, personalized, and real-time. Deep contextual awareness allows attackers to craft hyper-realistic emails, mimic executives’ communication styles, and hold convincing voice conversations using cloned voices.

These tactics increase phishing email open rates, encourage users to click malicious links, and lead to the leakage of authentication credentials.

Real-World MFA Attacks Using GenAI

1. Real-Time MFA Interception Attacks

Attackers set up reverse proxy phishing pages using tools like Evilginx. With GenAI integrated, these pages adapt to the victim’s behavior, offering live responses and updated prompts. When the victim enters MFA credentials, attackers capture session tokens and gain access—bypassing MFA entirely.
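A mitigation often discussed for this class of attack is binding the session token to attributes of the client that originally authenticated, so a token replayed from the attacker's proxy infrastructure fails validation. The sketch below illustrates the idea with a keyed fingerprint of IP address and user agent; the helper names and in-memory token store are assumptions for this example, and stronger approaches such as device-bound credentials exist.

```python
import hashlib
import hmac

SERVER_SECRET = b"replace-with-a-real-secret"  # illustrative only

def client_fingerprint(ip: str, user_agent: str) -> str:
    """Derives a keyed fingerprint from client attributes seen at login."""
    material = f"{ip}|{user_agent}".encode()
    return hmac.new(SERVER_SECRET, material, hashlib.sha256).hexdigest()

def issue_session(token_store: dict, token: str, ip: str, user_agent: str) -> None:
    """Stores the fingerprint alongside the session token at login time."""
    token_store[token] = client_fingerprint(ip, user_agent)

def validate_session(token_store: dict, token: str, ip: str, user_agent: str) -> bool:
    """Rejects the session if the token is presented from a different client context."""
    expected = token_store.get(token)
    if expected is None:
        return False
    return hmac.compare_digest(expected, client_fingerprint(ip, user_agent))

# A token captured by a reverse proxy and replayed from the attacker's own
# machine produces a different fingerprint and is rejected.
store: dict = {}
issue_session(store, "tok123", "203.0.113.10", "Mozilla/5.0")
print(validate_session(store, "tok123", "198.51.100.7", "curl/8.0"))  # False
```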

2. Deepfake Voice in Helpdesk Fraud

Using GenAI voice models, attackers mimic employees calling IT helpdesks, requesting password resets or MFA enrollment changes. Without adequate verification protocols, IT personnel can unknowingly grant access to the threat actor.
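One countermeasure is to make any helpdesk-initiated reset or MFA enrollment change contingent on an out-of-band confirmation from a device already registered to the account, rather than on how convincing the caller sounds. The sketch below outlines that gate; the account record and the injected `send_push_confirmation` callback are hypothetical.

```python
from enum import Enum

class ResetDecision(Enum):
    APPROVED = "approved"
    DENIED = "denied"
    ESCALATE = "escalate"

def handle_reset_request(account: dict, send_push_confirmation) -> ResetDecision:
    """Gates a helpdesk password/MFA reset behind an out-of-band confirmation.

    `account` is a hypothetical record; `send_push_confirmation` is an injected
    function that prompts a previously registered device and returns True/False.
    """
    # Never approve on the strength of a voice call alone: cloned voices
    # defeat "sounds like the employee" checks.
    if not account.get("registered_devices"):
        # No trusted device on file: escalate to an in-person or manager-verified flow.
        return ResetDecision.ESCALATE

    confirmed = send_push_confirmation(account["registered_devices"][0])
    return ResetDecision.APPROVED if confirmed else ResetDecision.DENIED

# Usage with a stubbed confirmation step:
account = {"registered_devices": ["device-abc"]}
print(handle_reset_request(account, lambda device: False))  # ResetDecision.DENIED
```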

3. Synthetic ID Fraud for Account Takeovers

Combining GenAI with synthetic identity creation, attackers use AI-generated documentation and behavioral data to pass KYC (Know Your Customer) checks and register for MFA-protected accounts, which they control from inception.

Why Traditional MFA Solutions Are Failing

Traditional MFA solutions were built to protect against predictable, rule-based attacks. GenAI breaks this model by introducing intelligent, adaptive, and human-like behaviors that bypass logic-based detection systems.

Key weaknesses being exploited include:

  • Predictable authentication flows
  • Inadequate verification at helpdesks
  • Over-reliance on static biometric models
  • Inability to detect behavioral mimicry

Advanced Detection is the New Defense

To combat GenAI-driven MFA attacks, cybersecurity strategies must evolve from static rule-based systems to dynamic, AI-enhanced defense models. VerifiedThreat uses Agentic AI to simulate these new attack vectors.
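
As a rough illustration of what moving from static rules to signal-based, adaptive decisions can look like, the sketch below combines several independent risk signals into a single score that can trigger step-up verification. The signal names and weights are assumptions chosen for the example, not a description of VerifiedThreat's platform.

```python
from dataclasses import dataclass

@dataclass
class LoginSignals:
    """Hypothetical per-login signals an adaptive system might evaluate."""
    new_device: bool
    impossible_travel: bool
    push_requests_last_5min: int
    typing_cadence_deviation: float  # 0.0 = matches baseline, 1.0 = completely off

def risk_score(s: LoginSignals) -> float:
    """Combines independent signals into a 0-1 risk score (weights are illustrative)."""
    score = 0.0
    if s.new_device:
        score += 0.25
    if s.impossible_travel:
        score += 0.35
    if s.push_requests_last_5min > 3:
        score += 0.20
    score += 0.20 * min(s.typing_cadence_deviation, 1.0)
    return min(score, 1.0)

signals = LoginSignals(new_device=True, impossible_travel=False,
                       push_requests_last_5min=6, typing_cadence_deviation=0.8)
if risk_score(signals) >= 0.5:
    print("Step up: require phishing-resistant verification before granting access")
```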

