Blog

Has AI killed authentication?... Not when it’s done right.

Authentication | Fraud prevention | Security | Technology
Schalk Nolte, Entersekt Chief Executive Officer

Voice authentication—the idea that your unique voiceprint can act as a secure password—has long been seen as a modern, convenient solution for financial institutions. No more fumbling for a password or answering security questions. Just a quick phrase, and you’re in.

But in an age of rapidly advancing artificial intelligence (AI), this convenience has become a terrifying security risk. OpenAI CEO Sam Altman recently called the continued use of voice authentication in banking "crazy" and warned of a "significant impending fraud crisis" driven by AI's ability to perfectly mimic human voices.

So, what remains trustworthy?

Rethinking authentication: The classic triad

The risks we now face require a return to security fundamentals. Authentication has traditionally relied on three factors:
  • Something you know (like a password),
  • Something you have (like a device), and
  • Something you are (like a fingerprint or voice).
Security standards around the world—including PSD2's Strong Customer Authentication (SCA)—require at least two of these factors, and they must not overlap.
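The two-distinct-factor rule can be made concrete with a small sketch. This is a toy illustration of the SCA principle described above, not any real compliance library; the factor names and categories are illustrative assumptions.

```python
# Toy sketch of the SCA rule: at least two factors, drawn from
# different categories (knowledge / possession / inherence).
# Factor names here are illustrative, not from any real product.
FACTOR_CATEGORIES = {
    "password": "knowledge",
    "security_question": "knowledge",
    "registered_device": "possession",
    "hardware_key": "possession",
    "fingerprint": "inherence",
    "voiceprint": "inherence",
}

def satisfies_sca(presented_factors):
    """True if the presented factors span at least two distinct categories."""
    categories = {FACTOR_CATEGORIES[f] for f in presented_factors}
    return len(categories) >= 2

print(satisfies_sca(["password", "fingerprint"]))        # True: two categories
print(satisfies_sca(["password", "security_question"]))  # False: both 'knowledge'
```

Note that a password plus a security question fails the check: two credentials, but the same "something you know" pillar, so they overlap.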

Unfortunately, AI is eroding one of those three pillars in dangerous ways.

The problem with voiceprints in the age of AI

The fundamental flaw in voice authentication is that it is now easier than ever to spoof, even when used in multi-factor authentication (MFA) alongside a knowledge-based factor. A voiceprint is essentially a data template of your vocal characteristics: pitch, tone, cadence, and accent. For years this was considered a robust security measure, since those characteristics are unique to an individual, but AI can now produce convincing copies. Even additional cues such as background noise or timing can be imitated by AI trained to behave human-like.

The rise of generative AI has completely changed the game. Here's why your customers are at risk:
  • Deepfake voice cloning: With as little as a few seconds of audio from a social media video, a podcast, or even a voicemail, sophisticated AI models can now create eerily realistic synthetic voices. These "deepfakes" can perfectly mimic not just a person's voice but also their intonation and speaking style. This means a fraudster can call a bank, play a deepfake audio clip, and bypass the voice authentication system designed to protect you.
  • Availability of tools: The technology to create these voice clones is no longer the exclusive domain of state-sponsored hackers. Voice-cloning-as-a-service (VCaaS) tools are becoming increasingly accessible, making sophisticated fraud attacks a reality for a wider range of malicious actors.
  • The "liveness" problem: Traditional voice biometrics analyze a sound wave, but they struggle to differentiate between a live voice and a high-quality recording or an AI-generated deepfake. While some systems are trying to incorporate "liveness" detection, they are in a constant battle to keep up with the exponential improvements in AI.
"A key point to understand is that a voiceprint can't be changed. Unlike a compromised password, you can't simply reset your voice. Once your voice data is in the hands of a fraudster, it can be used against you indefinitely."

Beyond biometrics: Alternatives for stronger authentication

The solution isn't to abandon biometrics entirely, but to move beyond vulnerable solutions like a voiceprint. And unfortunately voice is just the starting point. Generative AI means that document proof, Optical Character Recognition (OCR)-reliant enrollment, and liveness detection of selfies are no longer reliable means of identification because of rapidly evolving cloning abilities.

The future of authentication lies in a layered approach that makes it exponentially more difficult for fraudsters to gain access.

The only way to beat AI and the fraudsters that leverage it in their attacks is to implement MFA with a strong possession factor:
  • A possession factor that leverages a public-private keypair cannot be faked or intercepted.
  • This possession factor should be supplemented with a second, inherence factor that also adheres to the keypair principle: for example, FIDO, which uses a phishing-resistant, passwordless biometric built into device hardware (such as your phone or laptop) and can draw on multiple sensors.
  • Both factors should provide cryptographic proof.
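The "cryptographic proof" in these bullets typically takes the form of a challenge-response: the server issues a fresh random challenge, the user's device signs it with a private key that never leaves the hardware, and the server verifies the signature with the matching public key. The sketch below illustrates the idea with deliberately tiny textbook RSA; real deployments use hardened FIDO2/WebAuthn implementations, never hand-rolled crypto like this.

```python
# Toy challenge-response with a public-private keypair.
# Textbook RSA with small fixed primes -- for illustration ONLY,
# never for real security (use a FIDO2/WebAuthn library instead).
import hashlib
import secrets

p, q = 1000003, 1000033            # small primes, far too weak for real use
n = p * q                          # public modulus
e = 65537                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (stays on the device)

def sign(message: bytes) -> int:
    """Device side: only the holder of the private key can produce this."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Server side: anyone with the public key (n, e) can check it."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

challenge = secrets.token_bytes(16)   # fresh per attempt, so replays fail
sig = sign(challenge)                 # happens on the user's device
print(verify(challenge, sig))         # True: cryptographic proof of possession
print(verify(b"tampered", sig))       # False: signature bound to the challenge
```

Unlike a voiceprint, this proof cannot be replayed or cloned from public audio: each login uses a fresh challenge, and a deepfake gives the attacker nothing without the private key.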

Entersekt’s perspective: Redefining trust

At Entersekt, our solutions are helping financial institutions (FIs) worldwide to redefine trust in the age of generative AI in the following ways:
  • Our patented Device ID possession factor frictionlessly challenges the endpoint to provide proof of a cryptographic keypair.
  • This is combined with our FIDO2 certified biometrics, also leveraging cryptographic keypairs, as an inherence factor.
  • This forcefield is then enhanced with risk intelligence to identify any additional anomalies, offering our clients, and their customers, the strongest protection for their finances.
"One of our large digital banking clients achieved a 99.23% fraud reduction with this approach. For login, they opted for a passive method, leveraging our Device ID possession factor, resulting in a 98.2% frictionless login rate."
Financial institutions must move away from outdated solutions and embrace a multi-layered, adaptive approach to authentication. The security of our financial systems—and our money—depends on it.