For decades, cybercriminals have impersonated targets’ trusted contacts to convince them to send funds, credentials or sensitive data. Thanks to deepfake and voice cloning technology, however, security awareness training — the usual countermeasure to social engineering attacks — is arguably no longer enough.
Traditional security awareness training relies on pattern recognition: Does this email look suspicious? Does that link seem off? But highly convincing deepfake audio and video attacks mean users can no longer rely on instinct or context cues to determine if a message is legitimate.
“Recognition-based training breaks down when an employee believes they’re talking to an executive with an urgent request,” said Diana Rothfuss, director of global strategy for risk, fraud and compliance solutions at data and AI software provider SAS. “To defend against this type of threat, organizations have to get their employees to go beyond ‘does this look right?’”
The vast majority of fraud professionals — 77% — say deepfake attacks are increasing, according to the 2026 Anti-Fraud Technology Benchmarking Report, co-published by SAS and the Association of Certified Fraud Examiners (ACFE). Just 7% described their organizations as more than moderately prepared to detect or prevent deepfakes. As a result, some security experts are calling on organizations to implement and normalize proof-based systems, processes and policies to verify that people are who they say they are and short-circuit deepfake attacks.
Prove it: Separating authority from authentication
The core principle of a proof-based approach is that no single interaction, whether voice, video or text, can authorize a sensitive action on its own — what SAS’ Rothfuss described as “separating authority from authentication.” That sounds straightforward but runs against how most employees are wired to respond to executive requests.
Consider, for example, a 2024 incident in which threat actors used deepfake technology to steal $25 million from global engineering firm Arup. A finance employee, believing he was on a video conference with senior executives, wired the money at the attackers’ request.
While such highly sophisticated deepfake video attacks are still relatively rare, audio cloning is a light lift for cybercriminals. Experts say such incidents present a clear mandate for finance and IT teams to formalize processes for verifying wire transfer requests, rather than handling them on an ad hoc basis.
“Proof-based verification policies should not be that hard; frankly, they should already exist,” said Ira Winkler, field CISO at cybersecurity company Aisle. “There should now be operational procedures in place, such as email verification of a financial transfer before transferring the money, even with ‘visual’ instruction.”
Equally important, Winkler added, staff must be trained on such policies and understand that there are no exceptions — even if they receive verbal instructions from a senior executive over the phone or on Zoom. “This is not just for deepfakes, but for fraud protections in general,” he said.
Specific authentication controls that do not depend on a human user’s recognition of a voice or face include the following:
Out-of-band, two-factor verification
Before fulfilling sensitive requests — e.g., fund transfers, credential resets and privileged access changes — users must obtain confirmation through two separate, pre-approved channels, such as an internal authentication app and a team messaging platform. Because of the rising prevalence of deepfakes and voice cloning, video calls, phone calls and voicemails do not satisfy this requirement.
“How I will contact you” protocols
Executives and IT leadership establish in advance specific channels they will use for sensitive requests. Any request arriving outside those channels triggers a mandatory hold and verification through a separate, trusted path.
“Employees can no longer rely on instinct to determine whether a message is legitimate,” said T. Frank Downs, senior director of proactive services at BlueVoyant, a cybersecurity services provider based in New York. “We need to reinforce the idea that identity is confirmed through process and verification steps.”
Pre-established verification phrases
Known only to authorized parties, these phrases confirm identity in high-stakes communications without relying on voice or video recognition.
Designated approvers
No single employee can authorize a high-risk transaction. A named secondary approver must confirm before funds move or access is granted.
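To make the logic of these controls concrete, the following is a minimal sketch of how they might be encoded as a policy check. All channel names, field names and the `TransferRequest`/`is_authorized` helpers are illustrative assumptions, not any vendor's API; a real implementation would sit behind identity, messaging and payment systems.

```python
from dataclasses import dataclass, field

# Channels that deepfakes can convincingly spoof; per the out-of-band
# rule above, they never count as proof of identity on their own.
UNTRUSTED_CHANNELS = {"video_call", "phone_call", "voicemail"}

# Pre-approved out-of-band channels, established in advance
# ("how I will contact you" protocol).
APPROVED_CHANNELS = {"auth_app", "team_messaging"}


@dataclass
class TransferRequest:
    amount: float
    requester: str
    origin_channel: str  # channel the request arrived on
    # Channels that independently confirmed the request.
    confirmations: set = field(default_factory=set)
    # Named secondary approvers who signed off.
    approvers: set = field(default_factory=set)


def is_authorized(req: TransferRequest) -> bool:
    """Authorize only when every control is satisfied: two separate
    pre-approved out-of-band confirmations plus a designated approver
    who is not the requester. The origin channel itself never
    authorizes anything -- 'separating authority from authentication'."""
    out_of_band = req.confirmations & APPROVED_CHANNELS
    if len(out_of_band) < 2:
        return False  # needs two independent approved channels
    if not (req.approvers - {req.requester}):
        return False  # secondary approver must be a different person
    return True
```

In this sketch, a request arriving on a video call with no out-of-band confirmation is refused regardless of how convincing the caller looks, which is the whole point of treating verification as a non-negotiable process rather than a judgment about the caller's face or voice.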
The hard part: Executing consistently and under pressure
Policy design is the easier part of proof-based verification. Consistent execution under real conditions is where most programs fall short. Experts suggested the following best practices to improve governance and human follow-through:
Treat verification as a safety rail, not a judgment call
Deepfake video- and audio-based attacks, like traditional business email compromise, are designed to generate urgency at precisely the moment verification matters most.
“Verification isn’t optional,” Rothfuss said. “That means instituting proof-based controls that operate as non-negotiable safety rails, not something discretionary that employees can skip when they’re feeling pressured or rushed. As with other less sophisticated scams, pressure and urgency is precisely the point.”
Get executives on record before an incident occurs
Staff will not push back on out-of-channel requests unless leadership has made clear in advance that doing so is expected and part of the organization’s cybersecurity culture.
“That requires defining the rules well in advance, so executives understand and encourage pushback, and employees don’t feel forced to improvise under duress,” Rothfuss said.
Reinforce continuously, not just once
Staff who understand how verification controls protect the organization are more likely to adopt them, but that understanding does not make the behavior automatic.
“Under pressure, people tend to fall back into old habits, which is exactly when verification is most important,” Downs said. That makes continuous training and reinforcement a must.
Build a culture in which slowing down is the norm
Adoption ultimately depends on employees feeling confident that if they pause to verify requests, leaders will reward rather than penalize them for doing so.
“Organizations need to normalize ‘see something, say something’ behavior and make verification frictionless,” said Mika Aalto, co-founder and CEO at Hoxhunt, a Helsinki-based human risk management vendor. “The real challenge is cultural: giving employees confidence that slowing down to verify is expected, supported and reinforced through human risk management practices.”
Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He has pulled Token Ring, configured NetWare and been known to compile his own Linux kernel. He consults with industry and media organizations on technology issues.
