Momenta Threat Intelligence

The New Economics of Fraud

Identity is now cheaper to fake than to verify.
That breaks everything.

Author: Lukas Bruell, Founding Business Officer
Topic: Deepfake Fraud
Read time: 6 min
Summary

Deepfake fraud now scales like phishing, but arrives with executive-level credibility. The economics changed permanently in the last 18 months.

Your controls activate after the decision is already made, two layers downstream of where the attack succeeds.

Training alone will not fix this. Awareness helps, but the problem is architectural — controls need to move upstream, not just people's mindsets.

The only coherent defense moves upstream to the identity layer. That means simulation and real-time detection.

The most dangerously exposed organizations are the ones that believe they're already protected.

In building detection and simulation infrastructure for enterprises facing these attacks, we have come to understand this threat in a way that differs from most industry analysis. The risk is not primarily technical. It is economic. For the first time, attacks can be both highly personalized and cheap to execute at scale.

This analysis reframes deepfake fraud not as a media authenticity issue but as a structural vulnerability in how enterprise decisions get made. The defenses most organizations have are designed to catch fraud at the transaction layer, while the Synthetic Trust Problem operates two layers upstream.

How Deepfake Attacks Actually Work Inside Organizations
The specific patterns, case studies, and where controls fail.

How cheap has it become to fake someone's identity?

In 2019, a targeted deepfake attack cost thousands of dollars and required specialist skills. Today, voice cloning starts at as little as $0.01 per minute. That is a 99.99% price drop in five years.

Year | Cost / minute | Skill required | Change vs. 2019
2019 | $20,000       | Specialist     | Baseline
2022 | $500          | Moderate       | ↓ 97%
2024 | $50           | Low            | ↓ 99.7%
2025 | $0.20         | Anyone         | ↓ 99.99%
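The drop percentages in the table follow directly from the 2019 baseline. A quick sanity check in Python (the table rounds the exact values down):

```python
baseline = 20_000.0  # 2019 cost per minute, USD
costs = {2022: 500.0, 2024: 50.0, 2025: 0.20}

for year, cost in costs.items():
    drop = (1 - cost / baseline) * 100  # percent below the 2019 baseline
    print(f"{year}: ${cost:,.2f}/min -> {drop:.3f}% drop")
```

This prints 97.500%, 99.750%, and 99.999% for 2022, 2024, and 2025 respectively, which the table truncates to 97%, 99.7%, and 99.99%.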

But cost is only half the story. Quality crossed a threshold at the same time. Real-time face swaps now hold consistent skin tone across lighting conditions. Voice clones replicate not just pitch and accent, but the micro-patterns of speech: hesitation rhythms, filler words, the way someone trails off at the end of a sentence. Lip sync on enterprise video calls is indistinguishable from the real thing, even to people who know the target personally.

When attacks are nearly free and indistinguishable from real people, fraud stops being occasional and becomes constant.

The loss data confirms it. From 2019 to 2023, cumulative deepfake fraud losses totaled $130M. In 2024 alone: $400M. By 2025: $1.56 billion. [Surfshark 2025]

[Chart] Deepfake fraud losses, annual ($M). Source: Surfshark 2025.

Why are existing enterprise defenses failing against deepfake attacks?

Most enterprise security operates at layer three. Deepfake attacks complete at layer one. By the time your controls engage, the critical judgment has already been made.

Your defenses verify transactions. Attackers manipulate decisions.

Momenta's Synthetic Trust Problem Framework

Layer 01: Identity (the attack surface)
A synthetic trust signal is manufactured. Voice, face, context. The decision is primed before any system is involved.

Layer 02: Decision
A person acts on the manufactured identity. Approves the transfer. Accepts the hire. The attack has already succeeded.

Layer 03: Transaction (where most controls live)
Payment systems, fraud detection, audit logs. Built to evaluate whether the transaction is valid, not the judgment that authorized it.

If your defense starts at the transaction layer, you've already lost.

The transfer was approved before your system was consulted. The hire completed before identity was verified. The payment authorized before fraud detection had a signal to evaluate. Deepfake attacks do not breach your defenses. They finish the job before your defenses begin.

What does a real deepfake attack look like from the inside?

The mechanics are worth walking through slowly, because each step is unremarkable until you see them together.

Case walkthrough: Hong Kong, February 2024 ($25M lost)

1. Employee receives a routine meeting invite. The CFO is listed as organizer. Nothing flags. This is normal.

2. The call begins; the CFO and colleagues are on screen. They look right. They sound right. Every participant except the employee is synthetic, reconstructed from publicly available footage.

3. A financial instruction is given: transfer funds. Right format, right authority, right context. No system has been consulted. No control has triggered.

4. The employee approves and executes the transfer. The payment system receives a legitimate instruction from an authorized user. It processes correctly. HK$200M moves.

Only afterward is the fraud detected. No system failed. The process worked exactly as designed. The Synthetic Trust Problem had already succeeded two layers upstream.

The attack didn't defeat the system.
It finished before the system started.

Will training employees stop deepfake fraud?

Most deepfake defenses are focused on the wrong layer.

The conventional response is that training is all you need.

The implicit assumption: people fail because they don't know enough. If they knew more, they'd behave differently.

That framing is wrong, and the industry has been slow to say so directly.

The problem is not knowledge. It is architecture.

These attacks satisfy the exact signals human judgment is built to trust. No amount of awareness training changes the fact that a sufficiently convincing attack is, by design, indistinguishable from a legitimate interaction at the moment it counts.

Training raises awareness. It does not change the architecture, and architecture is where the problem lives.

24.5%: Human accuracy detecting high-quality deepfake video
62%: Organizations reporting deepfake attacks in 2025
46%: Organizations with no mitigation plan

How large is the deepfake fraud problem, and where is it heading?

Deloitte projects US AI-enabled fraud losses will reach $40 billion by 2027, up from $12.3 billion in 2023, a 32% compound annual growth rate. [Deloitte 2024] That projection was made before the most recent cost data was available. It may prove conservative.
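Taking the two endpoints at face value, the compound annual growth rate follows from CAGR = (end / start)^(1/n) − 1. A quick check (the result lands slightly above the rounded 32% figure the article cites from Deloitte):

```python
base, target = 12.3, 40.0   # $B: 2023 actual, 2027 projection (Deloitte 2024)
years = 2027 - 2023

# Compound annual growth rate implied by the two endpoints
implied_cagr = (target / base) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~34.3%
```

The small gap between ~34% and the cited 32% presumably reflects rounding in the published endpoint figures.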

[Chart] US AI-enabled fraud losses, actual vs. projected ($B). Source: Deloitte 2024.
$12.3B: AI fraud losses in 2023 (Deloitte)
$40B: Projected US AI fraud losses by 2027

Near-zero attack cost breaks detection logic. When an attack costs $1 to attempt, the attacker can afford to fail many times. Volume-based fraud detection becomes blind to an adversary who generates a fresh, tailored attack for each target at negligible cost. The average enterprise loss per incident was nearly $500,000 in 2024 [Eftsure 2025], a figure more likely a floor than a ceiling.
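The attacker's break-even arithmetic makes the point concrete. Using the article's $1-per-attempt assumption and the Eftsure average loss figure:

```python
attempt_cost = 1.0     # assumed cost per attack attempt (article's example)
avg_loss = 500_000.0   # average enterprise loss per incident, 2024 (Eftsure 2025)

# The attacker profits whenever their success rate exceeds cost / payout.
breakeven_rate = attempt_cost / avg_loss
print(f"Break-even success rate: {breakeven_rate:.4%}")  # one hit per 500,000 tries
```

At that break-even, an adversary who succeeds even once in a hundred thousand tailored attempts is operating at a 5x return, which is why failure volume tells a defender almost nothing.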

What is the most effective defense against deepfake fraud?

If the Synthetic Trust Problem operates at layer one, before any system is consulted, then there is only one coherent structural response.

Move the defense to layer one.

Step 01
Know where your layer one breaks

Simulation that delivers realistic synthetic attacks through the channels your teams actually use, and measures how they respond. Not a training completion rate — a behavioral map of where trust breaks down.

Momenta Simulation →
Step 02
Detect synthetic identity in real time

Detection that operates at layer one, analyzing audio and video streams in milliseconds, producing a risk score before a decision is acted on, and triggering security actions during the interaction itself.

Momenta Detection →
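The "real-time, before the decision is acted on" requirement in Step 02 reduces to a simple control shape: score the media stream continuously and gate the interaction on a risk threshold. A minimal illustrative sketch follows; the function names, the chunk abstraction, and the threshold value are all hypothetical, not Momenta's implementation:

```python
# Hypothetical layer-one gate: score each incoming media chunk and
# escalate mid-interaction, before any human decision completes.
RISK_THRESHOLD = 0.8  # illustrative cutoff, not a product default

def gate_interaction(chunks, score_chunk):
    """score_chunk returns a synthetic-likelihood score in [0, 1] per chunk."""
    for chunk in chunks:
        if score_chunk(chunk) >= RISK_THRESHOLD:
            return "escalate"  # interrupt the call while it is still live
    return "allow"

# Stand-in scorer that simply echoes precomputed scores:
print(gate_interaction([0.12, 0.91], lambda s: s))  # escalate
```

The design point is the placement, not the scoring model: the check runs inside the interaction at layer one, so a high-risk signal can trigger a security action before the transfer, hire, or approval at layers two and three ever begins.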

As attack cost approaches zero, volume limits disappear. As fidelity approaches human-indistinguishable, human judgment fails. As identity collapses as a reliable signal, every workflow that treats it as a constant becomes an entry point. Most security is built for layer three. The attack happens at layer one.

Every organization already has a breaking point. The question is whether you find it, or an attacker does.

Want the full analysis?

Download the complete Synthetic Trust Problem report — cost curve data, full case studies, and the complete framework.

Read more →
Next step

Find your breaking point before an attacker does.

Run a controlled simulation across your actual communication channels. Get a behavioral map of where your organization's trust breaks down, in less than 48 hours.