AI-Powered Fraud and DOJ's Expanding Enforcement Toolkit: What Businesses and Executives Need to Know Now

The intersection of artificial intelligence and white-collar crime has emerged as one of the most consequential enforcement frontiers of 2025, and it will only grow in importance. Federal prosecutors are sharpening their approach to AI-related fraud, and the implications for businesses and executives are profound. From "AI washing" securities fraud to deepfake-enabled wire fraud, the Department of Justice is signaling that it will pursue these cases aggressively, often seeking enhanced penalties for defendants who weaponize AI to commit financial crimes.

For companies in high-risk sectors such as fintech, healthcare, and crypto, and for any business making AI-related claims to investors or customers, understanding this enforcement landscape is essential.

The DOJ's New AI Enforcement Posture

In March 2024, then-Deputy Attorney General Lisa Monaco delivered a speech at the American Bar Association's National Institute on White Collar Crime, announcing that DOJ would take a "robust" approach to AI-related crimes and pursue harsher sentences for intentional AI misuse in white-collar offenses. Monaco made clear that DOJ's Criminal Division would begin assessing disruptive technology risks, including AI, when evaluating corporate compliance programs.

The wheels of justice turn slowly, but turn they do. Roughly a year later came the first criminal AI-washing case, when the U.S. Attorney's Office for the Southern District of New York charged Albert Saniger, founder and former CEO of Nate Inc., with securities fraud and wire fraud. The parallel SEC and DOJ actions alleged that Saniger raised over $42 million by falsely claiming his shopping app used cutting-edge AI and machine learning, when in reality the transactions were processed manually by contract workers overseas.

The Saniger case is instructive because it demonstrates DOJ's willingness to pursue criminal charges where, just months earlier, the SEC had settled a similar AI-washing matter (against Presto Automation) on far more modest terms, with no fine. This shift from negligence-based civil enforcement to criminal fraud prosecution signals DOJ's growing confidence in building AI-related cases.

What is "AI Washing" and Why Prosecutors Care

AI washing occurs when companies make exaggerated, misleading, or outright false claims about their use of artificial intelligence. The term encompasses everything from overstating the sophistication of AI capabilities, to claiming proprietary AI technology that doesn't exist, to failing to disclose the extent of human intervention in supposedly "autonomous" AI systems.

Prosecutors view AI washing as a species of securities fraud because representations about AI capabilities are often material to investor decisions. The promise of AI carries implications about scalability, cost-efficiency, and competitive advantage that directly affect valuations. When a company claims to have proprietary AI driving its core product, investors reasonably expect that the technology exists and functions as represented.

Acting U.S. Attorney Matthew Podolsky emphasized this point in announcing the Saniger charges, noting that such deception "not only victimizes innocent investors, it diverts capital from legitimate startups, makes investors skeptical of real breakthroughs, and ultimately impedes the progress of AI development."

The Deepfake Fraud Epidemic

Parallel to AI-washing enforcement, we're witnessing an explosion in deepfake-enabled fraud. According to recent industry reports, fraud attempts involving deepfakes grew by 3,000% in 2023, and one in every 20 identity verification failures is now linked to a deepfake attack. North America alone experienced a 1,740% increase in deepfake fraud. Given how cheap and accessible the underlying tools have become, this surge is hardly surprising.

The mechanics are sophisticated yet accessible. Deepfakes use AI to create hyper-realistic synthetic audio, video, or images that convincingly imitate real people. Fraudsters have used deepfake technology to:

  • Impersonate CEOs and CFOs on video conference calls to authorize multimillion-dollar wire transfers (including a notorious Hong Kong case involving $25 million in losses)

  • Clone executive voices to trick finance departments into executing fraudulent payments

  • Create fake customer service representatives to extract banking credentials

  • Conduct elaborate "pig butchering" romance scams using synthesized celebrity personas

From a criminal defense perspective, deepfake fraud typically triggers federal wire fraud charges under 18 U.S.C. § 1343, as these schemes almost invariably involve interstate wire communications. The government may also pursue charges under 18 U.S.C. § 1028 for fraud involving identification documents, as deepfakes can constitute synthetic identification materials.

The DOJ has made clear that defendants who use AI and deepfakes to commit fraud can expect prosecutors to seek enhanced penalties. This is partly because such conduct demonstrates sophistication and planning, factors that increase a defendant's offense level under the U.S. Sentencing Guidelines; the "sophisticated means" enhancement of U.S.S.G. § 2B1.1(b)(10)(C), for example, adds two offense levels. It's also because DOJ views AI-enabled fraud as particularly dangerous due to its scalability and the difficulty victims face in detecting it.

Healthcare Fraud and DOJ's AI Detection Tools

The enforcement sword cuts both ways. While DOJ prosecutes those who misuse AI to commit fraud, it's simultaneously deploying AI to detect fraud schemes. In June 2025, DOJ announced its National Health Care Fraud Takedown, charging 324 defendants in schemes totaling over $14.6 billion in intended losses. This operation marked the debut of DOJ's Health Care Fraud Data Fusion Center, which leverages AI, cloud computing, and advanced analytics to proactively identify emerging fraud patterns.

This represents a fundamental shift from reactive investigation to predictive enforcement. Using AI-powered data mining, DOJ can now isolate clusters of suspicious activity, such as a single provider billing millions of dollars in telehealth services within days or durable medical equipment (DME) suppliers shipping thousands of devices to nonexistent addresses, and deploy enforcement teams accordingly.
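
To make the idea concrete, below is a minimal sketch of the kind of outlier screen such analytics might run. The data, field names, and ratio threshold are illustrative assumptions for this post, not DOJ's actual system or methodology.

```python
from statistics import median

# Hypothetical weekly telehealth billing totals per provider, in dollars.
weekly_billing = {
    "provider_a": 18_500,
    "provider_b": 21_000,
    "provider_c": 2_450_000,  # bills two orders of magnitude above its peers
    "provider_d": 17_200,
    "provider_e": 19_800,
}

def flag_billing_outliers(totals: dict[str, float], ratio: float = 10.0) -> list[str]:
    """Flag providers whose billing exceeds `ratio` times the peer median.

    A median-based screen is robust to the very outliers it is trying to
    find, unlike a mean-based one, which a single extreme biller can skew.
    """
    peer_median = median(totals.values())
    return sorted(p for p, v in totals.items() if v > ratio * peer_median)

print(flag_billing_outliers(weekly_billing))  # ['provider_c']
```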

In August 2025, DOJ announced its first healthcare fraud enforcement action under its updated Corporate Enforcement Policy, resolving its investigation of Troy Healthcare through a non-prosecution agreement. The case is particularly notable because Troy allegedly used AI to scale its fraud, employing an AI-based platform (Troy.ai) to facilitate unauthorized access to customer data and coordinate fraudulent billing. DOJ explicitly flagged Troy's use of AI as a "novel enforcement concern."

For healthcare companies, the case sends a clear signal: AI can be a compliance tool or a source of criminal liability, depending on how it is deployed.

Compliance Program Implications

DOJ's May 2025 memorandum on white-collar enforcement priorities emphasized that prosecutors will now evaluate whether corporate compliance programs adequately mitigate AI-specific risks. This means companies should:

Implement AI-Specific Controls: Develop policies governing AI use, including guardrails against algorithmic price-fixing, AI-augmented fraud, and AI-assisted market manipulation. Those policies should also address the risk that individual employees will use AI in ways that create legal exposure for the company.

Conduct AI Risk Assessments: Identify where AI is deployed in your organization and assess the compliance risks associated with each use case. This is particularly critical for companies in regulated industries like healthcare, finance, and crypto.

Ensure Transparency in AI Claims: If your company makes public statements about AI capabilities—whether in investor presentations, marketing materials, or customer agreements—implement vetting processes to verify accuracy. Document the actual degree of AI automation versus human involvement.
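
One concrete way to build that documentation is to measure, from transaction logs, what share of supposedly automated work actually completes without human intervention, the very gap at the heart of the Saniger allegations. A minimal sketch, assuming a hypothetical log schema with a boolean manual_review flag:

```python
def automation_rate(transactions: list[dict]) -> float:
    """Fraction of transactions completed with no human touch.

    Assumes a hypothetical log schema in which each transaction record
    carries a boolean 'manual_review' flag.
    """
    if not transactions:
        return 0.0
    automated = sum(1 for t in transactions if not t["manual_review"])
    return automated / len(transactions)

# Example: 3 of 4 transactions were handled end-to-end by the system.
log = [
    {"id": 1, "manual_review": False},
    {"id": 2, "manual_review": False},
    {"id": 3, "manual_review": True},   # escalated to a human operator
    {"id": 4, "manual_review": False},
]
print(f"{automation_rate(log):.0%}")  # 75%
```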

Train Personnel: Employees should understand both the prohibited uses of AI and how to recognize AI-enabled fraud attempts targeting the company (such as deepfake impersonation attacks).

Strengthen Transaction Controls: Given the rise in deepfake-enabled wire fraud, companies should implement multi-factor verification for large financial transactions. This includes "call-back" protocols using independently verified contact information, dual approval requirements, and enhanced authentication for video conference calls requesting urgent payments.
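
A minimal sketch of how such controls might be encoded in a payments workflow follows; the $100,000 threshold, field names, and approval rules are assumptions chosen for illustration, not any regulatory standard.

```python
from dataclasses import dataclass, field

DUAL_APPROVAL_THRESHOLD = 100_000  # assumed policy threshold, in dollars

@dataclass
class WireRequest:
    amount: float
    beneficiary: str
    callback_verified: bool = False   # confirmed via independently sourced contact info
    approvers: set[str] = field(default_factory=set)

def may_release(req: WireRequest) -> bool:
    """Release funds only after call-back verification succeeds and,
    above the threshold, two distinct individuals have approved."""
    if not req.callback_verified:
        return False  # never release on the strength of a video call or email alone
    required = 2 if req.amount >= DUAL_APPROVAL_THRESHOLD else 1
    return len(req.approvers) >= required

# A transfer "authorized" on a convincing video call, but never verified
# by call-back, stays blocked no matter how many approvals it collects.
req = WireRequest(amount=250_000, beneficiary="Acme Ltd", approvers={"cfo", "controller"})
print(may_release(req))  # False until callback_verified is True
```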

Defense Considerations for AI-Related Charges

For individuals and companies facing AI-related fraud investigations, several defense strategies merit consideration:

Materiality Challenges: Following the Supreme Court's recent emphasis on materiality as a critical limit on fraud liability in cases like Kousisis v. United States, defense counsel should carefully scrutinize whether alleged AI-related misrepresentations were truly material to the relevant decision-makers. If a company delivered the promised service or product despite using different technology than represented, materiality may be contestable.

Intent and Knowledge: In AI washing cases, prosecutors must prove scienter—that the defendant knew the statements were false and intended to defraud. Companies that made good-faith efforts to develop AI technology, even if those efforts fell short, may have viable defenses based on lack of fraudulent intent.

Voluntary Disclosure and Cooperation: Under DOJ's updated Corporate Enforcement Policy, companies that voluntarily self-disclose AI-related compliance issues, fully cooperate with investigations, and implement remediation may be eligible for declinations or reduced penalties. Early intervention by experienced counsel can be critical in structuring these disclosures strategically.

Compliance Program Credit: Companies with robust, effective compliance programs that address AI risks may receive significant mitigation credit, potentially avoiding corporate monitors even in cases involving serious misconduct.

International Enforcement Coordination

AI-enabled fraud frequently involves international elements, whether through offshore data processing, foreign-based fraud operations, or cross-border victims. DOJ has signaled increased cooperation with international law enforcement, particularly for crypto and AI-related cases. Defense counsel must anticipate potential extradition issues, parallel proceedings in multiple jurisdictions, and the challenges of defending against government investigations that span continents.

Looking Ahead: The Regulation Gap

One of the most challenging aspects of this enforcement landscape is that it's developing faster than comprehensive legislative frameworks. While some states have enacted targeted statutes addressing deepfakes in elections and consumer deception, federal law has not yet caught up with the pace of AI development. This creates uncertainty for both companies trying to comply and defense attorneys representing clients in novel prosecutions.

DOJ is essentially applying existing fraud statutes—wire fraud, securities fraud, mail fraud—to AI-enabled schemes. This is well within prosecutors' authority, but it also means that the contours of criminal liability are being defined through enforcement actions rather than clear statutory language. Companies must therefore adopt a conservative, risk-aware approach until more definitive guidance emerges.

Conclusion

The convergence of AI technology and white-collar crime represents the new frontier of federal enforcement. For businesses and executives, the stakes are substantial: criminal exposure, significant financial penalties, reputational damage, and potential exclusion from federal programs or contracts.

Proactive risk management is essential. Companies should reassess their compliance programs to address AI risks specifically, implement controls on AI use, and ensure that all public statements about AI capabilities are rigorously vetted for accuracy. For companies already facing government inquiries, early engagement with experienced white-collar defense counsel can be critical in navigating disclosure decisions, structuring cooperation, and developing effective defense strategies.

At Dynamis LLP, our team of distinguished former federal prosecutors has extensive experience defending clients in cutting-edge fraud investigations, including those involving emerging technologies like AI and cryptocurrency. We represent individuals, executives, and corporations in all aspects of white-collar defense, from government investigations to trial advocacy. Our experience spans DOJ, SEC, CFTC, and state regulatory matters across our offices in Boston, New York, and Miami.

If your organization operates in sectors facing heightened AI-related enforcement scrutiny, or if you're confronting a government investigation involving AI or technology-related fraud allegations, we encourage you to contact us for a confidential consultation.

For confidential legal advice on AI-related enforcement matters, contact Eric Rosen at erosen@dynamisllp.com or visit www.dynamisllp.com.
