Matt Zbrog
The rise of AI-generated content has outpaced detection. In 2025, global losses from deepfake-enabled fraud surpassed $200 million in just the first quarter. Cybersecurity firm Entrust reports that deepfake and synthetic document attacks now occur every five minutes, marking a 244 percent surge in digital forgery incidents since 2023.
These synthetic threats range from manipulated videos and voice clones to entirely fabricated text communications, and they’re increasingly being used to commit fraud, spread disinformation, or distort digital evidence.
The scale of the challenge has caught the attention of international law enforcement agencies like Europol and the U.S. Department of Homeland Security. Meanwhile, the National Institute of Standards and Technology (NIST) is developing benchmarks to evaluate deepfake detection software, noting that accuracy drops sharply when tools are tested on real-world, compressed media files. While AI-generated content is not yet pervasive in every case being investigated, its presence is rising, and a major shift in digital forensics is underway.
Forensic analysis is built on the principle that digital evidence represents reality. But AI-generated and AI-altered media undermine that assumption, forcing experts to assess not just what was recorded, but whether the event ever occurred at all. To learn more about how digital forensics experts are fighting back against AI-generated and AI-altered media, read on.
Tom Ervin
Tom Ervin has over 25 years of experience supporting federal law enforcement with cyber forensics analysis on criminal and national security investigations that have been directly responsible for numerous indictments and convictions throughout the country. He has also developed and led malware analysis and digital forensics training efforts that have been delivered internationally in over 15 countries.
Ervin is a contributor to the Wiley Encyclopedia of Forensic Science. He is an assistant professor of practice at the University of Texas at San Antonio (UTSA), where he currently teaches Digital Forensics Analysis, Malware Analysis, and Senior Cyber Capstone courses. He has a master’s degree from North Carolina A&T State University and a bachelor’s degree from Tuskegee University.
Jesse Varsalone
Jesse Varsalone is a collegiate associate professor of cybersecurity technology at the University of Maryland Global Campus. He has taught since 1994 at the undergraduate and graduate levels, in middle school and high school, in corporate training, and as an instructor for the U.S. Department of Defense at the Defense Cyber Investigations Training Academy (DCITA).
Varsalone holds several certifications in the IT field, including A+, Arduino, CASP+, CEH, CISSP, Cloud+, CTT+, CySA+, iNet+, Linux+, Net+, PenTest+, Security+, and Server+. He has a master’s degree from the University of South Florida and a bachelor’s degree from George Mason University. His recent book, The Hack is Back, was published by CRC Press in 2023.
“The rise of AI-generated content is forcing digital forensic analysts to become detection specialists who are not just identifying what content exists on a device, but also validating its authenticity and provenance,” Ervin says. “It challenges the very core principle of forensics: trust in the integrity of evidence.”
AI manipulation is appearing in an expanding range of cases: misinformation campaigns, social engineering schemes, and even fabricated evidence in criminal proceedings. In response, digital forensics is evolving into a hybrid discipline that blends investigative instincts with data science, using tools like Hive and Sensity AI to analyze pixel-level inconsistencies and compression artifacts that betray synthetic manipulation.
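To make the idea of compression-artifact analysis concrete, the sketch below implements error level analysis (ELA), a simple and well-known technique in which an image is recompressed and the per-pixel error is inspected: regions that were pasted in or regenerated often recompress differently from the rest of the frame. This is an illustrative example, not the proprietary method used by Hive or Sensity AI; it assumes the Pillow library and a hypothetical input file.

```python
# Error level analysis (ELA): a simple, illustrative compression-artifact
# check. Regions that were pasted in or regenerated often recompress
# differently from the rest of the image. Conceptual sketch only, not
# the method used by commercial tools like Hive or Sensity AI.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-save the image as JPEG at a known quality level.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # The per-pixel difference highlights areas whose compression error
    # is inconsistent with the rest of the frame.
    diff = ImageChops.difference(original, resaved)

    # Rescale so faint differences become visible for inspection.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda px: px * scale)

if __name__ == "__main__":
    ela_map = error_level_analysis("suspect_photo.jpg")  # hypothetical file
    ela_map.save("ela_map.png")  # bright patches warrant closer review
```

In practice, examiners read an ELA map alongside other signals rather than treating any single bright patch as proof of manipulation.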
“Just as digital forensics evolved from manually searching files to using advanced software capable of processing millions of artifacts, AI detection will also require specialized forensic tools to keep pace,” Varsalone says.
“Current detection methods include metadata analysis to identify inconsistencies in Exchangeable Image File Format (EXIF) data, compression, or codec signatures. Photo-Response Non-Uniformity (PRNU) can verify whether an image originated from a real camera, while tools like Microsoft’s Video Authenticator and Deepware Scanner employ machine learning to flag deepfakes.”
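As a minimal illustration of the metadata analysis Varsalone describes, the sketch below reads EXIF tags with Python's Pillow library and flags a few common red flags: no EXIF data at all, no camera make or model, or a Software tag naming a known generator. These specific checks are illustrative assumptions, not a forensic standard, and absent or odd EXIF data is a lead, never proof.

```python
# Minimal EXIF consistency check, assuming Pillow is installed.
# Missing or contradictory tags are a lead, not proof: AI pipelines
# often strip EXIF entirely, and legitimate apps can too.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_report(path: str) -> dict:
    exif = Image.open(path).getexif()
    # Map numeric tag IDs to readable names like "Make" and "DateTime".
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

def flag_inconsistencies(tags: dict) -> list[str]:
    flags = []
    if not tags:
        flags.append("No EXIF data at all (common for generated images).")
    if "Make" not in tags or "Model" not in tags:
        flags.append("No camera make/model recorded.")
    software = str(tags.get("Software", ""))
    # Illustrative check: generator names sometimes appear in Software.
    if any(s in software.lower() for s in ("stable diffusion", "midjourney")):
        flags.append(f"Software tag names a generator: {software!r}")
    return flags

if __name__ == "__main__":
    tags = exif_report("suspect_photo.jpg")  # hypothetical file
    for warning in flag_inconsistencies(tags):
        print("FLAG:", warning)
```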
Research shows a worrying gap between how detection tools perform on academic benchmarks and how they perform in real-world settings. In-the-wild detection remains a moving target: the most advanced deepfake videos and cloned voices can evade even specialized forensic tools, and a survey from the University of Amsterdam found that human observers could distinguish high-quality deepfakes from real videos only 24.5 percent of the time.
Ervin believes the future of AI detection lies in multi-modal correlation, which analyzes how different forms of digital evidence interact. Rather than studying a single photo or audio file in isolation, investigators examine surrounding signals like metadata trails, device logs, user interactions, and IoT sensor data to identify inconsistencies that betray fabrication. The result is a more holistic forensic picture.
“In multi-modal AI detection, we don’t just analyze content in isolation,” Ervin says. “We examine the relationship between media, device metadata, and user behavior. Cross-source correlation will be vital, especially in environments where user interactions leave a broader digital footprint.”
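A toy version of that cross-source correlation might compare the capture time a file claims in its metadata against independent signals such as device activity logs. The field names and five-minute tolerance below are hypothetical assumptions; a real pipeline would draw on far richer sources.

```python
# Toy cross-source correlation: does the capture time claimed by a
# file's metadata agree with independent records (device logs, cloud
# sync events, IoT sensor data)? All field names and the tolerance
# below are hypothetical; real pipelines would be far richer.
from datetime import datetime, timedelta

def correlate_timestamps(
    exif_capture: datetime,
    filesystem_created: datetime,
    device_log_events: list[datetime],
    tolerance: timedelta = timedelta(minutes=5),
) -> list[str]:
    findings = []
    if filesystem_created < exif_capture - tolerance:
        findings.append("File existed on disk before its claimed capture time.")
    # A real camera capture usually leaves nearby traces in device logs
    # (screen-on, app foreground, GPS fix). Total silence is suspicious.
    nearby = [t for t in device_log_events if abs(t - exif_capture) <= tolerance]
    if not nearby:
        findings.append("No device activity near the claimed capture time.")
    return findings

if __name__ == "__main__":
    report = correlate_timestamps(
        exif_capture=datetime(2025, 3, 1, 14, 30),
        filesystem_created=datetime(2025, 3, 1, 9, 0),
        device_log_events=[datetime(2025, 3, 1, 9, 2)],
    )
    for finding in report:
        print("INCONSISTENCY:", finding)
```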
To get ahead of the problem, some researchers and technology companies are starting to embed authenticity checks directly into media creation. Initiatives such as Adobe’s Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA) propose attaching cryptographic metadata like timestamps, edit histories, and device identifiers to every photo or video at the moment of capture. Google’s SynthID uses invisible watermarking to tag AI-generated images. If these frameworks gain traction, forensic examiners could soon verify whether an image was altered, as well as who created it, when, and how.
“Provenance tracking and digital watermarking are at the forefront, embedding tamper-resistant identifiers into cameras, files, or even the generative models themselves so that the origin of content can be cryptographically verified,” Varsalone says.
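Conceptually, these provenance schemes bind a cryptographic hash of the media to a signed manifest at capture time, so that any later edit breaks either the hash or the signature. The sketch below shows that underlying idea using Ed25519 signatures from the third-party cryptography package; it is a simplified stand-in, not the actual C2PA manifest format or signing flow, and its manifest fields are hypothetical.

```python
# Simplified provenance check: hash the media, sign the hash plus a
# manifest at "capture time", verify later. This mimics the idea behind
# C2PA but is NOT its actual manifest format or signing flow. Assumes
# the third-party `cryptography` package is installed.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_manifest(media: bytes, device_id: str, key: Ed25519PrivateKey) -> dict:
    manifest = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "device_id": device_id,                  # hypothetical fields
        "captured_at": "2025-03-01T14:30:00Z",
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload)}

def verify_manifest(media: bytes, record: dict, public_key) -> bool:
    manifest = record["manifest"]
    # Any edit to the pixels changes the hash; any edit to the manifest
    # breaks the signature.
    if hashlib.sha256(media).hexdigest() != manifest["sha256"]:
        return False
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(record["signature"], payload)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    media = b"...raw image bytes..."
    record = sign_manifest(media, "camera-001", key)
    print(verify_manifest(media, record, key.public_key()))        # True
    print(verify_manifest(b"tampered", record, key.public_key()))  # False
```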
Even where these tools and standards exist, adoption remains low as of 2025: many cameras, apps, and platforms do not yet embed provenance metadata or watermarks. Such systems depend on broad industry buy-in, and bad actors can simply operate outside them. Authenticity verification also remains fragmented, with camera manufacturers, AI developers, and platforms often using incompatible schemes. Until universal standards emerge, forensic investigators must triangulate truth using multi-modal evidence.
“The concept of chain of custody continues to be critical, ensuring that there is a complete record of where the evidence originated, who handled it, and how it was preserved throughout the process,” Varsalone says. “What has changed, however, is the legal and technical complexity surrounding this evidence.”
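On the technical side, that record is typically anchored with cryptographic hashes: the evidence is hashed at acquisition and re-hashed at every hand-off, so any alteration becomes detectable. The sketch below uses only Python's standard library; the log fields are illustrative rather than any particular agency's standard.

```python
# Minimal chain-of-custody entry: hash the evidence on acquisition and
# re-hash at every hand-off so any alteration is detectable. Log fields
# here are illustrative, not a specific agency's standard.
import hashlib
import json
from datetime import datetime, timezone

def sha256_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large evidence images don't exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def custody_entry(path: str, handler: str, action: str) -> dict:
    return {
        "evidence": path,
        "sha256": sha256_file(path),
        "handler": handler,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Hypothetical evidence file and handler name.
    entry = custody_entry("evidence.img", "J. Examiner", "acquired")
    print(json.dumps(entry, indent=2))
    # At each later hand-off, recompute sha256_file() and compare: a
    # mismatch means the evidence changed somewhere in the chain.
```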
Illustrative of that change is a recent case that unfolded in Baltimore, where a falsified audio clip that circulated online appeared to feature a local school principal making racist remarks. The clip, generated with an AI voice-cloning tool, spread rapidly before digital forensic analysts traced its creation through metadata, IP addresses, and email logs and proved it was AI-generated. Varsalone points to cases like this as reminders that the fundamentals still matter, particularly when paired with new AI detection software.
“This blend of traditional and modern approaches is exactly where digital forensics is headed,” Varsalone says.
As the threat landscape evolves, forensic education is adapting alongside it. Universities and certification bodies are beginning to integrate AI detection into curricula, with programs starting to include modules on multimedia authenticity, adversarial machine learning, and provenance verification.
These initiatives mirror historical shifts in digital forensics. When encryption and cloud computing first emerged, forensic examiners had to develop new competencies in live imaging and distributed evidence collection. The rise of AI presents a similar inflection point that demands both new tools and a reassertion of core forensic values: documentation, repeatability, and transparency.
“We are witnessing the birth of what I would call AI forensics,” Ervin says. “The field will move toward AI-forensic literacy: recognizing machine-generated deception, validating ‘synthetic alibis’ created with fake chat logs or deepfakes, and the ability to trace back through prompt injections, API tokens, and AI deployment logs. In the next 5 years, digital forensics professionals will need to be part investigator, part data scientist, and part prompt engineer.”
Varsalone characterizes the emerging landscape as an AI-versus-AI arms race, where forensic models must evolve alongside the generative systems they analyze. In this environment, machine learning becomes both the adversary and the ally. The success of the next generation of digital forensics experts will hinge on their ability to detect and ultimately prove the origins of the digital world’s artifacts, and the stakes are high.
“The ability to prove authenticity—something that once seemed straightforward—will soon become one of the most critical and complex problems facing both the courts and society at large,” Varsalone says.
Matt Zbrog
Matt Zbrog is a writer and researcher from Southern California. Since 2018, he’s written extensively about the increasing digitization of investigations, the growing importance of forensic science, and emerging areas of investigative practice like open source intelligence (OSINT) and blockchain forensics. His writing and research are focused on learning from those who know the subject best, including leaders and subject matter specialists from the Association of Certified Fraud Examiners (ACFE) and the American Academy of Forensic Sciences (AAFS). As part of the Big Employers in Forensics series, Matt has conducted detailed interviews with forensic experts at the ATF, DEA, FBI, and NCIS.