
Forensic AI: Using AI & Automation in Fraud Investigation

“As fraudsters use AI more for their own ends, investigators must keep up with how to counteract it.”

Laura Harris, CFE, Research Specialist for the Association of Certified Fraud Examiners (ACFE)

When it comes to fraud investigation, there are a couple of new deputies in town. Generative AI in the vein of ChatGPT can help identify and investigate sophisticated financial crimes, while ML-powered automation can improve detection and prevent recurrence. These new capabilities are particularly important in an era where every company is a de facto software company, with data, intellectual property, and financial assets that must be protected. But they’re sidekicks, not sheriffs. The human touch remains essential.

AI-powered tools require fraud examiners to nurture new skills; they also require careful guidance and unique considerations. Chatbots running on generative AI can be viewed as supercharged search engines: modest-looking and inert on their own but revolutionary if used correctly. Unfortunately, fraudsters have access to many of the same tools.

Read on to learn more about how AI and automation are used in fraud investigation.

The Benefits of AI & Automation in Fraud Investigation

“With a specified purpose, AI can help with processes that can save the user time and energy,” says Laura Harris, CFE, a research specialist for the Association of Certified Fraud Examiners (ACFE).

AI is best at sifting through large swathes of data. It can pull patterns and signals from noisy and unstructured data across various formats, flagging them to investigators for follow-up. This is increasingly valuable as the data terrain of investigations continues to grow: there’s more and more data to look at and interpret. The needle is getting buried in bigger and bigger haystacks.

“Imagine a transaction for a payment that is more than a designated threshold,” Harris says. “The data automatically goes into the company’s computer system. A human can go searching for it and methodically extract all the information manually. Or, AI could be used to find data such as time, geographic location, and the IP address of that transaction to connect additional elements that might indicate a pattern of fraud.”
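To make that concrete, here is a minimal sketch in Python of the kind of automated extraction Harris describes. The Transaction fields, the flag_over_threshold helper, and the designated threshold are all hypothetical, invented for illustration rather than drawn from any real investigative system.

```python
# A minimal sketch of Harris's example, assuming transaction records with
# hypothetical fields (amount, timestamp, geo, ip_address). The field names
# and the reporting threshold are illustrative only.
from dataclasses import dataclass
from datetime import datetime

REPORTING_THRESHOLD = 10_000  # hypothetical designated threshold


@dataclass
class Transaction:
    amount: float
    timestamp: datetime
    geo: str          # coarse geographic location
    ip_address: str


def flag_over_threshold(transactions: list[Transaction]) -> list[dict]:
    """Collect the contextual elements an investigator would follow up on."""
    flagged = []
    for tx in transactions:
        if tx.amount > REPORTING_THRESHOLD:
            flagged.append({
                "amount": tx.amount,
                "time": tx.timestamp.isoformat(),
                "location": tx.geo,
                "ip": tx.ip_address,
            })
    return flagged


if __name__ == "__main__":
    sample = [
        Transaction(12_500.00, datetime(2024, 3, 1, 2, 14), "Reno, NV", "203.0.113.7"),
        Transaction(480.25, datetime(2024, 3, 1, 9, 5), "Austin, TX", "198.51.100.23"),
    ]
    for hit in flag_over_threshold(sample):
        print(hit)
```

In a real deployment, logic like this would run against the company’s transaction system, with flagged records routed to an examiner’s review queue so connections across time, location, and IP address can be examined for a pattern of fraud.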

AI can also help reduce unintentional bias in investigations. Confirmation bias is a real issue when so much extraneous information is available: an investigator might find evidence to support several different conclusions, potentially leading an investigation astray. Humans are susceptible to following their own intuition and past experience, which may not always be reflected in the objective data; that tendency has its pluses and minuses, but AI can act as a counterbalance, reinforcing an analytics-driven investigation.

Perhaps the most valuable application of AI and ML-powered tools for companies and organizations is in fraud prevention, which is distinct from but related to fraud investigation. As automated services, these tools can run in the background, in sentry mode, adding a layer of vigilance that human investigators can’t. As data continues to churn through an organization, AI and ML can constantly evaluate risks and flag areas of potential fraud. Human investigators can also audit that ongoing analysis should an active investigation begin.
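As a rough illustration of that “sentry mode,” the hypothetical Python sketch below trains a generic anomaly detector (scikit-learn’s IsolationForest, standing in for whatever model an organization actually uses) on simulated historical transaction features and flags unusual new activity for human review. The features, data, and thresholds are invented for demonstration, not taken from any real fraud program.

```python
# A minimal sketch of background ("sentry mode") monitoring, assuming
# historical transaction features are available as a numeric array.
# Everything here is simulated for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical features: [amount, hour_of_day, distance_from_home_km]
history = np.column_stack([
    rng.lognormal(4, 1, 5000),     # typical purchase amounts
    rng.integers(8, 20, 5000),     # mostly business-hours activity
    rng.exponential(10, 5000),     # usually close to home
])

# Train on past behavior; roughly 1% of activity is treated as anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)


def review_new_batch(batch: np.ndarray) -> np.ndarray:
    """Return indices of transactions the model flags for human review."""
    labels = model.predict(batch)  # -1 means anomalous
    return np.where(labels == -1)[0]


# New activity: one routine purchase, one large overnight transaction far from home.
incoming = np.array([
    [75.0, 13, 4.2],
    [18_000.0, 3, 900.0],
])
print(review_new_batch(incoming))  # likely flags index 1 for follow-up
```

The point is not the particular model but the workflow: the detector runs continuously in the background, and its flags feed a human investigator who audits the analysis and decides whether an active investigation is warranted.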

The Challenges of Using AI & Automation in Investigations

“AI assistance must be thoughtfully considered,” Harris says. “If you say, ‘Go find fraud,’ you must define what fraud is and isn’t. We even have a hard time teaching this to people. Because sometimes what looks normal isn’t actually normal, and what looks questionable might have a good reason why it’s occurring. AI cannot just be a plug-in solution. It will require more work on the front and back end.”

AIs are flawed. The information they provide may be well-worded but not always accurate. Their understanding of nuance is superficial. And, despite their perceived objectivity, they, too, can be biased: skewed training data or misleading prompts can quickly send them astray. Novices may be awed by the initial results, but human investigators would be best served by viewing new AI tools as flat-footed assistant deputies: ambitious but subordinate, new to the field, and making claims that need to be verified before being taken as fact.

“An AI does not have the nuances that people do,” Harris says. “Those nuances can help or hurt an investigation and must be respected.”

As investigators gain more experience working with AI, they’ll develop a more effective working rhythm. However, unique considerations will remain around privacy and liability. On one hand, privacy laws can limit data available for an investigation; on the other, questions typed into a public server are not privacy-protected, and investigators must be careful to conduct their investigations securely.

Different regulations between different countries can complicate matters further. Legal liability, too, is murky: who is responsible for the actions of an AI that’s responsible for, or complicit in, an act of fraud?

The Future of AI & Automation in Investigations

Shifting to AI and automation requires some human investigators to update their mental software. But the future where AI and automation are a regular part of fraud investigations is not distant: in some ways, it’s already here. AI literacy could soon become as valuable as typing, email, internet search, and OSINT proficiency. New and aspiring fraud examiners would benefit from knowing how AI tools work, what logic undergirds them, and what tradeoffs they include.

“Understanding data analytics and data management will enable people to verify the work of the AI, which is not necessarily going to give you the output you think,” Harris says. “Knowing what AI can and cannot do, and how it does it, will help people better understand the dynamics at play.”

Unfortunately, the bad guys have access to the same tools as the good guys do. Already, AI is being used in the commission of fraud and crime. Spear-phishing attacks are getting more sophisticated. New malware strains are proliferating. Scripting of malicious code takes only a few clicks. Harris points out that fraudsters have an unfair advantage, too: they may only need a single missing piece that AI can offer to complete their objective, while investigators need to teach, train, and verify the use of AI tools. Keeping pace requires vigilance.

“As fraudsters use AI more for their own ends, investigators must keep up with how to counteract it,” Harris says. “Preventing fraud is always better than detecting it, but both need the appropriate amount of attention. I see a future where people become more interested and involved in AI and investigation, but without letting the AI do all the work. It is a tool, not a miracle worker.”

Matt Zbrog
Writer

Matt Zbrog is a writer and researcher from Southern California. Since 2018, he’s written extensively about the increasing digitization of investigations, the growing importance of forensic science, and emerging areas of investigative practice like open source intelligence (OSINT) and blockchain forensics. His writing and research are focused on learning from those who know the subject best, including leaders and subject matter specialists from the Association of Certified Fraud Examiners (ACFE) and the American Academy of Forensic Science (AAFS). As part of the Big Employers in Forensics series, Matt has conducted detailed interviews with forensic experts at the ATF, DEA, FBI, and NCIS.