As digital scammers and con artists become more sophisticated, so do the technological tools they employ to carry out their cybercrimes. These crimes can involve malicious software, phishing schemes, social engineering, and fraudulent communications used to gain access to sensitive information, funds, or both. The recent rise of Generative Artificial Intelligence (GenAI) has created an even greater need for vigilance when it comes to cybersecurity.
GenAI—a type of artificial intelligence that can generate or alter content in a sophisticated manner—has the capability to synthesize highly realistic content mimicking real people, places, and voices. Often referred to as “deepfakes,” these media can take many forms, such as photos, videos, and voice recordings that simulate real people. Because deepfakes are often difficult to distinguish from the real thing, they can easily become a weapon for bad actors seeking to fabricate documents or impersonate others.
Regulatory authorities are hoping to thwart some of these illicit actors by warning companies about trends that are emerging in this area. On November 13, 2024, the U.S. Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) issued an alert on “Fraud Schemes Involving Deepfake Media Targeting Financial Institutions.” The alert’s aim is to help financial institutions recognize when deepfake media is potentially being used to execute a fraudulent scheme.
Unfortunately, there have already been victims of deepfake financial fraud schemes. In February 2024, an employee at a multinational company transferred HK$200 million (approximately $25.6 million USD) to scammers after a video conference call with someone the employee believed was the company’s CFO but who was actually an AI rendering. The supposed CFO and the other “staff members” on the video call were all deepfake creations simulating the appearance and voices of colleagues the employee recognized. They were convincing enough to prompt the employee to make the transfer.
With regard to financial institutions specifically, the FinCEN alert observes that deepfake media are often used to synthesize fraudulent identification documents and circumvent identity verification in order to open accounts for money laundering. The alert also lists nine “red flag indicators” of potentially suspicious activity involving identity information that financial institutions should heed, summarized below:
Scams meant to defraud individuals and financial institutions are already a high-priority issue in the cybersecurity space. Adding the aggravating factor of AI-generated deepfakes makes the issue more complicated and harder to detect. While many AI tool creators have implemented, or are working to implement, safeguards against the abuse of their tools by bad actors, cybercriminals are often able to work around them.
FinCEN recommends measures to reduce the risks associated with deepfake fraud. These include “multifactor authentication (MFA), including phishing-resistant MFA, and live verification checks in which a customer is prompted to confirm their identity through audio or video… Although illicit actors may be able to respond to live verification prompts or access tools that generate synthetic audio and video responses on their behalf, their responses may reveal inconsistencies in the deepfake identity. Consequently, malign actors using deepfake identities may attempt to avoid or circumvent live verification checks.”
Financial institutions should treat the red flags identified by FinCEN as a guide to implementing processes for detecting fraud that may be perpetrated through, or aided by, deepfake media. These processes will likely include AI and fraud detection software, as well as updated employee training and policies designed to identify suspicious activity.
If you have any further questions regarding cybersecurity, artificial intelligence, and best practices for compliance and fraud detection, please contact Kenneth Rashbaum.