How AI Deepfakes Are Helping Scammers Target Financial Institutions

Jan 6, 2025 | Blog

As digital scammers and con artists become more sophisticated, so do the technological tools they employ to carry out their cybercrimes. These types of crimes can include using malicious software, phishing schemes, social engineering, and fraudulent communications to obtain access to sensitive information and/or funds. The recent rise of Generative Artificial Intelligence (GenAI) has created an even greater need for vigilance when it comes to cybersecurity.

GenAI—a type of artificial intelligence that can generate or alter content in a sophisticated manner—has the capability to synthesize highly realistic content mimicking real people, places, and voices. Often referred to as “deepfakes,” these types of media can take many forms, such as photos, video, and voice recordings that simulate those of real people. Because deepfakes are often difficult to distinguish from the real thing, they can easily become a weapon for bad actors to fabricate documents or impersonate others’ identities.

Regulatory authorities are hoping to thwart some of these illicit actors by warning companies about trends that are emerging in this area. On November 13, 2024, the U.S. Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) issued an alert on “Fraud Schemes Involving Deepfake Media Targeting Financial Institutions.” The alert’s aim is to help financial institutions recognize when deepfake media is potentially being used to execute a fraudulent scheme.

Unfortunately, there have already been victims of deepfake financial fraud schemes. In February 2024, an employee at a multinational company transferred HK$200 million (approximately US$25.6 million) to scammers after joining a video conference call with what appeared to be the company’s CFO but was in fact an AI rendering. The supposed CFO and the other “staff members” on the call were all deepfake creations simulating the appearance and voices of colleagues the employee recognized, and they were convincing enough to prompt the employee to make the transfer.

With regard to financial institutions specifically, the FinCEN alert notes that deepfake media are often used to synthesize fraudulent identification documents and circumvent identity verification in order to open accounts for money laundering. The alert notes nine “red flag indicators” that financial institutions should heed as evidence of potential suspicious activity with regard to identity information, summarized below:

  1. The customer provides photos on identification documents that show signs of alteration or are inconsistent with the person’s other identifying information.
    • Example: A photo on a driver’s license has an unnaturally glossy texture, inconsistent lighting/shadows, or distorted features.
    • Example: A date of birth on a passport is incongruous with the apparent age of the person in the photo.
  2. The customer’s details across multiple identification documents are inconsistent with each other.
  3. In the case of a live verification check, a customer uses a webcam plugin or claims to be having recurring technological glitches.
    • Example: The customer asks to switch video conferencing platforms or asks for an audio-only verification because of persistent “technical difficulties” with their webcam.
  4. A customer refuses to use multi-factor authentication to verify their identity.
  5. A customer’s photo matches one found online through a reverse-image web search or a search of an AI-generated image gallery.
    • Example: A reverse-image search on Google Images shows that the customer’s photo is one of several downloadable AI-generated faces on a free photo-sharing site.
  6. A customer’s image is flagged by GenAI-detection software.
  7. A customer’s profile or responses to prompts are flagged by GenAI-detection software.
  8. A customer’s geographic or device data is inconsistent with their identification documents.
    • Example: A customer has a U.S. address listed in their profile but consistently logs into their account from an IP address located outside of the United States.
  9. A newly opened account or an account with little prior transaction history has a pattern of rapid transactions; high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges; or high volumes of chargebacks or rejected payments.
    • Example: A newly formed account immediately has a flurry of transactions with a cryptocurrency exchange.
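Some of these red flags lend themselves to automated, rule-based screening. As a minimal sketch only, the snippet below shows how an institution might encode two of the indicators above (geographic inconsistency, and rapid risky activity on a new account) as simple rules. All field names, thresholds, and helper structures here are illustrative assumptions, not part of the FinCEN alert or any real vendor product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical account snapshot; the fields and thresholds are illustrative.
@dataclass
class Account:
    profile_country: str                 # country on the identification documents
    login_countries: list                # countries resolved from recent login IPs
    opened: datetime                     # account opening date
    transactions: list = field(default_factory=list)  # (timestamp, payee_category)

# Assumed payee categories FinCEN describes as potentially risky.
RISKY_CATEGORIES = {"gambling", "digital_asset_exchange"}

def red_flags(acct: Account, now: datetime) -> list:
    """Return labels for any triggered red flags (rule-based sketch)."""
    flags = []

    # Red flag 8: geographic data inconsistent with identification documents.
    if any(c != acct.profile_country for c in acct.login_countries):
        flags.append("geo_mismatch")

    # Red flag 9: newly opened account with a flurry of risky transactions.
    account_age = now - acct.opened
    recent = [t for t in acct.transactions if now - t[0] <= timedelta(days=1)]
    risky = [t for t in recent if t[1] in RISKY_CATEGORIES]
    if account_age <= timedelta(days=30) and len(recent) >= 10 and risky:
        flags.append("new_account_velocity")

    return flags
```

In practice these checks would feed a case-management or suspicious-activity workflow rather than block customers outright; the thresholds (30 days, 10 transactions) are placeholders that an institution would tune to its own risk profile.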

Scams meant to defraud individuals and financial institutions are already a high-priority issue in the cybersecurity space. Adding the aggravating factor of AI-generated deepfakes makes the problem more complicated and harder to detect. While many AI tool creators have implemented, or are working to implement, safeguards against the abuse of their tools by bad actors, cybercriminals are often able to work around them.

FinCEN recommends measures to reduce the risks associated with deepfake fraud. These include “multifactor authentication (MFA), including phishing-resistant MFA, and live verification checks in which a customer is prompted to confirm their identity through audio or video… Although illicit actors may be able to respond to live verification prompts or access tools that generate synthetic audio and video responses on their behalf, their responses may reveal inconsistencies in the deepfake identity. Consequently, malign actors using deepfake identities may attempt to avoid or circumvent live verification checks.”

Financial institutions should treat the red flags identified by FinCEN as a guide to implementing processes for detecting fraud that may be perpetrated through, or aided by, deepfake media. These processes will likely include AI-based and/or fraud-detection software, as well as updated employee training and policies designed to identify suspicious activity.

If you have any further questions regarding cybersecurity, artificial intelligence, and best practices for compliance and fraud detection, please contact Kenneth Rashbaum.

Barton LLP