Five federal banking regulatory agencies are gathering information and comments on financial institutions’ use of artificial intelligence (AI), including machine learning. On March 29, the Federal Reserve Board, the Consumer Financial Protection Bureau, the Federal Deposit Insurance Corporation, the National Credit Union Administration, and the Office of the Comptroller of the Currency issued a request for information (RFI) seeking comment on the following topics:

  • financial institutions’ risk management practices related to the use of AI;
  • barriers or challenges facing financial institutions when developing, adopting, and managing AI and its risks;
  • benefits to financial institutions and their customers from the use of AI; and
  • whether any clarifications from the agencies would be helpful for financial institutions’ use of AI in a safe and sound manner and in compliance with applicable laws.

The RFI notes that financial institutions have been and are exploring AI-based applications for a variety of purposes. For example, financial institutions use chatbots and virtual assistants to mimic live employees and automate routine customer interactions. AI also can inform credit decisions by analyzing traditional data (i.e., data typically found in a consumer’s credit file) and alternative data. Financial institutions may use cybersecurity applications to detect threats and malicious activity, to conduct real-time investigations of potential attacks, and to block ransomware and other attacks.

Not surprisingly, regulators are paying close attention to the presence of AI in the financial services industry, as the industry’s use of AI shows no signs of slowing down. In October 2020, Mastercard introduced an AI-powered suite of tools that allows banks to assess cyber risk and prevent potential breaches. In February of this year, Google Cloud and Europe-based BBVA announced a strategic partnership that includes an agreement to collaborate in the development of new AI and machine learning models to prevent cyberattacks. Jumio, a California-based provider of AI-powered identity verification and “know your customer” solutions, closed a $150 million round of funding just last month. A few days later, California-based Feedzai raised $200 million for its AI-based ID verification and anti-money laundering platform.

Although the potential benefits of AI are apparent, the RFI cautions that financial institutions should implement processes for identifying and managing potential risks, especially those that could affect an institution’s safety and soundness. Such risks include potential “operational vulnerabilities, such as internal process or control breakdowns, cyber threats, information technology lapses, risks associated with the use of third parties, and model risk.” The RFI also warns of certain consumer protection risks, such as unlawful discrimination, UDAAP and UDAP violations, and privacy concerns.

The RFI’s broad inquiry into financial institutions’ use of AI may give rise to some trepidation by financial institutions, but they should consider using the RFI as an opportunity to educate regulators about the benefits of AI and to seek clarification on how the use of AI in their respective businesses could raise compliance concerns.

Comments are due 60 days after publication in the Federal Register.