On November 29, House Financial Services Committee Chairwoman Maxine Waters (D-CA) and committee member Bill Foster (D-IL) sent a letter to the leaders of multiple federal regulators, asking them to monitor technological development in the financial services industry to ensure that algorithmic bias does not occur. The letter was sent to the following individuals:

  • Jerome Powell, Chairman, Board of Governors of the Federal Reserve System (the Fed)
  • Todd Harper, Chairman, National Credit Union Administration (NCUA)
  • Rohit Chopra, Director, Consumer Financial Protection Bureau (CFPB)
  • Jelena McWilliams, Chairman, Federal Deposit Insurance Corporation (FDIC)
  • Michael Hsu, Acting Comptroller, Office of the Comptroller of the Currency (OCC)

Last Congress, the committee convened the Task Force on Artificial Intelligence, headed by Foster, to examine, among other things, how to reduce algorithmic bias. The task force held three hearings on artificial intelligence (AI) and machine learning (ML) in 2021. The first, held in May, explored how human-centered AI can build equitable algorithms and address systemic racism in housing and financial services. The second, held in July, examined how financial institutions rely on AI to create and authenticate digital customer identities. The third, held in October, focused on the need for government, industry, and society to develop better ethical frameworks for AI.

The letter argues that the historical data used as inputs for AI and ML may contain longstanding biases that can produce models that discriminate on the basis of protected characteristics, such as race or sex, or on proxies for those characteristics. For example, the letter notes that the use of ZIP codes in loan applications and related lending processes can lead to racially disparate lending outcomes because, although ZIP codes appear to be neutral, they may act as a proxy for race or ethnicity.
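Although the letter frames this point in policy terms, the proxy effect it describes can be illustrated with a brief, purely hypothetical simulation. The Python sketch below (all variable names, probabilities, and the scoring rule are illustrative assumptions, not drawn from the letter) scores applicants on ZIP code alone, never on the protected attribute, yet still produces different approval rates across groups because ZIP code is correlated with group membership.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical, simulated data: the protected attribute ("group") is never
# fed to the scoring rule, but ZIP code is strongly correlated with it.
group = rng.integers(0, 2, size=n)                          # protected class, 0 or 1
zip_code = (rng.random(n) < 0.2 + 0.6 * group).astype(int)  # 20% vs. 80% chance of ZIP "1"

# An ostensibly neutral lending rule that scores applicants on ZIP code alone,
# mimicking a model trained on historical lending patterns.
approval_prob = np.where(zip_code == 1, 0.3, 0.7)
approved = rng.random(n) < approval_prob

for g in (0, 1):
    print(f"approval rate, group {g}: {approved[group == g].mean():.1%}")
# Approval rates diverge (roughly 62% vs. 38%) even though the protected
# attribute never enters the rule, because ZIP code acts as a proxy for it.
```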

The letter asks the regulators to prioritize the following areas in their oversight of AI use:

  • Transparency and Explainability. The letter advocates for human review of automated decision systems rather than a “black box” approach, and it encourages the regulators to develop guidelines, and potentially engage in rulemaking, to require financial institutions to disclose pertinent information on their AI modeling, data sets, and methodologies.
  • Oversight and Enforceability. The letter states that regulators must ensure that financial institutions are following all consumer, investor, and housing laws, and it advocates for the use of “regtech” monitoring and compliance systems by financial institutions.
  • Safeguarding Consumer Privacy. The letter provides that financial institutions must safeguard consumer information in their use of AI and must not share such information with third parties without consent.
  • Promoting Fairness and Equity in AI Usage. The letter states that financial institutions using AI must be especially vigilant in proactively addressing algorithmic bias and should be encouraged to do more to promote racial and gender equity.

The letter is yet another reminder that regulators and legislators alike are keenly watching both the use of AI and ML in financial services and the concept of equity more generally.