As previously discussed, financial services regulators are increasingly focused on how businesses use artificial intelligence (AI) and machine learning (ML) in underwriting and pricing consumer finance products. Although algorithms provide opportunities for financial services companies to offer innovative products that expand access to credit, some regulators have expressed concern that the complexity of AI/ML technology, particularly so-called “black box” algorithms, may perpetuate disparate outcomes. Companies that use AI/ML in underwriting and pricing loans must therefore have a robust fair lending compliance program and be prepared to explain how their models work.
Regulators Turn Up the Pressure
In recent months, the Consumer Financial Protection Bureau (CFPB) has issued a series of public statements indicating that it is closely monitoring how firms use AI/ML as part of credit decision making. For example, in an October 2021 press conference, Director Rohit Chopra stated, “while machines crunching numbers might seem capable of taking human bias out of the equation, that’s not what is happening.” In November 2021, Deputy Director Zixta Martinez commented, “we also know the dangers technology can foster, like black box algorithms perpetuating … discrimination in mortgage underwriting.”
The Equal Credit Opportunity Act (ECOA) prohibits creditors from discriminating on the basis of certain prohibited characteristics (e.g., race, religion, marital status). In addition, under Regulation B, the ECOA’s implementing regulation, a lender must provide a statement of specific reasons when taking adverse action against a loan applicant (for example, when a lender decides not to provide a loan). In a July 2020 blog post, the CFPB recognized that the use of AI/ML could present challenges in providing specific reasons for adverse action, and attempted to reduce regulatory uncertainty by describing examples of flexibility under Regulation B’s adverse action requirements.
However, on May 26, 2022, the CFPB published a compliance circular that walked back the comments in the 2020 blog post and will complicate creditors’ use of AI/ML models.
Specifically, in Circular 2022-03, the CFPB states that “ECOA and Regulation B do not permit creditors to use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions.” The Circular further states that “[c]reditors who use complex algorithms, including artificial intelligence or machine learning, in any aspect of their credit decisions must still provide a notice that discloses the specific principal reasons for taking an adverse action.” Following issuance of the Circular, companies that make credit decisions based on complex models that make it difficult or impossible to accurately identify the specific reasons for adverse action—so-called “black box” algorithms—will at a minimum be subject to CFPB scrutiny, and may be at risk of regulatory or enforcement action.
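Neither the Circular nor Regulation B prescribes a method for deriving the “specific principal reasons,” but one common approach for linear scoring models is to rank each input’s contribution to an applicant’s score against a baseline. The sketch below is a minimal illustration of that idea only; the feature names, coefficients, and baseline values are hypothetical, and this is not the CFPB’s methodology or any particular lender’s model.

```python
import numpy as np

# Hypothetical feature names and fitted coefficients for a simple
# logistic-style credit score (illustrative values, not a real model).
FEATURES = ["credit_utilization", "months_since_delinquency",
            "inquiries_last_6mo", "account_age_years"]
COEFS = np.array([-2.1, 0.8, -0.6, 0.5])

# Baseline: mean feature values across the approved population, so
# contributions are measured relative to a typical approved applicant.
BASELINE = np.array([0.30, 48.0, 1.0, 7.5])

def principal_reasons(applicant: np.ndarray, top_n: int = 4) -> list[str]:
    """Rank features by how much they lowered this applicant's score
    relative to the baseline; the largest negative contributions become
    candidate 'specific principal reasons' for the adverse action notice."""
    contributions = COEFS * (applicant - BASELINE)
    order = np.argsort(contributions)  # most negative (largest drag) first
    return [FEATURES[i] for i in order[:top_n] if contributions[i] < 0]

# Example: a declined applicant with high utilization, a recent
# delinquency, several inquiries, and a thin file.
applicant = np.array([0.85, 12.0, 5.0, 2.0])
print(principal_reasons(applicant))
```

For genuinely complex models, attribution techniques of this general kind are harder to apply and validate, which is precisely the compliance risk the Circular highlights.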
The CFPB Scraps No-Action Letter Protections
The CFPB previously issued three policies intended to promote innovation, facilitate compliance and provide increased regulatory certainty for companies offering innovative products, including a No-Action Letter (NAL) Policy. An NAL provides a form of protection for CFPB-regulated companies: in exchange for the CFPB’s ability to review a company’s practices, the company is shielded from certain supervisory or enforcement actions. The CFPB issued a total of six NALs to market participants ranging from large banks to fintech startups and housing counseling agencies, including a 2017 NAL for a credit program involving underwriting algorithms.
However, on May 24, 2022, the CFPB announced that it is revamping the office that has issued NALs after determining the policies and NALs were “ineffective.” And on June 8, 2022, the CFPB announced it was terminating, at a company’s request, an NAL that involved the company’s use of AI/ML models in credit decisions. The CFPB’s order states that the company planned to make significant changes to its underwriting and pricing model, and the company ultimately requested that the CFPB terminate the NAL so it could make such changes without waiting for CFPB approval. Following the termination of the NAL, the company reiterated that it remains committed to fair lending. Nevertheless, termination of the NAL means the company will no longer enjoy the limited protection afforded by participation in the NAL program.
Build Up an Internal AI Compliance Program Before the Regulators Come Calling
The use of AI/ML and algorithms is clearly of growing concern to regulators. Accordingly, companies must build and maintain robust internal compliance programs in conjunction with the development of AI programs. In particular, business, legal and risk teams need to understand how an algorithm is developed, what training data is used, and how the algorithm evolves over time. Mathematical modeling tools can be used to “check the work” of an algorithm (and can also supply information for required notices to consumers). Models can also approximate what the outputs should be; if an algorithm’s actual results diverge materially, companies should conduct additional studies to make sure the algorithm has not developed unintended biases, as sketched below.
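As one illustration of such a check, a compliance team might periodically compare the production model’s decisions against a simpler reference model and compute an adverse impact ratio across applicant groups. The sketch below assumes hypothetical data, labels, and thresholds, and is one possible monitoring routine rather than a complete fair lending analysis.

```python
import numpy as np

def divergence_rate(blackbox: np.ndarray, reference: np.ndarray) -> float:
    """Fraction of applicants on which the production model and a simpler
    reference model disagree; a spike suggests the model has drifted and
    warrants further study."""
    return float(np.mean(blackbox != reference))

def adverse_impact_ratio(decisions: np.ndarray, group: np.ndarray,
                         protected: str, control: str) -> float:
    """Approval rate of the protected group divided by that of the control
    group; values below 0.8 are a common (four-fifths rule) red flag that
    prompts deeper disparate impact analysis."""
    protected_rate = decisions[group == protected].mean()
    control_rate = decisions[group == control].mean()
    return float(protected_rate / control_rate)

# Hypothetical monitoring run over a batch of scored applications.
rng = np.random.default_rng(0)
blackbox = rng.integers(0, 2, 1000)    # 1 = approve, 0 = decline
reference = rng.integers(0, 2, 1000)   # simpler benchmark model
group = rng.choice(["A", "B"], 1000)   # stand-in demographic labels

if divergence_rate(blackbox, reference) > 0.10:  # illustrative threshold
    print("Model outputs diverge from reference; investigate.")
if adverse_impact_ratio(blackbox, group, "A", "B") < 0.8:
    print("Adverse impact ratio below four-fifths; investigate.")
```

Thresholds like the 10% divergence trigger above are design choices a compliance team would set and document based on its own risk tolerance and model validation standards.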
Scrambling to collect information only when a regulator comes calling carries significant risk. How an algorithm currently functions is only a piece of the puzzle; how it was developed and trained is also vital information that takes time and effort to document. For this reason, companies should enhance their compliance programs now, so they are in a stronger position to respond to any regulatory investigations and stave off potential enforcement actions.