The feds may be hoping for a plot twist. Recently, several federal agencies jointly encouraged banks to consider developing new technologies, particularly artificial intelligence (AI), to help protect the financial system against money laundering and terrorist financing. The Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the Financial Crimes Enforcement Network (FinCEN), the National Credit Union Administration, and the Office of the Comptroller of the Currency now encourage banks to “consider, evaluate, and, where appropriate, responsibly implement innovative approaches to meet their Bank Secrecy Act/Anti-Money Laundering compliance obligations.” This encouragement comes on the heels of a recent presentation by a member of the Federal Reserve Board about the importance of artificial intelligence in financial services.
Artificial intelligence takes many forms, but one form, known as “machine learning,” is of particular interest in this space. Machine learning allows a computer to process large amounts of data, recognize patterns within it, and then “learn” from that data, adjusting its own algorithms as the data set grows and changes. The result is a program that appears to have predictive powers, one that may be well suited for use in compliance and risk mitigation by banks.
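To make the idea concrete, here is a minimal sketch of that pattern-learning approach applied to transaction monitoring. It is written in Python with the scikit-learn library; the features, data, and thresholds are illustrative assumptions, not any agency-endorsed or bank-tested design.

```python
# Minimal sketch: unsupervised anomaly detection over transaction data.
# The features and data here are illustrative assumptions, not a real
# compliance system or any agency-endorsed design.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical historical transactions: [amount_usd, hour_of_day]
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=1000),  # typical amounts
    rng.integers(8, 18, size=1000),                 # business hours
])

# Fit a model that "learns" the shape of normal activity from the data
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new transactions; -1 marks an outlier worth a human review
new_txns = np.array([[55.0, 11],     # ordinary daytime purchase
                     [9500.0, 3]])   # large transfer at 3 a.m.
print(model.predict(new_txns))       # e.g. [ 1 -1]
```

The point of the sketch is the workflow: the model infers what “normal” looks like from the historical data itself, rather than relying on hand-written rules, and its notion of normal shifts as the underlying data shifts.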
Currently, a bank is required to maintain a Bank Secrecy Act/Anti-Money Laundering compliance program commensurate with its risk profile. The bank must collect and analyze sizable amounts of data to perform customer due diligence and to monitor transactions. Where red flags are identified, it must also file Suspicious Activity Reports with FinCEN, a bureau of the Department of the Treasury. These efforts consume large amounts of staff resources and generate many time-consuming “false positive” alerts. The work is also costly: small banks’ compliance costs average 7% of their noninterest expenses, with personnel representing most of that cost, followed by data processing, accounting, and legal and consulting expenses.
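To see where those false positives come from, consider a hedged toy illustration of a static monitoring rule. The $10,000 figure echoes the familiar currency-transaction reporting threshold; the transactions themselves are invented for illustration.

```python
# Toy illustration: a static rule flags every transaction over a fixed
# threshold, regardless of context, which is a classic source of false
# positives. The threshold and transactions are illustrative assumptions.
THRESHOLD_USD = 10_000

transactions = [
    {"id": 1, "amount": 12_000, "note": "routine payroll run"},
    {"id": 2, "amount": 15_000, "note": "regular supplier invoice"},
    {"id": 3, "amount": 11_500, "note": "structured cash deposits"},
]

# Every transaction above the threshold becomes an alert for staff review,
# even though only one of the three is plausibly suspicious.
alerts = [t for t in transactions if t["amount"] > THRESHOLD_USD]
for alert in alerts:
    print(f"ALERT txn {alert['id']}: ${alert['amount']:,} ({alert['note']})")
```

A fixed rule has no context, so staff must clear every alert by hand; the appeal of machine learning is the prospect of ranking or suppressing the alerts that history suggests are benign.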
To encourage private-sector innovation in this area, the federal agencies appear willing to mitigate some risk for the banks that take on the challenge. They assure banks that artificial intelligence pilot programs will not themselves subject the banks to supervisory criticism, even if the pilot programs “expose gaps” in the banks’ existing compliance programs. “For example,” explains Under Secretary Sigal Mandelker in a recent U.S. Treasury press release, “when banks test or implement artificial intelligence-based transaction monitoring systems and identify suspicious activity that would not otherwise have been identified under existing processes, the Agencies will not automatically assume that the banks’ existing processes are deficient.”
Moreover, no penalties will be applied to banks whose pilot programs ultimately prove unsuccessful. As for whether artificial intelligence pilot programs can replace existing compliance programs altogether, the agencies advise banks to evaluate their pilots not only for money laundering issues but also for information security concerns, third-party risk management, and compliance with laws and regulations governing customer notifications and privacy, and then to talk with their regulators.
Whether banks will take up this invitation remains to be seen. One challenge is likely to be the nature of the data at stake: AI programs require access to vast amounts of data to “learn” and work well, and in the banking context the data set is likely to include sensitive customer information. Artificial intelligence adds a further complication: after a certain point, the algorithms are shaped by the data itself. Who owns the resulting algorithms? Will the bank be given access to those (potentially) proprietary algorithms, and if not, how can it ensure that it is managing risk commensurate with its risk profile? (As a side note, the director of the U.S. Patent and Trademark Office has indicated that he believes algorithms using artificial intelligence are patentable as a general proposition, although case law on computer-related patent eligibility is far from clear on this point.)
Finally, the Board of Governors’ presentation notes an interesting potential dilemma: banks must comply with the Equal Credit Opportunity Act and the Fair Credit Reporting Act, but an AI program’s results may unintentionally run afoul of those statutes. For example, news reports stated that Amazon recently developed an AI hiring tool, which it trained by feeding the program résumés from past successful hires. From this data set, the program “learned” to screen out female applicants altogether. The potential for similar issues exists in lending, where banks may not consider gender, race, or marital status when deciding whether to grant credit or on what terms. Any artificial intelligence compliance program will need to be designed to heed the requirements of these statutes as well, even if doing so goes against the patterns taught by the data, as the sketch below suggests.
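The mechanism behind that failure mode is straightforward to sketch. In the hedged toy example below, all of the data is synthetic and the correlated “proxy” feature is an assumption for illustration; the model never sees the protected attribute during training, yet still learns to reconstruct it from the proxy.

```python
# Toy sketch of proxy discrimination: the protected attribute is excluded
# from training, but a correlated feature lets the model reconstruct it.
# All data here is synthetic and the correlation is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)
n = 2000

protected = rng.integers(0, 2, size=n)           # e.g. a protected class
proxy = protected + rng.normal(0, 0.3, size=n)   # strongly correlated feature
income = rng.normal(50, 10, size=n)              # legitimate feature

# Biased historical labels: past approvals skew against the protected group
approved = ((income / 100) + 0.5 * (1 - protected)
            + rng.normal(0, 0.1, size=n)) > 0.75

X = np.column_stack([proxy, income])  # protected attribute deliberately omitted
model = LogisticRegression().fit(X, approved)

# Predicted approval rates still differ sharply by group, via the proxy alone
for g in (0, 1):
    rate = model.predict(X[protected == g]).mean()
    print(f"group {g}: predicted approval rate = {rate:.2f}")
```

The takeaway is that dropping a protected attribute from the inputs is not enough; a model’s outcomes have to be measured for disparate effects directly, which is exactly the kind of design requirement these statutes impose.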
On Ozark, the Byrde family has jumped through rings of fire (or at least fields of fire), but one wonders how they would fare if they were also faced with an algorithm that had examined years of their banking records, learned what to expect of them, and flagged unusual activity to banks that could then close their accounts. Now that federal agencies are encouraging banks to develop artificial intelligence as a means of identifying and reporting illicit financial activity in real life, I’ll be watching for this plot line next season.