We’ve previously touched on some of the issues caused by AI bias. We’ve described how facial recognition technology may result in discriminatory outcomes, and more recently, we’ve addressed a parade of “algorithmic horror shows” such as flash stock market crashes, failed photographic technology, and egregious law enforcement errors. As uses of AI technology burgeon, so, too, do the risks. In this post, we explore ways to allocate the risks of AI bias in contracts between the developers/licensors of AI products and the customers purchasing those systems. Drafting a contract that incentivizes the AI provider to implement bias-mitigation techniques may help limit legal liability for AI bias.
AI bias risk can be managed in a contract by including:
- clear descriptions of the AI system specifications including non-discriminatory features and practices;
- representations and warranties that shift to the AI provider the burden of proving that discrimination did not occur; and
- indemnification obligations requiring the AI provider to cover claims that the AI system caused discrimination.
Depending on the nature of the agreement and the risks presented by the AI system, the agreement may not need to include all three of these methods. Each of these contracting options is described below.
Clear Product Specifications
All agreements benefit from a clear and detailed description of the product or services to be provided. Contract specifications for the AI system that clearly designate controls and anti-bias measures can help customers avoid inadvertently deploying an inherently biased product. Examples of product specifications that may reduce the risk of AI bias include:
- The development team will comprise a diverse group of individuals responsible for designing the logic rules.
- There will be clear indicators of “fairness” taken into consideration in the design of the AI system. Providing examples of these indicators in the contract can be beneficial (a minimal sketch of one such indicator appears after this list).
- The datasets used in the AI system will be diverse. Providing examples of datasets used and how datasets are selected is advisable and can help clarify what qualifies as “diverse.”
- Any automated results of the AI system will be actively monitored by an employee of the AI provider, and the AI provider’s personnel will have the ability to approve, modify, or override automated decisions in light of the “fairness” indicators.
- Real, representative data will be made available and used to monitor the performance of the AI system. If the customer has the ability to contribute to, or approve, the representative data, that adds another beneficial layer of control over the inputs.
- Industry-standard improvements and fixes for potential discrimination will be implemented at no additional charge to the customer.
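To make the “fairness” indicator specification above more concrete, the following is a minimal, hypothetical sketch (in Python) of one commonly cited indicator, the disparate impact ratio, computed over representative outcome data of the kind the specifications contemplate. The function name, the sample data, and the 0.8 threshold (the familiar “four-fifths” rule of thumb) are illustrative assumptions, not terms drawn from any actual system or agreement.

```python
from collections import defaultdict

def disparate_impact_ratio(outcomes):
    """Compute per-group selection rates and their min/max ratio.

    outcomes: iterable of (group_label, selected) pairs, where
    selected is a bool. Assumes at least one group has a selection.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Illustrative monitoring run over hypothetical representative data.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
ratio, rates = disparate_impact_ratio(sample)
print(f"selection rates: {rates}; disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb, an assumed threshold
    print("Potential adverse impact; escalate for human review.")
```

A contract could name an indicator like this expressly, specify the representative data on which it is computed, and set the threshold that triggers the human review described above.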
The above specifications may aid in reducing the risk of AI bias occurring. These clauses create a contractual commitment by the AI provider to incorporate built-in safeguards and measures for addressing AI bias.
Representations and Warranties
A stronger approach to managing AI bias risk in contracts is for the customer to frame many of the above product specifications as representations or warranties. For example, a contract may state that the AI provider represents and warrants that the datasets used in the AI system are diverse. Because representations and warranties shift the burden of proof to the party that made them, they have the added benefit of putting the obligation to prove compliance on the AI provider.
Examples of representations and warranties may include the following:
- The AI provider represents and warrants that the AI system results will not be deceptive or misleading;
- The AI provider represents that the data used to train the AI system did not disproportionately rely on elements specific to protected or vulnerable classes or prohibited variables (alternatively, depending on the customer’s intended use of the AI system, the AI provider may represent that the data used to train the system was diverse); or
- The AI provider represents and warrants that the AI system is free of bias and discrimination, including as defined by any applicable law.
More general representations and warranties may also be included, such as ones specifying that the datasets will be periodically reviewed and updated, that the AI system will be maintained in accordance with industry standards, and that the AI provider’s relevant personnel will be adequately trained to identify bias.
Indemnification
Requiring AI providers to indemnify the customer when a third party claims that the customer’s use of the AI system resulted in bias or discrimination is another effective way to allocate the risk of AI bias. For example, suppose Company A used an AI system provided by Company B to evaluate and classify interview candidates, and a candidate who was not hired sued Company A, alleging that the AI system was biased in its analysis. Company B would be required to take on the financial responsibility of defending the claim or reimbursing Company A for the damages claimed against it.
Such an indemnity might take the following form: “Company B will indemnify, defend, and hold harmless Company A for any third-party claims that Company A’s use of the AI software resulted in discrimination or bias.” That said, an indemnity like this may not be the most straightforward means of protecting customers. An AI provider may argue that its AI system’s outputs are not biased, but rather that it is how Company A uses those outputs in practice that leads to the bias. As a result, when implementing such indemnity provisions, it is important to explore (and document in the agreement) the parameters and limitations of the indemnity so that it focuses specifically on the AI system’s results.
Alternatives to Contractual Provisions
Not all partnerships require the express contractual requirements outlined above. An important part of managing AI bias in any AI system procurement is having open conversations with the AI provider about its efforts to avoid AI bias. Understanding how the AI technology functions (including whether inherent bias exists), whether the AI provider is aware of potential bias, and whether the provider has fully accounted for it allows customers to implement their own compliance mechanisms to close any gaps and to make more informed decisions that reduce or avoid the possibility of bias.
Conclusion
AI technology is an incredibly powerful tool with many potential benefits for the organizations that deploy it. But, as the cliché goes, with great power comes great responsibility. Responsible stewardship starts with the careful design and construction of the AI system and ends with users not blindly relying on its outputs. Contract language is, at its core, a mechanism for allocating the obligations and risks of a relationship. While the measures above tend to weigh the balance of legal responsibility more heavily toward the AI provider, customers of AI systems should nonetheless feel an obligation to ensure their use of AI technology is as fair and unbiased as possible.
RELATED ARTICLES
Facial Recognition, Racial Recognition and the Clear and Present Issues with AI Bias
Digitalized Discrimination: COVID-19 and the Impact of Bias in Artificial Intelligence
The Bias in the Machine: Facial Recognition Has Arrived, but Its Flaws Remain