But, to paraphrase another origin story, with great potential come great pitfalls. AI introduces a host of issues for companies: not only must they ensure that their AI generally performs as expected, but they must also monitor and manage data to guard against algorithmic bias. They may need to conduct due diligence on third-party datasets, and to guard against inadvertent disclosure of corporate data in order to maintain a competitive advantage. There are also growing concerns that these systems, when adopted without proper controls and policies, perpetuate and deepen racial and economic inequalities.
What’s in the Box?
We’ve written before about the black box problem and how best to approach auditing and understanding the ways that machine-learning algorithms internalize data. Implementing solutions to the black box problem will help companies remain competitive and use AI to its fullest while minimizing the risks associated with the technology.
Potential Solutions
- Take inventory. The best starting point for tackling the changing relationship between AI and the law is an inventory of your company’s AI applications, from which you can develop a risk management strategy and governance procedures covering policies, training and responsibilities.
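An application inventory can begin as nothing more than a structured registry. The sketch below is a minimal illustration in Python; the field names, risk tiers and example applications are assumptions for demonstration, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIApplication:
    # All fields here are illustrative assumptions, not a mandated schema.
    name: str
    owner: str                      # team accountable for the application
    data_sources: list = field(default_factory=list)
    risk_tier: str = "unreviewed"   # e.g. "low", "medium", "high"

class AIInventory:
    """A minimal registry to support a risk management review."""
    def __init__(self):
        self._apps = {}

    def register(self, app: AIApplication):
        self._apps[app.name] = app

    def needs_review(self):
        # Surface anything not yet assigned a risk tier.
        return [a.name for a in self._apps.values()
                if a.risk_tier == "unreviewed"]

inv = AIInventory()
inv.register(AIApplication("resume-screener", owner="hr-tech",
                           data_sources=["applicant-db"], risk_tier="high"))
inv.register(AIApplication("chat-summarizer", owner="support"))
print(inv.needs_review())  # ['chat-summarizer']
```

Even a registry this simple gives governance and legal teams a single place to ask which applications exist, who owns them, and which have not yet been risk-assessed.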
- Have a plan. Your company should strive for a data governance plan that establishes policies, procedures and compliance infrastructure; determines data rights; and maintains oversight of data used during AI training.
- Communicate with your stakeholders. Your company should implement standardized disclosure practices to keep stakeholders in the loop when decisions that affect them are made using these algorithms and technologies.
- Stay weak to stay strong. Establishing controls over your company’s black box algorithms is the best way to solve black box issues preemptively. Ensure that your company implements AI involving weak black boxes, that is, decision-making processes that can be readily probed in order to analyze the AI’s outputs. Keeping the black boxes your company implements from being entirely opaque to humans lessens the issues associated with intent and causation, legal doctrines currently evolving alongside the growth of AI. More specifically, establish controls by creating checks for data gathering, preparation, model selection, training, evaluation, validation and deployment.
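The lifecycle checks listed above can be sketched as a simple gated pipeline: no stage proceeds until its recorded controls pass. The stage names mirror the checklist in the bullet; the individual check functions are hypothetical stand-ins for real validation logic.

```python
# Stage names taken from the checklist above; checks are illustrative only.
LIFECYCLE_STAGES = ["data_gathering", "preparation", "model_selection",
                    "training", "evaluation", "validation", "deployment"]

def run_controls(checks):
    """Run each stage's checks in order; refuse to proceed past a failure."""
    for stage in LIFECYCLE_STAGES:
        for name, check in checks.get(stage, []):
            if not check():
                return f"blocked at {stage}: {name} failed"
    return "all controls passed"

checks = {
    "data_gathering": [("consent documented", lambda: True)],
    "evaluation": [("bias metrics within threshold", lambda: False)],
}
print(run_controls(checks))
# blocked at evaluation: bias metrics within threshold failed
```

The design point is that a failed check halts the pipeline with a named reason, producing the kind of auditable record that supports the intent and causation analysis discussed above.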
- Don’t look away. After implementing these controls and reviewing your company’s black box algorithms, maintain monitoring processes to track the outcomes of these changes, and audit the results to ensure that the baseline parameters remain valid, secure and optimized. Regular risk assessments allow for proper evaluation of AI models and algorithms.
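Auditing against baseline parameters can be as mechanical as comparing a current metric to the value recorded at deployment. The helper below is a minimal sketch; the metric, baseline value and tolerance are assumptions chosen for illustration.

```python
def audit_metric(baseline: float, observed: float, tolerance: float = 0.05) -> bool:
    """Return True when an observed metric stays within tolerance of its baseline."""
    return abs(observed - baseline) <= tolerance

# Hypothetical periodic review: accuracy baseline recorded at deployment.
assert audit_metric(baseline=0.91, observed=0.90)      # within tolerance
assert not audit_metric(baseline=0.91, observed=0.80)  # drift: trigger a review
```

A failing audit would not itself fix anything; it flags that the model has drifted from its validated baseline and that the risk assessment should be rerun.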
- Transparency works. Lastly, ensure that the data your company collects and uses to train AI algorithms and applications is obtained transparently, as the FTC has homed in on consumer deception around such data.
Ultimately, no one disputes the extent to which artificial intelligence has been and will continue to be transformative across a wide swathe of industries. And while even the largest companies will occasionally find themselves thrown off balance or scrambling to keep up with the latest wrinkles in application, there are steps they can take to help ensure the AI on which they rely doesn’t make the transformational experience a negative one.
RELATED ARTICLES
The FTC Urges Companies to Confront Potential AI Bias … or Else