Albert Einstein is famously credited with saying, “If you can’t explain it simply, you don’t understand it well enough.” Applied to the risk management dilemma posed by artificial intelligence (AI) models, this maxim points to the concept of Explainable AI: understanding the model well enough to explain it, and then mapping that understanding to the elements of potential legal claims.
The propensity of AI models to err, sometimes seriously, is well known, but legal accountability for hallucinations and other AI mistakes is very much a work in progress. The technology has evolved far faster than the law, and there is almost no precedent for assessing responsibility when an AI model errs and harm ensues. The elements of claims arising from the operation of AI models may resemble those of more traditional causes of action, but their application to a technology often called a “black box” has not been established.
In the face of so much uncertainty, how can businesses plan for safe and efficient adoption of AI? Most technology lawyers advise their clients to start by creating a framework of questions to be asked during risk assessments and performance evaluations when considering the acquisition and implementation of an AI model, whether generative (creating text, images, or data from patterns in the underlying datasets) or predictive (identifying trends and making forecasts from patterns the tool finds in those datasets).
Building such a framework for assessing risk and evaluating safe performance within the bounds of the law is difficult when the contours of liability and responsibility are not yet fully known. The board or a senior executive may reasonably ask: How are we to plan? If the full extent of our legal exposure is unknown, how can we devise a plan that mitigates these risks and lets us use these business tools without fear impeding adoption?
A good place to start is the concept of Explainable AI, which begins with asking, and answering, fundamental questions about the model: what it does, how it does it, what data it relies on, and what its known limitations are. The answers should be communicated in language a nontechnical person can understand.
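To make this concrete, the following is a minimal, hypothetical sketch in Python of how a risk or compliance team might record those answers as structured documentation kept alongside the model. The class name, fields, and example values are illustrative assumptions for this article, not a standard or any particular vendor’s tooling:

from dataclasses import dataclass, field

@dataclass
class ModelExplanationRecord:
    """Plain-language answers to core Explainable AI questions,
    kept as part of the model's risk-assessment file.
    (Hypothetical structure; field names are illustrative.)"""
    model_name: str
    purpose: str                  # What does the model do, in one sentence?
    model_type: str               # "generative" or "predictive"
    training_data_sources: list[str] = field(default_factory=list)
    decision_logic_summary: str = ""   # How does it reach its outputs?
    known_limitations: list[str] = field(default_factory=list)
    human_oversight: str = ""     # Who reviews outputs before reliance?

# Example entry a risk team might maintain (all values invented):
record = ModelExplanationRecord(
    model_name="loan-screening-v2",
    purpose="Flags loan applications for manual underwriting review.",
    model_type="predictive",
    training_data_sources=["Historical application data, 2015-2023"],
    decision_logic_summary="Scores applications against patterns in past outcomes.",
    known_limitations=["May reflect historical bias in lending decisions."],
    human_oversight="All flagged applications are reviewed by an underwriter.",
)

The value of a record like this is less the code than the discipline: each field forces a plain-language answer that can later be handed to a regulator, an opposing party, or the board.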
Claims arising from the use of AI models may include bias and discrimination, defamation, products liability, professional liability, intellectual property infringement, trade secret misappropriation, and breach of contract, whether between AI developer and customer or between businesses and consumers.
Complicating factors (the litigation equivalent of “known unknowns,” as former Defense Secretary Donald Rumsfeld once put it) include whether a business can be held vicariously liable for the actions of the model, and whether content created by a nonhuman can meet the elements of a defamation claim or even be admitted into evidence under long-standing evidentiary foundation rules.
States such as California, New York, and New Jersey have passed or are considering laws that regulate or restrict implementation of AI models, particularly in the business-to-consumer context. Regulatory proceedings may arise from: misrepresentation of what the AI model does; bias and discrimination, especially in the employment, banking, and housing areas; impacts of AI models on children; state department of health quality-of-care investigations involving healthcare AI; and proceedings under the False Claims Act.
Creative lawyers will undoubtedly come up with additional causes of action, similar to the product liability and promotion/marketing claims underlying the pending litigation against Meta, TikTok, Snap, and YouTube alleging that their algorithms are addictive to children and have directly caused mental health difficulties. These innovative causes of action were themselves inspired by Big Tobacco litigation and class actions against gun manufacturers arising from the 2012 Sandy Hook school shooting.
The best defense (or, for that matter, offense) in AI liability litigation or governmental investigations begins with the basics: understand your AI model and be prepared to explain it. This type of proactive due diligence will pay dividends in the event that an AI error gives rise to a cause of action.
If you have any further questions regarding artificial intelligence liability and best practices, please contact Kenneth Rashbaum.