As technological innovations grow ever more sophisticated and widespread, there is a parallel increase in the complexity of challenges regarding the admission of digital evidence in the courts. This is especially true for evidence created or enhanced by artificial intelligence (AI). This type of evidence might rely on functions performed by AI, such as pattern recognition, data aggregation and analysis, document production, predictive evaluations, photo/video enhancement, face/voice recognition, or auto-generated transcripts.
For AI or other digital evidence to be admissible, the technology must be reliable, explainable, relevant, and secure. AI/digital evidence providers and cloud services providers can be excellent resources for smoothing the road toward admissibility.
To lay the evidentiary foundation, AI or other digital evidence must be deemed more probative than prejudicial, while also being reliable and accurate. The National Institute of Standards and Technology (NIST), which provides guidance and best practices on risk management, equates “reliability” with “trustworthiness.” For AI tools (and their resulting evidence) to be considered trustworthy, attorneys should focus on clearly communicating how the technology functions and on demonstrating that its output is accurate. When faced with a Daubert or Frye challenge to AI evidence, the key is making your evidence understandable.
Lawyers must be able to explain to the court how the technology works before the court will permit the evidence to reach a jury, and the judge must likewise be convinced of the evidence's reliability before it can be used to support or oppose a motion. Many judges are skeptical of, if not biased against, AI, so the explainability aspect is crucial.
The U.S. District Court for the District of Nevada has published guidance for its judges on AI admissibility considerations that serves as a good benchmark for attorneys seeking to admit AI evidence. Lawyers should be able to explain:
(i) the type of AI that is involved;
(ii) the purpose (design objective) of the tool and how the tool meets that objective;
(iii) how the tool or model was trained; and
(iv) how the quality of the tool's output is monitored and evaluated.
How Providers Can Help
The ability to explain AI cogently is a team effort that should begin at the very start of the relationship between provider and user. Negotiation of the service or data-sharing agreement should establish parameters for training and support, including the provider's availability to assist in dispute resolution, so that the user can draw on the provider's expertise and experience with the tool.
The provider of the tool can also train the attorney on the use of the tool, its business purpose, and how it fulfills that purpose. Providers often have resources and demonstration materials that can help a court understand and trust the technology. If live testimony is required, the attorney should prepare a witness from the provider for the rigors of cross-examination and potential questions from the bench.
Another component of the argument for an AI tool's reliability is the integrity of its data, which underpins the accuracy of its outputs. AI models and tools are subject to cyberattack, and their outputs can be manipulated through malicious instructions embedded in a prompt (prompt injection) or through malware introduced into source code. AI itself can also be used to train malicious tools or to generate its own zero-day attacks. Without proper security controls in place, these risks may cast doubt on the integrity of the AI output and place admission of the resulting evidence at risk.
How Providers Can Help
AI providers can help by documenting security controls and monitoring the performance of those controls. These safeguards should include access limitations, forensic review of the training data, routine security testing, adherence to the NIST AI Risk Management Framework, and adversarial training (i.e., training the AI model to recognize and resist poisoned or adversarially manipulated inputs).
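By way of illustration, the sketch below shows one common form of adversarial training, the fast gradient sign method (FGSM), in which a model is trained on deliberately perturbed copies of its inputs alongside the clean ones. This is a minimal example assuming a PyTorch environment and a toy synthetic dataset; the model, data, and hyperparameters are purely illustrative and do not reflect any particular provider's tooling.

```python
# Minimal adversarial-training sketch (FGSM), assuming PyTorch and
# a toy synthetic dataset; all names and values are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary-classification data standing in for a provider's training set.
X = torch.randn(256, 10)
y = (X.sum(dim=1) > 0).long()

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

def fgsm_perturb(x, labels, epsilon=0.1):
    # Craft adversarial copies of the inputs: nudge each feature in the
    # direction that most increases the model's loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), labels).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

for epoch in range(20):
    x_adv = fgsm_perturb(X, y)  # backward() above also touches model grads,
    optimizer.zero_grad()       # so clear them before the training step
    # Train on both clean and perturbed inputs so the model learns to
    # resist small malicious distortions.
    loss = loss_fn(model(X), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```

Documentation that such training was performed, and of how its effectiveness is measured, is the kind of record that can later support an argument for the tool's reliability.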
Cloud services providers can also help establish the security and integrity of data by documenting chain of custody, implementing data center security protocols, providing proof of training for personnel with access to the subject data, demonstrating regular training and testing of the security controls, and meeting certain legal obligations through service agreement provisions. Testimony or affirmations from cloud services representatives can then help satisfy the self-authentication thresholds of the Federal Rules of Evidence, through comparison of the proffered data with the data originally uploaded to the provider and certification by qualified experts.
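In practice, that comparison often comes down to matching cryptographic hash values, the "process of digital identification" contemplated by Federal Rules of Evidence 902(13) and 902(14). The following is a minimal sketch, assuming the proffered exhibit and the provider's exported copy exist as local files; the file names are hypothetical.

```python
# Minimal sketch of hash-based comparison for self-authentication;
# file names below are hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    # Compute the SHA-256 digest of a file, reading in chunks so that
    # large evidence files do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

proffered = sha256_of("proffered_exhibit.pdf")  # copy offered in court
provider = sha256_of("provider_export.pdf")     # copy from the cloud provider

# Matching digests indicate the records are bit-for-bit identical;
# any alteration, however small, would change the hash.
print("Hashes match" if proffered == provider else "Hashes differ")
```

A certification from a qualified person that the two digests match can support self-authentication under Rules 902(13) and 902(14) without the need for live foundational testimony.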
If you have any further questions regarding the admissibility of AI or digital evidence, please contact Ken Rashbaum.