AI Use Alone Can Compromise Privacy and Cybersecurity: What Law Firms Need to Know

Sep 12, 2023 | Blog

The rapid evolution of artificial intelligence technology is an exciting prospect for many companies looking to use these powerful new tools to drive efficiency internally and externally. Many organizations are insisting that their law firms become adroit in the use of AI tools in the belief that doing so will reduce legal spend. Accordingly, law firms are also taking notice of the capabilities AI has to offer, and many are experimenting with different models to assist with tasks such as e-discovery, due diligence, and contract drafting or review.

Large AI models are able to fulfill these functions by collecting and storing vast amounts of data in order to train themselves and improve the accuracy of their algorithms. However, the aggregation of so much data in one place should raise concerns for attorneys about the confidentiality of client information included in requests to a generative tool ("prompts") and about how that data is stored and secured against unauthorized access.

When using a public tool (such as ChatGPT, DALL-E 2, Google Bard, Jasper, Lumen5, Copy.ai, etc.) rather than a tool with a private instance, the user's input essentially becomes public as well. Generative AI tools should be treated as public unless the user has opted into a private environment, which is rarely available for free. Even if an AI system allows users to opt out of having their data retained, that data may still be used to train the system or be viewed by the humans who help train it.

For attorneys, this data could include a client's personally identifiable information, privileged information pertaining to an attorney's representation of a client, or a client's sensitive proprietary information (e.g., pending mergers, defense contracts, patent application formulation, etc.).

Client information is therefore at risk in multiple ways. It could be unintentionally exposed through a generated response that uses the prompt of one user to help answer the query of another user. In January 2023, Business Insider reported that Amazon had banned its employees from using ChatGPT to help solve coding issues or any other issues where the prompts contained confidential company information. This came after data closely resembling Amazon’s internal proprietary code was allegedly seen in ChatGPT’s responses to other queries.

Client information could also be stolen and misused if hackers are able to breach an AI platform's weak security safeguards to access or manipulate data. Among other things, the disclosure of such sensitive information can make firms and their clients easier targets for phishing attacks. For example, if a bad actor were to gain access to a client's identity and the private details of their case, the scammer could use those details to pose as the client's attorney and craft a very believable email requesting payment.

Making sensitive information available to third-party providers of AI tools (and potentially the public) not only leaves this data vulnerable but may also violate attorney ethics rules on confidentiality, as well as provisions found in contracts and laws. For instance, any matter involving the data of European Union residents would be subject to the EU's General Data Protection Regulation (GDPR). So if an attorney represented an employer that did business in Europe and stored employees' and customers' information, inputting this type of data into an AI model could violate GDPR requirements. Additionally, the EU is moving forward on its AI Act which, when finalized, may include restrictions on the use of certain categories of personal information in training algorithms and may impose enhanced notification requirements for uses of certain AI tools.

For legal work, it is generally advisable to carefully limit uses of generic, consumer-facing products whose inputs and outputs may be available to the general public. For firms that do choose to experiment with AI, it's imperative to evaluate the type of work AI may be used for (such as drafting routinely used contract clauses or obtaining and summarizing articles and treatises on a complex factual or legal issue), to set limits on the type of information that can be fed into these models, and to refrain from putting into AI prompts any information that could identify the client or their matter, whether in the prompt itself or in a generated output.
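As one illustration of how such limits might be operationalized, the following is a minimal sketch in Python of a pre-submission check that flags prompts containing obvious client identifiers before they are sent to any external AI service. The client names, matter numbers, and patterns shown are hypothetical, and a check like this supplements, rather than replaces, firm policy and training.

```python
import re

# Hypothetical examples of client names and matter identifiers the firm treats as
# confidential; in practice these would come from the firm's conflicts or matter database.
BLOCKED_TERMS = ["Acme Corp", "Matter 2023-0457"]

# Illustrative (not exhaustive) patterns for common personal identifiers.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # U.S. Social Security number format
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email address
]

def screen_prompt(prompt: str) -> list[str]:
    """Return the reasons, if any, that a prompt should not be sent to an external AI tool."""
    findings = []
    for term in BLOCKED_TERMS:
        if term.lower() in prompt.lower():
            findings.append(f"contains blocked client/matter term: {term!r}")
    for pattern in PII_PATTERNS:
        if pattern.search(prompt):
            findings.append(f"matches identifier pattern: {pattern.pattern}")
    return findings

if __name__ == "__main__":
    draft = "Summarize the indemnification clause in the Acme Corp supply agreement."
    problems = screen_prompt(draft)
    if problems:
        print("Prompt blocked:")
        for reason in problems:
            print(" -", reason)
    else:
        print("Prompt passed screening.")
```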

It’s less risky to use platforms specifically designed for the legal industry that meet the heightened cybersecurity standards and the duty of confidentiality imposed upon lawyers by the ABA Model Rules of Professional Conduct. Firms should also ensure that the platform to be used has a private environment, backed by representations and warranties from the platform provider on privacy and confidentiality. A paid, private environment may, however, limit the data available to the algorithm, so the firm should also consider whether using AI in such a limited environment is advisable from a time and cost perspective.

But even when using legal-centric platforms, it’s still critical that firms take steps to mitigate the risks associated with AI and data breaches. To begin with, firms should prepare an inventory of the information in their systems and map the flows of that data. As good cyber hygiene requires, firms should also maintain control over how this data is stored, processed, and accessed. Before using an AI tool, a firm should also confirm that the platform incorporates adequate security features (e.g., encryption) and protections against indirect prompt injection attacks, which can be used to access or manipulate data. Regular testing and audits, whether carried out by the firm itself or by a third party, can also confirm that an AI system is performing as intended and has not been compromised. Finally, a documented policy regarding the use of AI within the firm, accompanied by privacy and cybersecurity training for firm personnel, can help reduce the risk of a data breach through human error or social engineering.
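To make the inventory and data-mapping step more concrete, here is a minimal sketch in Python of the kind of record a firm might keep for each system that holds client data, noting what categories of information it contains and whether any AI tool is approved to access it. The structure and field names are assumptions for illustration only; most firms would maintain this information in existing records-management or governance tooling rather than in code.

```python
from dataclasses import dataclass, field

@dataclass
class DataStoreRecord:
    """One entry in a firm's data inventory (field names are illustrative only)."""
    name: str                      # e.g., the document management system
    location: str                  # on-premises, specific cloud region, etc.
    data_categories: list[str]     # e.g., ["client PII", "privileged memos"]
    encrypted_at_rest: bool
    ai_tools_permitted: list[str] = field(default_factory=list)  # approved AI tools, if any
    last_audit: str = ""           # date of the most recent security review

inventory = [
    DataStoreRecord(
        name="Document management system",
        location="On-premises data center",
        data_categories=["privileged memos", "client PII"],
        encrypted_at_rest=True,
        ai_tools_permitted=[],     # no AI tools approved for this store
        last_audit="2023-06-30",
    ),
]

# A simple review pass: flag any store holding client PII that an approved AI tool may touch.
for record in inventory:
    if "client PII" in record.data_categories and record.ai_tools_permitted:
        print(f"Review required: {record.name} exposes client PII to {record.ai_tools_permitted}")
```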

Given the rapidly changing privacy and security threat landscape, firms may wish to consider obtaining consent from their clients for the use of certain AI tools. As with many arrows in a firm's quiver, the potential cost savings of AI use, on which many clients insist, must be balanced against risk. Lawyers need to learn about these risks and benefits and communicate them clearly so that a client's consent is truly informed.

If you have any further questions regarding your firm’s use of AI tools and the accompanying data privacy concerns, please contact Kenneth Rashbaum.