New York State Bar Association Weighs in on Artificial Intelligence, Providing Recommendations

May 31, 2024 | Blog

In response to the growing impact of artificial intelligence on various sectors, the New York State Bar Association’s (NYSBA’s) Task Force on Artificial Intelligence released a Report and Recommendations to the NYSBA House of Delegates in April. The report addresses the transformative potential of AI in the legal profession, highlighting both opportunities and challenges. It acknowledges the many benefits of AI, including its ability to quickly perform labor-intensive, repetitive tasks; reduce human error; and augment human intelligence to generally improve quality of life.

However, AI’s application in the legal field still has far to go before it can be fully relied upon as a standalone legal tool. The report cites a Stanford University study that found several popular AI chatbots were often inaccurate when addressing legal questions; for example, the chatbots frequently misinterpreted legal statutes and provided incorrect legal advice.

Additionally, AI has the known potential to generate “hallucinations” (i.e., fictitious content that may look and sound very real), such as fabricated case law or nonexistent legal precedents, and to introduce or exacerbate unintended bias in its outputs. An instance of this is when AI systems reflect biases present in their training data, potentially leading to discriminatory outcomes in legal decisions.

The report also acknowledges concerns related to AI models’ ability to aggregate and use large amounts of data to train themselves, which raises issues such as privacy invasion and data vulnerability.

To help attorneys avoid some of the current pitfalls associated with AI and to provide guidance for those who may encounter AI in the course of their work, the report offers four primary recommendations for the NYSBA:

(1) Adopt the AI guidelines outlined in the Task Force’s report and update them periodically. These guidelines include practices such as understanding the risks and benefits of using particular AI tools; notifying a client that an AI tool is being used in their representation; taking measures to protect clients’ confidential data; and not allowing AI-generated results to replace professional judgment. For instance, a lawyer might include a clause in their engagement letter stating how AI tools will be used in their case management, ensuring transparency and informed consent.

(2) Focus on educating judges, lawyers, law students, and regulators on how AI technology works so that it can be applied, regulated, and monitored correctly. The report also suggests expanding Comment 8 of Rule 1.1 of the New York Rules of Professional Conduct (which deals with an attorney’s duty of competence) to include an understanding of AI tools and technology. For example, implementing regular training sessions and workshops on AI applications in legal practice can enhance the competency of legal professionals.

(3) Identify risks that are not currently addressed by existing laws. The report suggests that once these risks have been identified, new laws and regulations can be created to address them specifically. An example could be legislation that addresses AI’s potential biases and mandates regular audits of AI systems used in legal settings to ensure fairness and accuracy.

(4) Examine the function of law in AI governance. Further to this point, the report recommends that the governance of AI should focus on the technology’s real-life effects on people and society, not just the technology itself. The report also recommends that AI governance should be tailored and proportionate to the risks it creates; should consider whether regulation might be in the form of an overarching framework or smaller, more specific frameworks (e.g., by industry); and should take into account the degree of jurisdictional (and even global) cooperation that may be possible. For example, an industry-specific framework could address AI use in healthcare differently from AI use in legal practices, recognizing each sector’s unique risks and requirements.

As the report states in closing, “This report offers no ‘conclusions.’ As AI continues to evolve, so will the work of NYSBA and the groups tasked with ongoing monitoring.” It is important to note that the development of AI technology is still very much in flux, as are the guidance and regulations pertaining to it.

If you have further questions on best practices and ethical uses of artificial intelligence, please contact a member of Barton’s artificial intelligence practice group.
