How Artificial Intelligence is Being Used in Law

by Dan A. Baron, Baron Law LLC

When you have questions about artificial intelligence (AI), where better to start than by asking AI itself? So, I did just that and asked ChatGPT to tell me “how is AI being used in the legal field.” Within seconds, the leading artificial intelligence platform spat out a litany of answers, ranging from document review and due diligence to virtual assistants, e-discovery, and bias detection.

In theory, AI could be used to replicate actions that legal and other professionals complete on a day-to-day basis, but in reality, the applications are not always realistic. Perhaps more importantly, how far do we have to go before anyone would take the word of a robot over an experienced attorney? Here are some of the ways AI can be used in the practice of law, and other ways it should not be used at all.

One of the most obvious and non-controversial ways AI can be put to use in the legal field, or any business setting for that matter, is routine task automation. A huge part of the day-to-day workflow in any legal office is moving information from one location to another, whether from handwritten documents to a computer application or from a digital form to a file on a server. AI can analyze documents of all kinds for information that can be used for digital tagging and organization. According to the American Bar Association, AI has already been used for years, and has become widely accepted, as an e-discovery tool to survey documents for relevant data.

Of course, how AI is used by an attorney would be heavily influenced by their practice area. Trial lawyers could use AI to analyze precedent-setting cases and compare statutes with their case’s facts to support or oppose their litigation strategy. On the other hand, a tax attorney could use AI to review calculations for accuracy, or a family law attorney may use AI to help draft tactful and compassionate correspondence to clients in a difficult scenario.

However, the current capabilities of AI leave a lot to be desired in terms of accuracy and accountability. If you’re using AI to fact-check findings, who is fact-checking the AI? In one notable instance, Steven Schwartz, a Manhattan attorney, used ChatGPT for legal research and came away with six precedent-setting cases to cite in court. The only problem? None of these cases actually existed. As backlash mounted from the judge and other legal professionals alike, the lawyer explained that he had never used ChatGPT before and had not realized it was not a search engine. He had only heard about it from his college-aged children as a potential research tool.

As his colleague succinctly put it, “Mr. Schwartz and the firm have already become the poster children for the perils of dabbling with new technology; their lesson has been learned.”

Another case of a misused AI application is the growth of AI as a criminal sentencing tool. As we’ve discussed, AI is only as good as its input, so when the human influence on the data is flawed, the outcomes inherently are as well. A Tulane University study found that AI models used for sentence recommendations actually did a good job of reducing gender bias in sentencing outcomes, but when it came to race, the results were mixed. In cases where judges sentenced offenders based on their AI-generated reoffending risk scores alone, the judgments against white and African American offenders were statistically equal. However, in cases where AI recommended probation for low-risk offenders, judges disproportionately chose to incarcerate African American defendants anyway. This brings up one of the great lingering questions about the insertion of AI into the legal field: how do we strike a balance between accepting the work that AI produces and figuring out when and where human intervention is necessary?

These examples lead into some of the mounting ethical concerns about AI’s place in an attorney’s toolbox. As William Eskridge, the Yale professor who teaches the school’s Artificial Intelligence, the Legal Profession, and Procedure class, put it, “ours is a course where ethics issues have come up constantly.”

Attorneys seeking guidance on the appropriate use of AI in their legal practice can turn to the ABA’s Model Rules of Professional Conduct, supplemented by state-specific regulations. The ABA rules regarding compliance with competence (Rule 1.1) and confidentiality of information (Rule 1.6) are crucial, with a focus on staying informed about technology and protecting client data. Furthermore, adherence to fundamental legal confidentiality standards extends to the use of AI, emphasizing the importance of scrutinizing terms of service to safeguard client information stored or shared by third-party AI vendors.

Despite concerns, there is no doubt that AI will continue to gain traction as an everyday tool for more and more people, including lawyers. As with any new technology, it will be important to continuously evaluate the best ways it can be implemented for good and to ensure that professionals everywhere are wary of its pitfalls.

Are you on board with the bolstering of technological resources in the legal profession, or are you skeptical that computers can compete with the human mind? Let’s wrap up the discussion with one question: can you tell which paragraph in this article was written by ChatGPT?

At Baron Law, we use AI to streamline routine tasks, not to replace experienced legal advice. To schedule an appointment with an attorney (not a robot), contact us at 216-573-3723.


Sponsored By

Baron Law LLC
Crowne Centre, Suite #600
5005 Rockside Road
Independence, Ohio 44131

Opinions and claims expressed above are those of the author and do not necessarily reflect those of ScripType Publishing.