Rise of the Machines
How AI Is Reshaping the Legal Profession
By Jenny Hontz
Illustrations by Mike Austin
“We typically use well-established online research databases for our legal research,” says Jeffrey Coviello ’03, founder of the firm Coviello Weber & Dahill. But he wanted to test what the new large language model AI chatbots could do, so Coviello typed some legal research questions into ChatGPT and Google’s chatbot, Bard. Both returned similarly unsettling results.
“Some of these tools would spit out answers to your research question that looked great,” Coviello says. “I even went so far as to ask it to provide me with the text of a particular legal opinion, and then I checked a well-established research database and found that the case was fake. I asked, ‘Is this legal decision real or did you make it up?’ And the thing would deny it and say, ‘No, this is real.’ Sure enough, it wasn’t.”
Friend or Foe?
Generative AI has been hailed for its ability to rapidly generate humanlike writing and images, but given these AI “hallucinations,” Coviello believes attorneys should treat early-stage AI chatbots with extreme caution. “Obviously, it’s not a replacement for exercising the care and skill that you need to exercise as a lawyer,” he says. “You have to double-check these things. It’s clearly a work in progress.”
Nevertheless, Coviello and other Fordham Law alumni believe AI—when used properly, with checks in place—has the potential to revolutionize the legal field and help firms like his. “Our approach to technology has been to embrace it,” Coviello says. “As a small firm, we have leveraged technology on the administrative side with our practice management, document management, and billing systems. I think there are a lot of positives and efficiencies that can be gained.”
Those two impulses—to celebrate or to fear the potential of AI—have been playing out on a global scale since OpenAI publicly released ChatGPT in late 2022. Tension between so-called “techno-optimists,” who believe AI will free people from drudge work, and “doomers,” who fear AI will replace jobs and turn against humans, contributed to the temporary ousting of OpenAI CEO Sam Altman and the subsequent naming of a new board in November 2023. Heeding concerns over the potential threats AI poses, policymakers are focusing on regulating the new technology. President Biden issued an executive order on AI safety and security in October 2023, setting a framework to protect national security, consumer privacy, equity, and workers’ rights, and European Union officials agreed in December 2023 to the AI Act, a sweeping new law regulating AI that could become a global standard.
Companies are rapidly developing AI tools to help lawyers, which could expand access to legal services for people who otherwise can’t afford them. Law schools are also rushing to adapt. As soon as ChatGPT went live, Fordham Law instituted new rules and took precautions to preserve academic integrity. But since then, the focus has been on curricular and extracurricular enhancements to ensure graduates are well prepared for the new jobs created by the tech revolution and can contribute to conversations around the responsible use of AI in the future.
“Fordham Law School is a service institution, and we care deeply about ethics and professional responsibility,” says Joseph Landau, incoming dean and associate dean for academic affairs. “So, anytime there’s a concern about the negative potential uses and abuses of tools like generative AI, it’s important to us to be part of the conversation about the implications of that technology. We’re rolling up our sleeves to be part of that discussion and engage in problem-solving.”
Taking a Cautious Approach
Much like Coviello, Sharon McCarthy ’89, a partner at the firm Kostelanetz LLP, foresees multiple AI minefields that lawyers should avoid. Typing client information into ChatGPT, she says, would be a breach of confidentiality, and AI may make it easier for bad actors to generate phony contracts, signatures, photos, and other evidence.
“That’s all very frightening,” McCarthy adds. “I think it’s going to create a lot of problems in our justice system if we don’t get it under control and figure out how to determine if things are authentic or not.”
Even so, she believes AI has the potential to help firms with laborious tasks like document review. “That’s sort of a soul-crushing part of the work,” McCarthy says. “To the extent that there’s a way to free lawyers up to be thinking and strategizing and conducting real research, [that] is a welcome development, frankly.” But, she adds, “right now, we’re mostly cautioning people.”
When it comes to legal research, McCarthy will continue to rely on Westlaw and LexisNexis “until somebody tells me that I can rely on something else—and it’s been tested and proven, I’ve tried it, and I’m confident in it.”
However, all of the major legal information vendors, including LexisNexis and Westlaw, are currently building generative AI-based research and writing tools, says Todd Melnick, clinical associate professor and director of the Maloney Library at Fordham Law. And they’re working to solve the “hallucination issue” by using retrieval-augmented generation (RAG). In theory, RAG minimizes hallucination by grounding the large language models in a set of reliable data: the cases, statutes, regulations, and secondary sources that those vendors have collected and organized for generations. “However, what we have come to call hallucination is part of what makes generative AI possible, so this is a difficult problem,” says Melnick. “We have yet to see to what extent RAG solves it. Unlike ChatGPT, these tools are designed to provide generative content that is supported by citation to actual, verifiable information.”
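For readers curious about the mechanics, the RAG pattern is simple to sketch: retrieve relevant sources first, then instruct the model to answer only from them, with citations. The minimal Python sketch below assumes a toy in-memory library and a naive keyword retriever; the case names are fictional placeholders, not any vendor’s actual system or data.

```python
# A minimal sketch of retrieval-augmented generation (RAG).
# The tiny "library" and keyword-overlap retriever are illustrative
# stand-ins for a vendor's curated database of cases and statutes;
# the citations below are fictional placeholders.

LIBRARY = {
    "Smith v. Jones, 123 F.3d 456 (2d Cir. 1997) [fictional]":
        "Holding: an employer must pay overtime for hours worked over forty per week.",
    "Doe v. Acme Corp., 456 F. Supp. 2d 789 (S.D.N.Y. 2006) [fictional]":
        "Holding: misclassifying employees as contractors supports an unpaid wage claim.",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank sources by naive keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(
        LIBRARY.items(),
        key=lambda item: -len(words & set(item[1].lower().split())),
    )
    return ranked[:k]

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt telling the model to answer ONLY from the
    retrieved sources and to cite them -- the grounding step meant
    to curb hallucinated authorities."""
    context = "\n".join(f"[{cite}] {text}" for cite, text in retrieve(question))
    return (
        "Answer using only the sources below, citing each one you rely on. "
        "If the sources do not answer the question, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

# The assembled prompt would then be sent to a large language model, and
# every citation in the answer can be checked back against the library.
print(build_grounded_prompt("Is unpaid overtime recoverable?"))
```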
Even so, he believes lawyers should remain wary. “We have to be careful not to buy into hype,” he says. “These tools look like game changers, but they will need extensive independent testing and evaluation.”
Expanding Access
AI “has the potential to do a lot of really good things, but it also has the potential to really entrench existing problems that are very, very bad in our current legal system,” warns Kearl, pointing to ¡Reclamo!, a legal tech tool designed to help workers recover unpaid wages.
“Workers can input information about hours they’ve worked, their employer, and their pay, and then it will calculate unpaid wages, unpaid overtime, and any sort of damages. Then it will generate a demand letter and a Department of Labor complaint that those workers can use to see if they can get the money back,” Kearl says. “It is a system designed to help workers access justice, either through private negotiation or a formal complaint to an administrative enforcement agency. The point of the tool is to expand access to legal resources.”
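To make the arithmetic concrete, here is a rough Python sketch of the kind of calculation such a tool automates. The time-and-a-half rule over 40 hours and the doubled “liquidated damages” figure follow common federal Fair Labor Standards Act defaults; the function and its fields are hypothetical illustrations, not ¡Reclamo!’s actual code.

```python
# Hypothetical sketch of the wage math a tool like this automates.
# The 1.5x overtime multiplier over 40 hours per week follows the federal
# FLSA default; real claims turn on jurisdiction and facts.

def unpaid_wages(hours_worked: float, hourly_rate: float, amount_paid: float) -> dict:
    regular_hours = min(hours_worked, 40.0)
    overtime_hours = max(hours_worked - 40.0, 0.0)
    owed = regular_hours * hourly_rate + overtime_hours * hourly_rate * 1.5
    shortfall = max(owed - amount_paid, 0.0)
    return {
        "owed": round(owed, 2),
        "paid": round(amount_paid, 2),
        "unpaid": round(shortfall, 2),
        # The FLSA often allows "liquidated damages" equal to the unpaid wages.
        "with_liquidated_damages": round(shortfall * 2, 2),
    }

# Example: 50 hours at $15/hour, but paid a flat $600.
print(unpaid_wages(50, 15.0, 600.0))
# owed 825.00 (40 x 15.00 + 10 x 22.50), unpaid 225.00, doubled 450.00
```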
Despite the benefits, ¡Reclamo! is costly to maintain and raises several issues. Nonprofits using legal tech tools need to make sure they don’t run afoul of state bar ethics rules preventing those who are not lawyers from providing legal services. “There is the risk of having this sort of second-tier access to justice for folks,” Kearl says. “But at the moment, we have a crisis of inaccessibility of legal services. The purpose is not to replace lawyers, but to expand the ability of attorneys who are doing very difficult work to reach an even larger number of people and get outsized impact.”
The bigger problem, Kearl says, is how to make sure AI systems don’t merely perpetuate systems of injustice. “We see that happen with facial recognition technology designed by a bunch of white people, and then those systems are unable to respond to faces that are not white with the same level of accuracy. Technological development itself isn’t inherently biased or problematic, but you have to recognize that the people who are building those tools might have biases or problems. And if they’re not willing to interrupt those biases and recognize them in the course of developing these tools, they’re just going to replicate them.”
Likewise, expanding access to legal services doesn’t solve the issue of underfunded regulatory agencies and courts needed to handle complaints. “I think they’re going to run into a bottleneck in the courts where cases take forever to get moving,” Kearl says.
Preparing Students
A new clinic at the Law School will foster partnerships with nonprofits to use emerging technologies to assist underserved communities, advocate before administrative bodies on policy matters related to AI, and help improve the ways people navigate legal systems. Students will also explore developing tools that could empower people from underserved communities to address everyday legal challenges, from landlord-tenant disputes to policing issues.
The Law School also plans to co-host the inaugural Trust and Safety Cyber 9/12 competition with the Atlantic Council on May 13–14. Modeled on the Atlantic Council’s Cyber 9/12 competitions, the event will bring student teams from around the world to Fordham Law to compete in solving simulated tech-based legal problems.
As with many other professions, there’s fear in the legal field that AI legal tools will render young lawyers obsolete and deprive them of vital experience needed to build skills. Nevertheless, Landau sees reasons for optimism. He believes the school’s investment in law and technology will give its students an edge.
“What’s happening in the tech world is highly relevant for the legal profession,” says Landau. “Whether it’s matters involving cybersecurity, intellectual property, or privacy, many students are going to be doing work that involves tech-based legal problems. Lawyers are also going to have access to an array of features to enhance the diligence process, document discovery, and litigation drafting. Training law students to understand the proper and improper use of these tools is a critical part of ensuring they have the right skills to be leaders in the profession.”
While Landau believes students need to lean in and learn the new technology, he says that law schools will continue focusing on fundamental lawyering skills that AI can’t replace. “There are things technologies can’t do—whether it’s analytical skills, deep reflection, demonstrating emotional intelligence with clients, exercising professionalism, leadership skills, empathy, and the list goes on,” Landau says.
Whether AI ushers in a new era of efficiency or poses threats society has never seen, Fordham will do its best to ensure students are ready. “It’s important that we take a proactive approach,” Landau says. “We will continue to ensure that our graduates stand out through their unique preparation for the new demands they face in the profession.”
Faculty Perspectives
Policy Must Keep Pace
“Many applications of AI are likely to make many people’s lives better. The challenges are in establishing durable legal, ethical, and professional standards that cultivate social responsibility. Policymakers must be as sober as ever.

For foreseeable harms: Policymakers have already started imposing clear limitations—how or whether companies may collect, store, and market consumers’ personal information. They have started imposing bright-line restrictions on some AI systems, like facial recognition, that disproportionately harm historically marginalized people. Policymakers are also concerned about overhyped claims.

For unforeseeable uses or harms: The best policymakers can ask is that businesses and engineers employ regular risk assessments in their development and marketing of AI. This is what the White House and European regulators have proposed.”
Olivier Sylvain, Professor of Law and former Senior Advisor to the Chair of the Federal Trade Commission (2021–2023)
The Threats Are Real
Todd Melnick, Clinical Associate Professor and Director of the Maloney Library
A New World for Patents
Professor of Law
Creativity and Copyright
Clinical Associate Professor of Law
Codes, Computing, and Caution
Russell G. Pearce, Professor of Law and Edward & Marilyn Bellet Chair in Legal Ethics, Morality, and Religion