Illustrations by Mike Austin

Rise of the Machines

How AI is Reshaping the Legal Profession

Innovations in AI are transforming life as we know it, revolutionizing everything from the arts to manufacturing to the way business is done across industries. As Fordham Law alumni cautiously navigate this brave new world—embracing AI’s potential while girding against possible harms—faculty are gauging its impact and preparing students for a future that’s already here.

By Jenny Hontz

headshot of Jeffrey Coviello
After two New York lawyers were sanctioned by a federal judge for submitting a legal brief with six fictitious cases generated by OpenAI’s artificial intelligence chatbot ChatGPT, Jeffrey Coviello ’03 decided “out of curiosity” to check out the generative AI technology for himself.

“We typically use well-established online research databases for our legal research,” says the founder of the firm Coviello Weber & Dahill. But he wanted to test what the new large language model AI chatbots could do, so Coviello typed some legal research questions into ChatGPT and Google’s chatbot, Bard. Both returned similarly unsettling results.

“Some of these tools would spit out answers to your research question that looked great,” Coviello says. “I even went so far as to ask it to provide me with the text of a particular legal opinion, and then I checked a well-established research database and found that the case was fake. I asked, ‘Is this legal decision real or did you make it up?’ And the thing would deny it and say, ‘No, this is real.’ Sure enough, it wasn’t.”

Friend or Foe?

Generative AI has been hailed for its ability to rapidly generate humanlike writing and images, but given these AI “hallucinations,” Coviello believes attorneys should treat early-stage AI chatbots with extreme caution. “Obviously, it’s not a replacement for exercising the care and skill that you need to exercise as a lawyer,” he says. “You have to double-check these things. It’s clearly a work in progress.”

“I asked, ‘Is this legal decision real or did you make it up?’

And the thing would deny it and say, ‘No, this is real.’ Sure enough, it wasn’t.”

Jeffrey Coviello ’03

Nevertheless, Coviello and other Fordham Law alumni believe AI—when used properly, with checks in place—has the potential to revolutionize the legal field and help firms like his. “Our approach to technology has been to embrace it,” Coviello says. “As a small firm, we have leveraged technology on the administrative side with our practice management, document management, and billing systems. I think there are a lot of positives and efficiencies that can be gained.”

Those two impulses—to celebrate or to fear the potential of AI—have been playing out on a global scale since OpenAI publicly released ChatGPT in late 2022. Tension between so-called “techno-optimists,” who believe AI will free people from drudge work, and “doomers,” who fear AI will replace jobs and turn against humans, contributed to the temporary ousting of OpenAI CEO Sam Altman and the subsequent naming of a new board in November 2023. Heeding concerns over the potential threats AI poses, policymakers are focusing on regulating the new technology. President Biden issued an executive order on AI safety and security in October 2023, setting a framework to protect national security, consumer privacy, equity, and workers’ rights, and European Union officials agreed in December 2023 to the AI Act, a sweeping new law regulating AI that could become a global standard.

Companies are rapidly developing AI tools to help lawyers, which could expand access to legal services for people who otherwise can’t afford them. Law schools are also rushing to adapt. As soon as ChatGPT went live, Fordham Law instituted new rules and took precautions to preserve academic integrity. But since then, the focus has been on curricular and extracurricular enhancements to ensure graduates are well prepared for the new jobs created by the tech revolution and can contribute to conversations around the responsible use of AI in the future.

headshot of Joseph Landau

“Fordham Law School is a service institution, and we care deeply about ethics and professional responsibility,” says Joseph Landau, incoming dean and associate dean for academic affairs. “So, anytime there’s a concern about the negative potential uses and abuses of tools like generative AI, it’s important to us to be part of the conversation about the implications of that technology. We’re rolling up our sleeves to be part of that discussion and engage in problem-solving.”

Taking a Cautious Approach

headshot of Sharon McCarthy

Much like Coviello, Sharon McCarthy ’89, a partner at the firm Kostelanetz LLP, foresees multiple AI minefields that lawyers should avoid. Typing client information into ChatGPT, she says, would be a breach of confidentiality, and AI may make it easier for bad actors to generate phony contracts, signatures, photos, and other evidence.

“That’s all very frightening,” McCarthy adds. “I think it’s going to create a lot of problems in our justice system if we don’t get it under control and figure out how to determine if things are authentic or not.”

“The purpose is not to replace lawyers, but to expand the ability of attorneys who are doing very difficult work

to reach an even larger number of people and get outsized impact.”
Frank Kearl ’18

Even so, she believes AI has the potential to help firms with laborious tasks like document review. “That’s sort of a soul-crushing part of the work,” McCarthy says. “To the extent that there’s a way to free lawyers up to be thinking and strategizing and conducting real research, [that] is a welcome development, frankly.” But, she adds, “right now, we’re mostly cautioning people.”

When it comes to legal research, McCarthy will continue to rely on Westlaw and LexisNexis “until somebody tells me that I can rely on something else—and it’s been tested and proven, I’ve tried it, and I’m confident in it.”

headshot of Todd Melnick

However, all of the major legal-information vendors, including LexisNexis and Westlaw, are currently building generative AI-based research and writing tools, says Todd Melnick, clinical associate professor and director of the Maloney Library at Fordham Law. They are working to solve the “hallucination issue” with retrieval-augmented generation (RAG), which, in theory, minimizes hallucination by grounding large language models in a set of reliable data—the cases, statutes, regulations, and secondary sources that major legal-information vendors have collected and organized for generations. Unlike ChatGPT, these tools are designed to produce generative content supported by citations to actual, verifiable information. “However, what we have come to call hallucination is part of what makes generative AI possible, so this is a difficult problem,” says Melnick. “We have yet to see to what extent RAG solves it.”
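In rough outline, RAG works by retrieving passages from a trusted corpus and restricting the model to answer from—and cite—only those passages. The sketch below illustrates the idea; the corpus, the keyword-overlap scoring, and the prompt format are all hypothetical, and production legal-research tools use vetted databases and vector embeddings rather than this toy retriever.

```python
# Minimal illustrative sketch of retrieval-augmented generation (RAG).
# All document IDs, texts, and the scoring method are invented for
# illustration; they are not any vendor's actual data or algorithm.

VERIFIED_CORPUS = {
    "doc-001": "Court holds that overtime pay applies to hourly retail workers.",
    "doc-002": "Statute sets the limitation period for wage claims at six years.",
    "doc-003": "Opinion addresses trademark dilution in comparative advertising.",
}

def retrieve(query, corpus, k=2):
    """Rank documents by simple keyword overlap with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(
        corpus.items(),
        key=lambda item: len(query_words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query, corpus):
    """Assemble a prompt that restricts the model to retrieved, citable sources."""
    sources = retrieve(query, corpus)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    return (
        "Answer using ONLY the numbered sources below, citing each claim "
        "by its [id]. If the sources do not answer the question, say so.\n"
        f"{context}\n\nQuestion: {query}"
    )

prompt = build_grounded_prompt(
    "Is overtime pay required for hourly workers?", VERIFIED_CORPUS
)
```

Because the model is told to answer only from retrieved, identifiable sources, a downstream checker can verify every citation against the corpus—which is precisely the safeguard that a freestanding chatbot lacks.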

Even so, he believes lawyers should remain wary. “We have to be careful not to buy into hype,” he says. “These tools look like game changers, but they will need extensive independent testing and evaluation.”

headshot of Daren Orzechowski
Global firm Allen & Overy is diving in headfirst. In 2022, it became the first firm in the nation to deploy GPT-4 AI with a legal tool called Harvey. Daren Orzechowski ’99, partner and global head of technology at the firm, told a Fordham Law symposium on “The New AI” in November 2023 (see “Chatting ChatGPT”) that some 700 unique users across the globe perform around 4,700 queries a day using Harvey. “We’re looking at research, drafting, review, and analysis assisted by generative AI. It’s a really exciting time for the legal industry and beyond, and I’m looking forward to seeing what’s next,” says Orzechowski.

Expanding Access

headshot of Frank Kearl
Attorney Frank Kearl ’18, director of the Justice Partnership Project at the Center for Popular Democracy, is excited by the prospect of AI expanding access to legal services in underrepresented communities. However, he also worries that AI may replicate existing biases and reinforce inequities.

“It has the potential to do a lot of really good things, but it also has the potential to really entrench existing problems that are very, very bad in our current legal system,” he warns.

headshot of Elizabeth Joynes Jordan
Before joining the Center for Popular Democracy, Kearl worked with fellow graduate Elizabeth Joynes Jordan ’10 at Make the Road New York, a nonprofit that provides free legal services to immigrants. Together, they helped develop ¡Reclamo!, an innovative not-for-profit digital legal tool to screen and file wage-theft complaints.

“Workers can input information about hours they’ve worked, their employer, and their pay, and then it will calculate unpaid wages, unpaid overtime, and any sort of damages. Then it will generate a demand letter and a Department of Labor complaint that those workers can use to see if they can get the money back,” Kearl says. “It is a system designed to help workers access justice, either through private negotiation or a formal complaint to an administrative enforcement agency. The point of the tool is to expand access to legal resources.”
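The arithmetic Kearl describes—hours, rate, and pay in; unpaid wages, overtime, and damages out—might look something like the sketch below. The overtime multiplier, the liquidated-damages rule, and all names here are illustrative assumptions, not the actual tool’s logic or any jurisdiction’s law.

```python
# Hypothetical sketch of the wage-claim arithmetic a screening tool
# like ¡Reclamo! might perform for a single workweek. Rates and
# damages rules are illustrative assumptions only.

def weekly_wage_claim(hours_worked, hourly_rate, amount_paid,
                      overtime_threshold=40):
    """Compute wages owed, the unpaid shortfall, and illustrative damages."""
    regular_hours = min(hours_worked, overtime_threshold)
    overtime_hours = max(hours_worked - overtime_threshold, 0)
    # Assumes the common time-and-a-half overtime rate.
    owed = regular_hours * hourly_rate + overtime_hours * hourly_rate * 1.5
    shortfall = max(owed - amount_paid, 0)
    # Some wage-theft statutes allow liquidated damages equal to the shortfall.
    return {"owed": owed, "unpaid": shortfall, "liquidated_damages": shortfall}

claim = weekly_wage_claim(hours_worked=50, hourly_rate=15.0, amount_paid=500.0)
```

A figure like `claim["unpaid"]` plus damages is what would then feed into the generated demand letter or Department of Labor complaint.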

Despite the benefits, ¡Reclamo! is costly to maintain and raises several issues. Nonprofits using legal tech tools need to make sure they don’t run afoul of state bar ethics rules preventing those who are not lawyers from providing legal services. “There is the risk of having this sort of second-tier access to justice for folks,” Kearl says. “But at the moment, we have a crisis of inaccessibility of legal services. The purpose is not to replace lawyers, but to expand the ability of attorneys who are doing very difficult work to reach an even larger number of people and get outsized impact.”

The bigger problem, Kearl says, is how to make sure AI systems don’t merely perpetuate systems of injustice. “We see that happen with facial recognition technology designed by a bunch of white people, and then those systems are unable to respond to faces that are not white with the same level of accuracy. Technological development itself isn’t inherently biased or problematic, but you have to recognize that the people who are building those tools might have biases or problems. And if they’re not willing to interrupt those biases and recognize them in the course of developing these tools, they’re just going to replicate them.”

Likewise, expanding access to legal services doesn’t solve the issue of underfunded regulatory agencies and courts needed to handle complaints. “I think they’re going to run into a bottleneck in the courts where cases take forever to get moving,” Kearl says.

five attorneys, professors, and scientists sit at a table at the head of the symposium room

Chatting ChatGPT

“The New AI: The Legal and Ethical Implications of ChatGPT and Other Emerging Technologies,” a symposium co-sponsored by the Fordham Law Review and Fordham’s Neuroscience and Law Center, brought together attorneys, judges, professors, and scientists in November 2023 to explore the opportunities and risks presented by AI. “Neuroscience, which seeks to study the human brain, is central to developing the next generation of these technologies,” said Professor Deborah Denno, founding director of the Neuroscience and Law Center. “Thanks to the latest neuroimaging devices, modern neuroscience has revealed deep insights into human reasoning and cognition. As AI devices are used for more complex tasks, they stand to benefit from even more nuanced understandings of human reasoning.”

Preparing Students

headshot of Ron Lazebnik
In June 2023, Landau and Clinical Associate Professor of Law Ron Lazebnik co-authored an op-ed article published in the National Law Journal, “Law Schools Must Embrace AI,” calling upon the academic community to leverage AI and “equip our students with the skills necessary to thrive in a technology-driven legal profession.” Fordham Law has moved quickly to make good on that promise in the months since. In January, the school launched the new Law and Technology Clinic, led by Lazebnik, to give students real-world experience with issues at the intersection of law, technology, and social justice. The goal is to ensure that they are proficient with AI and other digital tools and know how to examine their implications, benefits, and risks.
“Training law students to understand the proper and improper use of these tools is a critical part of ensuring they have the
right skills to be leaders in the profession.”
Ron Lazebnik

The clinic will foster partnerships with nonprofits to use emerging technologies to assist underserved communities, advocate before administrative bodies on policy matters related to AI, and help improve ways people navigate legal systems. Students will also explore the development of tools that could empower people from underserved communities to address everyday legal challenges—from landlord-tenant disputes to policing issues.

The Law School also plans to co-host the inaugural Trust and Safety Cyber 9/12 competition with the Atlantic Council on May 13–14. Modeled on the Atlantic Council’s Cyber 9/12 competitions, the event will bring student teams from around the world to Fordham Law to compete and solve simulated tech-based legal problems.

As with many other professions, there’s fear in the legal field that AI legal tools will render young lawyers obsolete and deprive them of vital experience needed to build skills. Nevertheless, Landau sees reasons for optimism. He believes the school’s investment in law and technology will give its students an edge.

“What’s happening in the tech world is highly relevant for the legal profession,” says Landau. “Whether it’s matters involving cybersecurity, intellectual property, or privacy, many students are going to be doing work that involves tech-based legal problems. Lawyers are also going to have access to an array of features to enhance the diligence process, document discovery, and litigation drafting. Training law students to understand the proper and improper use of these tools is a critical part of ensuring they have the right skills to be leaders in the profession.”

While Landau believes students need to lean in and learn the new technology, he says that law schools will continue focusing on fundamental lawyering skills that AI can’t replace. “There are things technologies can’t do—whether it’s analytical skills, deep reflection, demonstrating emotional intelligence with clients, exercising professionalism, leadership skills, empathy, and the list goes on,” Landau says.

Whether AI ushers in a new era of efficiency or poses threats society has never seen, Fordham will do its best to ensure students are ready. “It’s important that we take a proactive approach,” Landau says. “We will continue to ensure that our graduates stand out through their unique preparation for the new demands they face in the profession.”

Faculty Perspectives

two black silhouettes stand on either side of the Capitol building dome, each holding a large right-angle ruler; binary numbers float in the ruler frame around the Capitol dome

Policy Must Keep Pace

“Many applications of AI are likely to make many people’s lives better. The challenges are in establishing durable legal, ethical, and professional standards that cultivate social responsibility. Policymakers must be as sober as ever. For foreseeable harms: Policymakers have already started imposing clear limitations—how or whether companies may collect, store, and market consumers’ personal information. They have started imposing bright-line restrictions on some AI systems, like facial recognition, that disproportionately harm historically marginalized people. Policymakers are also concerned about overhyped claims. For unforeseeable uses or harms: The best policymakers can ask is that businesses and engineers employ regular risk assessments in their development and marketing of AI. This is what the White House and European regulators have proposed.”

headshot of Olivier Sylvain
Olivier Sylvain
Professor of Law and former Senior Advisor to the Chair of the Federal Trade Commission (2021–2023)
three black silhouettes stand beneath an umbrella; above them, a large red anvil falls ahead of a shower of binary numbers

The Threats Are Real

“I am concerned about job loss. I am concerned about the loss of skills like composition, summarization, and analysis as more and more people rely on AI. I am concerned about the loss of meaning when more and more cultural content is created by machines. I am worried about the alignment problem—what happens when artificial intelligence is superior to human intelligence and not aligned with human goals. I am very worried about misinformation and fraud enabled by AI.”
headshot of Todd Melnick
Todd Melnick
Clinical Associate Professor and Director of the Maloney Library
graphic depicting two black silhouettes standing beneath a menorah-shaped, rainbow-colored machine; webs of pixels fall into the machine’s top tubes and coalesce into an icon labeled P in the figures’ hands

A New World for Patents

“AI is making patents more important and relevant. There are currently efforts to design software that can read and synthesize the existing universe of patents—almost 12 million documents in the U.S. alone—and help interested parties make decisions based on that information. It used to be that patents were mostly ignored, simply because there were so many of them that it was hard to find the relevant ones. But now, AI can read them and give you a report that might help companies design around existing patents when launching a product.”
headshot of Janet Freilich
Janet Freilich
Professor of Law
two groups of black silhouettes stand apart; a speech bubble from one group shows the blue C of the copyright symbol, while a bubble from the other shows the red circle of the copyright symbol

Creativity and Copyright

“The implications of AI in intellectual property are still being debated. On the one hand, the use of AI will make it easier for more people to contribute to the creative and technical arts. On the other hand, if contributing to these arts becomes too easy, we may need to reconsider whether there remains a justification for providing people with exclusive rights through copyrights and patents when the contribution is more machine-based than a result of human creativity or ingenuity.”
headshot of Ron Lazebnik
Ron Lazebnik
Clinical Associate Professor of Law
three black silhouettes stand grouped together, one holding a red surfboard, as they all look up at an oncoming wave made of pixels and binary numbers

Codes, Computing, and Caution

“AI-based services are anticipated to reach $80 billion to $100 billion by the end of the decade and are wholly unregulated beyond market forces. Lawyers have for decades used algorithm-based approaches, including legal search and electronic discovery, without understanding how these searches are performed and whether they satisfy the duty of competence, protect clients’ confidentiality, and prevent conflicts. Law students and lawyers do not need to know how to code, although they may want to, and all law students and lawyers need to be able to competently advise clients on which AI-based legal service products are relevant and valuable.”
headshot of Russell G. Pearce
Russell G. Pearce
Professor of Law and Edward & Marilyn Bellet Chair in Legal Ethics, Morality, and Religion