The Long Arm of AI
In the fall issue of the Fordham Law Review, Professor Deborah Denno curated a collection of writings from a symposium that took place earlier this year. The event, “Rise of the Machines: Artificial Intelligence, Robotics, and the Reprogramming of Law,” was hosted by the Fordham Law Review and co-sponsored by the School’s Neuroscience and Law Center. The issue makes a powerful case that AI often carries forward many of the same biases that have dogged us for centuries, while creating a host of new challenges. Here, in a Q&A with Fordham Lawyer, Denno talks tech, law, neuroscience, and why it’s crucial for lawyers to be as nimble as the technology we live with every day.
By Paula Derrow
Fordham Lawyer: Why did you decide to put together a Fordham Law Review issue on tech, AI, and the law?

Deborah Denno: As the founding director of the Neuroscience and Law Center at Fordham Law School, one of the points I wanted to make is that neuroscience—the study of the human brain—is really the foundation for artificial intelligence and robotics. Both AI and robotics try to mimic how human beings think and process information. That connection with human cognition sometimes gets lost when we as a society think about new technologies. The idea of the issue was to try to explain and spotlight that connection and some of the dangers that come with it.

FL: What kind of dangers?

DD: With artificial intelligence and robotics, it’s easy to feel as though human beings aren’t involved. But even as machines are now making decisions based on algorithms and data, human beings are behind them all, with the same prejudices and flaws in their thinking that they’ve always had. For example, one focus of the Fordham Law Review issue was the use of robotics and AI in the financial sector. I think the initial public belief was that because robo-investing and fintech rely so heavily on digital decision-making, there wouldn’t be the same biased assumptions that bankers in traditional brick-and-mortar institutions have always used to gauge a customer’s financial worthiness—whether about a customer’s gender or race or general appearance.

FL: And that’s not the case?

DD: It’s not. Often, when companies begin to create algorithms that determine whom to make a loan to or which customer will be the best bet for paying off a mortgage, they start gathering data about where these prospective clients live and do their banking and where their kids go to school—all proxies meant to help assess whether, say, someone is a good credit risk. And these proxies are built into a system that ends up creating an algorithm that can be deeply biased, as Tom Lin writes in “Artificial Intelligence, Finance, and the Law.”

FL: Then there’s the case of robots and liability. Iria Giuffrida’s “Liability for AI Decision-Making: Some Legal and Ethical Considerations” explores questions like who or what is to blame when a robot makes an error or injures someone.

[Photo: Deborah W. Denno, Arthur A. McGivney Professor of Law; Founding Director, Neuroscience and Law Center]
DD: Absolutely. Suddenly, the legal system is pondering questions like: Can you sue a robot? Traditionally, for any device we buy, whether it be a coffeemaker or a car, there has been a chain of liability, from the manufacturer to the marketer to the salesperson and so on. But when you are starting to talk about who is responsible for creating a robot or a drone or any device that may get out of control or injure someone or invade their privacy, you have to multiply that chain of liability substantially. The increasing opportunities for liability also bring more opportunities for confusion. And our current legal framework of causation—in tort law, contract law, etc.—can become unworkable when we start talking about these complex AI creations.

FL: Does that mean the law has to change? Or that lawyers have to change?

DD: Both the law and individual lawyers have to change. AI and robotics will be altering the entire practice of law. Many of the authors in the Fordham Law Review issue discuss new statutes and case law created just this year, so the material is extremely current. New laws have to be created because our old laws aren’t sufficiently complex or accommodating. But lawyers are also going to have to be better prepared to handle these challenges. We’re starting to see fact patterns and scenarios that people haven’t even considered before.

FL: Do you feel optimistic about the future of AI—or more like we’ve let the proverbial genie out of the bottle?

DD: That divide is a great way to capture the conundrum of new technology. On the one hand, no one would have dreamed that we’d have devices that are so fast and efficient and able to eliminate some of the more burdensome aspects of life, as well as create new jobs.

FL: And on the other hand?

DD: On the other hand, as a society and as lawyers, we really have to think more quickly and anticipate the problems this new technology brings, because we have not been fast enough. What’s happened with Facebook and fake ads, etc., is a good example of not being sufficiently prepared. Also, one of the articles in the Fordham Law Review issue examines Google’s efforts to create a “smart city” in Toronto [“Urbanism Under Google: Lessons from Sidewalk Toronto” by Ellen P. Goodman & Julia Powles]. The complications stemming from Sidewalk Toronto are a good example of government leaders rushing into what looked to be a beneficial high-tech project without really involving the community or thinking about the implications for individual privacy, or much else.

FL: It’s interesting because it seems as if lawyers have to think more quickly, yet also be more deliberative.

DD: Exactly. And if lawyers are able to be both quick and deliberative, they can use these past mistakes to get the legal system moving so that legal actors can anticipate what’s coming down the road in terms of new technology. Right now, the only certainty we have is that AI and robotics are going to become faster and potentially more dangerous. That said, the legal system’s biggest concern should be the human beings who are running the machines because they pose the most serious threat of all.