Deborah Denno: As the founding director of the Neuroscience and Law Center at Fordham Law School, one of the points I wanted to make is that neuroscience, the study of the human brain, is really the foundation for artificial intelligence and robotics. Both AI and robotics try to mimic how human beings think and process information. That connection with human cognition sometimes gets lost when we as a society think about new technologies. The idea behind the issue was to explain and spotlight that connection and some of the dangers that come with it.
FL: What kind of dangers?
DD: One of the biggest is the assumption that algorithms are somehow neutral, that because a machine is making the decision, human bias can't creep in.
FL: And that's not the case?
DD: It’s not. Often, when companies create algorithms to decide whom to lend to or which customers are the best bet to repay a mortgage, they start gathering data about where prospective clients live, where they do their banking, where their kids go to school, and all these other proxies meant to help assess if, say, someone is a good credit risk. Those proxies are built into a system that ends up producing an algorithm that can be incredibly biased, as Tom Lin writes in “Artificial Intelligence, Finance, and the Law.”
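To make the proxy mechanism concrete, here is a minimal Python sketch of how a scoring system built on such proxies can disadvantage applicants without ever using a protected attribute as an input. Every feature name, threshold, and weight here is a hypothetical illustration, not anything drawn from Lin's article:

```python
# A toy "credit risk" scorer, purely illustrative: all lookup sets and
# weights below are invented for this sketch.

# Hypothetical lookup sets standing in for the proxies Denno mentions.
HIGH_DEFAULT_ZIPS = {"10451", "10452"}      # neighborhood proxy
PREMIUM_BANKS = {"First National"}          # banking-relationship proxy
TOP_SCHOOL_DISTRICTS = {"District 2"}       # school proxy

def credit_score(applicant: dict) -> float:
    """Score an applicant from proxy features rather than repayment history."""
    score = 0.0
    # ZIP code correlates with race and income, so weighting it imports
    # historical patterns of disadvantage into the model's output.
    if applicant["zip_code"] in HIGH_DEFAULT_ZIPS:
        score -= 0.4
    # Where someone banks and where their kids go to school predict wealth,
    # not willingness to repay, yet they move the score all the same.
    if applicant["bank"] in PREMIUM_BANKS:
        score += 0.3
    if applicant["school_district"] in TOP_SCHOOL_DISTRICTS:
        score += 0.3
    return score

# Two applicants with identical repayment ability diverge solely because
# of where they live, bank, and send their children to school.
a = {"zip_code": "10451", "bank": "Community CU", "school_district": "District 9"}
b = {"zip_code": "10021", "bank": "First National", "school_district": "District 2"}
print(credit_score(a), credit_score(b))  # -0.4 0.6
```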
FL: Then there’s the case of robots and liability. Iria Giuffrida’s “Liability for AI Decision-Making: Some Legal and Ethical Considerations” explores questions like who or what is to blame when a robot makes an error or injures someone.
FL: Does that mean the law has to change? Or that lawyers have to change?
DD: Both the law and individual lawyers have to change. AI and robotics will alter the entire practice of law. Many of the authors in the Fordham Law Review issue discuss statutes and case law created just this year, so the focus is extremely current. New laws have to be written because our old laws aren’t sufficiently complex or accommodating. But lawyers are also going to have to be better prepared to handle these challenges. We’re starting to see fact patterns and scenarios that people haven’t even considered before.
FL: There seems to be a divide between people who see new technology as a boon and those who see it as a threat.
DD: That divide is a great way to capture the conundrum of new technology. On the one hand, no one would have dreamed that we’d have devices that are so fast and efficient, able to eliminate some of the more burdensome aspects of life as well as create new jobs.
On the other hand, as a society and as lawyers, we really have to think more quickly and anticipate the problems this new technology brings, because we have not been fast enough. What’s happened with Facebook and fake ads is a good example of not being sufficiently prepared. One of the articles in the Fordham Law Review issue examines Google’s effort to create a “smart city” in Toronto [“Urbanism Under Google: Lessons from Sidewalk Toronto” by Ellen P. Goodman & Julia Powles]. The complications stemming from Sidewalk Toronto are a good example of government leaders rushing into what looked to be a beneficial high-tech project without really involving the community or thinking through the implications for individual privacy, or much else.
FL: It’s interesting because it seems as if lawyers have to think more quickly, yet also be more deliberative.
DD: Exactly. And if lawyers are able to be both quick and deliberative, they can use these past mistakes to get the legal system moving so that legal actors can anticipate what’s coming down the road in terms of new technology. Right now, the only certainty we have is that AI and robotics are going to become faster and potentially more dangerous. That said, the legal system’s biggest concern should be the human beings who are running the machines because they pose the most serious threat of all.