Is Siri a Victim of Sexism?
It’s no coincidence that Siri and Alexa are both “female.” At a panel on AI and gender bias at Fordham Law Women’s Second Annual Symposium, a group of real-life women explains why.
By Pamela Kaufman

Visiting Professor Shlomit Yanisky-Ravid always changes the female voice on Siri to male. That’s what she confided during a panel titled “International Perspectives on Artificial Intelligence & Gender Bias” at Fordham Law Women’s Second Annual Symposium, a collaboration with all six of Fordham Law School’s journals. Alexa and Siri may not have been there in the flesh, but they were there in spirit—and they wanted to know why they were stuck in dead-end secretarial jobs straight out of the Eisenhower era. Because the message at the symposium was clear: Despite the great hope that artificial intelligence (AI) would free us from the gender bias that’s all too present in human decision-making, the reality is that we are not quite there yet.

This year’s symposium, held on September 27, brought together lawyers, researchers, writers, activists, and academics from around the world to address the gender impact of public policy, with panels on topics including environmental justice and the intersection of gender, race, and reproductive rights.

Yanisky-Ravid, head of the IP-AI & Blockchain Project at the Fordham Law Center on Law and Information Policy (CLIP), introduced her panel by describing Amazon’s recent cancellation of a multiyear initiative to develop an AI-powered recruiting tool. Amazon reportedly scuttled the project after discovering that, despite efforts to eliminate bias, the algorithm was more likely to reject applicants for technical roles if their résumés included the word “women’s” (as in “women’s chess-club champion”).
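
How does a model pick up that kind of bias when no one programs it in? A toy sketch (with invented résumés and labels, and a generic off-the-shelf classifier rather than anything Amazon actually built) shows the mechanism: train on historically skewed hiring decisions, and the model learns a negative weight for the word “women” entirely on its own.

```python
# Hypothetical illustration only -- invented data, not Amazon's system.
# A text classifier trained on biased historical hiring decisions learns
# to penalize the token "women" without any explicit gender feature.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club champion, python developer",          # historically hired
    "hackathon winner, java developer",               # historically hired
    "women's chess club champion, python developer",  # historically rejected
    "women's coding society lead, java developer",    # historically rejected
]
hired = [1, 1, 0, 0]  # biased past outcomes, used as training labels

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The default tokenizer splits "women's" into "women" (the lone "s" is
# dropped), so we can read off the learned weight for that token directly.
weight = model.coef_[0][vectorizer.vocabulary_["women"]]
print(f"learned weight for 'women': {weight:.3f}")  # negative
```

Scrubbing the telltale word after the fact doesn’t cure the model, either: correlated proxies elsewhere in the text can carry the same signal, which is one reason efforts to eliminate such bias are so difficult.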

“It’s important to note that machine learning isn’t new—computer science began in the 1950s,” said panelist Meredith Broussard, an associate professor at the Arthur L. Carter Journalism Institute of New York University and author of Artificial Unintelligence: How Computers Misunderstand the World. “Modern machine-learning systems all incorporate social assumptions from the 1950s that were common among the white men in power in the scientific community at that time. These assumptions are perpetuated inside today’s machine-learning models because of the way knowledge works in math and computational sciences. It’s cumulative: Things get reified, and then you build on top of them.”

Gender bias is far from the only issue. “You can’t really separate gender from race or class or ability [involving bias against individuals with disabilities], and there are also certain gender biases that affect not only women but also trans people,” said panelist Sarah Myers West, Ph.D., researcher at the AI Now Institute at New York University. “You need to think of these forms of discrimination as intersectional and compounding.”

For lawyers, AI discrimination can be tricky to prove, added panelist Katja Langenbucher, a visiting faculty member at Fordham Law. “The question is, how do you measure outcomes?” she said. If AI tools lead a bank to deny credit to a woman, for example, you could say, sure, it’s discriminatory. “But that’s not the end of the story for a lawyer, because the lawyer is going to control for variables.” If factors beyond gender come into play in the bank’s decision, she explained, a lawyer will have a harder time proving unlawful discrimination.
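
Her point can be made concrete with a small statistical sketch (the loan data below is simulated for illustration): raw approval rates differ sharply by gender, but once a legitimate underwriting variable like income enters the model, the gender effect can shrink to nothing, and with it the legal case.

```python
# Simulated lending data: the gender gap in approvals disappears once
# income is controlled for, because gender correlates with income here
# but has no direct effect on the decision.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
female = rng.integers(0, 2, n)               # 1 = female applicant
income = rng.normal(50 - 5 * female, 10, n)  # income correlates with gender
approved = (income + rng.normal(0, 5, n) > 50).astype(int)

# Naive comparison: women are approved less often -- looks discriminatory.
print(approved[female == 1].mean(), approved[female == 0].mean())

# Controlled comparison: with income in the model, the gender coefficient
# shrinks toward zero and loses statistical significance.
X = sm.add_constant(np.column_stack([female, income]))
result = sm.Logit(approved, X).fit(disp=0)
print(result.params)   # [const, female, income]
print(result.pvalues)
```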

“We can push back against ‘techno chauvinism,’ the idea that technology is superior to other solutions.” — Meredith Broussard

So how do we fix our flawed technology? “We can burn it all down, we can destroy the patriarchy—those are my go-tos,” Broussard deadpanned. “We can also push back against what I call techno chauvinism, that idea that technology is superior to other solutions. Another thing we can do is assume that discrimination is the default in automated systems. Starting with that frame of reference allows you to look for the blind spots.”
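
Treating discrimination as the default means auditing before trusting. A minimal sketch of such a check (the decisions, group labels, and threshold below are illustrative; the threshold echoes the EEOC’s “four-fifths” rule of thumb) compares favorable-outcome rates across groups:

```python
# A simple disparate-impact audit, in the spirit of treating discrimination
# as the default until a system demonstrates otherwise.
def disparate_impact(decisions, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    Values below ~0.8 (the "four-fifths" rule of thumb) flag the system
    for closer scrutiny.
    """
    def rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    return rate(protected) / rate(reference)

# Toy decisions from some automated screener (1 = favorable outcome).
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

ratio = disparate_impact(decisions, groups, protected="f", reference="m")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.25 here: a red flag
```

A ratio this far below 0.8 would not prove unlawful discrimination on its own, but in Broussard’s framing it is exactly the kind of blind spot the default assumption is meant to surface.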

The problem is big and complex, the panelists agreed, and there are no easy solutions. “We really need to come at it from a critical perspective, from a mathematics perspective, from a legal perspective,” Broussard exhorted. “We need a full-court press.”

From left to right: Visiting faculty member Katja Langenbucher; Meredith Broussard, associate professor, New York University; Sarah Myers West, Ph.D., researcher, the AI Now Institute; and Visiting Professor Shlomit Yanisky-Ravid, Ph.D., head of the IP-AI & Blockchain Project at the Fordham Law CLIP
Photo by Dana Maxson