A Haverford professor’s search for bias in algorithms
By Audra Devoto
“I like this algorithm; it’s clever,” Sorelle Friedler said to her Haverford College class, tilting her head back and admiring what looked to be a tangle of dots on the screen. The algorithm she was referring to could discern which two dots out of millions were closest to each other in the blink of an eye—clever indeed.
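The description matches the classic closest-pair problem, which has a well-known divide-and-conquer solution that avoids comparing every pair of points. The Python sketch below is a minimal textbook illustration of that standard technique, not necessarily the algorithm from Friedler’s class.

```python
# A minimal sketch of the classic divide-and-conquer closest-pair
# algorithm: find the nearest two points without checking all pairs.
from math import dist, inf

def closest_pair(points):
    """Return the smallest distance between any two points in the list."""
    return _closest(sorted(points))     # sort once by x-coordinate

def _closest(pts):
    n = len(pts)
    if n <= 3:                          # base case: brute-force tiny sets
        return min(
            (dist(pts[i], pts[j]) for i in range(n) for j in range(i + 1, n)),
            default=inf,
        )
    mid = n // 2
    mid_x = pts[mid][0]
    best = min(_closest(pts[:mid]), _closest(pts[mid:]))
    # Only points within `best` of the dividing line can beat `best`;
    # sort that strip by y and compare each point to its next 7 neighbors.
    strip = sorted((p for p in pts if abs(p[0] - mid_x) < best),
                   key=lambda p: p[1])
    for i, p in enumerate(strip):
        for q in strip[i + 1:i + 8]:
            best = min(best, dist(p, q))
    return best

print(closest_pair([(0, 0), (3, 4), (1, 1), (7, 7)]))  # 1.4142..., (0,0)-(1,1)
```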
Algorithms, or computer programs designed to solve problems, are gradually becoming so sophisticated that describing them with human qualities is not unwarranted. Now used to make decisions ranging from the advertisements we see to more sinister outcomes, such as the sentencing decision a judge might hand down in court, algorithms are quietly and constantly affecting our daily lives.
Friedler is well aware of the power of algorithms. She has deliberately woven them into her life by making them the subject of her academic research.
In an interview sandwiched between classes, labs, and meetings with thesis students, Friedler talked easily and with obvious ardor about her research on algorithms.
After graduating from Swarthmore College, she attended graduate school at the University of Maryland, where she studied the algorithms that can be used to describe objects in motion. Then she left academia seeking a different kind of challenge: Google.
“It was a lot of fun to get to see inside the belly of the beast for a while,” she recalled almost wistfully. But she said she doesn’t miss the corporate atmosphere.
“It doesn’t give the leeway necessarily to work on what you are interested in,” she said, “or to go off on a tangent that might not be related to the task at hand.”
Friedler worked in a semi-secret division of Google called simply “X,” on a project aiming to provide universal internet access through weather balloons. If it sounds crazy, well, that’s kind of the point.
“[X’s] goal is really to tackle moonshot problems,” Friedler said. But ultimately, she said, “I liked the autonomy of being a researcher in an academic environment.”
Friedler’s current work is a reflection of something that she cares about deeply: discrimination and bias.
“I have a longstanding personal interest in discrimination,” Friedler told me. “Certainly discrimination within tech, but more broadly in society.”
It turns out that algorithms, just like the rest of us, can be exceptionally and unintentionally biased in their decisions—a problem Friedler is trying to solve.
The reasons for algorithmic bias are many and complex, but at root the problem is simple: algorithms typically rely on past data to make decisions, data that was collected and generated by biased humans, and so they perpetuate whatever biases that data contains.
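To see the mechanism in miniature, imagine a “model” that simply learns the majority historical outcome for each group of applicants. Trained on invented records in which one group was favored, it faithfully automates that old preference. Everything in this Python sketch, the groups, the data, the numbers, is hypothetical.

```python
# Toy illustration (hypothetical data): a naive model trained on biased
# historical decisions learns to reproduce exactly that bias.
from collections import defaultdict

# Invented past hiring records: (group, was_hired). The historical
# decision-makers favored group "A".
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 30 + [("B", False)] * 70)

def train(records):
    """'Learn' the majority historical outcome for each group."""
    counts = defaultdict(lambda: [0, 0])   # group -> [hired, rejected]
    for group, hired in records:
        counts[group][0 if hired else 1] += 1
    return {g: hired > rejected for g, (hired, rejected) in counts.items()}

model = train(history)
print(model)   # {'A': True, 'B': False} -- the old bias, now automated
```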
“I think it’s one of those things where if you understand data mining and machine learning, and you understand how biases are replicated in society, the fact of the problem is sort of obvious,” Friedler said. “You don’t spend very long at all talking about whether there might be a problem but more immediately jump to what you can do about it algorithmically.”
Friedler came to Haverford in 2014 as an assistant professor of computer science. For her, the suburban Philadelphia college is the ideal place to conduct her research.
“Especially with this research, it’s really useful to have an understanding of the societal impacts and the societal context and I think that Haverford students, because of the liberal arts environment, are far more likely to have that,” she said. “I think that’s a real strength of the college and doing the research here.”
Although Friedler describes her current research project as a happy accident, born of a serendipitous lunch with a fellow researcher who shared her interests, her path to Haverford was anything but.
“Haverford is not actually an unusual atmosphere for me,” Friedler explained. “It’s an atmosphere I am very comfortable in and liked a lot in college.”
“I always sort of suspected I wanted to come back and get to do some teaching,” she added.
As both an undergraduate and a graduate student, Friedler took classes on college education theory, something she recommends for anyone considering a career in higher education.
Ultimately, research and teaching are closely related for Friedler. She structures her classes as a series of questions rather than answers, an approach she says she picked up from her education theory courses. With the help of thesis students and other collaborators, she has used her research to produce an algorithm that can check other algorithms for bias, a sort of algorithmic police force. Although her algorithm is not yet widely implemented, she has been presenting her work around the country and hopes it will be adopted by companies that use algorithms for decision making.
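Friedler’s published auditing work is more sophisticated than any snippet can convey, but the flavor of such a bias check can be sketched: given only black-box access to a decision-making model, compare its rate of favorable outcomes across groups. The function names and data layout below are hypothetical; the 0.8 cutoff is the real “four-fifths rule” from U.S. employment-discrimination guidelines.

```python
def disparate_impact(decide, applicants):
    """Ratio of the lowest group's positive-decision rate to the highest.

    `decide` is the black-box model under audit (applicant -> bool);
    `applicants` is a list of dicts with a 'group' key. Both interfaces
    are hypothetical. Assumes the model approves at least some applicants.
    """
    rates = {}
    for group in {a["group"] for a in applicants}:
        members = [a for a in applicants if a["group"] == group]
        rates[group] = sum(decide(a) for a in members) / len(members)
    return min(rates.values()) / max(rates.values())   # 1.0 means parity

# Example with an invented, deliberately biased model: a lower bar
# for group "A" than for group "B".
applicants = ([{"group": "A", "score": s} for s in range(10)]
              + [{"group": "B", "score": s} for s in range(10)])

def toy_model(a):
    return a["score"] >= (3 if a["group"] == "A" else 6)

# Group A is approved at 0.7, group B at 0.4; the ratio 0.4 / 0.7 is
# about 0.57, below the 0.8 four-fifths threshold, so an audit would flag it.
print(disparate_impact(toy_model, applicants))
```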
Recently, interest in algorithmic bias among both the public and researchers has exploded, something Friedler views as a wonderful thing.
There is a potential downside, however.
“I’ve spent a lot of time talking to reporters,” she said, “which is sort of a bizarre thing I never thought I’d do.”