HMC CS Researchers Publish Chapter on Algorithmic Biases


Harvey Mudd College computer science professor George Montañez and his students Daniel Bashir ’20 and Julius Lauw ’20 have published the chapter “Trading Bias for Expressivity in Artificial Learning” in ICAART 2020: Agents and Artificial Intelligence, part of the Lecture Notes in Computer Science book series (Springer, Cham).

The HMC researchers’ chapter, which examines how bias relates to an algorithm’s flexibility (expressivity), is an expanded and completely rewritten version of the lab’s award-winning 2020 paper from the International Conference on Agents and Artificial Intelligence (ICAART).

Montañez, Bashir and Lauw expanded their original paper by beginning with a definition of the term “bias.”

“The word ‘bias’ is a loaded term in machine learning and statistics, with at least four different uses,” says Montañez. “We added a section differentiating the meanings of the term and showing how our particular notion of bias, ‘algorithmic bias,’ is not equivalent to the prejudicial biases we rightly try to eliminate in data science. While all prejudicial biases create algorithmic bias, not all algorithmic biases are prejudicial.”

The authors also took advantage of having more time with their research to improve their presentation of the paper’s core ideas. “Often when you present a paper, in having to communicate the ideas simply to an audience, you stumble upon a much better way of presenting your work,” Montañez says.

“Although all of the theorems and definitions are equivalent between the original paper and book chapter,” he explains, “the extended version in the book introduces all of the key concepts around a geometric idea called inductive orientation, which is basically a direction an algorithm ‘points towards’ in high-dimensional space. The degree to which it points somewhere away from the baseline direction is the degree to which it can be algorithmically biased—we’re basically measuring how well-aligned an algorithm is with regard to a particular situation we care about. Furthermore, pointing towards one direction means pointing away from other directions, so we see that no algorithm can be well-aligned with all situations. This geometric idea of alignment paints a better intuitive picture of what we mean by algorithmic biases.”
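The geometric picture Montañez describes can be illustrated with a small sketch. This is not the chapter’s actual formalism, only an assumed toy model: the hypothetical helpers `inductive_orientation` and `algorithmic_bias` represent an algorithm’s orientation as a probability vector over possible outcomes and measure bias toward a target set as the probability mass placed there beyond what a uniform baseline would place.

```python
import numpy as np

def inductive_orientation(weights):
    """Toy model (assumption, not the authors' definition): represent an
    algorithm's inductive orientation as a normalized probability vector
    over possible outcomes -- the 'direction' it points in outcome space."""
    p = np.asarray(weights, dtype=float)
    return p / p.sum()

def algorithmic_bias(orientation, target_mask):
    """Bias toward a target set: mass the algorithm places on the target
    minus the mass a uniform (baseline) orientation would place there."""
    baseline_mass = target_mask.sum() / orientation.size
    return float(orientation[target_mask].sum() - baseline_mass)

# An algorithm strongly oriented toward outcome 0, among four outcomes.
ori = inductive_orientation([7, 1, 1, 1])

toward_a = np.array([True, False, False, False])   # target set A
toward_b = np.array([False, True, False, False])   # disjoint target set B

print(algorithmic_bias(ori, toward_a))  # positive: aligned with A
print(algorithmic_bias(ori, toward_b))  # negative: pointed away from B
```

Because the orientation vector’s components must sum to one, raising the mass on one target necessarily lowers it elsewhere, which mirrors the quoted observation that pointing toward one direction means pointing away from others, so no algorithm can be well-aligned with every situation at once.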

The original paper, “The Bias-Expressivity Trade-off,” co-authored by Montañez, Lauw, Dominique Macias ’19, Akshay Trikha ’21 and Julia Vendemiatti ’21, won the Best Paper award at ICAART 2020.

“This chapter stands essentially as a new paper, which builds on the content of the original conference publication, but improves it in many ways,” says Montañez. “We were also fortunate to have Daniel Bashir join us as a co-author; he was responsible for many of the improvements in the new work, including the new section on different biases in artificial learning.”

This publication marks Bashir’s third and Lauw’s fifth with Montañez’s AMISTAD Lab. In 2020, Lauw received a student researcher award from the Computer Science Department. Bashir was a 2020 CRA Outstanding Undergraduate Researcher honorable mention.

“The chapter will likely be used by machine learning and AI practitioners who are interested in new ways of looking at and measuring biases in artificial learning systems,” says Montañez. “Hopefully it inspires greater transparency concerning the biases present in all learning algorithms.”