The growing power of artificial intelligence (AI), and especially machine learning (ML), raises ethical and policy issues that are not always apparent. The fact that these issues are not immediately obvious does not make them any less important to address. Solon Barocas, assistant professor in the Department of Information Science at Cornell, wants to shine a light on some of the ethical concerns that arise from human reliance on AI. He is building a research program that addresses questions of privacy, fairness, accountability, and transparency in ML.
“The pattern recognition capabilities of ML can be used to make all kinds of predictions about people,” explains Barocas. “What are the privacy implications of these predictions? People are good at navigating the human world, where we can, for the most part, understand the inferences people are making about us. But we do not understand the inferences ML makes, so we can’t manage them.”
Barocas’s path to the Department of Information Science at Cornell started when he was thirteen and his family got its first computer. “That was life-changing for me,” says Barocas. “I got sucked right into the world of computers and graphic design and media production. I also got interested in some of the social questions introduced by new technologies.” Barocas earned his undergraduate degree from Brown University, where he studied both International Relations and Modern Culture and Media.
Barocas received his MSc in International Relations from the London School of Economics (LSE) and went to work at the Russell Sage Foundation in New York City, where the value of fundamental social science research was driven home for him.
In order to decide where to pursue doctoral studies, Barocas asked himself a question: “If I had complete freedom to choose how I spend my time, what would I do?” The answer he gave himself was, “I would probably spend much of it reading about interesting policy questions that grow out of our uses of technology.” Barocas found a program at New York University (NYU) that fit his interests exactly: the Department of Media, Culture, and Communication, a place that valued asking social and ethical questions about technology.
At NYU, Barocas intended to deepen and broaden the work he did on his Master’s thesis at LSE, where he had reflected critically on the use of data mining in counterterrorism. But in his first year at NYU he realized he didn’t really understand the technical foundations of data mining. So he took classes in data mining and machine learning, which transformed his course of study. “I hit on the style of work I like to do while I was at NYU,” says Barocas. “I like to understand deeply how a technology works and then tease out the ethical issues that follow from that. I love collaborating with Computer Science people. That makes Info Science at Cornell the perfect place for me.”
Barocas joined the faculty of Cornell in the summer of 2017 partly on the strength of the interactions he had with faculty during the interview process. “I had a remarkable series of deep, thought-provoking conversations,” says Barocas during a recent conversation in the Gates Hall coffee shop. “Faculty here are engaged with and fluent in each other’s work, despite coming from very different backgrounds. I have an enormous amount I can learn from my colleagues here.”
At Cornell, Barocas plans to continue his exploration of ethical and policy issues in artificial intelligence. In addition to questioning how people can manage the effects of inferences made by machine learning algorithms, Barocas is tackling issues of bias and inequality in the ways people train and use machine learning algorithms. “So many of the datasets we use to train ML programs have their own biases built right in,” says Barocas. “So I am curious about how existing laws and policies around discrimination apply to ML decision-making.”
Barocas is a bit skeptical of the claims people often make about the ability of ML to make unbiased decisions. With his research, he hopes to bring some needed caution to the enthusiasm many decision-makers are showing for ML.
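To make the concern about baked-in bias concrete, the following is a minimal, purely hypothetical sketch, not drawn from Barocas’s own work: the groups, outcomes, and numbers are invented. It only shows how skew in historical decision data can be measured, and why a model trained to reproduce those decisions would inherit that skew.

```python
# Hypothetical sketch (not from Barocas's research): historical decisions
# that a model might be trained to imitate. If past decisions were skewed,
# a model that faithfully reproduces them inherits that skew.

from collections import defaultdict

# Each record: (group label, whether the historical decision was favorable)
historical_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def favorable_rate_by_group(records):
    """Fraction of favorable outcomes per group in the historical data."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += outcome
    return {group: favorable[group] / totals[group] for group in totals}

rates = favorable_rate_by_group(historical_decisions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A gap like this in the labels is exactly the kind of bias that gets
# "built right in" when the data are treated as ground truth for training.
```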
Two recent additions to Barocas’s research are:
· a project examining how the growing collection of data on people’s movements and actions in brick-and-mortar retail stores affects corporate decision-making about employees, and
· a study of how the collection of Big Data in the agricultural sector affects the privacy and livelihood of farmers.