Can Eliud Kipchoge, the world’s fastest marathoner, be trusted? IBM researcher Kush Varshney ’04 seemed to veer off topic as he posed this question to viewers at the beginning of a recent seminar on “What’s Next in AI.” In support of the assumed affirmative answer, Varshney noted that Kipchoge is highly competent and extremely reliable, having won nearly every marathon he has entered. Kipchoge is also, Varshney noted, very open about his motivation and methods for running at such a high level, and notably selfless in his efforts to support his community.
These attributes are the same qualities that Varshney, the founding codirector of IBM Science for Social Good, an initiative focused on artificial intelligence for social impact, is determined to build into machine learning systems that power computers, software, and robots.
"These attributes map precisely to what we want out of AI systems," he said.
Competence maps to accuracy, Varshney explained, and this is what most researchers have focused on to date. But he believes that trustworthiness is about more than eliminating errors and bias. Other qualities are just as — if not more — important.
For example, reliability maps to distributional robustness and fairness, which is what allows systems to work well in different conditions. Openness maps to explainability, uncertainty quantification, transparency, and value alignment, which enables communication and understanding between machines and their users. The fourth attribute — selflessness — is the one that may have the most impact as AI development proceeds.
"The fourth attribute is about using AI for social good and social impact applications," Varshney said. "And not just that but empowering all people around the world to use AI to meet their own goals."
Varshney’s work to infuse AI with the same competence, reliability, openness, and selflessness as Eliud Kipchoge is — as the saying goes — more of a marathon than a sprint. And in many ways, it’s a journey he began as an undergraduate at Cornell.
Inspiration Sourced in Cornell ECE
Varshney joined Cornell Engineering’s School of Electrical and Computer Engineering (ECE) at the dawn of the new millennium, a time when technologies like machine learning, artificial intelligence, and the internet itself were still emerging. “The first time I really had high-speed internet was in the dorm,” Varshney said. He was drawn to the newness of what was happening in the field of ECE.
“Bridges and tunnels have been around for hundreds of years,” he said, “whereas what's happening in ECE is strictly stuff that has not been done before.”
Varshney worked on research projects with the late Professor Thomas Parks and with Professor Sheila Hemami, who served as his advisor. He spent one summer collaborating with Lockheed in Owego, NY, working with machines designed to analyze images of postal mail, such as envelopes and magazines, and to distinguish the address from the rest of the image.
This work on image segmentation and analysis did not involve machine learning at the time, but it started Varshney on a path that led there. “When you're looking at an image, you’re distinguishing the foreground from the background, two dimensions,” Varshney explained. “But it turns out that machine learning is actually also trying to partition space.”
Part of Varshney’s Ph.D. thesis at MIT took image segmentation approaches and applied them to a more generic machine learning problem. “So instead of doing foreground and background, you’re analyzing two classes that are just abstract. So that was the connection.”
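The connection is easy to see in miniature. The sketch below is purely illustrative, using synthetic data rather than anything from the thesis itself: a fixed threshold partitions pixel-intensity space into foreground and background, while a trained classifier partitions an abstract feature space into two classes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Segmentation as a two-class partition: a threshold splits pixel-intensity
# space into "foreground" and "background".
image = np.random.rand(64, 64)              # stand-in for a grayscale image
foreground_mask = image > 0.5               # every pixel lands in one of two regions

# Generic binary classification is the same move in an abstract feature space:
# a learned boundary partitions the space into two class regions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # abstract two-dimensional features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # two abstract classes
clf = LogisticRegression().fit(X, y)
predicted_regions = clf.predict(X)          # partition of feature space
```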
Varshney started thinking about how bias enters these systems around the time that NBA referees were being accused of calling fouls more frequently on Black basketball players. “I started thinking that detection theory is all about the same tasks that a basketball referee is doing.”
Humans tend to categorize people into groups, such as groups of high foulers and low foulers. But referees can’t possibly remember every player’s actual probability of committing a foul, so those kinds of groupings are prone to error. “If you put this sort of constraint into an optimal decision-making formulation, you end up with unfairness,” Varshney said. “And it does predict the sort of bias that was observed.”
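In textbook detection-theory terms, and only as a simplified sketch rather than the exact formulation from Varshney’s research, a referee deciding between “foul” and “no foul” should compare the likelihood ratio of what they observe to a threshold set by the player’s true prior probability of fouling:

```latex
\frac{f(x \mid H_1)}{f(x \mid H_0)}
\;\underset{\text{no foul}}{\overset{\text{foul}}{\gtrless}}\;
\frac{(1 - p)\, c_{10}}{p \, c_{01}}
```

Here $x$ is the observed play, $H_1$ and $H_0$ are the foul and no-foul hypotheses, $p$ is the player’s true prior probability of committing a foul, $c_{10}$ is the cost of calling a foul that did not occur, and $c_{01}$ is the cost of missing one. Replacing each player’s $p$ with a coarse group value $\hat{p}$ (“high fouler” or “low fouler”) misplaces the threshold, so the expected cost rises above the Bayes-optimal value, and the extra errors fall disproportionately on players whose true prior is farthest from their group’s representative value, which is the kind of systematic bias Varshney describes.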
Varshney set out to better understand how a mismatch between the true prior probability and an erroneously assumed one degrades decision-making performance. He recalled that many of the examples his Cornell professors used had some sort of societal aspect to them, and that understanding the impact of your work was part of the ethos of Cornell Engineering.
“Engineering isn't just a math problem,” he said. “In anything we do, we should be looking at the end users who will be affected and taking more of a participatory design sort of approach. We do engineering to come up with solutions to help people, but we need to hear from those people.”
Building the Future
When Varshney came to IBM Research in 2010, he found that the group he joined was well ahead of the game when it came to machine learning, which was still a very new technology.
“They were applying machine learning to things like human capital management and health care,” he recalled. “It was very enlightening and eye-opening for me that machine learning could be used for many human-centric applications.”
Despite offers from other companies, including defense-oriented research labs, Varshney chose IBM because it was cultivating this intersection of societal concerns with machine learning. “There was always this idea that there is a human decision-maker that we're supporting. And so, we always had to make our models understandable by people, and that led us to do more research on how to make those models that humans can understand more accurate.”
Soon Varshney began volunteering with a group called Data Without Borders, which later became the digital activism organization DataKind. With support from his manager at IBM, Varshney began work on projects through DataKind designed to connect practicing data scientists with nonprofit organizations looking to improve and optimize their services.
“Once I finished a couple of projects,” Varshney said, “I invited my manager to come and see our final presentations, and she got really excited as well. So, in 2015, we sat down and said, let's start something like this at IBM.”
IBM researchers immediately responded with enthusiasm. They created a fellowship program, brought on summer interns, and over the past six years have conducted at least 35 different projects with various nonprofit organizations as part of the IBM Science for Social Good initiative within IBM Research.
Varshney and his team created a comprehensive open-source toolkit called AI Fairness 360, which helps detect and reduce unwanted bias in data sets and machine-learning models. He has also written a book, “Trustworthy Machine Learning,” to be published next year, which outlines the four principles of trustworthiness that Varshney believes are the key to developing AI systems optimized for fairness.
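AI Fairness 360 is distributed as the Python package aif360. The following is a minimal sketch of the detect-then-mitigate workflow it supports, assuming the Adult census dataset used in its tutorials (whose raw data files must be downloaded separately into the package’s data directory) and “sex” as the protected attribute; it is not a complete or definitive recipe.

```python
# pip install aif360
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Groups are defined over the protected attribute 'sex'
# (1 is the privileged value in this dataset's encoding).
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

data = AdultDataset()

# Detect bias: statistical parity difference is near 0 for a balanced dataset.
metric = BinaryLabelDatasetMetric(
    data, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Before reweighing:", metric.statistical_parity_difference())

# Reduce bias: reweigh examples so favorable outcomes are balanced across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
data_transf = rw.fit_transform(data)

metric_transf = BinaryLabelDatasetMetric(
    data_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("After reweighing:", metric_transf.statistical_parity_difference())
```

The statistical parity difference measures the gap in favorable-outcome rates between the two groups; reweighing adjusts instance weights so that gap shrinks toward zero before any model is trained on the data.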
Looking back, Varshney sees the origins of engineering for social impact in his experiences at Cornell. “While I was there, this group called Engineers Without Borders got launched, so I could see what was happening,” he said. “Now what we're working towards is bringing in people with diverse lived experiences to help inform what problems get worked on and how they're judged and evaluated.”
Words from former Cornell President Jeffrey S. Lehman’s 2004 commencement address still resonate with Varshney: “May you enjoy the special pleasures of profession; the added satisfaction of knowing that your efforts promote a larger public good.”
“I think that's a really good guide for some of the things that I've ended up doing,” Varshney said.