Stanford’s Fei-Fei Li is pushing the tech industry to build humanity into AI models

By Mark Sullivan

Years before generative AI became the buzziest term in tech, a smaller wave of interest hit around 2012, when the industry was abuzz over AI image classifiers (that is, models that could recognize and label images). That year, a neural network called AlexNet beat existing methods of classifying images by a wide margin. AlexNet and its successors were possible only because someone had built a massive image dataset to teach them: ImageNet, a project started in 2006 by Fei-Fei Li, then an assistant professor at Princeton. Suddenly everyone was talking about “deep learning.”

Li is now considered one of the brightest minds in AI, mentioned in the same breath as Geoffrey Hinton, Yann LeCun, and the like. After developing ImageNet at Princeton, Li moved to Stanford University in 2009, where she later became director of the school’s Artificial Intelligence Lab. In January 2017, she took a 22-month sabbatical to serve as chief AI/ML scientist at Google Cloud, returning to Stanford in fall 2018. She went on to become codirector of the school’s influential Institute for Human-Centered Artificial Intelligence (HAI), a role she holds to this day.

“Deep from the bottom of my heart, I’m a scientist slash technologist,” says Li. “So it is still building the technology I love, especially now with my students, that really is the source of my energy. I’m still so curious about AI; it’s such an awesome field and there’s so many unanswered questions.”  

As its name suggests, HAI works closely with the tech industry to champion values of openness, transparency, safety, and explainability in the creation of AI. Making such considerations central (and early) features of the AI research and development process has grown more urgent as the technology has advanced rapidly over the past two years and its short- and long-term risks have become better understood.

“The concerns are real. . . . So I’m not delusional about this at all,” she says. “It’s a very, very powerful technology, just like the inflection points that humanity has experienced in our civilization’s history, whether it’s fire or electricity or the PC—this is that scale and depth.”

Yet Li is in no sense an AI doomer. She doesn’t advocate halting research on large models. Rather, her cause is keeping humans, and human values, at the center of the process: an ideal that doesn’t always take root in the profit-hungry world of Silicon Valley. “I don’t know where we collectively are going to come out,” she says, “but I think it’s so important to focus our energy on human-centered AI.”


This story is part of AI 20, our monthlong series of profiles spotlighting the most influential people building, designing, regulating, and litigating AI today.
