An artificial intelligence scholar urges technologists to embrace humility

By Fei-Fei Li

January 15, 2021

I’m happy to report my mother continues to persevere, but her resilience hasn’t been the only silver lining to this ordeal. Years spent in the company of nurses and doctors—unfailingly committed, but perpetually overworked and often sleep deprived—convinced me that the power of AI could radically elevate the way care is delivered. Intelligent sensors could keep tireless watch over patients, automate time-consuming tasks like charting and transcription, and identify lapses in safety protocols as they happen. After all, if AI can safely guide cars along freeways at 70 miles per hour, I wondered, why can’t it help caregivers keep up with the chaos of the healthcare environment?

At the heart of this idea was an obstacle, however. I was proposing research that extended beyond the limits of computer science and into an entirely different field, with decades of literature and traditions stretching back generations. It was clear I needed a collaborator—not just an authority in healthcare, but one with the patience and open-mindedness to help an outsider bring something new to the table. For the first time in my career, success would depend on more than the merits of my work; it would require the humility of researchers like me to recognize the boundaries of our knowledge, and the graciousness of experts in another discipline to help us overcome them.

Thankfully, luck was on my side. In 2012, a colleague introduced me to Arnie Milstein, a Stanford Medical School professor and member of the National Academy of Medicine with an interest in both the policy and the technology that drives healthcare. Our first real conversation on the topic turned a casual lunch at a Vietnamese pho restaurant into an impromptu, hours-long brainstorming session. The exuberance of that day never wore off, as we convened a coalition of researchers to explore the automated tracking of surgical tools during operations, privacy-preserving monitors that ensure the safety of high-risk patients and vulnerable seniors, and networks of smart sensors that help hospital staff maintain hand hygiene throughout their shifts. Finally, in September, after years of experimentation, refinements, and presentations at conferences all over the world, our research was published in Nature. And now, with the help of legal scholars, bioethicists, and even a philosopher, we’re partnering with select hospitals and senior homes to pilot its use in the hands of real caregivers.

The success of my collaboration with Professor Milstein demonstrates an important idea: AI’s applications are vast, but technology will represent only part of any given breakthrough. The remainder will be found in the contributions—even leadership—of experts from a growing list of fields, of which healthcare is only one example. Similar partnerships await as AI intersects with economics, energy, environmental science, public health, education, and even the humanities.

For instance, it’s hard to talk about any application of technology in 2020 without addressing the coronavirus pandemic. This was among the motivating factors behind the launch of AI Cures, an MIT initiative that brings together researchers in machine learning and life sciences to accelerate the speed with which antivirals can be identified, evaluated, and ultimately deployed. Its applications in the face of COVID-19 are obvious, but its broader goal of elevating our defense against pathogens of all kinds will remain relevant long after the challenges of the present moment are behind us. In addition to its core research mission, the group has organized impressively inclusive events in recent months, providing a venue for presenters with backgrounds in computer science, infectious disease, cardiology, synthetic biology, and many other fields.

Similarly encouraging is the work of my colleague, Stanford law professor Dan Ho. His lab has published extensively on the utility of AI in the public sector, and is now working with the EPA to use machine learning to dramatically improve the tracking of ecological contamination at a national scale. The underlying technology is transformative, but it’s the involvement of legal scholars, policymakers, and government representatives that truly makes it applicable in the real world.

These stories are a testament to the power of humility, but the sheer scale of the challenges that remain calls for a more organized response. It was with this in mind that I partnered with Stanford professor of philosophy and former provost John Etchemendy to co-found the Stanford Institute for Human-Centered Artificial Intelligence, or HAI, in 2018. Its ongoing mission is to reframe the pursuit of AI in unequivocally human terms, to reflect its dependence on interdisciplinary alliances, and to ensure ethics, compassion, and societal responsibility are baked in from the earliest stages of our work—whether it’s an algorithm, a commercial product, or even legislation.

HAI’s reach as an institution is helping to cross new divides as well, beyond those that separate academic worlds. Partnerships with corporations, governments, and NGOs, for instance, will be essential in building a larger community around these values. Already, for example, they’ve helped us organize cross-disciplinary workshops that bring ethical, philosophical, and legal expertise to bear on contentious technologies like facial recognition, with audiences of executives and legislators at both the state and federal levels. And our relationships with tech leaders like Google and Amazon allow us to offer powerful cloud computing access—a foundational but often cost-prohibitive resource for modern AI research—to young, innovative thinkers in the form of grants.

Ultimately, however, this appreciation for the power of humility—openness, transparency, and a reverence for the expertise of others—can’t be mandated from the top down. It must be built up at the cultural level, and thus requires an investment in educational efforts to instill these values in the next generation of AI practitioners. Here at Stanford, political science professor Rob Reich co-created a course in the computer science department entitled Computers, Ethics and Public Policy, intended to augment the education of engineers with an awareness of their impact on people and communities, while Harvard computer science professor Barbara Grosz explores similar issues in a course called Embedded Ethics. These are encouraging signs of a shift in the way we educate not just tomorrow’s technologists, but business leaders, social scientists, and politicians. It’s my hope that universities across the world will be inspired to follow suit.

The excitement and anxiety surrounding AI can lend it a fatalistic tone, with aggressive language like “revolution,” “tectonic shift,” and “force for change” all too common. But while it might seem inevitable that AI will reshape the world, collaborations like these are a chance for the world—in all its messy, complicated vibrancy—to reshape AI in turn. So although I’m continually excited by what we’re learning about intelligent machines, I’m even more excited by what we can learn from each other. All it takes is the willingness to ask, and that great, understated strength—our humility.

Dr. Fei-Fei Li is the Sequoia Professor in the Computer Science Department and Denning Codirector of the Stanford Institute for Human-Centered Artificial Intelligence at Stanford University. She is an elected member of the National Academy of Engineering and the National Academy of Medicine.
