Mozilla’s ambitious plan to teach ethics in the age of evil tech

By Katharine Schwab

October 10, 2018

For decades, popular wisdom held that technology was undeniably a good thing. But in the last few years, it’s become increasingly clear that’s not the case: Poorly designed technology can fuel the spread of misinformation, entrench systemic racism and sexism, erode personal privacy, and aid divisiveness and extremism.

How do you address such monumental societal problems? For Mitchell Baker, the founder and chairwoman of the Mozilla Foundation, it starts with education.

Today, Mozilla, along with Omidyar Network, Schmidt Futures, and Craig Newmark Philanthropies, is launching a competition for professors and educators to effectively integrate ethics into computer science education at the undergraduate level. The contest, called the Responsible Computer Science Challenge, will award up to $3.5 million over the next two years to proposals focused on how to make ethics relevant to young technologists.

“You can’t take an ethics course from 50 or even 25 years ago and drop it in the middle of a computer science program and expect it to grab people or be particularly applicable,” Baker says. “We are looking to encourage ways of teaching ethics that make sense in a computer science program, that make sense today, and that make sense in understanding questions of data.”

[Photo: Arif Riyanto/Unsplash]

What might that look like? The competition is encouraging professors to propose changes to class material, like integrating a reading assignment on ethics to go with each project, or to methodology, like having computer science sections co-taught with teaching assistants from the ethics department. The first stage of the challenge will award these proposals up to $150,000 to try out their ideas firsthand, likely at the university where the educator teaches. The second stage will take the best of the pilots and grant them $200,000 to help them scale to other universities. Each idea will be judged by an independent panel of experts from academia and tech companies.

Baker hopes that the competition–and its prize money–will yield practical ideas that are both substantial and relevant. It shouldn’t be a required course that students take just to check a box before graduating, she says. Instead of being overly philosophical, the coursework should use hypotheses and logic to underpin ideas. The goal? To create a new way of talking about technology, one that incorporates more humanistic principles.

“Ideally we’d start to build a language for how you talk about ethics or how you think about the impact of technology,” she says. “Nothing as crisp as a mathematical formula, but you would have concepts that the next generation of technologists would understand and be able to talk about.”

Fundamentally, Baker believes that reinvigorating the standards for technical education can help change the set of underlying assumptions we have about technology. That’s already happening with the assumption that algorithms are neutral: over the last five years, that idea has been challenged, and now when you talk about algorithmic accountability, there’s greater understanding than there was before, even if it hasn’t penetrated very far yet. New York City even has a new law to investigate its own algorithms and hold them accountable.

[Photo: Mimi Thian/Unsplash]

Baker finds the assumption that STEM education is always good similarly dubious. “Of course tech education is good, but STEM without any understanding of humanity is going to breed a set of technologists who don’t know, even if they want to, how to build positive things for humanity,” Baker says. “The technologists, the founders, the MBA-types building businesses–do they even have the frameworks to think about a set of issues other than speed and performance?”

There’s already a burgeoning movement to integrate ethics into the computer science classroom. Harvard and MIT have launched a joint class on the ethics of AI. UT Austin has an ethics class for computer science majors that it plans to eventually make a requirement. Stanford similarly is developing an ethics class within its computer science department. But many of these are one-off initiatives, and a national challenge of this type will provide the resources and incentive for more universities to think about these questions–and theoretically help the best ideas scale across the country.

Still, Baker says she’s sometimes cynical about how much impact ethics classes will have without broader social change. “There’s a lot of power and institutional pressure and wealth” in making decisions that are good for business, but might be bad for humanity, Baker says. “The fact you had some classes in ethics isn’t going to overcome all that and make things perfect. People have many motivations.”

Even so, teaching young people how to think about tech’s implications with nuance could help to combat some of those other motivations–primarily, money. The conversation shouldn’t be as binary as code; it should acknowledge typical ways data is used and help young technologists talk and think about the difference between providing value and being invasive.

“We’re at the end of the spectrum where we don’t have the tools or the ways of thinking for those who are able and want to think about this and build it in–we just don’t have any tools,” Baker says. “And so educating the next generation of technologists about how to think about this and talk about this and how to respond to it is maybe not the first step, but it is an important foundational step.”
