‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts

By Wilfred Chan

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people of its own accord. 

But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic; it also overshadows specific warnings about AI’s actual impacts on marginalized people.

“It’s disappointing to see this autumn-years redemption tour from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)

“I didn’t see any solidarity or any action when there were people really trying to organize and do something about the harms that are happening now,” she says.

Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her co-leader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color. 

“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”

Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. “White supremacist and misogynistic, ageist, etc. views are overrepresented in the training data . . . [and] setting up models trained on these datasets to further amplify biases and harms,” the paper noted, could quickly lead to a “feedback loop.” The paper also pointed out the engines’ outsize carbon emissions, something that “doubly punishes marginalized communities” in the path of climate change.  

Months after Gebru’s firing, Mitchell was also given the axe after using an automated script to search her company email for examples of discrimination against Gebru, an act that Google said violated company policies. An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them. 

Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday, for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.”

To Whittaker, Hinton’s comment spoke volumes. “I think it’s stunning that someone would say that the harms [from AI] that are happening now—which are felt most acutely by people who have been historically minoritized: Black people, women, disabled people, precarious workers, et cetera—that those harms aren’t existential,” she says. 

“What I hear in that is, ‘Those aren’t existential to me. I have millions of dollars, I am invested in many, many AI startups, and none of this affects my existence. But what could affect my existence is if a sci-fi fantasy came to life and AI were actually super intelligent, and suddenly men like me would not be the most powerful entities in the world, and that would affect my business.’” (Hinton did not respond to Fast Company’s request for comment.) 

Emily M. Bender, a University of Washington computational linguist and large language model expert who coauthored the paper with Gebru, said Hinton’s comments were “completely disconnected from the harms that are going on and have been documented by many people,” and reflected a long-standing bias in tech around whose voices matter. 

“If you’re speaking from a position of marginalization, then society treats your voice as compromised, because you can’t be objective, because you don’t sit in that power position that supposedly has a view from nowhere,” Bender says. “When Hinton says that Timnit’s concerns are less existential than his, I think what he’s saying is that his are about all of humanity. But they’re also made up, and doing that switch basically denies the existential risk, the risk to life and health and livelihood that people are experiencing now from these systems.”

While Hinton says he quit Google to speak more freely, he has denied any animus toward the company: “Google has acted very responsibly,” he tweeted. But Whittaker says understanding the real risks of large language models requires scrutinizing the handful of corporations that control them. “If we follow these corporations’ interests,” she says, “we have a pretty good sense of who will use [the technology], how it will be used, and where we can resist to prevent the actual harms that are occurring today and [are] likely to occur.”
