A huge study of Meta’s impact on the 2020 election offers no easy fix for political polarization


By Issie Lapowsky

A group of more than a dozen researchers has released the first findings in a mega-study of Facebook and Instagram’s impact on democracy, which was conducted with Meta’s participation before and after the U.S. presidential election in 2020. In a series of four papers, they found that switching users to a chronological News Feed, suppressing viral reshared content, and breaking Facebook users out of their echo chambers had no discernible impact on people’s political beliefs or behavior. These interventions did, however, influence the amount of time people spent on the platforms and the amount of untrustworthy or like-minded content they saw.

The findings, produced through an unprecedented partnership between Meta and 17 independent academics, complicate popular beliefs about how to address online polarization and appear to undermine a number of recent proposals from lawmakers and even former employees of the company, who have suggested that tweaking Meta’s algorithms could be the key to alleviating some of the online vitriol surrounding politics.

“What we have shown here is that . . . for a three-month period of study, in the heart of a period of time when we think lots and lots of people are paying attention to politics, that these kinds of changes that have been proposed as a way of ameliorating some of the issues that are facing us as a society today, do not seem to have had much of an impact on the attitudes about which so many people are so concerned,” Josh Tucker, codirector of the New York University Center for Social Media and Politics, and one of the coleads of the so-called U.S. 2020 Facebook and Instagram Election Study, said on a call with reporters this week.

The researchers’ findings were published Thursday in the form of four studies, one in the journal Nature and the other three in a special section of the journal Science. They are the first four in what will be 16 total reports to come out of the study, which launched in the summer of 2020. 

At the time, Facebook announced an ambitious plan to give more than a dozen independent researchers a rare peek behind the curtain in the run-up to the U.S. presidential election. For a company that has sought to keep user data under lock and key, has openly battled with academics who find workarounds to study it, and has, in the past, caught serious flak for conducting experiments on users, it was a significant step toward transparency. The findings were initially set to be released a year later in the summer of 2021, but were delayed multiple times after the lead researchers realized their task was “significantly more time-consuming” than they’d predicted. The January 6th riot also extended the researchers’ originally intended period of study.

The researchers were granted vast access to aggregate user data from Facebook and Instagram, working with Facebook’s—now Meta’s—own staff to study the experiences of the entire U.S. population of active Facebook users. In some cases, where users consented, they were even able to tinker with people’s feeds and alter what they were exposed to on Facebook and Instagram to measure the effects of different interventions.

The researchers had full control over the work and final say on the papers, though Meta did draw bright lines around any areas of study that would compromise user privacy or violate the company’s legal obligations. In a blog post, Meta’s president of global affairs Nick Clegg appeared pleased, even relieved, by the researchers’ findings. “The research published in these papers won’t settle every debate about social media and democracy, but we hope and expect it will advance society’s understanding of these issues,” Clegg wrote.

The studies address four key questions. Perhaps the most interesting one is what impact switching from an algorithmically ranked feed to a chronological one has on Instagram and Facebook users’ political attitudes and behaviors. This was one of the ideas endorsed by Facebook whistleblower Frances Haugen back in 2021 that gained traction with lawmakers at the time. To test the impact of such a switch, the researchers conducted an experiment with over 23,000 participants on Facebook and over 21,000 on Instagram, who opted into the study. For three months ending in December 2020, a random subset of that group began seeing posts in their feeds in reverse chronological order, rather than ranked by the platforms’ personalization algorithms.

The researchers found that the users with chronological feeds spent less time on Facebook and Instagram and were exposed to less like-minded content on Facebook. But they also saw a substantial increase in posts from what the company deems to be “untrustworthy” sources. And yet, these changes had no impact on polarization, knowledge of politics, or people’s political participation offline, according to the paper, which was published in Science. “These findings should give all of us pause, including policymakers, about any simple sort of solution that if you just switch it to chronological feed, this will solve all of these ills that we see happening,” Natalie Jomini Stroud, director of the Center for Media Engagement at the University of Texas at Austin, said on the call. Stroud was also a colead on the 2020 election study. “The solutions aren’t so simple,” Stroud said. 

A similar question the researchers asked, which is the subject of another paper in Science, is whether limiting reshares on Facebook—the feature that lets people repost content that others have created—might change people’s political beliefs and opinions. The theory was that because reshares tend to drive viral content, and viral content is often misleading, such a targeted change might make a difference. To study it, the researchers once again recruited around 23,000 participants who opted in on Facebook. For a portion of those participants, the researchers suppressed reshared posts from appearing in their feeds.

These results were also a mixed bag. Removing reshares dramatically decreased the amount of content from untrustworthy sources that people saw in their feeds, but it also reduced the amount of political news they saw in general. Not only that, but it reduced their overall political knowledge, all while having no impact on polarization. “When you take those reshared posts out of people’s feeds, that does mean that they’re seeing less virality prone and potentially misleading content, but it also means that they’re seeing less content from trustworthy sources as well,” Andrew Guess, assistant professor of politics and public affairs at Princeton University, and the lead author on the reshare and chronological feed studies, told reporters.  

The third experiment, the findings of which are published in Nature, looks at the impact of reducing the amount of content people see from like-minded sources on Facebook. This study took aim at concerns about online echo chambers—the idea that people may be driven into extreme or radical beliefs because algorithms mostly serve them content they already agree with. 

The researchers looked at the entire U.S. active user population on Facebook and found that, while like-minded content does make up the majority of what users see, only about a fifth of users are in what the researchers call “extreme echo chambers,” meaning that more than 75% of what they see on Facebook comes from sources they agree with politically. Of course, one-fifth of 231 million active U.S. Facebook users is still many millions of people.


The researchers then examined the effects of breaking people out of those bubbles by limiting their exposure to posts from friends, pages, and groups they’re aligned with politically. They conducted this experiment with nearly 25,000 consenting Facebook users, a portion of whom had posts from like-minded sources reduced in their feeds by one-third. Once again, the experiment had no effect on polarization or the extremity of people’s views. But it did reduce the overall amount of like-minded content that people saw. Then again, when users in the experimental group did see posts from like-minded sources, they were even more likely to engage with them, as if being deprived of that content had made them even hungrier for it.

“This is a reminder that it is difficult to override the psychological preferences that people have, in this case, for engaging with ideas with which they agree, just by algorithm alone,” Jaime Settle, a professor of government at William & Mary, and a coauthor on that paper, said on the call.

The final paper, published in Science, may be the most aligned with popular beliefs about social media polarization. It looked at the behavior of 208 million U.S. adults on Facebook between September 2020 and February 2021 to see whether conservatives and liberals consume different news on Facebook. As expected, the researchers found that, indeed, they do. Not only that, but they found that conservatives are more likely to view news rated false by Facebook fact-checkers. Overall, there were also far more URLs viewed exclusively by conservatives than viewed exclusively by liberals, suggesting that the conservative corner of the news ecosystem on Facebook is far bigger than the liberal corner. Pages and groups played a key role in driving people to these segregated news sources.

There are, of course, many caveats to this set of studies. The researchers acknowledged that while three months may be a long time to run an experiment, it’s a short amount of time to change a person’s—let alone many thousands of people’s—beliefs. That’s particularly true during the height of a presidential election, when those beliefs may be most animated. There’s no telling how these same experiments would play out over a longer period, in another country, or even in the U.S. outside of an election cycle. The researchers aren’t asserting that social media has had no impact on people’s politics, just that changing those beliefs with technical tweaks, even dramatic ones, isn’t as simple as it seems. “This finding cannot tell us what the world would have been like if we hadn’t had social media around for the last 10 to 15 years to 20 years,” Tucker said.

Still, the four papers constitute a substantial contribution to the public understanding of Meta’s much-debated impact on politics. But what’s equally noteworthy is the fact that these studies exist at all. Meta researchers have tested these types of changes in the past, of course. In one particularly infamous experiment, they tested whether it was possible to influence people’s emotions by tinkering with what users saw in their Facebook feeds. It was—and it blew up in Facebook’s face. But even then, it was up to Facebook’s own researchers to conduct the experiment and report their findings. 

This project was the first in which researchers actually got to change the way Facebook and Instagram worked, study the results in real time, and report their findings on their own terms. To track how well that process worked, the research project itself was documented by an independent rapporteur, a University of Wisconsin-Madison journalism professor named Michael Wagner. Wagner’s own findings are also published in Science and offer a sort of meta-study (no pun intended) on the successes and failures of this unusual approach.

They also offer a window into how much control Meta was really willing to give up in the process. In one particularly telling detail, Wagner notes that as the articles neared publication, “Meta researchers wanted to be able to expressly disagree with lead author interpretations of findings or other matters in articles they coauthored,” a request that Stroud and Tucker, the coleaders of the independent research team, apparently rejected. 

It’s a juicy bit of behind-the-scenes intrigue, the likes of which rarely make it into academic papers. But it speaks to a broader point Wagner makes, which is that research collaborations of this kind will always be fraught as long as they require private companies’ voluntary buy-in. “The collaboration resulted in independent research,” he writes, “but it was independence by permission from Meta.”

The coleads of the project don’t seem to disagree. As much as Tucker hopes other social platforms and researchers are able to learn from and replicate the model laid out in these and other forthcoming papers, he said it’s crucial for lawmakers and regulators to learn from them, too. “Our hope is that what we’ve done here will serve as a model for how this kind of research can be done,” he said. “We also hope that the outcome of this research project and all we’re able to learn from this project — about unpacking the black box of algorithms, about political ideological segregation on the platform, about the effect of these algorithms on people’s attitudes — will be that regulators in the United States and beyond will get involved in requiring this kind of research.”

Fast Company
