A third of popular cancer articles on social media contain misinformation, study finds

By Ruth Reader

July 23, 2021

Health misinformation runs rampant on the internet, but it’s not limited to COVID-19. New research published in the Journal of the National Cancer Institute found that a third of the most popular articles about treatment for common cancers shared on social media contain factual inaccuracies.

“A lot of the misinformation we identified were claims that the current cancer treatments that we have are ineffective or more toxic than they actually are, as well as statements that there are other ‘cures’ that are basically unproven or disproven that include extreme diets or herbal remedies, folk remedies,” says Dr. Skyler Johnson, a physician-scientist at the Huntsman Cancer Institute and assistant professor of radiation oncology at the University of Utah, who led the study.

Using a web-scraping tool, Johnson and a group of researchers pulled 200 of the most popular articles on lung, breast, prostate, and colorectal cancer shared on Twitter, Facebook, Pinterest, and Reddit between January 2018 and December 2019. Two domain experts from the National Comprehensive Cancer Network reviewed the articles and assessed them for misinformation and potential for harm. Of the articles identified as misinformation, 83% contained harmful content. Engagement with potentially harmful content, which took place predominantly on Facebook, was also higher than engagement with articles deemed safe or benign. Johnson says that most of the articles containing harmful content originated from new-age websites rather than reputable news media.

“One of the things that I saw, which might be because it was really topical, was a lot of ‘cannabis cures cancer,’” he says. “That was a recurring theme and it wasn’t just one type of cancer, it was universal.” He says the timing may have played a role: between 2018 and 2019, there were a number of discussions at the state and federal levels about decriminalizing marijuana.

Health misinformation has long plagued social media sites like the ones mentioned in this study, but the spread of false information around COVID-19 has brought new attention to the phenomenon. Early in the pandemic, the World Health Organization dubbed the proliferation of COVID-19 myths an “infodemic.” Since then, social sites like Facebook, Twitter, and YouTube have instituted bans on COVID-19 misinformation, appended labels to questionable information about the virus, taken down pseudoscience accounts, and invested in promoting credible organizations and sites. But misinformation on social media networks has continued to proliferate and hamper the ongoing effort to vaccinate the global populace. These efforts have also not extended to all health misinformation.

Johnson’s hope is that the findings from this study can help inform future policy around health misinformation on social media. But he also plans to use this and forthcoming studies to design software that can help patients identify misinformation when they see it, perhaps in the form of a plug-in or browser extension. “It would be nice if when they pull up an article on Facebook there was a pop-up or something that says, ‘there is a high likelihood that this is misinformation,’” he says.
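To make the idea concrete, a browser extension along those lines could be built as a content script that scores outbound article links and overlays a warning when a link looks risky. The TypeScript sketch below is purely illustrative and is not software described in the study: the classifier endpoint, its response shape, and the 0.8 threshold are all hypothetical assumptions.

```typescript
// Hypothetical content script for a misinformation-flagging browser extension.
// The classifier service, its API, and the threshold are illustrative only.

const CLASSIFIER_URL = "https://example.org/api/score"; // placeholder endpoint

interface ScoreResponse {
  misinformationLikelihood: number; // assumed model output in the range 0–1
}

// Ask the (hypothetical) classifier how likely a linked article is misinformation.
async function scoreLink(url: string): Promise<number> {
  const res = await fetch(CLASSIFIER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url }),
  });
  const data: ScoreResponse = await res.json();
  return data.misinformationLikelihood;
}

// Insert a small warning banner directly after a flagged link.
function addWarningBanner(link: HTMLAnchorElement): void {
  const banner = document.createElement("div");
  banner.textContent =
    "There is a high likelihood that this article contains misinformation.";
  banner.style.cssText =
    "background:#fff3cd;border:1px solid #856404;padding:4px;font-size:12px;";
  link.insertAdjacentElement("afterend", banner);
}

// Scan outbound links on the current page and flag the ones that score high.
async function scanPage(): Promise<void> {
  const links = Array.from(
    document.querySelectorAll<HTMLAnchorElement>("a[href^='http']")
  );
  for (const link of links) {
    try {
      const score = await scoreLink(link.href);
      if (score > 0.8) addWarningBanner(link); // hypothetical cutoff
    } catch {
      // Ignore classifier/network errors; never block normal browsing.
    }
  }
}

scanPage();
```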

The study calls for future research to understand who is most susceptible to this misinformation and to assess its impact. Johnson says his colleague Laura Scherer, a misinformation communications researcher at the university, has already published research finding that people who believe misinformation about one topic are likely to believe misinformation about another. That kind of work will ultimately help define how to target solutions to at-risk populations.

Johnson is currently working on a study focused on finding predictors that someone will believe online cancer misinformation. In another upcoming study, Johnson says he and his colleagues will attempt to identify characteristics of misinformation, data that will ultimately assist with the software he’s hoping to build. Whether or not social platforms can effectively silence misinformation, patients are inevitably going to encounter it, he says, and there have to be easy ways for them to parse it.

“They’re the vulnerable population, they’re the ones who are fearful and scared or looking for hope,” he says. “We have to help them find ways to navigate the murky waters of the internet.”
