Journalists tend to temper, not exaggerate, scientific claims, UM study finds

While splashy clickbait headlines touting the power of chocolate to cure everything from acne to cancer certainly grab attention, such articles may be less common in science communication than they seem.

A large-scale University of Michigan study of uncertainty in science communications indicates that journalists tend to temper — not exaggerate — science claims.

New research by UM School of Information researchers Jiaxin Pei and David Jurgens examined how scientific uncertainty is communicated in news articles and tested whether scientific claims are exaggerated. They also wanted to see how scientific claims in the news might differ between well-respected, peer-reviewed journals and less rigorous publications.

“I feel like when we talk about the potential for journalists exaggerating claims, it’s always these extreme cases,” said Jurgens, assistant professor of information. “We wanted to see if there was a difference when we lined up what the scientist said and what the reporter said for the same article.”

Overall, Pei and Jurgens found positive news about science communication.

“Our results suggest that journalists are actually quite cautious when reporting science,” Pei said, adding that, if anything, science communicators tend to reduce the certainty of scientific claims.

“Journalists have a tough job,” said Jurgens, who recognizes the skill it takes to translate scientific findings for a wide audience. “It’s nice to see that journalists are really trying to contextualize and temper scientific findings within a larger context.”

For their study, the researchers focused on certainty, which can be expressed in subtle ways.

“There are a lot of words that will indicate how confident you are,” Jurgens said. “It’s a spectrum.”

For example, adding words like “suggest”, “about”, or “could” tends to increase uncertainty, while using a specific number in a measurement signals greater certainty.
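To make those cues concrete, here is a minimal rule-based sketch of how hedging words and specific numbers could shift a certainty score. The cue lists and weights are illustrative assumptions for this sketch, not the lexicon or trained model the researchers actually used.

```python
import re

# Illustrative cue lists; these are assumptions for the sketch,
# not the lexicon or model used in the study.
HEDGES = {"suggest", "suggests", "could", "might", "may", "about", "roughly"}
BOOSTERS = {"show", "shows", "demonstrate", "demonstrates", "conclude", "concludes"}
NUMBERS = re.compile(r"\d+(?:\.\d+)?")  # specific figures signal confidence

def certainty_score(sentence: str) -> float:
    """Crude score: hedges lower certainty, boosters and numbers raise it."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    score = 0.0
    score -= sum(token in HEDGES for token in tokens)    # hedging cues
    score += sum(token in BOOSTERS for token in tokens)  # assertive verbs
    score += 0.5 * len(NUMBERS.findall(sentence))        # precise measurements
    return score

print(certainty_score("The results suggest the drug could reduce symptoms."))  # -2.0
print(certainty_score("The drug reduced symptoms by 42 percent."))             # 0.5
```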

Pei and Jurgens pulled news data from Altmetric, a company that tracks mentions of scientific papers in news reports. They collected nearly 129,000 news stories mentioning specific scientific papers for their analysis.

In each of the news reports and scientific articles, they analyzed all sentences containing discovery words, such as “find” or “conclude”, to see how the journalists and scientists backed up the claims in the article. A group of human annotators scoured scientific papers and news reports, noting levels of certainty in more than 1,500 scientific findings.
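A simplified version of that first filtering step might look like the sketch below; the discovery-verb list and the naive sentence splitter are both assumptions made for illustration.

```python
import re

# Discovery verbs of the kind mentioned above; the exact list is an assumption.
DISCOVERY_VERBS = {"find", "finds", "found", "conclude", "concludes",
                   "concluded", "show", "shows", "showed"}

def claim_sentences(text: str) -> list[str]:
    """Return sentences containing a discovery verb, as candidate findings."""
    sentences = re.split(r"(?<=[.!?])\s+", text)  # naive splitter, fine for a sketch
    return [s for s in sentences
            if DISCOVERY_VERBS & set(re.findall(r"[a-z]+", s.lower()))]

abstract = ("We enrolled 500 patients. We found that the treatment reduced risk. "
            "Costs were not assessed.")
print(claim_sentences(abstract))  # ['We found that the treatment reduced risk.']
```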

“We took claims in the abstract and tried to match them with claims found in the news,” Jurgens said. “So we said, ‘OK, here are two different people – scientists and journalists – trying to describe the same thing, but to two different audiences. What do we see in terms of certainty?’”

The researchers then built a computer model to see if they could replicate the levels of certainty reported by human readers. Their model was strongly correlated with human ratings of the certainty of a claim.
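Validating such a model usually comes down to correlating its predictions with the human annotations. A minimal version of that check, using made-up numbers rather than the study’s data, could look like this:

```python
from scipy.stats import pearsonr

# Hypothetical certainty ratings on a shared scale; the values are made up.
human_ratings     = [4.5, 2.0, 3.5, 5.0, 1.5, 4.0]
model_predictions = [4.2, 2.3, 3.8, 4.7, 1.9, 3.6]

r, p = pearsonr(human_ratings, model_predictions)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # a high r mirrors "strongly correlated"
```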

“The performance of the model is good enough for large-scale analysis, but not perfect,” said Pei, a UMSI doctoral student and first author of the paper, who explained that the gap between human judgment and machine predictions is mainly due to subjectivity.

“When identifying uncertainty in text, people’s perceptions can be diverse, making it very difficult to compare model predictions and human judgments. Humans can be very much at odds sometimes.”

Pei says research translation can get murkier when it comes to the quality of the journal where a study appears, or what researchers call the journal’s impact factor. Some science news writers report similar levels of certainty regardless of where the original study was published.

“This can be problematic, given that a journal’s impact factor is an important indicator of research quality,” he said. “If journalists report research that appeared in Nature or Science with the same degree of certainty as work from lesser-known journals, the public may not know which findings are more reliable.”

Altogether, the researchers see this work as an important step in better understanding uncertainty in scientific news. They created a software package that allows scientists and journalists to calculate uncertainty in research and reporting.
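The released package appears to be Pei and Jurgens’ certainty-estimator; the snippet below follows the usage pattern from its public repository, but the import path, class name, and constructor argument should all be treated as assumptions rather than documented API.

```python
# Assumed usage of the certainty-estimator package (pip install certainty-estimator);
# the import path, class name, and argument below are assumptions from its repository.
from certainty_estimator.predict_certainty import CertaintyEstimator

estimator = CertaintyEstimator('sentence-level')
sentences = ["Our results suggest the treatment may reduce symptoms."]
print(estimator.predict(sentences))  # higher values indicate more certain phrasing
```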

While journalists can benefit from checking the certainty conveyed in their own writing, Jurgens says the tool could also be useful to readers.

“It’s easy to get frustrated with uncertainty,” he said. “I think providing a tool like this could have something of a calming effect. This work isn’t a silver bullet, but I think the tool could contribute to a more holistic understanding for readers.”

The work has been published in the Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.

Written by Sarah Derouin, School of Information
