By Chris Adams

Journalists don’t need a Ph.D. in statistics to dig into the numbers that undergird scientific news.

A math professor from George Mason University shows how even reporters on deadline can do a better job of understanding the statistics in papers about, for example, the latest clinical trials of cancer treatments.

In a session with National Press Foundation fellows, Rebecca Goldin (bio, Twitter) said it’s vitally important for journalists who cover medical news to get it right, since what they report forms the basis for the public’s understanding of diseases and treatments.

And if members of the media are shaky on statistics, the public is doubly so.

Goldin is also director of the organization STATS, a joint program between Sense about Science USA and the American Statistical Association that seeks to improve statistical literacy among journalists, academic journal editors and researchers. The organization includes STATScheck, which can pair journalists with statisticians to help them parse the data in the latest medical journal article.

Goldin started with the basics that journalists generally know (but often forget), such as why the median and the mean can give very different pictures of the world, or what happens when people confuse correlation with causation. She also led fellows through case studies, from clinical trial to press release to news story, illustrating how marginal scientific results often get trumpeted into hype.
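To see why the choice of average matters, here is a quick sketch (the income figures are hypothetical, not from the session): a single outlier drags the mean far above what a typical person earns, while the median barely moves.

```python
# Sketch with made-up numbers: one outlier pulls the mean
# far from the median.
import statistics

incomes = [30_000, 32_000, 35_000, 38_000, 40_000, 1_000_000]

mean = statistics.mean(incomes)      # inflated by the single outlier
median = statistics.median(incomes)  # resistant to the outlier

print(f"mean:   ${mean:,.0f}")    # roughly $195,833
print(f"median: ${median:,.0f}")  # $36,500
```

A story reporting "the average income is nearly $200,000" would be technically defensible and deeply misleading; the median is the better summary here.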

Among her suggestions for journalists trying to make sense of a scientific study on deadline: Pay close attention to the summary, the abstract and the conclusion; many writers skip the conclusion entirely. Recognize that the abstract will report the results but rarely hints at the study's limitations. And don't be afraid to ask scientists about potential biases in their trials; every study has biases and limitations worth discussing.

Perhaps most important: Recognize the difference between statistical significance and clinical significance. Something may be statistically significant – meaning the result is unlikely to be due to chance alone – but its effect may be so small that it won't help patients in a meaningful way.
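The distinction is easy to demonstrate with a sketch (all trial numbers below are hypothetical): in a very large trial, even a tiny effect clears the p < 0.05 bar, yet the absolute benefit to any one patient is negligible.

```python
# Sketch, hypothetical numbers: a huge trial makes a tiny effect
# "statistically significant" without making it clinically meaningful.
import math

n = 200_000            # patients per arm (assumed)
rate_control = 0.100   # 10.0% had the event on placebo (assumed)
rate_treated = 0.097   # 9.7% had the event on the drug (assumed)

# Two-proportion z-test, normal approximation with pooled rate.
p_pooled = (rate_control + rate_treated) / 2
se = math.sqrt(p_pooled * (1 - p_pooled) * 2 / n)
z = (rate_control - rate_treated) / se
p_value = math.erfc(z / math.sqrt(2))  # two-sided p-value

arr = rate_control - rate_treated      # absolute risk reduction
nnt = 1 / arr                          # number needed to treat

print(f"p-value ~ {p_value:.4f}")          # well under 0.05
print(f"absolute risk reduction: {arr:.1%}")
print(f"number needed to treat: ~{nnt:.0f}")
```

The headline "drug significantly cuts risk" is statistically true, but roughly 333 patients would need treatment for one to benefit – the clinical-significance question a reader actually cares about.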

She also gave a set of guidelines for what reporters writing on the latest study could do if they had one hour, a few hours or more time than that.

For those with 60 minutes: Read the summary and the abstract, avoid most conclusions of causality, describe the clinical importance of the study and be sure to cite your sources.

For those with a couple of hours: Read the conclusion or discussion sections of the paper, seek other experts’ opinions and quantify the risks and benefits in ways readers can understand.

And for those with more time than that: Find out how subjects were recruited into the study, describe how data were collected and determine if the results were clinically significant – not just statistically significant. “I think statistical significance is a really low threshold of being interesting,” she said.

Finally, Goldin discussed cancer clusters and how journalists can cover them, noting that there is no commonly accepted threshold for what counts as a cluster. Although people generally look for an environmental explanation for a particular pocket of cancers, she noted that “cancer clusters can occur by chance.”

“There will be places with more cancers than expected just due to variability – even if nothing is causing them,” she said. There are various measures for how many cancers would be expected in a particular area, and journalists can talk with experts about what those numbers are.
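Her point about variability can be simulated in a few lines (the town counts and expected rate below are assumptions for illustration): even when every town shares exactly the same underlying cancer rate, chance alone produces a few apparent "hotspots".

```python
# Sketch, hypothetical numbers: with an identical underlying rate in
# every town, random variation alone still creates apparent clusters.
import math
import random

random.seed(42)
expected = 10    # expected cancer cases per town (assumed)
towns = 1_000    # number of comparable towns (assumed)

def poisson(lam):
    """Draw one Poisson-distributed count (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

counts = [poisson(expected) for _ in range(towns)]
hotspots = sum(1 for c in counts if c >= 2 * expected)

print(f"highest single-town count: {max(counts)}")
print(f"towns with at least double the expected cases: {hotspots}")
```

Nothing in the simulation causes cancer anywhere, yet some towns still report well above the expected ten cases – which is why a reporter should ask experts what the expected count for an area is before treating an excess as evidence of a cause.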