Part A: Peer review
Where did the information come from? Has it come from peer-reviewed research, a presentation at a conference, or is it from someone trying to sell something?
Watch this 25-minute video. It features a discussion with Lyndal Byford, media manager with the Australian Science Media Centre, and Paul Willis, Director of the Royal Institution of Australia (and previously a reporter with Catalyst on ABC).
Exercise
After watching the video, answer the following questions:
- What is peer review and why is it so important when reporting on science and medical research?
- What is ‘manufactured conflict’? How can you avoid this when reporting on science?
- What place do media or press releases have in reporting on science?
- What is an embargo? What attention should you pay to them as a journalist?
- Why is science never ‘absolute’? What does that mean for your reporting?
What is peer review?
Peer review is a form of scientific quality control. Before research can be published in a reputable journal, peer review requires scientists to open their research to the scrutiny of other experts in the field.
It is a process almost unique to science, in which scientists review and criticise each other's work before it is made public. Reviewers are usually anonymous and unpaid.
Crucially, it points you to work that is credible, but doesn’t necessarily tell you if it is right.
Peer review is by no means perfect: sloppy or fraudulent research still sometimes gets published, and good work is sometimes rejected.
Has the research been peer reviewed?
If the research has not been through this peer review process, ask ‘why not?’
Did they have so little confidence in their methods and results that they would not submit them to their peers? These people are setting themselves outside the usual scientific process, and you have to wonder why.
Extraordinary claims require extraordinary scrutiny.
Which journal?
There are thousands of scientific journals and they cover every topic you can think of.
Journals can be ranked by various measures of quality. One of the most common is the impact factor, which describes how often an ‘average article’ in a journal has been cited by other scientists over a particular year or period.
A quick Google search of the journal name with the words ‘impact factor’ should give you this value. Journals with higher impact factors are generally deemed to be more important than those with lower ones.
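To see what the number actually represents, here is a rough worked example. The standard two-year impact factor is, roughly speaking, the number of citations a journal's recent articles received in a given year, divided by the number of citable articles it published in the previous two years. The figures below are invented purely for illustration.

```python
# Rough sketch of how a two-year journal impact factor is worked out.
# All numbers here are invented purely for illustration.

citations_in_2015 = {
    2013: 1200,   # citations received in 2015 to articles published in 2013
    2014: 900,    # citations received in 2015 to articles published in 2014
}
citable_articles = {
    2013: 250,    # articles the journal published in 2013
    2014: 230,    # articles the journal published in 2014
}

impact_factor = sum(citations_in_2015.values()) / sum(citable_articles.values())
print(f"2015 impact factor: {impact_factor:.2f}")  # 2100 / 480 = 4.38
```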
Generally, the more prestigious the journal the more robust and more significant the results will be, because publication in top journals is incredibly competitive.
Some journals require researchers to pay for publication and provide the journal to readers for free, while others are free to publish in but rely on subscription fees from readers of the journals.
Exercise
What is the impact factor of Nature compared to PLOS ONE? What does that mean for reporting of the papers published in these journals?
Which expert?
To judge whether a scientist is a genuine expert, look for evidence of peer-reviewed publications in the relevant area of science or health. Scientists' publications can be found in databases such as Thomson Reuters ResearcherID.
Exercise
List a credible Australian expert on the following topics. Why do you think they are experts? List also a spokesperson on these issues who should not be deemed an expert. Explain your answers.
- climate change
- skin cancer
- water management
- coal seam gas extraction
- childhood vaccination.
How did they come to that conclusion?
How big was the study? Bigger sample sizes are generally better. A study of 12 children is usually less convincing than a study of 12,000.
Check the statistics. It may sound daunting, but a quick glance at the main result can often tell you a lot. Scientists use something called a p-value to indicate how likely it is that a result at least this big would turn up by chance alone: the bigger the number (up to 1), the more likely it is a fluke. The usual cut-off is 0.05, which means there was a one in 20 chance (5%) of the result arising by chance alone. Many studies will have p-values of <0.001.
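If it helps to picture what a p-value is getting at, the toy simulation below asks the question directly: if a treatment did nothing, how often would a result this good turn up by luck alone? This is only a sketch of the idea (real studies use formal statistical tests), and the numbers are invented.

```python
# A rough sketch of what a p-value captures, using simulated coin flips.
# Suppose 14 of 20 patients improved on a treatment. If the treatment did
# nothing and each patient had a 50/50 chance of improving anyway, how often
# would we see 14 or more improvers just by luck? (Illustrative numbers only.)
import random

improved, n_patients, trials = 14, 20, 100_000
at_least_as_extreme = 0
for _ in range(trials):
    by_chance = sum(random.random() < 0.5 for _ in range(n_patients))
    if by_chance >= improved:
        at_least_as_extreme += 1

p_value = at_least_as_extreme / trials
print(f"Approximate p-value: {p_value:.3f}")  # roughly 0.06 for 14 out of 20
```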
Run the study past another independent expert in the field. Your key questions might be:
- How does this study compare with others that have come before?
- How does it add to or contradict existing scientific views?
- Correlation versus causation—Did A actually cause B, or are A and B connected for reasons we don’t fully understand?
- How large is this study? What was the sample size?
- Was the study well designed? Have the findings been replicated? Will they need to be to gain widespread acceptance?
- Are the results compelling enough to recommend a change in our current behaviour/treatment/regulations?
- What would be the effect of such changes versus keeping things as they are?
From: ‘Desktop Guide for Covering Science’, New Zealand Science Media Centre
For further reading/investigation
A desktop guide for covering science – NZ Science Media Centre
Briefing notes for journalists – UK Science Media Centre
10 best practice guidelines for reporting science & health stories – UK Science Media Centre
Science for journalists – benchpressproject UK
Part B: The scientific method and papers
Exercise
Explain how the scientific method works compared to a journalist's news cycle. What are the key differences? What are the similarities?
Types of scientific study
- Case control: Case-control studies compare subjects who have a condition or disease (the ‘cases’) with patients who do not have the condition or disease but are otherwise similar (the ‘controls’). Case control studies are often used to identify factors that may contribute to a medical condition.
- Cohort studies: Cohort studies compare two groups over time. Researchers observe what happens to one group that’s been exposed to a particular variable and compare it to a similar group that hasn’t been exposed to the variable. Cohort studies are often done when performing a randomised controlled trial would be unethical; for example, exposing workers to harmful chemicals to see the effects.
- Randomised controlled trial: People are randomly assigned to two or more groups. One group receives the intervention (such as a new drug) while the control group receives nothing or an inactive placebo. In a double-blind study, neither the researcher nor the trial patient knows what they are being given until after the study has finished. Randomised controlled trials are still seen as the most reliable means of testing something.
- Controlled clinical trials are similar, but people aren’t randomly assigned to groups. This can increase the risk of ‘bias’ in the study, making its findings less reliable.
- Systematic review: Reviews are extremely useful summaries of the state of research in a particular field. The authors, experts in the field, will summarise the findings in the literature over recent years, and provide a consensus. These can be especially useful to get your head around conflicting studies.
- A meta-analysis is a type of review in which the numerical data from comparable trials is pooled and re-analysed (a minimal sketch of the pooling idea follows this list). It’s important when reporting on reviews to ensure they are comprehensive and that the researchers have not just cherry-picked papers or data to suit their conclusions.
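As flagged above, here is a minimal sketch of the pooling step at the heart of a meta-analysis: each trial's result is weighted by how precise it is, so large, tight studies count for more than small, noisy ones. Inverse-variance weighting is one common approach; the trial figures below are invented.

```python
# A toy sketch of pooling in a meta-analysis: three hypothetical trials
# estimate the same effect (say, a reduction in blood pressure in mmHg),
# and the pooled estimate weights each trial by the precision of its result.
# All figures are invented.

trials = [
    # (effect estimate, standard error)
    (-4.0, 1.5),
    (-2.5, 0.8),
    (-3.2, 1.1),
]

weights = [1 / se ** 2 for _, se in trials]  # more precise trials get more weight
pooled = sum(w * effect for (effect, _), w in zip(trials, weights)) / sum(weights)
print(f"Pooled effect: {pooled:.2f} mmHg")   # about -2.9, dominated by the tightest trial
```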
Be wary of studies that include questionnaires given after the event.
Although they can be very useful for scientists to get a broad picture of past events, they can also be incredibly subjective and misleading.
Trying to remember how much you used your phone last week can be hard enough, let alone trying to recall how often you used it 10 years ago. Selectively remembering past events is called ‘recall bias’ and can be an issue in many studies that involve questionnaires.
Checking the science
Look at the media release. Universities, journals and research organisations are all trying to get their name in the media and might be tempted to overhype research to get media coverage.
Compare it to the paper. Compare the media release to the research paper. Contact the Australian Science Media Centre if you need help finding the paper.
Sections of a scientific paper
On a deadline the abstract, a results table and the last paragraph of the discussion are often the most useful parts of the paper.
- Abstract
At the top of each paper is a short, easy-to-understand description of what the scientists did, what they found and what it means. A quick read of this will often show whether the press release got it right. Abstracts are usually available for free.
- Introduction
The introduction puts the research into context. It describes what we currently know and where there are gaps. It may help explain abbreviations and jargon that are used throughout the paper.
- Methods
Allows other scientists to try and repeat the research. Describes how the research was done and what statistical methods were used.
- Results
What they actually found. Usually also contains tables and graphs. Look for the p-value, a measure of how likely it is that the result was just a fluke. The lower the number the better: anything less than 0.05 is considered reasonable, and <0.01 is very good. Even 0.05 means there is a 1 in 20 chance of the result being a complete fluke.
- Discussion
The discussion is where the researchers interpret their results. It often includes a discussion of the limitations of the work and future directions. The last paragraph of the discussion often summarises the big take-home message.
Exercise
Read this diet soda press release and answer these questions:
- What is the source of the document? Is it published research? Where? Is it peer-reviewed?
- What kind of experiment is the study? Does it have controls?
- What do we know about the type of drinks people had? What in the drinks might have caused the ill effects?
- What can you say about the people studied, and the subgroups?
- Does the research prove diet drinks cause heart attacks? Can it?
- How do they know how much people drank?
- What other risk factors were taken into account?
- What was the increase in heart attack risk? Is this a relative or absolute risk? What is the baseline risk (i.e. how many people would we expect to experience vascular events anyway)? A worked example of relative versus absolute risk follows this list.
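That last question is where many stories go wrong, because a relative risk (‘30% higher’) can sound far more alarming than the absolute change it represents. The sketch below uses invented figures, not the actual numbers from the diet soda study.

```python
# A toy illustration of relative versus absolute risk.
# Figures are invented and do not come from the study in the exercise.

baseline_risk = 0.02      # 2 in 100 people have a vascular event anyway
relative_increase = 0.30  # reported as a "30% higher risk"

new_risk = baseline_risk * (1 + relative_increase)
absolute_increase = new_risk - baseline_risk

print(f"Risk goes from {baseline_risk:.1%} to {new_risk:.1%}")        # 2.0% to 2.6%
print(f"Absolute increase: {absolute_increase:.1%}")                  # 0.6 percentage points
print(f"Extra events per 1,000 people: {absolute_increase * 1000:.0f}")  # about 6
```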
Check out all the exercises for this module.
For further reading/investigation
Peer review – the nuts and bolts – Sense About Science
I don’t know what to believe – Sense About Science
Thomson Reuters ResearcherID