
Why getting missing evidence published is key to improving the quality of peer reviewed research

For Peer Review Week 2019, Sense about Science has asked for a series of blog posts to begin a Research ‘TO DO’ list. What can researchers, universities, journals, funders and governments do to improve the quality of peer review and research? This is just the start – we need your ideas too.


Imagine a world where society has the high-quality evidence it needs to make informed decisions about crime, health and education. What action can researchers, universities, funders and governments take to get us there?

By Síle Lane, head of international campaigns and policy at Sense about Science, which runs the AllTrials campaign for clinical trial transparency. @Senseaboutsci #AllTrials

When evidence goes unpublished the evidence base is incomplete – it has gaps. Furthermore, because it is so-called ‘negative’ results (results showing, for example, that an intervention had little or no effect, or even caused harm) that more often go unpublished, the evidence that does exist is biased. There are many reasons why results go unpublished. It happens when researchers and editors believe that research that didn’t find an ‘interesting’ result is of no interest to the community or of no use to a researcher’s career. It happens where publication of results in a journal isn’t the normal practice. It happens when research groups run out of time or funding for a particular project and need to move on to something else. It can also happen, of course, when researchers want to keep evidence hidden for commercial or security reasons.

When results from research aren’t published it means the same research can get repeated unnecessarily. This wastes funds and time and in the case of clinical research, it can lead to patient harm. It means opportunities to explore new areas and to progress the evidence base are missed. Failure to publish results from clinical trials puts subsequent research participants at risk unnecessarily. When research is not peer reviewed and is not opened up to scrutiny by the research community, we miss opportunities to improve research. Decisions made on the basis of the evidence base risk being wrong. Systematic reviews and meta-analyses – the gold standard of evidence – are at risk of being incomplete and biased, meaning that guidelines and decisions based on them may be wrong.


The most studied area of publication bias is clinical trials. Clinical trials are the large tests, often involving hundreds or thousands of people, that check whether a medicine works and is safe. The problem of missing evidence in clinical trials has been known about for decades, and hundreds of research papers have been published investigating publication bias in different areas of medicine. These pieces of research have routinely shown that around half of the clinical trials carried out on medicines we use today have never reported results[1]. Modern clinical trial tracker tools, which display live information on which clinical trials have reported results and which have not, show the situation has improved recently, but they also show that today at least 30% of US and EU clinical trials that are due to have reported results have not. Furthermore, research has shown that clinical trials with a ‘positive’ result (i.e. that the medicine is effective) are twice as likely to be published as those that show no effect[2], and are published more rapidly.

Selective publication of results from clinical trials puts patients at risk and means money is wasted.

A clinical trial of the heart drug lorcainide carried out in 1980 found that more of the participants given lorcainide died than of those given a placebo. These results were not published until 1993. In the more than a decade when the evidence wasn’t available, patients continued to be prescribed drugs in the same class as lorcainide. It has been estimated that 100,000 of these patients in the US alone died unnecessarily.

In response to two separate outbreaks of influenza in 2006–07 and 2012–13, the UK’s NHS spent £424 million on the drug Tamiflu. It did so because guidelines based on evidence from clinical trials supplied by Tamiflu’s manufacturer, Roche, suggested that Tamiflu reduced complications from flu and helped people recover more quickly. However, researchers at Cochrane found that 60% of the research these guidelines were based on was unpublished, and that some of the published research contradicted reports from the trials submitted to regulators. It turned out that no one had seen the complete evidence. When all the evidence was finally brought together, it was so flawed that it was impossible to judge whether Tamiflu was any more effective than placebo.

How to fix the problem of missing evidence

Government TO DO list:

  • Set up publicly accessible registers for research.

These already exist for clinical research. In the last decade the proportion of clinical trials that are registered before they begin has risen sharply. It is now rare for a clinical trial conducted in the main clinical trial centres not to be publicly registered in at least one place. This means that researchers, patient groups, investors in research and research regulators can scrutinise which clinical trials have reported results and which have not, and chase down results from unpublished trials.

The World Health Organisation, working with the clinical research community, maintains the Trial Registration Data Set, an internationally agreed 24-item list of the pieces of information that must appear in a register for a given trial to be considered fully registered.
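For readers who work with registry data, here is a minimal sketch in Python of what a record built around such a data set might look like. The field names are an illustrative subset, not the official WHO wording, and the missing_items helper is hypothetical – it simply shows how a register or tracker might flag incompletely registered trials.

```python
from dataclasses import dataclass, fields
from typing import List, Optional


@dataclass
class TrialRegistration:
    """Illustrative subset of the kind of items the WHO Trial Registration
    Data Set asks for (field names are not the official wording)."""
    primary_registry_id: Optional[str] = None   # e.g. an ISRCTN or NCT number
    registration_date: Optional[str] = None
    public_title: Optional[str] = None
    scientific_title: Optional[str] = None
    primary_sponsor: Optional[str] = None
    health_condition: Optional[str] = None
    intervention: Optional[str] = None
    study_type: Optional[str] = None
    target_sample_size: Optional[int] = None
    recruitment_status: Optional[str] = None
    primary_outcome: Optional[str] = None

    def missing_items(self) -> List[str]:
        """Return the names of items that have not yet been supplied."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]


# A trial only counts as fully registered once every required item is
# present; an incomplete record can be flagged like this.
record = TrialRegistration(public_title="Example trial", study_type="interventional")
print(record.missing_items())
```

A real register holds the full 24 items, but the same idea – a fixed schema plus a completeness check – is what lets regulators and trackers scrutinise registration at scale.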

  • Instruct ethics committees, research governance boards and public research funders to make registration of research before it begins, and reporting of the results whatever they show, a condition of approval to fund or run research.

The UK’s research regulator, the Health Research Authority, committed in 2014 to making registration of a clinical trial within 6 months of its start date a condition for granting approval to run the trial. The HRA is exploring levers to ensure results from UK clinical trials are published.

  • Where appropriate, bring in legislation to mandate registration of research and reporting of results.

In 2014 the European Union agreed a new Clinical Trials Regulation, which mandates that every clinical trial carried out in the EU or EEA must be registered before it begins and must report results onto the publicly accessible register within a year of the end of the trial (within 6 months for paediatric trials). There will be sanctions, including financial penalties, for breaching the law. This will be the law in Europe from 2020.

  • Monitor adherence to registration and reporting rules.

The US FDA Amendments Act 2007 mandates that certain US clinical trials must be registered on the federal register before they begin and must report results there within a year of the end of the trial, and it gives the FDA power to fine research sponsors who breach these rules. Despite the fact that hundreds of applicable trials are going unreported, the FDA has never issued any fines, and adherence to this law is patchy.
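To make the monitoring task concrete, below is a small, hedged sketch in Python of the kind of check a trial tracker performs on a registry export. The record fields and the flat one-year rule are simplifying assumptions for illustration; the actual applicability rules that real trackers implement are considerably more detailed.

```python
from datetime import date, timedelta

# Hypothetical registry export: each record carries a trial identifier,
# a primary completion date and the date results were posted, if any.
trials = [
    {"id": "trial-001", "completed": date(2022, 3, 1), "results_posted": date(2022, 9, 15)},
    {"id": "trial-002", "completed": date(2021, 6, 30), "results_posted": None},
    {"id": "trial-003", "completed": date(2030, 1, 10), "results_posted": None},  # not yet due
]

REPORTING_WINDOW = timedelta(days=365)  # headline one-year rule only


def is_overdue(trial, today=None):
    """Flag a trial as overdue if its reporting window has passed and no
    results have been posted. Real applicability rules are far richer."""
    today = today or date.today()
    return trial["results_posted"] is None and today > trial["completed"] + REPORTING_WINDOW


overdue = [t["id"] for t in trials if is_overdue(t)]
print(f"{len(overdue)} of {len(trials)} trials look overdue: {overdue}")
```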

Publisher/journal TO DO list:

  • Make the decision about accepting a research paper independently of whether the results are ‘positive’ or ‘negative’.
  • Editors can encourage researchers to register their research projects before they begin by making prior registration a requirement for a paper to be accepted in the journal.

In 2005 the members of the International Committee of Medical Journal Editors committed to publishing in their journals only papers from clinical trials that had been prospectively registered on a publicly accessible register. This led to a sharp increase in the proportion of clinical trials that were registered.

Funder TO DO list:

  • Make registration of research and reporting of the results, whatever the result, a condition of new grants
  • Mandate that researchers must have reported results for all previous research to be considered for new funds

Since 2016 the US federal funder of health research, the National Institutes of Health, has had the power to withhold remaining or future grant funds from a grantee for failure to submit clinical trial registration and results information.

In 2017 a further twenty-one of the world’s largest non-commercial clinical research funders signed up to a World Health Organisation initiative. This commits them to introducing funding policies that mandate registration of all funded clinical trials and reporting of results within a year of a trial’s end, and that take a researcher’s past record of registering clinical trials and reporting results into consideration when assessing an application for new funds.

  • Ensure that grantees understand that the costs, if any, of registering research and reporting results can be covered by grant funds



Universities TO DO list:

  • Universities, hospitals and research centres should appoint a named person to be responsible for research reporting. This person should put in place internal processes to ensure research carried out by employees of the institute does not go unpublished.
  • Ensure researchers are aware that non-publication of research results is increasingly viewed as research misconduct by governments, institutes and professional societies, and will therefore be viewed poorly by promotion committees and in tenure decisions.

We extend our thanks to David Tovey @DavidTovey for his work editing these blogposts.

Please tweet your ideas #ResearchTODOlist #PeerRevWk19 and #QualityInPeerReview or email us at hello@senseaboutscience.org.


How we can improve research integrity, research quality and peer review


For Peer Review Week 2019, Sense about Science has asked for a series of blog posts to begin a Research ‘TO DO’ list. What can researchers, universities, funders and governments do to improve the quality of peer review and research? This is just the start – we need your ideas too.


Imagine a world where society has the high-quality evidence it needs to make informed decisions about crime, health and education. What action can researchers, universities, funders and governments take to get us there?
 

By Ana Marusic, Editor, Journal of Global Health; Professor, School of Medicine, University of Split, Croatia. @ana_marusic. Media wiki for research integrity: @EmbassySci

Peer review is an important part of the research process as a quality check for the validity, completeness and honesty of the performed research.

Peer review is something that is rarely taught, although research shows that the best reviewers are young academics. Embedding training in responsible peer review and research in general is thus a good way to develop conscientious researchers, to prevent research misconduct and reduce waste in research.

We also need to know more about peer review, in order to understand how it works and how it leads to good decision making in research. It is surprising that the evidence base we have about such an important part of the research process is weak. We actually have little high-quality evidence of whether and how peer review improves research.

Finally, journal editors have a special responsibility for the integrity of peer review. The current evidence shows that journal editors are not as transparent about their own and their journals’ conflicts of interest as they require their authors and reviewers to be. They need to declare, regularly update and manage their own conflicts to ensure the integrity of what they publish.

1. Universities TO DO: Teach responsible research from early stages of professional development.

Just as clinical skills need to be taught early so that students learn how to apply them in practice (Glasziou et al., 2011), research integrity needs to become a part of professional training to ensure that students grow into responsible researchers.

2. Governments TO DO: Shape research integrity policies together with the research community and the public

It seems that policy makers and researchers do not have a shared understanding of what constitutes research integrity (Horbach & Halffman 2016). Researchers see integrity as a virtue, closely related to ethics, whereas policy makers see it as a norm, the opposite of research misconduct. They need to come together and establish an open dialogue about how best to increase the quality and integrity of research.

3. Funders TO DO: Open up journal and grant peer review to researchers for more methodologically rigorous studies

Research on peer review is fragmented and based mostly on small-scale studies (Grimaldo et al. 2018). There is a need for more comprehensive and methodologically rigorous research into peer review, so that we can understand it better and use it more wisely. PEERE (New Frontiers of Peer Review) is an example of trans-disciplinary and cross-sectoral collaboration in innovative peer review research.

4. Journal editors TO DO: Follow the same standards in declaring competing interests that you ask of authors and reviewers

We now have evidence that journal editors in medicine often have financial conflicts of interest (Liu et al. 2017) but do not declare them clearly to the public (Dal-Re et al. 2019). As all journals require their authors and reviewers to declare their competing interests, the same level of transparency must be expected from journal editors, so that they can take full responsibility for the integrity of the published record.

5. Industry TO DO: Engage in dialogue with the academic and research community about responsible conduct of research

As with other stakeholders, researchers in academia and in industry have different views of what constitutes research integrity (Godecharle et al. 2018). We have to be aware of the differences and specificities of different research environments and harmonize expectations about responsible research.

6. Researchers TO DO: Stay true to the principles of good research: honesty, respect, reliability and accountability

The European Code of Conduct for Research Integrity defines four principles to guide researchers in addressing challenges in research: reliability, to ensure the quality of research; honesty in doing and presenting research; respect for all involved in research; and accountability for what they do. Researchers should keep up with developments in science and with evolving expectations of the integrity and quality of their research.

We extend our thanks to David Tovey @DavidTovey for his work editing these blogposts.

Please tweet your ideas #ResearchTODOlist #PeerRevWk19 and #QualityInPeerReview or email us at hello@senseaboutscience.org.  


Research TO DO list: How can we improve the quality of peer review and research


For Peer Review Week 2019, Sense about Science has asked for a series of blog posts to begin a Research ‘TO DO’ list. What can researchers, universities, funders and governments do to improve the quality of peer review and research? This is just the start – we need your ideas too.


Imagine a world where society has the high-quality evidence it needs to make informed decisions about crime, health and education. What action can researchers, universities, funders and governments take to get us there?

How can we improve the quality of peer reviewed publications?

By Professor Gary Collins @GSCollins and Patricia Logullo @patlogullo on behalf of the EQUATOR Network @EQUATORNetwork

It has been estimated that more than 80% of articles in biomedical journals lack important information. They are published without the details that would be necessary for their findings to help researchers and health professionals improve people’s lives. Because publications are frequently incomplete or biased, money and human resources are wasted and patients can be harmed. These are the findings of key systematic reviews of studies published over the last 20 years.

Since the early 1990s, researchers and journal editors have tried to tackle this problem by creating reporting guidelines: these are tools that remind authors to provide a minimum list of information needed to ensure that:

– the article can be understood;

– the study can be replicated by another researcher;

– the results can be used by health professionals or policymakers;

– the study can be included in a systematic review (to inform guidelines for example).

Research has shown that the quality of scientific reporting has improved modestly with the increased use of reporting guidelines. However, there is still a long way to go.

Another source of avoidable harm to patients is when studies are not reported at all: non-publication of research (also called publication bias) is a known cause of research waste. Since 2005, the International Committee of Medical Journal Editors (ICMJE) has mandated prior registration of trial protocols before clinical trial findings are published. Clinical trial registration makes it easier to identify those studies that have been completed but not yet published.

The EQUATOR Network (www.equator-network.org) is a global initiative dedicated to improving the quality and transparency of health research. We conduct research on reporting, manage a collection of more than 400 reporting guidelines (which are currently being audited), and train authors and editors in how to use reporting guidelines. Our research experience allows us to draft the following “to-do list” for improving the reporting of health care research.

How to fix the problem of poor-quality reporting

Based on the EQUATOR Network’s experience of conducting and evaluating research-on-research, and of helping authors, editors and librarians, these are the activities, procedures, approaches and policies that could improve the completeness of research reporting. Some of these issues are currently being addressed by organisations and researchers; some are not yet.

Researchers (research-on-research investigators) TO DO list:

1. Updating reporting guidelines – The first reporting guideline, the CONSORT Statement, designed to guide the reporting of clinical trials, was published in 1996. Since then, numerous reporting guidelines have been developed for other study designs, including observational studies (STROBE), systematic reviews (PRISMA), diagnostic test accuracy studies (STARD), prognostic model studies (TRIPOD) and many more. CONSORT was updated in 2001 and 2010. However, many reporting guidelines have been available for a long time without being updated. Periodic updating and revision is important to ensure their ongoing credibility, reflect advances in study methodology and, hopefully, increase their use by researchers when writing up the findings of their studies.

2. Investigating poor adherence – Many reporting guidelines have been published, but evaluations have shown they are often not fully adhered to, with many key details omitted, even in reports published in high-ranking journals. We therefore need to investigate and understand why researchers do not fully follow the recommendations contained in reporting guidelines, or follow them only incompletely, and why journals allow that to happen.

3. Testing new interventions — What could help authors use reporting guidelines? What would be alternative approaches to improving reporting quality? Does training help? We should test them as interventions, in controlled and carefully planned trials.

Publishers and journal editors TO DO list:

1. Knowing reporting guidelines well – Editors working for journals that endorse or claim to enforce the use of reporting guidelines are not always sufficiently familiar with the contents of the main reporting guidelines and their corresponding checklists. This can lead to editors asking authors to fill in the wrong checklist as a mandatory step of submission. Training editors on the key reporting guidelines is important so they can judge whether the relevant guideline has been followed and the checklist completed correctly.

2. Standardise the instructions for authors – Journals’ instructions for authors vary substantially, even between journals that share the same publisher: this means they have quite different expectations of how authors should write and format their papers. This variation persists even in journals that have adopted the ICMJE (International Committee of Medical Journal Editors) recommendations. To achieve greater consistency and quality, journals should consistently endorse the key reporting guidelines (namely those listed on the EQUATOR homepage).

3. Check usage – Many journals endorse or recommend following reporting guidelines, expecting authors to adhere to them and submit a completed checklist (providing the page numbers where key information is reported in the submitted article). However, journals seldom check whether reporting guidelines were followed, or indeed whether the completed checklist is accurate. Effective solutions are needed during the submission and peer review process to identify, before acceptance of the article, key information recommended in the reporting guideline that has not been reported (a minimal sketch of one such check follows this list).

4. Increase peer reviewers’ awareness – Ideally, peer reviewers should be aware of the reporting guideline relevant to the study design of the article they are reviewing and consult it during peer review. Peer reviewers have the opportunity to help improve reporting by requesting that authors provide missing details about the study.

5. Create a ‘citizen box’ – Journals could start requiring authors to include a short paragraph in the manuscript with information directly relevant to patients and citizens in general about the importance of the study. This could make manuscripts more understandable to the general public.
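Returning to the ‘check usage’ point above, here is a hedged Python sketch of what a lightweight automated check at submission might look like. It assumes the completed checklist is uploaded as a simple CSV of items and page numbers – the file format, column names and acceptance rule are all assumptions for illustration, not a description of any existing submission system.

```python
import csv
import re


def flag_unreported_items(checklist_path):
    """Read a (hypothetical) reporting-guideline checklist CSV with
    'item' and 'page' columns and return items with no page reference."""
    flagged = []
    with open(checklist_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            page = (row.get("page") or "").strip()
            # Accept entries like "3", "4-5" or "p. 7"; flag blanks, "n/a", etc.
            if not re.search(r"\d", page):
                flagged.append(row.get("item", "").strip())
    return flagged


# Example usage (the file name is an assumption):
# missing = flag_unreported_items("consort_checklist.csv")
# print("Checklist items with no page reference:", missing)
```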

Systematic reviewers TO DO list:

1. Provide risk of bias feedback – One of the daily tasks of systematic reviewers is to evaluate the risk of bias of studies. For that, reviewers need detailed information about how a study was done and what it found. When there is “risk of bias due to poor reporting”, we don’t know what happened next: little is reported about the communication between reviewers and trialists, so we don’t know what information was requested from authors. There should also be a way to publish information that was collected through direct contact with the authors.

2. Improve reporting – While systematic reviews suffer from a lack of information in the papers they review, not all systematic reviews are themselves well reported, understandable or useful. There is therefore room for improvement in the quality of systematic review reports, from reporting a minimum set of information following the PRISMA reporting guideline and describing the included studies properly, to improving the quality of language and formatting. This is critical for the abstracts and plain language summaries that are frequently used by policymakers and patients.

Government, policymakers and funders TO DO list:

1. Get to know and require the minimum sets – Funders should mandate the use of reporting guidelines, so that policymakers can make informed decisions on whether or not to adopt a new treatment. This means they need information on harms (an item in several reporting guidelines), feasibility of implementation (providing sufficient detail on the interventions, e.g. by following the TIDieR reporting guideline) and cost effectiveness.

2. Examine protocol methods using reporting guidelines – Funders routinely evaluate applications for funding, but these do not always provide all the important details on how authors intend to conduct the research (this is where reporting guidelines such as SPIRIT, for clinical trial protocols, and PRISMA-P, for systematic review protocols, can help). If funders start to require adherence to reporting guidelines, at least for the methods section (which can then be readily evaluated during peer review), this might help them make better-informed decisions on whether to fund studies.

Patients TO DO list:

1. Involve! – Patients should be actively involved in research. This is increasingly happening in Europe, but still only in clinical areas. Patients could also be encouraged to participate in research-on-research projects. We need to know their views on what is essential or important to report in a manuscript.

2. Review! – Patients and members of the public could and should participate in the revision process for new reporting guidelines. Research is done with them and for them. So why not ask them to check the draft items of reporting guideline checklists, even when they did not take part in developing them?

We extend our thanks to David Tovey @DavidTovey for his work editing these blogposts.

Please tweet your ideas #ResearchTODOlist #PeerRevWk19 and #QualityInPeerReview or email us at hello@senseaboutscience.org.