That’s a wrap!

My reflections on Peer Review Week 2022

Wow! What a week! This year’s Peer Review Week has been truly educational, enlightening and inspiring. As a member of the steering committee, it has been wonderful to see our community coming together to celebrate the essential role of peer review in maintaining research quality.

The breadth and variety of activities on offer this year meant there was something for everyone to engage with. From blog posts to webinars, podcasts to live AMAs, we had it all. And the range of organisations and individuals who got involved is testament to the importance of peer review to the scholarly ecosystem, and shows just how effective Peer Review Week is at delivering on its aims.

This year’s theme focussed on “Research Integrity: Creating and supporting trust in research”, and in her opening blog post, Danielle Padula spoke about the numerous layers to that theme. This year’s events and activities really reflected those different strands and the many ways that peer review supports trust in research.

Some of my personal highlights

There has been so much activity and so many great events that I can’t possibly cover them all in detail. I’ve picked out just a few examples to give you a flavour of the week, and to entice you to take a deeper dive.


  • COPE’s webinar on “Practical steps for managing paper mills” really lived up to its name, with speakers Renee Hoch (PLOS) and Jigisha Patel (Jigisha Patel Research Integrity Limited) sharing tips on identifying and investigating suspect papers. We also heard from Sarah Elaine Eaton (editor-in-chief of the International Journal for Educational Integrity) about the parallels between paper mills and so-called “contract cheating” in higher education. And Joris van Rossum (STM’s Director of Research Integrity) provided an overview of the important collaborative effort that is the STM Integrity Hub.
  • PLOS hosted a panel discussion “Building Trust in Science Communication: The role of journals and journalists, pre- and post-publication” in which an expert panel, including Ivan Oransky of Retraction Watch, had a wide-ranging conversation about science journalism and publication ethics, including the public’s understanding of science.
  • Chris Graf, Research Integrity Director for Springer Nature, gave us reasons to be cheerful about research integrity and peer review in his Scholarly Kitchen article.
  • The UK’s national funding agency, UKRI, announced it is undertaking a review of peer review to inform its future processes.

It’s not too late to participate

Many of the live webinars and panel discussions were recorded so please do browse the list of activities on the Peer Review Week site, and catch up on any you missed. I know I will be doing that! You can also browse #PeerReviewWeek22 on Twitter, to see the great conversations about, and celebrations of, peer review and its role in research integrity.

See you next year!

As this year’s Peer Review Week draws to a close, the steering committee will soon be turning its attention to next year – new volunteers are always welcome, so if you would like to get involved in the committee drop us a line at

Nicola Nugent

Attribution: This post was written by Nicola Nugent, Publishing Manager, Quality & Ethics at Royal Society of Chemistry

About the author: Dr Nicola Nugent is Publishing Manager, Quality & Ethics at the Royal Society of Chemistry, where she is the strategic lead for quality and impact across journals and books. She has responsibility for the journals peer review strategy, as well as publication ethics, and inclusion & diversity in publishing. She is a member of the Peer Review Week steering panel.

Reinforcing Research Integrity with New Forms of Peer Review

Since its advent in the 18th century and eventual widespread adoption in the 1940s, peer review has been a process mainly intended to ratify the quality and validity of scholarly research work. However, new models of scientific publishing and advancing ways of practicing peer review have bolstered the dynamism of the scholarly communication system.

Persistent Criticism of Conventional Peer Review

“The peer review system is broken” is a phrase we have all heard in recent years, and it seems to have become a truism, given the current state of the system and the incessant complaints from authors that peer review is slow and delays publication. It is almost a secret affair in which authors are unaware of who is reviewing their work: perhaps an ally, or worse, someone with a competing interest. Furthermore, its potential to block original ideas cannot be denied. If authors consider the standard peer review process beyond repair, reviewers have acknowledged cracks in the system too. For instance, how are review reports compiled? What proportion of the referee reports come from which round of review? And what if the author prefers to take their chances with another journal to avoid the extra work of revising their manuscript?

There are two core issues underlying the weaknesses of conventional peer review models:

  1. It is conducted pre-publication, which involves a careful process of filtering and ratifying scholarly content, further delaying publication. This can hinder future advances in research and interrupt the dissemination of science, especially for topical matters that require immediate attention and action.
  2. The risk of reviewer bias, or even outright scams in which authors manipulate the review process by suggesting cronies as reviewers.

Do New Peer Review Models Ensure Research Integrity?

The apparent increase in cases of scientific fraud and irreproducible research has led many to declare science to be in a state of crisis; research integrity and the quality of the scientific literature have become subjects of heated debate. While self-regulation is a key concern, peer review, amongst other mechanisms, is considered an essential gatekeeper of both quality and integrity.

The fundamentals of peer review hold true across forms; however, the rise of digital scholarly communication has stimulated the development of new peer review models for handling manuscripts through the stages of review and consideration. The ever-evolving peer review process strives to maintain research integrity by introducing new models, such as the ones outlined below.

Pre-publication Peer Review Model: In the new pre-publication peer review model, articles are first submitted to a group of peers, who correspond with the author(s) and other reviewers to check that the submitted paper is scientifically sound and supports its claimed results. These papers, often called preprints, can later be published in a journal, which would follow a traditional peer review process. As preprints are reviewed by a community of peers with expertise in the research area, even though the process is less formal, reviewers help ensure that papers are not circulated at the expense of research integrity. Examples include:

Open and/or Post-publication Peer Review: Unlike blinded reviews in traditional peer review, wherein reviews are seen only by editors and authors, the new model of post-publication peer review provides an open platform encouraging public conversation about a paper. This model presents an author’s ideas online, and readers are invited to publicly post their comments and critiques. When papers are opened to public critique, the peer review report is published alongside the article, with signed reports or named peer reviewers for increased transparency. This encourages reviewers to provide constructive feedback that maintains research integrity and allows the author(s) to shape future versions of their work. Examples include:

High Volume Peer Review: Pioneered by PLOS ONE, the “mega-journal” model publishes large numbers of peer reviewed articles across different subject areas, accepting articles that are technically sound rather than selecting them for perceived importance. This model supports research integrity by assessing the fundamental validity of the contribution rather than only its novelty or impact. Examples include:

Independent Peer Review: The allocation of responsibility for research integrity to the peer review system is not new. However, there is increasing interest in unbundling what are potentially discrete publication processes and exploring the actual costs of providing these services, which remain controversial. Hence, some authors are opting for independent peer review services that are not tied to a particular journal. Some of these independent review systems are AI-assisted, helping to deliver unbiased, reliable reviews that support research integrity. Example:

Key Takeaways

Like conventional peer review, the new peer review models are intended to support the publication process. The rationale for choosing new peer review models is transparency while maintaining scientific research integrity. Advocates of the new models argue that they lead to more constructive feedback, reduce reviewer bias, and give credit to the reviewer, thereby addressing some of the important concerns raised about conventional models. Furthermore, these new peer review models could potentially reduce the opportunities for reviewers to take unfair advantage of their position, be it plagiarising the manuscript under review, unjustly delaying the publication process, or advising rejection for prejudiced reasons.

Every paper could eventually be published, cascading from journal to journal until it finds one that accepts it. The changing landscape of scholarly publishing demands advances in publishing mechanisms too, including peer review. Although the flaws and redundancies of the peer review process are acknowledged, it is imperative to have a robust and reliable system. As science advances, the need for peer reviewers will grow to help us disseminate knowledge without compromising integrity or drawing invalid conclusions. With a systematic analysis of all peer review models, especially the ones fuelled by modern technologies, we can improve research integrity and reporting with peer review.

Attribution: This post was written by Enago

How researchers can promote research integrity: Perspective from a Voice of Young Science member

I remember the reaction that meeting someone with ‘Doctor’ or ‘Professor’ before their name provoked in my grandparents. It was a kind of reverence – the word of the doctor or professor was implicitly trusted. The title alone was enough to elevate this person beyond scrutiny. They had already proven themselves as a paragon of integrity.

The truth is that researchers are, and have always been, humans: humans operating under great pressure in an increasingly competitive world; humans who make mistakes.

In this age of information proliferation, where it is increasingly difficult to get to grips with the reliability of claims, it is more important than ever that we researchers know that information we publish or promote is trustworthy. The theme of “Research Integrity: Creating and supporting trust in research” for Peer Review Week 2022 is well-chosen, for integrity is the most important asset researchers can cultivate to overcome the spread of poor quality or misleading information.

While peer review is essential in helping us to maintain high standards, the process of establishing integrity begins long before any article is drafted. It starts at the project planning stage and runs through ethics approval, data collection, and finally the analysis and reporting of results.

Lack of integrity damages trust in science

I am undertaking a PhD in collaboration with the Centre for Educational Neuroscience in London, but my background is in psychology. I am therefore all too aware of the damage that a lack of integrity can cause, both to an individual scientist’s reputation and to the field. One notorious example is the Eysenck affair, in which dozens of papers by the late psychologist Hans Eysenck have been retracted following misconduct allegations, but there are many more examples of retractions due to issues of integrity, such as lack of ethics approval or falsification of results. Apparent widespread corruption prompts articles with headlines like “Never trust a scientist”.

We must put the lens to our own practices

One unexpected thing I have learned during my doctorate is how emotional the process of conducting research can be. So much depends on our findings, especially as a researcher without an established presence. In addition, the research publishing landscape is changing under our feet. The growing use of pre-prints, for example, has great potential for promoting dissemination of findings, but the role played by pre-prints in fueling Covid-19 misinformation highlights the careful balance that must be struck between speed of publication and rigorous evaluation of claims.

To combat the spread of misleading information, we must take responsibility for the quality of our work from start to finish. Researchers primarily using quantitative methods may benefit from adopting the methods of reflexivity that are so integral to qualitative research, cultivating an awareness of the researcher’s role in driving the research and the assumptions underpinning our methods. We must put the lens to our own practices, holding ourselves accountable as well as holding others accountable through the peer review process.

Research integrity and peer review

Early career researchers (ECRs), especially those like me in the very early stages who may not have published yet, can find it intimidating to engage with peer review. After all, we are at the bottom of a long ladder of people with more extensive expertise than us. Peer review may be a new researcher’s first foray into interacting with the wider research community.

While we may feel unprepared, the task of peer review often falls to the ECR. Therefore, ECRs need to feel confident, to ensure that standards of scholarly communication remain high. Good-quality peer review begins with cultivating a solid understanding of what good-quality research looks like. This understanding can also be applied to your own research.

What can I do as an ECR?

Earlier this summer, as a member of Sense about Science’s Voice of Young Science (VoYS) network, I attended a practical workshop on Quality and Peer Review at the University of Cambridge which equipped ECRs to get involved in peer review. Here are some tips and resources for promoting research integrity which I took from that afternoon, and from my further involvement with VoYS:

  • Not just a box-ticking exercise: Ethics approval is built to benefit and protect you and your participants, not to stand in your way. Engage with the ethics process meaningfully by giving careful thought to the ethical implications of your studies and outlining them frankly in your applications.
  • Transparent from start to finish: Pre-registration, making data publicly available, and clearly communicating your analyses and results are all ways to improve the trustworthiness of your findings. Initiatives such as Sense about Science’s AllTrials campaign are increasing transparency in clinical trials in the interests of patients, but smaller-scale research projects can also reap the benefits of increased confidence by following the same practices.
  • Put your methods under the microscope: Reflexivity is important even when working with quantitative data. Our findings are the products of many small decisions made along the way, and keeping track of what decisions were made and why is important. Sense about Science produced the world’s first public guide to data science, promoting scrutiny of models and the data underpinning them. Consider where data has come from, the assumptions driving your method, and whether claims can bear the weight we put on them. These three questions can easily be generalised to most studies, and you can apply them when peer reviewing, too.
  • Responsible communication: While we all want our work to attract attention, over-hyping or presenting findings in a misleading way helps no one and damages trust. The release of pre-prints provides opportunity for scrutiny, not publicity.

News about fraud in our respective fields must not be rendered mundane. To build and maintain public trust in research, we must hold ourselves accountable, as we hold our colleagues accountable during the peer review process.

Attribution: This post was written by Astrid Bowen, PhD student at Birkbeck University of London

About the author: Astrid Bowen is a second-year PhD student at Birkbeck, University of London, and a Voice of Young Science member. Her doctoral research project is an evaluation of Project HE:RO, a holistic child-centred educational intervention for primary school children, in collaboration with the Centre for Educational Neuroscience and Evolve, a social impact company. You can find her on Twitter @aejbowen

Research Integrity: towards shared goals and decision making 

Research integrity in 2022 continues to focus on the efforts made by scholarly publishers to maintain the integrity of the published record. We continue to hold publishers and journal editors accountable when we see a publication that doesn’t meet our expectations of scholarly discourse. Yet researchers and institutions occupy the research ecosystem jointly with journal editors and publishers, so some of the responsibility to uphold research integrity may be considered shared.

As a publisher, SAGE believe that there are steps researchers can take to uphold research integrity principles more proactively and avoid post-publication disputes.

  • Pre-specify your study protocol: If you work within clinical research, it is good practice to pre-specify the methods, design, and all analyses in your study protocol. Ad-hoc statistical analyses are sometimes inevitable, but there is merit in deciding the appropriate method before data collection to avoid concerns about bias.
  • Agree on the author and contributor group, and ensure no one is left off: Publishers receive a high number of authorship disputes, which are lengthy and often complicated to resolve. It is highly desirable that the author and contributor group meet to discuss the publication of the work at an early stage of the research project.
  • Check with an ethics committee if you require approval: If you are starting research with human subjects, consult your ethics committee informally to ask whether your planned research falls within their remit. Too many researchers don’t realise their research requires approval from an institutional ethics committee or institutional review board. It is worth remembering that if authors sit on an ethics committee that is likely to evaluate their research, it is appropriate to recuse those authors from the evaluation process.
  • Consider whether a Registered Report format is appropriate for your work: Many disciplines benefit from the Registered Report publishing model, whereby you publish a Stage 1 Registered Report containing the background and methodology sections prior to commencing the research. This model encourages researchers to avoid changing their study design and methodology partway through the data collection or data analysis phases.
  • Researchers as reviewers: As subject experts, reviewers are best placed to draw attention to anomalies in study design that the editor may not pick up on. Reviewers can determine whether appropriate methods were used, whether the correct reporting guidelines have been followed, and whether there are specialist aspects of the study they are unable to comment on. Referring specific parts of a research submission to other specialist reviewers has tremendous value. These are important mechanisms by which researchers can influence the publishing process and help journals publish high quality, ethical research.
  • Add your datasets to a stable and public repository: While the tide on mandatory data sharing is changing, it is crucial to remember that the research community benefits hugely from raw data sharing, in addition to the traditional research outputs of journal articles and conference proceedings. Data sharing enables error detection, reduces the time and effort spent replicating findings, and can help train junior researchers. Furthermore, datasets that don’t get published as a research article can be added to a repository and cited like any other research output. It is noteworthy that many researchers around the world rely on openly available data to do research, perhaps due to the nature of their study or a lack of funds to undertake primary research.

This is by no means an exhaustive list, and there are many discipline specific nuances that can be added in a future blog post. 

For Peer Review Week, SAGE are highlighting ways in which authors can uphold the principles of research integrity and help us publish high quality, ethical research. Check out all of the content in the series here.

Adya Misra

Attribution: This post was written by Adya Misra, Research Integrity and Inclusion Manager at SAGE Publishing

About the Author: Adya Misra is an experienced publishing professional with expertise in publication ethics and clinical research. Adya has worked in senior editorial roles at PLOS and PeerJ bringing together subject level expertise in molecular biology, medicine and public health along with hands-on experience in publication ethics/research integrity principles. Adya has a broad interest in evidence based research, science communication and research integrity, all of which have developed via initiatives at various employers and in her previous life as an academic researcher.

Prioritizing data sharing to support reproducibility and replicability: excerpt from new Research Integrity Toolkit

Within and outside academia, research reproducibility and replicability have become hot-button issues. Examples of failed attempts to duplicate published research conclusions brought forward in recent years have sparked increased scrutiny of the data and methods underpinning scholarly reports and spotlighted the uncomfortable truth that we are amid an ongoing reproducibility and replication crisis. This concerning reality spans research disciplines and affects all stakeholders.

What’s needed to change course, and how can peer review be part of the answer? 

This excerpt from Scholastica and Research Square’s new “Research Integrity Toolkit” blog series discusses data transparency as part of the solution and steps journals can take to promote more widespread data sharing.

Reproducibility, replicability, and the role of data transparency

While reproducibility and replicability are closely linked, it’s worth acknowledging the nuances in their definitions. Reproducibility generally refers to the ability to produce the same findings as a published report using the same methods, whereas replicability refers to the ability to reach the same findings as a published report using different methods.

Non-transparent reporting is one of the primary reasons it can be so difficult to reproduce and replicate published research findings. Possibly the most essential way for journals to increase the reproducibility and replicability of the material they publish is to take proactive measures to maximize data availability and transparency. This can have tangible and far-reaching benefits, including the potential for increased citability of articles when all community members can freely assess the quality of their data and methodology. Moreover, data sharing can boost data reuse, expanding the scope for generating new insights.

Prioritizing data sharing and transparency

Establishing solid data-sharing policies in line with the needs of journal contributors and readers is a vital initial step toward promoting data transparency. These can range from asking researchers to agree to openly share their source data and methods if and when requested to requiring them to submit those resources as a standard part of journal publishing agreements.

Journals can also champion open and comprehensive sharing not just of the raw data underlying research studies but also the details of the procedures and equipment used to generate them, the data-collection methods employed, the statistical techniques applied for analysis, and the peer-review process. PLOS is a helpful case study for incorporating open methods into research findings. They provide scholars with four submission options, including Registered Reports, study protocols, and lab protocols, as well as traditional research articles.

There are also various data and methods transparency initiatives developed in recent years that journals may want to consider adopting, including:

  • FAIR Data Principles: Many publishers are beginning to implement FAIR, a set of guiding “principles for scientific data management and stewardship” first published in Scientific Data in 2016. The principles are widely promoted by GO FAIR, a “stakeholder-driven and self-governed initiative that aims to implement the FAIR data principles.” FAIR stands for making data Findable, Accessible, Interoperable, and Reusable. For further discussion of strategies to achieve this, check out Scholastica’s blog post “3 Ways scholarly journals can promote FAIR data.”
  • Open Science Badges: The Center for Open Science (COS) also features an Open Science Badge scheme journals can adopt to encourage authors to share their research data and methods. There are badges to acknowledge Registered Reports and reports with open data and/or materials. According to COS, “Implementing badges is associated with [an] increasing rate of data sharing (Kidwell et al, 2016), as seeing colleagues practice open science signals that new community norms have arrived.”
  • TOP Guidelines: Another initiative from COS to improve the robustness of research reporting is the Transparency and Openness Promotion (TOP) guidelines, which various publishers now endorse, including the American Psychological Association (APA). The TOP guidelines comprise eight transparent reporting standard areas (e.g., citation standards and data transparency) with three possible compliance levels increasing in stringency. Accompanying the TOP guidelines is TOP Factor, a metric that reports the steps a journal is taking to implement open science practices. COS aims for TOP Factor to be considered alongside other publication reputation and impact indicators, such as the Journal Impact Factor (JIF).
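
Principles like FAIR become most useful when they are machine-actionable. As a purely illustrative sketch (the field names and the `fair_gaps` helper are assumptions for this example, not any repository's or publisher's actual schema), a journal workflow could flag missing FAIR-relevant metadata on a dataset record like this:

```python
# Hypothetical FAIR metadata check. Field names map loosely onto the four
# FAIR principles; real repositories use richer schemas (e.g. DataCite).
REQUIRED_FIELDS = {
    "identifier": "Findable: a globally unique, persistent identifier (e.g. a DOI)",
    "title": "Findable: rich, descriptive metadata",
    "access_url": "Accessible: retrievable via a standard protocol",
    "format": "Interoperable: a shared, open data format",
    "license": "Reusable: a clear, machine-readable usage licence",
}

def fair_gaps(record: dict) -> list[str]:
    """Return a human-readable note for each missing or empty metadata field."""
    return [
        note for field, note in REQUIRED_FIELDS.items()
        if not record.get(field)
    ]

record = {
    "identifier": "10.5281/zenodo.0000000",  # placeholder DOI
    "title": "Survey responses, primary school cohort",
    "access_url": "https://example.org/data/cohort.csv",
    "format": "text/csv",
    # "license" deliberately omitted to show a flagged gap
}

for gap in fair_gaps(record):
    print("Missing ->", gap)
```

A check along these lines could run at submission time, prompting authors to supply the missing metadata before peer review begins rather than after publication.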

Putting it all together

There are myriad reasons why it’s critical for research to be reproducible and replicable, particularly for publishers, editors, authors, and readers of scholarly journals. Of course, the most fundamental is to avoid the dissemination and perpetuation of misleading results produced either through unintentional errors or, in some cases, by intentional dishonesty. Others include increasing confidence in research, encouraging collaboration, optimizing time and resources, facilitating diversity and inclusion, and accelerating discovery. You can read a collection of articles on these topics in the Harvard Data Science Review (HDSR) Reproducibility and Replicability special issue.

The jury is out on how well the community has tackled these issues up to this point. But it’s clear there’s still work ahead to ensure the integrity of research data and methods and foster greater trust in scholarship.

As noted, this blog post is an excerpt from Scholastica and Research Square’s “Research Integrity Toolkit” series in honor of Peer Review Week. You can read the full post on steps journals can take to promote reproducibility and replicability on Scholastica’s blog here. We’ve also released a post on associated best practices for submitting authors on the Research Square blog here.

Attribution: This post was written by Victoria Kitchener, freelance scholarly publishing writer, and edited by Danielle Padula, head of marketing and community development at Scholastica, to create this excerpt

About the authors:

Victoria Kitchener

Victoria Kitchener is a freelance science editor and writer who frequently contributes to the Scholastica blog. Prior to starting a freelance career, she spent 20 years working as a science editor and writer for Springer Nature Group.

Danielle Padula

Danielle heads up marketing and community development at Scholastica, a scholarly publishing technology provider with peer review, production, and OA journal hosting solutions. Prior to joining Scholastica, Danielle worked in academic book publishing. She enjoys creating resources to help publishers navigate the evolving landscape and is excited to be co-chairing PRW 2022.

Repositioning peer review to support open science, reproducibility and research integrity

Peer Review Week is always a good time to step back and take stock of what’s been happening in the fast paced, ever changing industry that is academic publishing. This year’s theme, the importance of peer review in supporting research integrity, is particularly interesting because it raises questions about the role of peer review in the context of open science and reproducibility, which are both inextricably linked to research integrity.

Peer review has had a rocky reputation in recent decades. There was an intense wave of innovation in peer review at the beginning of the century as concerns about the slowness of the process, potential for bias and the lack of accountability were raised. Open peer review was just one innovation aimed at increasing transparency and accountability for editors and peer reviewers. 

In the years that followed, there were attempts to make peer review more efficient and equitable. Peer review innovations were mostly focused on journal peer review because, at the time, peer review was intended to help journal editors make decisions about what to publish in their journals. Despite these innovations, peer review has continued largely in its traditional form, although there are enough variations in the process to need a standard taxonomy for different types of peer review.

But peer review is changing in a fundamental way. With the increasing popularity of preprints in recent years, driven further by the Covid-19 pandemic, peer review has taken on a wider remit. More than ever, peer review is seen as signifying the trustworthiness of different types of research publications rather than as a tool to inform editorial decisions on what to publish in journals.

Peer review reports can now have a heterogeneous readership rather than just journal editors. Peer review is undergoing another wave of innovation, in many cases involving direct communication between author and reviewer, and reports can be written by and for the specialist and layperson alike.

While expectations of peer review have changed, research integrity as a field of specialisation in its own right has also grown. Again, the Covid-19 pandemic along with the phenomenon of paper mills (a form of sophisticated research misconduct) has made research integrity a ‘hot topic’. Concerns about research integrity have driven calls to change a research culture that motivates questionable research practices and misconduct.

Despite this, there has been little joined-up thinking about the role of peer review in maintaining research integrity. There is no obvious direct role for peer review as a mechanism for preventing research misconduct or promoting good research practice; it is not designed to do this.

Can peer review be redesigned to support research integrity? The key is to remember that peer review is a collation of opinions and is therefore subjective. It can never be the last word on validity. Science is objectively validated by the slow process of repeating the research and showing the same findings via reproducibility and replicability. Peer review can certainly play a role in supporting this validation process.

The open science movement promotes behaviours that enable reproducibility and replicability. Many of these behaviours, such as pre-registering intended research and sharing data, are encapsulated in journal editorial policies, but they are rarely the focus of peer review.

Traditionally, peer reviewers are asked to comment on whether a piece of research is sound, whether the methods and analysis are appropriate for the research question, and whether the conclusions are supported by the data. These are good questions, but peer review should also be positioned to assess how far the reporting of research allows for its objective validation in the future.

Such a repositioning might ask peer reviewers the following questions: 

  • Is it clear whether the research was registered before it began? 
  • Are the raw data shared? 
  • Are reporting guidelines followed? 
  • Are details provided on the sources of materials used? 

The above questions should be asked not just of research published in journals that have policies on reproducibility, but of all types of publication as a standard part of peer review, regardless of the editorial policies of the platform on which they are published or the model used for the peer review process. This, coupled with open peer review, would incentivise the transparent and collaborative practices that ultimately help to maintain research integrity.

Of course, all of this would require widespread support for open science practices and the standardisation of peer review across the industry. That may not seem feasible, but consider: if you read a piece of research that was not written in a way that allowed for its objective validation, would you trust it?

Jigisha Patel

Attribution: This post was written by Jigisha Patel, Founder of Jigisha Patel Research Integrity Limited

About the author: Formerly a medical doctor, clinical researcher, and medical journal editor, Jigisha is an independent research integrity specialist and founder of Jigisha Patel Research Integrity Limited. Previously, she led the first team dedicated to maintaining research integrity at BioMed Central and was Head of Programme Management for the Springer Nature Research Integrity Group. She has extensive experience in a wide range of research integrity issues in publishing, including the investigation and management of complex cases such as paper mills. She uses her experience to help journals and publishers manage cases of research misconduct and develop policies and processes to maintain research integrity. She also provides training for journal editors and publishing staff, including a CPD-certified course on research integrity strategy. She is an independently elected member of COPE and Senior Associate Affiliate with Maverick Publishing Specialists.

Unpacking the Many Layers of Research Integrity: PRW 2022 Blog Series Intro

Peer Review Week 2022 is finally here! After months of planning and preparations, we members of the steering committee are excited to kick off a week full of information-packed activities and events on this year’s theme, “Research Integrity: Creating and supporting trust in research,” which was chosen by the scholarly community via an open poll.

A key question raised during PRW planning meetings was, “How do we differentiate this year’s theme from the 2020 theme, ‘Trust in Peer Review’?” What followed was a fruitful and generative discussion about research integrity policies and practices as necessary means to reach the end goal of fostering trust in peer review and, ultimately, the scholarly record both within and outside academia.

To briefly summarize the PRW steering committee’s views on the aims and scope of this year’s theme, as discussed in The Scholarly Kitchen announcement: research integrity encompasses conducting, reviewing, and disseminating research in a transparent, rigorous, ethical, and verifiable manner. We want to emphasize that this is a broad working definition of research integrity. The goal of PRW is to invite community input and action steps surrounding this topic to continue fleshing out the many relevant parts and pieces and building upon them.

To help unpack the numerous layers of this year’s theme, we’re launching a series of blog posts about the role of peer review in fostering research integrity before submission, during peer review, and post-publication.

Here’s some background on the relationship between research integrity and peer review and what to expect from our forthcoming blog series.

A bit of background: research integrity and peer review

Ensuring research integrity is, of course, among the primary aims of peer review. Today, myriad peer review standards and best practices are available both generally and within specific disciplinary areas thanks to the tireless efforts of organizations like the Committee on Publication Ethics (COPE), International Committee of Medical Journal Editors (ICMJE), and the EQUATOR Network (Enhancing the QUAlity and Transparency Of health Research).

However, as all of the organizations above would likely agree, it’s necessary to recognize that formalized peer review practices as we know them today are relatively new in the grand scheme of scholarly communication. In fact, despite the oldest research journal, Philosophical Transactions, dating back to the 17th century (first published by the Royal Society in 1665), the formalization of peer review as a process didn’t occur until the 1940s.


Since then, peer review has spread throughout academia to introduce a more thorough and (aspirationally) objective process for vetting scholarship. But, as this comic from Hilda Bastian demonstrates, it’s an imperfect process, and there are still many questions about the extent to which peer review can and should ensure research integrity. Among them are whether biases on the parts of all parties involved — publishers, editors, authors, reviewers, and funders — can be curbed, from biases against negative results to institutional or personal ones. In recent years, evidence of research spin, as well as of reproducibility and replicability challenges, has also been mounting.

The theme for PRW 2022, “Research Integrity: Creating and supporting trust in research,” invites all scholarly communication stakeholders to weigh in on current guidelines and initiatives to promote research integrity at all stages of peer review and the way forward.

Topics we’ll touch on in this blog series

We hope this PRW blog series will help spur meaningful conversations and action steps surrounding this year’s theme. We have a great lineup with contributions from PRW steering committee members representing a range of perspectives, from publishers to authors to service providers.

Here’s a quick preview of topics we plan to cover:

  • The role of peer review in spotting and addressing research misconduct
  • The relationship between research reproducibility/replicability and research integrity
  • The role of scholars in promoting research integrity and how Early Career Researchers (ECRs) can get involved
  • New forms of peer review to support research integrity

Tackling tough questions and embracing opportunities during PRW

We acknowledge that our forthcoming blog series will only skim the surface of the topic of research integrity and peer review. There are many tough questions to address, all with the potential to open up new realms of possibilities to reinforce and further the role of peer review in ensuring research integrity. We welcome and look forward to your input and ideas!

We hope you enjoy the blog series and invite you to follow the Peer Review Week Twitter account @PeerRevWeek and this year’s hashtags #PeerReviewWeek22 and #IntegrityAndPeerReview for updates on new posts and all the activities happening this PRW.

Danielle Padula

Attribution: This post was written by Danielle Padula, Head of Marketing and Community Development at Scholastica

About the author: Danielle Padula heads up marketing and community development at Scholastica, a scholarly publishing technology provider with peer review, production, and open access journal hosting solutions. Prior to joining Scholastica, Danielle worked in academic book publishing. She enjoys creating resources to help publishers navigate the evolving research landscape and is excited to be co-chairing Peer Review Week for 2022.