The peer review system is broken. We asked academics how to fix it

Kelly-Ann Allen, Associate Professor, School of Educational Psychology and Counselling, Faculty of Education, Monash University

Jonathan Reardon, Durham University

Joseph Crawford, Senior Lecturer, Educational Innovation, University of Tasmania

Lucas Walsh, Professor and Director of the Centre for Youth Policy and Education Practice, Monash University

Disclosure statement

Kelly-Ann Allen is the Editor-in-Chief of the Educational and Developmental Psychologist and Co-Editor-in-Chief of the Journal of Belonging and Human Connection. She is an Editorial Board member of Educational Psychology Review, Journal of Happiness and Health (JOHAH), and Journal of School and Educational Psychology (JOSEP).

Joseph Crawford is Editor in Chief of the Journal of University Teaching and Learning Practice.

Jonathan Reardon and Lucas Walsh do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Monash University provides funding as a founding partner of The Conversation AU.

Durham University provides funding as a founding partner of The Conversation UK.

University of Tasmania provides funding as a member of The Conversation AU.


The peer review process is a cornerstone of modern scholarship. Before new work is published in an academic journal, experts scrutinise the evidence, research and arguments to make sure they stack up.

However, many authors, reviewers and editors have problems with the way the modern peer review system works. It can be slow, opaque and cliquey, and it runs on volunteer labour from already overworked academics.

Read more: Explainer: what is peer review?

Last month, one of us (Kelly-Ann Allen) expressed her frustration on Twitter at the difficulty of finding peer reviewers. Hundreds of replies later, we had a huge crowd-sourced collection of criticisms of peer review and suggestions for how to make it better.

The suggestions for journals, publishers and universities show there is plenty to be done to make peer review more accountable, fair and inclusive. We have summarised our full findings below.

Three challenges of peer review

We see three main challenges facing the peer review system.

First, peer review can be exploitative.

Many of the companies that publish academic journals make a profit from subscriptions and sales. However, the authors, editors and peer reviewers generally give their time and effort on a voluntary basis, effectively performing free labour.

And while peer review is often seen as a collective enterprise of the academic community, in practice a small fraction of researchers do most of the work. One study of biomedical journals found that, in 2015, just 20% of researchers performed up to 94% of the peer reviewing.

Peer review can be a ‘black box’

The second challenge is a lack of transparency in the peer review process.

Peer review is generally carried out anonymously: researchers don’t know who is reviewing their work, and reviewers don’t know whose work they are reviewing. This provides space for honesty, but can also make the process less open and accountable.

The opacity may also suppress discussion, protect biases, and decrease the quality of the reviews.

Peer review can be slow

The final challenge is the speed of peer review.

When a researcher submits a paper to a journal, and the paper makes it past initial rejection, they may face a long wait for review and eventual publication. It is not uncommon for research to be published a year or more after submission.

This delay is bad for everyone. For policymakers, leaders and the public, it means they may be making decisions based on outdated scientific evidence. For scholars, delays can stall their careers as they wait for the publications they need to get promotions or tenure.

Read more: Journal papers, grants, jobs ... as rejections pile up, it's not enough to tell academics to 'suck it up'

Scholars suggest the delays are typically caused by a shortage of reviewers. Many academics report that heavy workloads discourage them from participating in peer review, and this problem has worsened since the onset of the COVID-19 pandemic.

It has also been found that many journals rely heavily on US and European reviewers, limiting the size and diversity of the pool of reviewers.

Can we fix peer review?

So, what can be done? Most of the constructive suggestions from the large Twitter conversation mentioned earlier fell into three categories.

First, many suggested there should be better incentives for conducting peer reviews.

This might include publishers paying reviewers (the journals of the American Economic Association already do this) or giving some profits to research departments. Journals could also offer reviewers free subscriptions, publication fee vouchers, or fast-track reviews.

However, we should recognise that journals offering incentives might create new problems.

Read more: Explainer: the ins and outs of peer review

Another suggestion is that universities could do better in acknowledging peer review as part of the academic workload, and perhaps reward outstanding contributors to peer review.

Some Twitter commentators argued tenured scholars should review a certain number of articles each year. Others thought more should be done to support non-profit journals, given a recent study found some 140 journals in Australia alone ceased publishing between 2011 and 2021.

Most respondents agreed that conflicts of interest should be avoided. Some suggested databases of experts would make it easier to find relevant reviewers.

Use more inclusive peer review recruitment strategies

Many respondents also suggested journals can improve how they recruit reviewers, and what work they distribute. Expert reviewers could be selected on the basis of method or content expertise, and asked to focus on that element rather than both.

Respondents also argued journals should do more to tailor their invitations to target the most relevant experts, with a simpler process to accept or reject the offer.

Others felt that more non-tenured scholars, PhD researchers, people working in related industries, and retired experts should be recruited. More peer review training for graduate students and increased representation for women and underrepresented minorities would be a good start.

Rethink double-blind peer review

Some respondents pointed to a growing movement towards more open peer review processes, which may create a more human and transparent approach to reviewing. For example, Royal Society Open Science publishes all decisions, review letters, and voluntary identification of peer reviewers.

Another suggestion to speed up the publishing process was to give higher priority to time-sensitive research.

What can be done?

The overall message from the enormous response to a single tweet is that there is a need for systemic changes within the peer review process.

There is no shortage of ideas for how to improve the process for the benefit of scholars and the broader public. However, it will be up to journals, publishers and universities to put them into practice and create a more accountable, fair and inclusive system.

The authors would like to thank Emily Rainsford, David V. Smith and Yumin Lu for their contribution to the original article Towards improving peer review: Crowd-sourced insights from Twitter.


Understanding Peer Review in Science


Peer review is an essential element of the scientific publishing process that helps ensure that research articles are evaluated, critiqued, and improved before release into the academic community. Take a look at the significance of peer review in scientific publications, the typical steps of the process, and how to approach peer review if you are asked to assess a manuscript.

What Is Peer Review?

Peer review is the evaluation of work by peers, who are people with comparable experience and competency. Peers assess each other’s work in educational settings, in professional settings, and in the publishing world. The goal of peer review is improving quality, defining and maintaining standards, and helping people learn from one another.

In the context of scientific publication, peer review helps editors determine which submissions merit publication and improves the quality of manuscripts prior to their final release.

Types of Peer Review for Manuscripts

There are three main types of peer review:

  • Single-blind review: The reviewers know the identities of the authors, but the authors do not know the identities of the reviewers.
  • Double-blind review: Both the authors and reviewers remain anonymous to each other.
  • Open peer review: The identities of both the authors and reviewers are disclosed, promoting transparency and collaboration.

Each method has advantages and disadvantages. Anonymous reviews reduce bias but limit collaboration, while open reviews are more transparent but may increase bias.

Key Elements of Peer Review

Proper selection of a peer group improves the outcome of the process:

  • Expertise: Reviewers should possess adequate knowledge and experience in the relevant field to provide constructive feedback.
  • Objectivity: Reviewers assess the manuscript impartially and without personal bias.
  • Confidentiality: The peer review process maintains confidentiality to protect intellectual property and encourage honest feedback.
  • Timeliness: Reviewers provide feedback within a reasonable timeframe to ensure timely publication.

Steps of the Peer Review Process

The typical peer review process for scientific publications involves the following steps:

  • Submission : Authors submit their manuscript to a journal that aligns with their research topic.
  • Editorial assessment : The journal editor examines the manuscript and determines whether or not it is suitable for publication. If it is not, the manuscript is rejected.
  • Peer review : If it is suitable, the editor sends the article to peer reviewers who are experts in the relevant field.
  • Reviewer feedback : Reviewers provide feedback, critique, and suggestions for improvement.
  • Revision and resubmission : Authors address the feedback and make necessary revisions before resubmitting the manuscript.
  • Final decision : The editor makes a final decision on whether to accept or reject the manuscript based on the revised version and reviewer comments.
  • Publication : If accepted, the manuscript undergoes copyediting and formatting before being published in the journal.

Pros and Cons

While the goal of peer review is improving the quality of published research, the process isn’t without its drawbacks.

Pros:

  • Quality assurance: Peer review helps ensure the quality and reliability of published research.
  • Error detection: The process identifies errors and flaws that the authors may have overlooked.
  • Credibility: The scientific community generally considers peer-reviewed articles to be more credible.
  • Professional development: Reviewers can learn from the work of others and enhance their own knowledge and understanding.

Cons:

  • Time-consuming: The peer review process can be lengthy, delaying the publication of potentially valuable research.
  • Bias: Personal biases of reviewers can affect their evaluation of a manuscript.
  • Inconsistency: Different reviewers may provide conflicting feedback, making it challenging for authors to address all concerns.
  • Limited effectiveness: Peer review does not always detect significant errors or misconduct.
  • Poaching: A reviewer may take an idea from a submission and publish it before the authors of the original research.

Steps for Conducting Peer Review of an Article

Generally, an editor provides guidance when you are asked to provide peer review of a manuscript. Here are typical steps of the process.

  • Accept the right assignment: Accept invitations to review articles that align with your area of expertise to ensure you can provide well-informed feedback.
  • Manage your time: Allocate sufficient time to thoroughly read and evaluate the manuscript, while adhering to the journal’s deadline for providing feedback.
  • Read the manuscript multiple times: First, read the manuscript for an overall understanding of the research. Then, read it more closely to assess the details, methodology, results, and conclusions.
  • Evaluate the structure and organization: Check if the manuscript follows the journal’s guidelines and is structured logically, with clear headings, subheadings, and a coherent flow of information.
  • Assess the quality of the research: Evaluate the research question, study design, methodology, data collection, analysis, and interpretation. Consider whether the methods are appropriate, the results are valid, and the conclusions are supported by the data.
  • Examine the originality and relevance: Determine if the research offers new insights, builds on existing knowledge, and is relevant to the field.
  • Check for clarity and consistency: Review the manuscript for clarity of writing, consistent terminology, and proper formatting of figures, tables, and references.
  • Identify ethical issues: Look for potential ethical concerns, such as plagiarism, data fabrication, or conflicts of interest.
  • Provide constructive feedback: Offer specific, actionable, and objective suggestions for improvement, highlighting both the strengths and weaknesses of the manuscript. Don’t be mean.
  • Organize your review: Structure your review with an overview of your evaluation, followed by detailed comments and suggestions organized by section (e.g., introduction, methods, results, discussion, and conclusion).
  • Be professional and respectful: Maintain a respectful tone in your feedback, avoiding personal criticism or derogatory language.
  • Proofread your review: Before submitting your review, proofread it for typos, grammar, and clarity.
References

  • Couzin-Frankel, J. (2013). Biomedical publishing: Secretive and subjective, peer review proves resistant to study. Science, 341(6152), 1331. doi: 10.1126/science.341.6152.1331
  • Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64(1), 2–17. doi: 10.1002/asi.22784
  • Slavov, N. (2015). Making the most of peer review. eLife, 4, e12708. doi: 10.7554/eLife.12708
  • Spier, R. (2002). The history of the peer-review process. Trends in Biotechnology, 20(8), 357–358. doi: 10.1016/S0167-7799(02)01985-6
  • Squazzoni, F., Brezis, E., & Marušić, A. (2017). Scientometrics of peer review. Scientometrics, 113(1), 501–502. doi: 10.1007/s11192-017-2518-4

To fix peer review, break it into stages

Olavo B. Amaral is a metaresearcher at the Federal University of Rio de Janeiro, where he coordinates the Brazilian Reproducibility Initiative.

Peer review is not the best way to detect errors and problematic data. Expert reviewers are few, their tasks are myriad and it’s not feasible for them to check data thoroughly for every article, especially when the data are not shared. Scandals such as the 2020 retractions of high-profile COVID-19 papers by researchers at US company Surgisphere show how easily papers with unverified results can slip through the cracks.

As a metaresearcher studying peer review, I am struck by how vague the concept is. It conflates the evaluation of rigour with the curation of what deserves space in a journal. Whereas the first is key to keeping the scientific record straight, the second was shaped in an era when printed space was limited.

For most papers, checking whether the data are valid is more important than evaluating whether their claims are warranted. It is the data, not the conclusions, that will become the evidence base for a given subject. Undetected errors or fabricated results will permanently damage the scientific record.


I do not dispute that expert review can be crucial for many things, but not all published research needs to be reviewed by an expert. Much of the low-hanging fruit of quality control doesn’t need a specialist — or even a human. Only after confirming that the data are consistent is it worthwhile to evaluate a paper’s conclusions.

Breaking down peer review into modular steps of quality control could improve published science while making review less burdensome. Every article could receive basic checks — for example, of whether all data are available, calculations hold up and analyses are reproducible. But peer review by domain specialists would be reserved for articles that raise interest in the community or are selected by journals. Experts might be the best people to assess a paper’s conclusions, but it is unrealistic for every article to get their attention. More efficient, widely applicable solutions for quality control would allow reviewers to use their time more effectively, on papers whose data are sound.

Some basic verifications can be performed efficiently by algorithms. In 2015, researchers in the Netherlands developed statcheck, an open-source software package that checks whether P values quoted in psychology articles match test statistics. SciScore — a program that checks biomedical manuscripts for criteria of rigour such as randomization, experiment blinding and cell-line authentication — has screened thousands of COVID-19 preprints. And tests such as GRIM, SPRITE and the Carlisle method have been used to flag numerically inconsistent results in the clinical literature.
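To make this concrete, here is a minimal Python sketch of a GRIM-style consistency check. It illustrates the idea rather than reproducing any of the published tools, and the function name and example values are our own: given a sample size and a mean reported to a fixed number of decimal places, it asks whether any sum of integer-valued responses could actually produce that mean.

```python
# A minimal sketch of a GRIM-style consistency check (an illustration of the
# idea, not any of the published tools). GRIM asks: given a sample size n and
# a mean reported to a fixed number of decimal places, can ANY sum of
# integer-valued responses actually produce that mean?

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if some integer total t satisfies round(t / n, decimals) == reported_mean."""
    target = round(reported_mean, decimals)
    approx_total = reported_mean * n
    # Only integer totals adjacent to mean * n can round back to the mean.
    for t in (int(approx_total) - 1, int(approx_total), int(approx_total) + 1):
        if round(t / n, decimals) == target:
            return True
    return False

# A mean of 5.19 from n = 28 integer responses is impossible: no integer
# total divided by 28 rounds to 5.19 (145/28 ≈ 5.179 -> 5.18; 146/28 -> 5.21).
print(grim_consistent(5.19, 28))  # False -> flag for checking
print(grim_consistent(5.18, 28))  # True  (145 / 28 rounds to 5.18)
```

Checks like this cost a fraction of a second per manuscript, which is why they scale in a way that expert reading does not.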

Decentralizing peer review is not a new idea, but its implementation is still hampered by a lack of data standardization. The accuracy and efficiency of automated methods are limited when they are run on unstructured text or tables. Statcheck, for instance, can do its job only because the American Psychological Association has a widely used convention for describing statistical results.
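As a toy illustration of why that convention matters (this is not statcheck itself; it assumes scipy is available and handles only t-tests reported with "p ="), a single regular expression can pull every APA-formatted t-test out of a manuscript and recompute its two-tailed P value from the statistic:

```python
# A toy statcheck-like consistency check (not the real tool): parse APA-style
# strings such as "t(27) = 2.10, p = .030", recompute the two-tailed p value
# from the test statistic, and flag reports where the two disagree.
import re

from scipy import stats

APA_T = re.compile(r"t\((\d+)\)\s*=\s*(-?\d+(?:\.\d+)?),\s*p\s*=\s*(\.\d+)")

def check_t_reports(text: str, tol: float = 0.005):
    """Return (reported string, recomputed p) for inconsistent t-test reports."""
    flagged = []
    for df, t, p in APA_T.findall(text):
        recomputed = 2 * stats.t.sf(abs(float(t)), int(df))  # two-tailed p value
        if abs(recomputed - float(p)) > tol:
            flagged.append((f"t({df}) = {t}, p = {p}", round(recomputed, 3)))
    return flagged

# The quoted p (.030) does not match the statistic: the true value is ~.045.
print(check_t_reports("The effect was significant, t(27) = 2.10, p = .030."))
```

Without an agreed reporting format, the same check would require fragile natural-language parsing, which is exactly the standardization problem the article describes.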


This kind of standardization, currently the exception rather than the rule, can be applied more broadly, to data, code and metadata. When these are shared in systematic formats, checking them becomes less labour-intensive than reviewing articles. Experts are estimated to spend more than 100 million hours per year on peer review; if they spare some of that time to agree on how to structure data in their fields, they are likely to have a greater impact on quality control.

Still, checking data cannot guarantee that they were collected as reported, or that they represent an unbiased record of what was observed. For this to happen, certification must move upstream, from results to data acquisition — rather than scrutinizing manuscripts, quality control should target laboratories and facilities, as proposed by frameworks such as Enhancing Quality in Preclinical Data (EQIPD). This can increase transparency and trust in results, and make room for errors to be prevented rather than detected too late.

Most process-level quality control still lies behind closed doors, but some communities have taken steps to change this. Various consortia in genomics, for example, set collective standards for data collection and metadata. Particle physics has a long history of blind analysis of data by independent teams. And reproducibility hubs such as the QUEST Center at the Berlin Institute of Health at Charité have been set up to oversee processes across multiple research groups at their institutions.

These systematic efforts will not become integral to the scientific process unless institutions and funding agencies grant them the status currently enjoyed by journal peer review. If these organizations reward researchers for having specific aspects of their results certified, they could create a market for such modular services to thrive.

In the long run, this could make published science more trustworthy, and could prove more viable than the current system, in which peer review drains hundreds of millions of hours from researchers but delivers little. To maximize benefit, quality control should be aimed at data and processes before moving on to words and theory. Discerning which data are valid is fundamental to science, and should be approached through systematic methods rather than expert opinion.

Nature 611, 637 (2022)

doi: https://doi.org/10.1038/d41586-022-03791-5


Competing Interests

The author declares no competing interests.


Problems with peer review

Mark Henderson, science editor, The Times, London
mark.henderson{at}thetimes.co.uk

Several recent high profile cases have raised questions about the effectiveness of peer review in ensuring the quality of published research. Mark Henderson investigates

Mention peer review to any researcher and the chances are that he or she will soon start to grumble. Although the system by which research papers and grant applications are vetted is often described as science’s “gold standard,” it has always garnered mixed reviews from academics at its sharp end.

Most researchers have a story about a beautiful study that has been unreasonably rejected. An editor might have turned it down summarily without review. A referee might have demanded a futile and time consuming extra analysis. Or a rival might have sat on a manuscript for months, consigning it to limbo under the cloak of anonymity.

Barely less common are mordant criticisms of high profile papers published by high impact journals. How could Stanley Ewen and Arpad Pusztai’s 1990s research on genetically modified food have been passed by the Lancet? 1 How could studies that describe mere technical advances be deemed worthy of Cell or Nature? And how could Science have failed to rumble the fraudulent cloning work of Hwang Woo-suk? 2

A bubbling undercurrent of resentment and jealousy, of course, afflicts every fiercely competitive professional field. But in recent weeks, three incidents have brought concern about peer review to a head.

Firstly, leaked emails showed that Phil Jones, former head of the Climatic Research Unit at the University of East Anglia, had pledged to exclude papers from the Intergovernmental Panel on Climate Change (IPCC) report “even if we have to redefine what the peer-reviewed literature is.” Then came an even more damaging realisation. The panel’s last report claimed that Himalayan glaciers were likely to melt entirely by 2035—an egregious error that should have been picked up by any specialist.

Soon afterwards, the Lancet finally retracted perhaps the most controversial medical paper of the past 15 years: Andrew Wakefield’s 1998 case series that started the MMR vaccine scare. 3 Widely criticised as poor science that was unworthy of a major medical journal, it was partially retracted in 2004 because of an undeclared conflict of interest. Other more substantial concerns raised at the time were considered by the Lancet and Wakefield’s institution to be unproved, until Wakefield was found guilty of professional misconduct by the General Medical Council in January.

The following week came allegations from stem cell researchers that peer review was failing their field. Austin Smith, of the University of Cambridge, and Robin Lovell-Badge, of the National Institute for Medical Research, told the BBC that a “clique” of influential reviewers was keeping competitors’ papers out of the best journals, while supporting publication of inferior work. 4

Mistakes will happen

The charges against the IPCC, the Lancet , and the stem cell journals reflect a well rehearsed criticism of peer review: that it fails to root out error. Yet even the most rigorous refereeing procedures cannot prevent every inaccuracy, and they can achieve still less when conflicts go undeclared or outright fraud is involved. The best and most conscientious reviewers cannot spot every slip.

Though the IPCC’s error was indefensibly glaring, many of its scientists have reasonably pointed out that it would be remarkable for a 3,000-page report to be completely error-free. As Jürgen Willebrand, an IPCC lead author, told Nature: “IPCC reports are written by humans. I have no doubt that similar errors could be found in earlier IPCC reports, but nobody has bothered to look in detail.” 5 No mistake in the IPCC’s work has yet been identified that alters its fundamental conclusions. And for all Professor Jones’s bluster, the papers to which he objected were in fact considered by the appropriate working group.

In the Lancet case, Evan Harris, the Liberal Democrat member of parliament, led calls for a retraction six years ago, when Wakefield’s undeclared legal aid funding was first revealed. At the time, however, the journal ruled that no misrepresentations in the paper itself had been proved.

The Lancet, it might be argued, ought not to have published a paper with such significant implications for public health without checking these details. Yet when a researcher is not candid, it can be difficult for even the most assiduous reviewer or editor to find flaws. Submitted data must generally be taken on trust, though its interpretation must always be checked.

Genuinely bad behaviour is more usually identified after publication, when others replicate experiments or pick over the published research in detail. Hwang’s work, for example, fell under scrutiny when rivals failed to repeat his techniques, ethical doubts emerged over his egg collection procedures, and a former colleague turned whistleblower. This invited fresh analysis that revealed much of his data had been faked. Short of insisting that experiments are independently repeated before acceptance (as Nature did with a monkey cloning paper after the Hwang affair 6 7), peer review can only do so much to detect fraud.

Reviewing the reviewers

Of the three recent incidents, the criticisms by Professors Smith and Lovell-Badge are most challenging. Their concern is that in the eyes of editors and reviewers, some scientists are more equal than others. Some papers thus do not get the scrutiny they need, while others are unfairly rejected.

“On the one hand, papers are held up by referees asking for experiments that no reasonable person would demand,” Professor Smith said. “On the other, people are making important and extraordinary claims without going the extra mile and providing the critical bit of data. Most people in the field have had one or more bad experiences.”

Some editors, they say, are reluctant to upset favourite scientists by overturning their reviews, for fear that they will stop submitting their work to that journal. That can give them excessive and unaccountable power. Anonymity also means that some referees do not declare their interests and review the work of a fierce rival or a collaborator. “If I receive a paper which someone in my lab has worked on, or even a good friend, I will say there is a conflict of interest and decline to review,” Professor Lovell-Badge said. “I’m sure not everybody does that.”

Philip Campbell, the editor of Nature, rejects the charge. His journal uses more than 400 referees in stem cell research alone, and he cites cases where editors have published a paper they think is important despite three unfavourable reviews. “We try to avoid all situations where referees abuse their positions,” he said. “Our editors keep in good touch with the research community, they never get dependent on a small group. I’m in no way denying that there are concerns out there, but it isn’t the case that referees are keeping good research out of Nature.”

For Mark Walport, director of the Wellcome Trust, it is good editors that should make the system tick. “It is the job of scientific editors, who usually have two or three reviews in front of them, to spot when people are misbehaving,” he said. “A good editor undoubtedly can.”

Despite its perceived weaknesses, improvements to peer review are notoriously hard to find. A double blind approach by which neither reviewer nor author knows the other’s identity, for instance, is difficult because authors can usually be guessed from their citations and subject matter.

Professor Smith accepts there is no easy answer, and Dr Walport likes to quote Winston Churchill’s famous dictum about democracy: that it is the worst form of government, “except for all those other forms that have been tried from time to time.” Yet two new models that are starting to gain ground do have some potential to address the most common complaint: that the system is unnecessarily opaque and unaccountable.

Open review

The BMJ has adopted one radical approach—opening up peer review so that referees are no longer anonymous. In most other journals reviews are unsigned to encourage candour and so that junior researchers can take part without fear that a negative opinion might be held against them by a senior figure. Drs Campbell and Walport both reject open review for just this reason. But Fiona Godlee, the BMJ’s editor, says the journal has not had this problem.

“We did a randomised controlled trial of signed versus unsigned reviews and found that it was acceptable to authors and reviewers, and that it made no significant difference to the reviews,” she said. “The quality was unchanged, though there was a slightly greater tendency to recommend acceptance. 8 Since implementing open review, we have had one or two reviewers saying they won’t review for us, but the vast majority of reviewers are fine with it. And authors like it.”

She accepts that open review may not work for every journal, particularly those covering very specialist areas in which researchers tend to know each other well. She also highlights the importance of the BMJ’s editors: “They make the final decision on papers, so we are not reliant on the recommendation of the peer reviewer about whether to publish.”

There is also an intermediate solution, which has been pioneered by the European Molecular Biology Organisation Journal. Although it does not name reviewers, it publishes their reports. It is an approach that appeals to Smith, Lovell-Badge, and Walport. “If you publish a package of supplementary material, including anonymous reviews, it provides a paper trail and another level of accountability,” Dr Walport said. “It would place pressure on reviewers to be scrupulously fair, because anything openly hostile or ridiculous would be out there, and on journal editors to think very carefully about their comments.”

The BMJ is about to take this one step further—publishing its signed reviews alongside published papers after a second randomised trial found this feasible and acceptable to authors and reviewers. Meanwhile Nature is considering the anonymous publication of referees’ reports. “We’ve been thinking about that for a few years,” Dr Campbell said. “There are questions we need to be careful about, such as does this change the relationship between the editor and the referee, but it is absolutely something we are looking at.”

It may be true that peer review is the worst system of scrutinising science, except for all the others that have been tried from time to time. But like democracy, that does not mean it can’t be tweaked to make it fairer.

Cite this as: BMJ 2010;340:c1409

Competing interests: The author has completed the unified competing interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declares (1) no financial support for the submitted work from anyone other than their employer; (2) a small fee from Wellcome Trust for speaking; (3) no spouses, partners, or children with relationships with commercial entities that might have an interest in the submitted work; and (4) no non-financial interests that may be relevant to the submitted work.

  • 1. Ewen SW, Pusztai A. Effect of diets containing genetically modified potatoes expressing Galanthus nivalis lectin on rat small intestine. Lancet 1999;354:1353-4.
  • 2. Hwang WS, Roh SI, Lee BC, Kang SK, Kwon DK, Kim S, et al. Patient-specific embryonic stem cells derived from human SCNT blastocysts [retracted in Science 2006;311:335]. Science 2005;308:1777-83.
  • 3. Wakefield AJ, Murch SH, Anthony A, Linnell J, Casson DM, Malik M, et al. Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children [retracted in Lancet 2010 Feb 2. doi: 10.1016/S0140-6736(10)60175-4]. Lancet 1998;351:637-41.
  • 4. Ghosh P. Journal stem cell work “blocked.” BBC News 2010 Feb 2. http://news.bbc.co.uk/1/hi/8490291.stm
  • 5. Schiermeier Q. IPCC flooded by criticism. Nature 2010;463:596-7.
  • 6. Byrne J, Pedersen DA, Clepper LL, Nelson M, Sanger WG, Gokhale S, et al. Producing primate embryonic stem cells by somatic cell nuclear transfer. Nature 2007;450:497-502.
  • 7. Cram D, Song B, Trounson A. Genotyping of rhesus SCNT pluripotent stem cell lines. Nature 2007;450:E12-4.
  • 8. Van Rooyen S, Godlee F, Evans S, Black N, Smith R. Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ 1999;318:23-7.


Scrutinizing science: Peer review

In science, peer review helps provide assurance that published research meets minimum standards for scientific quality. Peer review typically works something like this:

  • A group of scientists completes a study and writes it up in the form of an article. They submit it to a journal for publication.
  • The journal’s editors send the article to several other scientists who work in the same field (i.e., the “peers” of peer review).
  • Those reviewers provide feedback on the article and tell the editor whether or not they think the study is of high enough quality to be published.
  • The authors may then revise their article and resubmit it for consideration.
  • Only articles that meet good scientific standards (e.g., acknowledge and build upon other work in the field, rely on logical reasoning and well-designed studies, back up claims with evidence, etc.) are accepted for publication.

Peer review and publication are time-consuming, frequently involving more than a year between submission and publication. The process is also highly competitive. For example, the highly regarded journal Science accepts less than 8% of the articles it receives, and The New England Journal of Medicine publishes just 6% of its submissions.

Peer-reviewed articles provide a trusted form of scientific communication. Even if you are unfamiliar with the topic or the scientists who authored a particular study, you can trust peer-reviewed work to meet certain standards of scientific quality. Since scientific knowledge is cumulative and builds on itself, this trust is particularly important. No scientist would want to base their own work on someone else’s unreliable study! Peer-reviewed work isn’t necessarily correct or conclusive, but it does meet the standards of science. And that means that once a piece of scientific research passes through peer review and is published, science must deal with it somehow — perhaps by incorporating it into the established body of scientific knowledge, building on it further, figuring out why it is wrong, or trying to replicate its results.

PEER REVIEW: NOT JUST SCIENCE

Many fields outside of science use peer review to ensure quality. Philosophy journals, for example, make publication decisions based on the reviews of other philosophers, and the same is true of scholarly journals on topics as diverse as law, art, and ethics. Even those outside the research community often use some form of peer review. Figure-skating championships may be judged by former skaters and coaches. Wine-makers may help evaluate wine in competitions. Artists may help judge art contests. So while peer review is a hallmark of science, it is not unique to science.


Preserving the Quality of Scientific Research: Peer Review of Research Articles

First Online: 20 January 2017

Pali U. K. De Silva and Candace K. Vance

Part of the book series: Fascinating Life Sciences (FLS)

Peer review of scholarly articles is a mechanism used to assess and preserve the trustworthiness of reporting of scientific findings. Since peer reviewing is a qualitative evaluation system that involves the judgment of experts in a field about the quality of research performed by their colleagues (and competitors), it inherently encompasses a strongly subjective element. Although this time-tested system, which has been evolving since the mid-eighteenth century, is being questioned and criticized for its deficiencies, it is still considered an integral part of the scholarly communication system, as no other procedure has been proposed to replace it. Therefore, to improve and strengthen the existing peer review process, it is important to understand its shortcomings and to continue the constructive deliberations of all participants within the scientific scholarly communication system. This chapter discusses the strengths, issues, and deficiencies of the peer review system, conventional closed models (single-blind and double-blind), and the new open peer review model and its variations that are being experimented with by some journals.


Notes

Evaluative criteria may also vary depending on the scope of the specific journal.

Krebs and Johnson (1937).

McClintock (1950).

Bombardier et al. (2000).

“Nature journals offer double-blind review” Nature announcement— http://www.nature.com/news/nature-journals-offer-double-blind-review-1.16931

Contains all versions of the manuscript, named reviewer reports, author responses, and (where relevant) editors’ comments (Moylan et al. 2014).

https://www.elsevier.com/about/press-releases/research-and-journals/peer-review-survey-2009-preliminary-findings .

Review guidelines, Frontiers in Neuroscience http://journal.frontiersin.org/journal/synaptic-neuroscience#review-guidelines .

Editorial policies - BioMed Central  http://www.biomedcentral.com/getpublished/editorial-policies#peer+review .

Hydrology and Earth System Sciences Interactive Public Peer Review  http://www.hydrology-and-earth-system-sciences.net/peer_review/interactive_review_process.html .

Copernicus Publications  http://publications.copernicus.org/services/public_peer_review.html .

Copernicus Publications - Interactive Public Peer Review  http://home.frontiersin.org/about/impact-and-tiering .

Biology Direct http://www.biologydirect.com/ .

F1000 Research  http://f1000research.com .

GigaScience  http://www.gigasciencejournal.com

Journal of Negative Results in Biomedicine  http://www.jnrbm.com/ .

BMJOpen  http://bmjopen.bmj.com/ .

PeerJ  http://peerj.com/ .

ScienceOpen  https://www.scienceopen.com .

ArXiv  http://arxiv.org .

Retraction of articles from Springer journals. London: Springer, August 18, 2015 ( http://www.springer.com/gp/about-springer/media/statements/retraction-of-articles-from-springer-journals/735218 ).

COPE statement on inappropriate manipulation of peer review processes ( http://publicationethics.org/news/cope-statement-inappropriate-manipulation-peer-review-processes ).

Hindawi concludes an in-depth investigation into peer review fraud, July 2015 ( http://www.hindawi.com/statement/ ).

Wakefield, A. J., Murch, S. H., Anthony, A., Linnell, J., Casson, D. M., Malik, M., ... & Valentine, A. (1998). Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet, 351(9103), 637–641. (RETRACTED: see The Lancet, 375(9713), p. 445)

A practice used by researchers to increase their number of publications by producing multiple papers from very similar pieces of a single dataset. The drug industry also uses this tactic to increase the number of publications with positive findings on its products.

Neuroscience Peer Reviewer Consortium  http://nprc.incf.org/ .

“About 80% of submitted manuscripts are rejected during this initial screening stage, usually within one week to 10 days.” http://www.sciencemag.org/site/feature/contribinfo/faq/ (accessed on October 18, 2016); “Nature has space to publish only 8% or so of the 200 papers submitted each week” http://www.nature.com/nature/authors/get_published/ (accessed on October 18, 2016).

Code of Conduct and Best Practice Guidelines for Journal Editors  http://publicationethics.org/files/Code%20of%20Conduct_2.pdf .

Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly work in Medical Journals  http://www.icmje.org/icmje-recommendations.pdf .

References

Alberts, B., Hanson, B., & Kelner, K. L. (2008). Reviewing peer review. Science, 321(5885), 15.


Ali, P. A., & Watson, R. (2016). Peer review and the publication process. Nursing Open. doi: 10.1002/nop2.51

Baggs, J. G., Broome, M. E., Dougherty, M. C., Freda, M. C., & Kearney, M. H. (2008). Blinding in peer review: the preferences of reviewers for nursing journals. Journal of Advanced Nursing, 64 (2), 131–138.

Bjork, B.-C., Roos, A., & Lauri, M. (2009). Scientific journal publishing: yearly volume and open access availability. Information Research: An International Electronic Journal, 14 (1).


Bohannon, J. (2013). Who’s afraid of peer review? Science, 342(6154).

Boldt, A. (2011). Extending ArXiv. org to achieve open peer review and publishing. Journal of Scholarly Publishing, 42 (2), 238–242.


Bombardier, C., Laine, L., Reicin, A., Shapiro, D., Burgos-Vargas, R., Davis, B., … & Kvien, T. K. (2000). Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. New England Journal of Medicine, 343 (21), 1520–1528

Bornmann, L. (2011). Scientific peer review. Annual Review of Information Science and Technology, 45 (1), 197–245.

Bornmann, L., & Daniel, H.-D. (2009). Reviewer and editor biases in journal peer review: An investigation of manuscript refereeing at Angewandte Chemie International Edition. Research Evaluation, 18 (4), 262–272.

Bornmann, L., & Daniel, H. D. (2010). Reliability of reviewers’ ratings when using public peer review: A case study. Learned Publishing, 23 (2), 124–131.

Bornmann, L., Mutz, R., & Daniel, H.-D. (2007). Gender differences in grant peer review: A meta-analysis. Journal of Informetrics, 1 (3), 226–238.

Borsuk, R. M., Aarssen, L. W., Budden, A. E., Koricheva, J., Leimu, R., Tregenza, T., et al. (2009). To name or not to name: The effect of changing author gender on peer review. BioScience, 59 (11), 985–989.

Bosch, X., Pericas, J. M., Hernández, C., & Doti, P. (2013). Financial, nonfinancial and editors’ conflicts of interest in high-impact biomedical journals. European Journal of Clinical Investigation, 43 (7), 660–667.

Brown, R. J. C. (2007). Double anonymity in peer review within the chemistry periodicals community. Learned Publishing, 20 (2), 131–137.

Budden, A. E., Tregenza, T., Aarssen, L. W., Koricheva, J., Leimu, R., & Lortie, C. J. (2008). Double-blind review favours increased representation of female authors. Trends in Ecology & Evolution, 23 (1), 4–6.

Burnham, J. C. (1990). The evolution of editorial peer review. JAMA, 263 (10), 1323–1329.


Callaham, M. L., & Tercier, J. (2007). The relationship of previous training and experience of journal peer reviewers to subsequent review quality. PLoS Med, 4 (1), e40.


Campbell, P. (2006). Peer review trial and debate. Nature. http://www.nature.com/nature/peerreview/debate/

Campbell, P. (2008). Nature peer review trial and debate. Nature: International Weekly Journal of Science, 11

Campos-Arceiz, A., Primack, R. B., & Koh, L. P. (2015). Reviewer recommendations and editors’ decisions for a conservation journal: Is it just a crapshoot? And do Chinese authors get a fair shot? Biological Conservation, 186, 22–27.

Cantor, M., & Gero, S. (2015). The missing metric: Quantifying contributions of reviewers. Royal Society open science, 2 (2), 140540.

CDC. (2016). Measles: Cases and Outbreaks. Retrieved from http://www.cdc.gov/measles/cases-outbreaks.html

Ceci, S. J., & Williams, W. M. (2011). Understanding current causes of women’s underrepresentation in science. Proceedings of the National Academy of Sciences, 108 (8), 3157–3162.


Chan, A. W., Hróbjartsson, A., Haahr, M. T., Gøtzsche, P. C., & Altman, D. G. (2004). Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA, 291 (20), 2457–2465.

Charlton, B. G. (2004). Conflicts of interest in medical science: Peer usage, peer review and ‘CoI consultancy’. Medical Hypotheses, 63(2), 181–186.

Cressey, D. (2014). Journals weigh up double-blind peer review. Nature news .

Dalton, R. (2001). Peers under pressure. Nature, 413 (6852), 102–104.

DeVries, D. R., Marschall, E. A., & Stein, R. A. (2009). Exploring the peer review process: What is it, does it work, and can it be improved? Fisheries, 34 (6), 270–279. doi: 10.1577/1548-8446-34.6.270

Emerson, G. B., Warme, W. J., Wolf, F. M., Heckman, J. D., Brand, R. A., & Leopold, S. S. (2010). Testing for the presence of positive-outcome bias in peer review: A randomized controlled trial. Archives of Internal Medicine, 170 (21), 1934–1939.

Fanelli, D. (2010). Do pressures to publish increase scientists’ bias? An empirical support from US States Data. PLoS ONE, 5 (4), e10271.

Fang, F. C., Steen, R. G., & Casadevall, A. (2012). Misconduct accounts for the majority of retracted scientific publications. Proceedings of the National Academy of Sciences, 109 (42), 17028–17033. doi: 10.1073/pnas.1212247109

Ferguson, C., Marcus, A., & Oransky, I. (2014). Publishing: The peer-review scam. Nature, 515 (7528), 480.

Ford, E. (2015). Open peer review at four STEM journals: An observational overview. F1000Research, 4.

Fountain, H. (2014). Science journal pulls 60 papers in peer-review fraud. Science, 3, 06.

Freda, M. C., Kearney, M. H., Baggs, J. G., Broome, M. E., & Dougherty, M. (2009). Peer reviewer training and editor support: Results from an international survey of nursing peer reviewers. Journal of Professional Nursing, 25 (2), 101–108.

Gillespie, G. W., Chubin, D. E., & Kurzon, G. M. (1985). Experience with NIH peer review: Researchers’ cynicism and desire for change. Science, Technology and Human Values, 10 (3), 44–54.

Greaves, S., Scott, J., Clarke, M., Miller, L., Hannay, T., Thomas, A., et al. (2006). Overview: Nature’s peer review trial. Nature, 10.

Grieneisen, M. L., & Zhang, M. (2012). A comprehensive survey of retracted articles from the scholarly literature. PLoS ONE, 7 (10), e44118.


Grivell, L. (2006). Through a glass darkly. EMBO Reports, 7 (6), 567–570.

Harrison, C. (2004). Peer review, politics and pluralism. Environmental Science & Policy, 7 (5), 357–368.

Hartog, C. S., Kohl, M., & Reinhart, K. (2011). A systematic review of third-generation hydroxyethyl starch (HES 130/0.4) in resuscitation: Safety not adequately addressed. Anesthesia and Analgesia, 112 (3), 635–645.

Hojat, M., Gonnella, J. S., & Caelleigh, A. S. (2003). Impartial judgment by the “gatekeepers” of science: Fallibility and accountability in the peer review process. Advances in Health Sciences Education, 8 (1), 75–96.

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Med, 2 (8), e124.

James, M. J., Cook-Johnson, R. J., & Cleland, L. G. (2007). Selective COX-2 inhibitors, eicosanoid synthesis and clinical outcomes: A case study of system failure. Lipids, 42 (9), 779–785.

Janssen, S. J., Bredenoord, A. L., Dhert, W., de Kleuver, M., Oner, F. C., & Verlaan, J.-J. (2015). Potential conflicts of interest of editorial board members from five leading spine journals. PLoS ONE, 10 (6), e0127362.

Jefferson, T., Alderson, P., Wager, E., & Davidoff, F. (2002). Effects of editorial peer review: A systematic review. JAMA, 287 (21), 2784–2786.

Jelicic, M., & Merckelbach, H. (2002). Peer-review: Let’s imitate the lawyers! Cortex, 38 (3), 406–407.

Jinha, A. E. (2010). Article 50 million: An estimate of the number of scholarly articles in existence. Learned Publishing, 23 (3), 258–263.

Khan, K. (2010). Is open peer review the fairest system? No. Bmj, 341, c6425.

Kilwein, J. H. (1999). Biases in medical literature. Journal of Clinical Pharmacy and Therapeutics, 24 (6), 393–396.

Koonin, E. V., Landweber, L. F., & Lipman, D. J. (2013). Biology direct: Celebrating 7 years of open, published peer review. Biology direct, 8 (1), 1.

Kozlowski, L. T. (2016). Coping with the conflict-of-interest pandemic by listening to and doubting everyone, including yourself. Science and Engineering Ethics, 22 (2), 591–596.

Krebs, H. A., & Johnson, W. A. (1937). The role of citric acid in intermediate metabolism in animal tissues. Enzymologia, 4, 148–156.


Kriegeskorte, N., Walther, A., & Deca, D. (2012). An emerging consensus for open evaluation: 18 visions for the future of scientific publishing. Frontiers in Computational Neuroscience, 6, 94.

Langfeldt, L. (2006). The policy challenges of peer review: Managing bias, conflict of interests and interdisciplinary assessments. Research Evaluation, 15 (1), 31–41.

Lawrence, P. A. (2003). The politics of publication. Nature, 422 (6929), 259–261.

Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64 (1), 2–17.

Link, A. M. (1998). US and non-US submissions: An analysis of reviewer bias. JAMA, 280 (3), 246–247.

Lippert, S., Callaham, M. L., & Lo, B. (2011). Perceptions of conflict of interest disclosures among peer reviewers. PLoS ONE, 6 (11), e26900.

Lo, B., & Field, M. J. (Eds.). (2009). Conflict of interest in medical research, education, and practice . Washington, D.C.: National Academies Press.

Loonen, M. P. J., Hage, J. J., & Kon, M. (2005). Who benefits from peer review? An analysis of the outcome of 100 requests for review by Plastic and Reconstructive Surgery. Plastic and Reconstructive Surgery, 116 (5), 1461–1472.

Luukkonen, T. (2012). Conservatism and risk-taking in peer review: Emerging ERC practices. Research Evaluation, rvs001.

McClintock, B. (1950). The origin and behavior of mutable loci in maize. Proceedings of the National Academy of Sciences, 36 (6), 344–355.

McCullough, J. (1989). First comprehensive survey of NSF applicants focuses on their concerns about proposal review. Science, Technology and Human Values, 14 (1), 78–88.

McIntyre, W. F., & Evans, G. (2014). The Vioxx® legacy: Enduring lessons from the not so distant past. Cardiology Journal, 21 (2), 203–205.

Moylan, E. C., Harold, S., O’Neill, C., & Kowalczuk, M. K. (2014). Open, single-blind, double-blind: Which peer review process do you prefer? BMC Pharmacology and Toxicology, 15 (1), 1.

Mulligan, A., Hall, L., & Raphael, E. (2013). Peer review in a changing world: An international study measuring the attitudes of researchers. Journal of the American Society for Information Science and Technology, 64 (1), 132–161.

Nath, S. B., Marcus, S. C., & Druss, B. G. (2006). Retractions in the research literature: misconduct or mistakes? Medical Journal of Australia, 185 (3), 152.


Nature Editorial (2008). Working double-blind. Nature, 451, 605–606.

Nature Neuroscience Editorial. (2006). Women in neuroscience: A numbers game. Nature Neuroscience, 9, 853.

Okike, K., Hug, K. T., Kocher, M. S., & Leopold, S. S. (2016). Single-blind vs double-blind peer review in the setting of author prestige. JAMA, 316 (12), 1315–1316.

Olson, C. M., Rennie, D., Cook, D., Dickersin, K., Flanagin, A., Hogan, J. W., … & Pace, B. (2002). Publication bias in editorial decision making. JAMA, 287 (21), 2825–2828.

Palmer, A. R. (2000). Quasireplication and the contract of error: Lessons from sex ratios, heritabilities and fluctuating asymmetry. Annual Review of Ecology and Systematics, 441–480.

Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5 (02), 187–195.

PLOS MED Editors. (2008). Making sense of non-financial competing interests. PLOS Med, 5 (9), e199.

Pulverer, B. (2010). Transparency showcases strength of peer review. Nature, 468 (7320), 29–31.

Pöschl, U., & Koop, T. (2008). Interactive open access publishing and collaborative peer review for improved scientific communication and quality assurance. Information Services & Use, 28 (2), 105–107.

Relman, A. S. (1985). Dealing with conflicts of interest. New England Journal of Medicine, 313 (12), 749–751.

Rennie, J. (2002). Misleading math about the Earth. Scientific American, 286 (1), 61.

Resch, K. I., Ernst, E., & Garrow, J. (2000). A randomized controlled study of reviewer bias against an unconventional therapy. Journal of the Royal Society of Medicine, 93 (4), 164–167.


Resnik, D. B., & Elmore, S. A. (2016). Ensuring the quality, fairness, and integrity of journal peer review: A possible role of editors. Science and Engineering Ethics, 22 (1), 169–188.

Ross, J. S., Gross, C. P., Desai, M. M., Hong, Y., Grant, A. O., Daniels, S. R., et al. (2006). Effect of blinded peer review on abstract acceptance. JAMA, 295 (14), 1675–1680.

Sandström, U. (2009, July 14–17). Cognitive bias in peer review: A new approach. Paper presented at the 12th International Conference of the International Society for Scientometrics and Informetrics, Brazil.

Schneider, L. (2016, September 14). Beall-listed Frontiers empire strikes back. Retrieved from https://forbetterscience.wordpress.com/2016/09/14/beall-listed-frontiers-empire-strikes-back/

Schroter, S., Black, N., Evans, S., Carpenter, J., Godlee, F., & Smith, R. (2004). Effects of training on quality of peer review: Randomised controlled trial. BMJ, 328 (7441), 673.

Service, R. F. (2002). Scientific misconduct. Bell Labs fires star physicist found guilty of forging data. Science, 298 (5591), 30.

Shatz, D. (2004). Peer review: A critical inquiry. Lanham, MD: Rowman & Littlefield.

Shimp, C. P. (2004). Scientific peer review: A case study from local and global analyses. Journal of the Experimental Analysis of Behavior, 82 (1), 103–116.


Smith, R. (1999). Opening up BMJ peer review: A beginning that should lead to complete transparency. BMJ, 318, 4–5.

Smith, R. (2006). Peer review: A flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99 (4), 178–182. doi: 10.1258/jrsm.99.4.178

Souder, L. (2011). The ethics of scholarly peer review: A review of the literature. Learned Publishing, 24 (1), 55–72.

Spielmans, G. I., Biehn, T. L., & Sawrey, D. L. (2009). A case study of salami slicing: pooled analyses of duloxetine for depression. Psychotherapy and Psychosomatics, 79 (2), 97–106.

Spier, R. (2002). The history of the peer-review process. Trends in Biotechnology, 20 (8), 357–358.

Squazzoni, F. (2010). Peering into peer review. Sociologica, 4 (3).

Squazzoni, F., & Gandelli, C. (2012). Saint Matthew strikes again: An agent-based model of peer review and the scientific community structure. Journal of Informetrics, 6 (2), 265–275.

Steen, R. G. (2010). Retractions in the scientific literature: Is the incidence of research fraud increasing? Journal of Medical Ethics, jme-2010.

Stroebe, W., Postmes, T., & Spears, R. (2012). Scientific misconduct and the myth of self-correction in science. Perspectives on Psychological Science, 7 (6), 670–688.

Tite, L., & Schroter, S. (2007). Why do peer reviewers decline to review? A survey. Journal of Epidemiology and Community Health, 61 (1), 9–12.

Travis, G. D. L., & Collins, H. M. (1991). New light on old boys: cognitive and institutional particularism in the peer review system. Science, Technology and Human Values, 16 (3), 322–341.

Tregenza, T. (2002). Gender bias in the refereeing process? Trends in Ecology & Evolution, 17 (8), 349–350.

Valkonen, L., & Brooks, J. (2011). Gender balance in Cortex acceptance rates. Cortex, 47 (7), 763–770.

van Rooyen, S., Delamothe, T., & Evans, S. J. W. (2010). Effect on peer review of telling reviewers that their signed reviews might be posted on the web: Randomised controlled trial. BMJ, 341, c5729.

van Rooyen, S., Godlee, F., Evans, S., Black, N., & Smith, R. (1999). Effect of open peer review on quality of reviews and on reviewers’ recommendations: A randomised trial. British Medical Journal, 318 (7175), 23–27.

Walker, R., & Rocha da Silva, P. (2014). Emerging trends in peer review—A survey. Frontiers in Neuroscience, 9, 169.

Walsh, E., Rooney, M., Appleby, L., & Wilkinson, G. (2000). Open peer review: A randomised controlled trial. The British Journal of Psychiatry, 176 (1), 47–51.

Walters, W. P., & Bajorath, J. (2015). On the evolving open peer review culture for chemical information science. F1000Research, 4.

Ware, M. (2008). Peer review in scholarly journals: Perspective of the scholarly community-Results from an international study. Information Services and Use, 28 (2), 109–112.

Ware, M. (2011). Peer review: Recent experience and future directions. New Review of Information Networking, 16 (1), 23–53.

Webb, T. J., O’Hara, B., & Freckleton, R. P. (2008). Does double-blind review benefit female authors? Heredity, 77, 282–291.

Wellington, J., & Nixon, J. (2005). Shaping the field: The role of academic journal editors in the construction of education as a field of study. British Journal of Sociology of Education, 26 (5), 643–655.

Whittaker, R. J. (2008). Journal review and gender equality: A critical comment on Budden et al. Trends in Ecology & Evolution, 23 (9), 478–479.

Wiedermann, C. J. (2016). Ethical publishing in intensive care medicine: A narrative review. World Journal of Critical Care Medicine, 5 (3), 171.

The reference list above is from: De Silva, P. U. K., & Vance, C. K. (2017). Preserving the quality of scientific research: Peer review of research articles. In Scientific scholarly communication (Fascinating Life Sciences). Cham: Springer. https://doi.org/10.1007/978-3-319-50627-2_6

Let's stop pretending peer review works


In the early 1980s, there was growing concern about the quality of peer review at scientific journals. So two researchers at Cornell and the University of North Dakota decided to run a little experiment to test the process.

The idea behind peer review is simple: It's supposed to weed out bad science. Peer reviewers read over promising studies that have been submitted to a journal to help gauge whether they should be published or need changes. Ideally, reviewers are experts in fields related to the studies in question. They add helpful comments, point out problems and holes, or simply reject flawed papers that shouldn't see the light of day.

The two researchers, Douglas Peters and Stephen Ceci, wanted to test how reliable and unbiased this process actually is. To do this, they selected 12 papers that had been published about two to three years earlier in extremely selective American psychology journals.

The researchers then altered the names and university affiliations on the journal manuscripts and resubmitted the papers to the same journals. In theory, these papers should have been high quality — they'd already made it into these prestigious publications. If the process worked well, the studies that were published the first time would be approved for publication again the second time around.

What Peters and Ceci found was surprising. Nearly 90 percent of the peer reviewers who looked at the resubmitted articles recommended against publication this time. In many cases, they said the articles had "serious methodological flaws." This raised a number of disquieting possibilities. Were these, in fact, seriously flawed papers that got accepted and published? Can bad papers squeak through depending on who reviews them? Did some papers get in because of the prestige of their authors or affiliations? At the very least, the experiment suggested the peer review process was unnervingly inconsistent.

The finding, though published more than 30 years ago, is still relevant. Since then, researchers have uncovered more and more problems with the peer review process, raising the question of why scientists bother with it in the first place.

All too often, peer review misses big problems with studies

Researchers who have examined peer review often find evidence that it works barely better than chance at keeping poor-quality studies out of journals, or that it doesn't work at all. That conclusion has emerged both from individual experiments and from systematic reviews that bring together all the relevant studies.

The reasons it fails are the reasons any human process falls down. Usually, only a few reviewers look at an article. Those reviewers aren't paid for their time; they participate out of a belief in the scientific process and to contribute to their respective fields. Maybe they're rushed when reading a manuscript. Maybe they're poorly matched to the study and unqualified to pick it apart. Maybe they have a bias against the writer or institution behind the paper. And since the process is usually blinded — at least on the side of the reviewer (with the aim of eliciting frank feedback) — anonymity can up the snark factor or encourage rushed and unhelpful comments, as the popular #sixwordpeerreview hashtag shows.

The Lancet editor Richard Horton has called the process "unjust, unaccountable ... often insulting, usually ignorant, occasionally foolish, and frequently wrong." Not to mention that identifying peer reviewers and getting their comments slows down the progress of science — papers can be held up for months or years — and costs society a lot of money. Scientists and professors, after all, need to take time away from their research to edit, unpaid, the work of others.

Richard Smith, the former editor of the BMJ, summed it up: "We have little or no evidence that peer review 'works,' but we have lots of evidence of its downside." Another former editor of the Lancet, Robbie Fox, used to joke that his journal "had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom." Not exactly reassuring comments from the editors of the world's leading medical journals.

Should we abolish peer review?

So should we just abolish peer review? We put the question to Jeff Drazen, the current editor of the top-ranked medical publication the New England Journal of Medicine. He said he knows the process is imperfect — and that's why he doesn't rely on it all that much. At his journal, peer review is only a first step to vetting papers that may be interesting and relevant for readers. After a paper passes peer review, it is then given to a team of staff editors who each have a lot of time and space to go through the submission with a fine-toothed comb. So highly qualified editors, not necessarily peer review, act as the journal's gatekeepers.

"[Peer review] is like everything else," Drazen said. "There are lots of things out there — some are high quality, some aren't."

Drazen is probably onto something: journal editors with enough resources can add real value to scientific publications and give them their "golden glow." But how many journals actually provide that added value? Probably about 10 in the world, out of the tens of thousands that exist. The New England Journal of Medicine is much more the outlier than the rule in that regard.

Even at the best journals, ridiculously flawed and silly articles get through. A few readers can't possibly catch all the potential problems with a study, or sometimes they don't have access to all the data that they need to make informed edits. It can take years, multiple sets of fresh eyes, and people with adversarial views for the truth to come to light. Look no further than the study that linked autism to the measles-mumps-rubella vaccine, published in the Lancet . That paper was retracted after it was found to be not only fraudulent but also deeply flawed.

For some, that's a reason to get rid of peer review. Brandon Stell, the president of the PubPeer Foundation, favors "post-publication" peer review on websites like his own (Pubpeer.com). There, users from around the world can critique and comment on articles that have already been published. These crowdsourced comments have led to corrections or even retractions of studies.

"There’s no reason why we couldn't publish everything immediately on the internet and have it peer-reviewed after it's been published," Stell said arguing for abolishing pre-publication peer review. There are already journals that do just this, he added, such as the Winnower .

But replacing one flawed system (traditional pre-publication peer review) with what may be another (post-publication peer review) doesn't fully solve the problem. Places like PubPeer are a fantastic development, but it's not yet clear that they catch errors and bad science more consistently than traditional pre-publication review does. Even with its flaws, peer review seems to work at least a little better than chance. That's not great, but it may be better than nothing: in a world without a peer review culture, it's possible even more bad science would sneak through.

A complex solution for a complex problem

Stell pointed to another great innovation: sites like Biorxiv, which allow researchers to "pre-print" their manuscripts online as soon as they're ready and get open comment before they're ever peer-reviewed and published in academic journals. This adds another step in the process to publication, another chance to filter problems before they make it to peer review and onto the scientific record.

Ivan Oransky, a medical journalist who tracks retractions in journals at his site Retraction Watch, had a more holistic view. He didn't think post-publication review should supplant the traditional process, but that it should be an add-on. "Post-publication peer review is nothing new, but in the past it's happened in private, with no feedback for the authors or larger scientific community," Oransky said. Sites like PubPeer open up the process and make it more transparent, and should therefore be strengthened. "Let's stop pretending that once a paper is published, it's scientific gospel," he added.

We think that's closer to the solution. Science would probably be better off if researchers checked the quality and accuracy of their work in a multi-step process with redundancies built in to weed out errors and bad science. The internet makes that much easier. Traditional peer review would be just one check; pre-print commenting, post-publication peer review, and, wherever possible, highly skilled journal editors would be others.

Before this ideal system is put in place, there's one thing we can do immediately to make peer review better. We need to adjust our expectations about what peer review does. Right now, many people think peer review means, "This paper is great and trustworthy!" In reality, it should mean something like, "A few scientists have looked at this paper and didn't find anything wrong with it, but that doesn't mean you should take it as gospel. Only time will tell." Insiders like journal editors have long known that the system is flawed. It's time the public embraced that, too, and supported ways to make it better.


Research Methods: How to Perform an Effective Peer Review
Elise Peterson Lu, Brett G. Fischer, Melissa A. Plesac, Andrew P. J. Olson; Research Methods: How to Perform an Effective Peer Review. Hosp Pediatr November 2022; 12 (11): e409–e413. https://doi.org/10.1542/hpeds.2022-006764


Scientific peer review has existed for centuries and is a cornerstone of the scientific publication process. Because the number of scientific publications has rapidly increased over the past decades, so has the number of peer reviews and peer reviewers. In this paper, drawing on the relevant medical literature and our collective experience as peer reviewers, we provide a user guide to the peer review process, including discussion of the purpose and limitations of peer review, the qualities of a good peer reviewer, and a step-by-step process of how to conduct an effective peer review.

Peer review has been a part of scientific publications since 1665, when the Philosophical Transactions of the Royal Society became the first publication to formalize a system of expert review. 1,2 It became an institutionalized part of science in the latter half of the 20th century and is now the standard in scientific research publications. 3 In 2012, there were more than 28,000 scholarly peer-reviewed journals, and more than 3 million peer-reviewed articles are now published annually. 3,4 However, even with this volume, most peer reviewers learn to review "on the (unpaid) job" and no standard training system exists to ensure quality and consistency. 5 Expectations and format vary between journals and most, but not all, provide basic instructions for reviewers. In this paper, we provide a general introduction to the peer review process and identify common strategies for success as well as pitfalls to avoid.

What is the Purpose of Peer Review?

Modern peer review serves 2 primary purposes: (1) as "a screen before the diffusion of new knowledge" 6 and (2) as a method to improve the quality of published work. 1,5

As screeners, peer reviewers evaluate the quality, validity, relevance, and significance of research before publication to maintain the credibility of the publications they serve and their fields of study. 1,2,7 Although peer reviewers are not the final decision makers on publication (that role belongs to the editor), their recommendations affect editorial decisions and thoughtful comments influence an article's fate. 6,8

As advisors and evaluators of manuscripts, reviewers have an opportunity and responsibility to give authors an outside expert's perspective on their work. 9 They provide feedback that can improve methodology, enhance rigor, improve clarity, and redefine the scope of articles. 5,8,10 This often happens even if a paper is not ultimately accepted at the reviewer's journal, because peer reviewers' comments are incorporated into revised drafts that are submitted to another journal. In a 2019 survey of authors, reviewers, and editors, 83% said that peer review helps science communication and 90% of authors reported that peer review improved their last paper. 11

What Makes a Good Peer Reviewer?

Expertise: Peer reviewers should be up to date with current literature, practice guidelines, and methodology within their subject area. However, academic rank and seniority do not define expertise and are not actually correlated with performance in peer review. 13

Professionalism: Reviewers should be reliable and objective, aware of their own biases, and respectful of the confidentiality of the peer review process.

Critical skill: Reviewers should be organized, thorough, and detailed in their critique with the goal of improving the manuscript under their review, regardless of disposition. They should provide constructive comments that are specific and addressable, referencing literature when possible. A peer reviewer should leave a paper better than he or she found it.

How Do You Decide Whether to Review a Paper?

Is the manuscript within your area of expertise? Generally, if you are asked to review a paper, it is because an editor felt that you were a qualified expert. In a 2019 survey, 74% of requested reviews were within the reviewer's area of expertise. 11 This, of course, does not mean that you must be widely published in the area, only that you have enough expertise and comfort with the topic to critique and add to the paper.

Do you have any biases that may affect your review? Are there elements of the methodology, content area, or theory with which you disagree? Some disagreements between authors and reviewers are common, expected, and even helpful. However, if a reviewer fundamentally disagrees with an author’s premise such that he or she cannot be constructive, the review invitation should be declined.

Do you have the time? The average review for a clinical journal takes 5 to 6 hours, though many take longer depending on the complexity of the research and the experience of the reviewer. 1,14 Journals vary on the requested timeline for return of reviews, though it is usually 1 to 4 weeks. Peer review is often the longest part of the publication process, and delays contribute to slower dissemination of important work and decreased author satisfaction. 15 Be mindful of your schedule and only accept a review invitation if you can reasonably return the review in the requested time.
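For a sense of the aggregate burden behind that per-review figure, the numbers quoted in this paper can be combined in a back-of-envelope sketch; the two completed reviews per article is our assumption, not a figure from the text:

```python
# Rough scale of the volunteer reviewing workload, combining figures
# quoted in this paper (articles per year, hours per review) with an
# assumed number of completed reviews per article.
articles_per_year = 3_000_000   # peer-reviewed articles published annually
reviews_per_article = 2         # assumption for illustration
hours_per_review = 5.5          # midpoint of the 5-6 hour average

total_hours = articles_per_year * reviews_per_article * hours_per_review
print(f"~{total_hours / 1e6:.0f} million reviewer-hours per year")
# -> ~33 million reviewer-hours per year
```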

Once you have determined that you are the right person and decided to take on the review, reply to the inviting e-mail or click the associated link to accept (or decline) the invitation. Journal editors invite a limited number of reviewers at a time and wait for responses before inviting others. A common complaint among journal editors surveyed was that reviewers would often take days to weeks to respond to requests, or not respond at all, making it difficult to find appropriate reviewers and prolonging an already long process. 5  

How Do You Complete a Peer Review?

Now that you have decided to take on the review, it is best to have a systematic way of both evaluating the manuscript and writing the review. Various suggestions exist in the literature, but we will describe our standard procedure for review, incorporating specific dos and don'ts summarized in Table 1.

Table 1. Dos and Don'ts of Peer Review

First, read the manuscript once without making notes or forming opinions, to get a sense of the paper as a whole. Assess the overall tone and flow and define what the authors identify as the main point of their work. Does the work overall make sense? Do the authors tell the story effectively?

Next, read the manuscript again with an eye toward review, taking notes and formulating thoughts on strengths and weaknesses. Consider the methodology and identify the specific type of research described. Refer to the corresponding reporting guideline if applicable (CONSORT for randomized controlled trials, STROBE for observational studies, PRISMA for systematic reviews). Reporting guidelines often include a checklist, flow diagram, or structured text giving a minimum list of information needed in a manuscript based on the type of research done. 16 This allows the reviewer to formulate a more nuanced and specific assessment of the manuscript.
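As a concrete illustration of that matching step, here is a minimal lookup sketch in Python; the dictionary covers only the three guidelines named above, and the fallback pointing to the EQUATOR Network registry is our addition:

```python
# Illustrative mapping from study design to reporting guideline.
REPORTING_GUIDELINES = {
    "randomized controlled trial": "CONSORT",
    "observational study": "STROBE",
    "systematic review": "PRISMA",
}

def guideline_for(study_type: str) -> str:
    """Return the checklist to consult for a given study design."""
    return REPORTING_GUIDELINES.get(
        study_type.strip().lower(),
        "no match here; search the EQUATOR Network registry",
    )

print(guideline_for("Systematic review"))  # -> PRISMA
```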

Next, review the main findings, the significance of the work, and what contribution it makes to the field. Examine the presentation and flow of the manuscript but do not copy edit the text. At this point, you should start to write your review. Some journals provide a format for their reviews, but often it is up to the reviewer. In surveys of journal editors and reviewers, a review organized by manuscript section was the most favored, 5 , 6   so that is what we will describe here.

As you write your review, consider starting with a brief summary of the work that identifies the main topic, explains the basic approach, and describes the findings and conclusions. 12,17 Though not universally included in all reviews, we have found this step to be helpful in ensuring that the work is conveyed clearly enough for the reviewer to summarize it. Include brief notes on the significance of the work and what it adds to current knowledge. Critique the presentation of the work: is it clearly written? Is its length appropriate? List any major concerns with the work overall, such as major methodological flaws or inaccurate conclusions that should disqualify it from publication, though do not comment directly on disposition. Then perform your review by section (a minimal skeleton of this format follows the section list below):

Abstract: Is it consistent with the rest of the paper? Does it adequately describe the major points?

Introduction: This section should provide adequate background to explain the need for the study. Generally, classic or highly relevant studies should be cited, but citations do not have to be exhaustive. The research question and hypothesis should be clearly stated.

Methods: Evaluate both the methods themselves and the way in which they are explained. Does the methodology used meet the needs of the questions proposed? Is there sufficient detail to explain what the authors did and, if not, what needs to be added? For clinical research, examine the inclusion/exclusion criteria, control populations, and possible sources of bias. Reporting guidelines can be particularly helpful in determining the appropriateness of the methods and how they are reported.

Some journals will expect an evaluation of the statistics used, whereas others will have a separate statistician evaluate, and the reviewers are generally not expected to have an exhaustive knowledge of statistical methods. Clarify expectations if needed and, if you do not feel qualified to evaluate the statistics, make this clear in your review.

Results: Evaluate the presentation of the results. Is information given in sufficient detail to assess credibility? Are the results consistent with the methodology reported? Are the figures and tables consistent with the text, easy to interpret, and relevant to the work? Make note of data that could be better detailed in figures or tables, rather than included in the text. Make note of inappropriate interpretation in the results section (this should be in discussion) or rehashing of methods.

Discussion: Evaluate the authors’ interpretation of their results, how they address limitations, and the implications of their work. How does the work contribute to the field, and do the authors adequately describe those contributions? Make note of overinterpretation or conclusions not supported by the data.
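Pulling the list above together, a section-organized review can be drafted from a simple skeleton; this template is illustrative (the prompts paraphrase the questions above), not a format mandated by any journal:

```python
# Illustrative skeleton for a section-organized peer review.
REVIEW_TEMPLATE = """\
Summary: {summary}
Significance and contribution: {significance}
Major concerns: {major_concerns}

Abstract: {abstract}
Introduction: {introduction}
Methods: {methods}
Results: {results}
Discussion: {discussion}
"""

print(REVIEW_TEMPLATE.format(
    summary="Main topic, basic approach, findings, and conclusions.",
    significance="What the work adds to current knowledge.",
    major_concerns="Methodological flaws or unsupported conclusions, if any.",
    abstract="Consistent with the paper? Major points described?",
    introduction="Adequate background? Clear question and hypothesis?",
    methods="Appropriate to the question? Enough detail? Sources of bias?",
    results="Sufficient detail? Consistent with methods? Clear figures and tables?",
    discussion="Interpretation supported by the data? Limitations addressed?",
))
```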

The length of your review often correlates with your opinion of the quality of the work. If an article has major flaws that you think preclude publication, write a brief review that focuses on the big picture. Articles that may not be accepted but still represent quality work merit longer reviews aimed at helping the author improve the work for resubmission elsewhere.

Generally, do not include your recommendation on disposition in the body of the review itself. Acceptance or rejection is ultimately determined by the editor and including your recommendation in your comments to the authors can be confusing. A journal editor’s decision on acceptance or rejection may depend on more factors than just the quality of the work, including the subject area, journal priorities, other contemporaneous submissions, and page constraints.

Many submission sites include a separate question asking whether to accept, accept with major revision, or reject. If this specific format is not included, then add your recommendation in the “confidential notes to the editor.” Your recommendation should be consistent with the content of your review: don’t give a glowing review but recommend rejection or harshly criticize a manuscript but recommend publication. Last, regardless of your ultimate recommendation on disposition, it is imperative to use respectful and professional language and tone in your written review.

Limitations of Peer Review

Although peer review is often described as the "gatekeeper" of science and characterized as a quality control measure, peer review is not ideally designed to detect fundamental errors, plagiarism, or fraud. In multiple studies, peer reviewers detected only 20% to 33% of intentionally inserted errors in scientific manuscripts. 18,19 Plagiarism similarly is not detected in peer review, largely because of the huge volume of literature available to plagiarize. Most journals now use computer software to identify plagiarism before a manuscript goes to peer review. Finally, outright fraud often goes undetected in peer review. Reviewers start from a position of respect for the authors and trust the data they are given, barring obvious inconsistencies. Ultimately, reviewers are "gatekeepers, not detectives." 7
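Those detection rates compound in an unforgiving way. A back-of-envelope illustration, under the simplifying (and certainly imperfect) assumption that reviewers err independently: the chance that an inserted error survives n reviewers who each catch it with probability p is (1 - p)^n.

```python
# Chance that a planted error slips past every reviewer, assuming
# independent reviewers (a simplifying assumption, not a claim from
# the studies cited above).
for p in (0.20, 0.33):          # per-reviewer detection rates cited above
    for n in (1, 2, 3):         # typical numbers of reviewers
        survives = (1 - p) ** n
        print(f"detection rate {p:.0%}, {n} reviewer(s): "
              f"error survives {survives:.0%} of the time")
# Even at the optimistic 33% rate with three reviewers, about 30%
# of planted errors would still get through.
```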

Peer review is also limited by bias. Even with the best of intentions, reviewers bring biases including, but not limited to, prestige bias, affiliation bias, nationality bias, language bias, gender bias, content bias, confirmation bias, bias against interdisciplinary research, publication bias, conservatism, and bias of conflict of interest. 3,4,6 For example, peer reviewers score methodology higher and are more likely to recommend publication when prestigious author names or institutions are visible. 20 Although bias can be mitigated both by the reviewer and by the journal, it cannot be eliminated. Reviewers should be mindful of their own biases while performing reviews and work to actively mitigate them. For example, if English language editing is necessary, state this with specific examples rather than suggesting the authors seek editing by a "native English speaker."

Conclusions

Peer review is an essential, though imperfect, part of the forward movement of science. Peer review can function as both a gatekeeper to protect the published record of science and a mechanism to improve research at the level of individual manuscripts. Here, we have described our strategy, summarized in Table 2, for performing a thorough peer review, with a focus on organization, objectivity, and constructiveness. By using a systematized strategy to evaluate manuscripts and an organized format for writing reviews, you can provide a relatively objective perspective in editorial decision-making. By providing specific and constructive feedback to authors, you contribute to the quality of the published literature.


FUNDING: No external funding.

CONFLICT OF INTEREST DISCLOSURES: The authors have indicated they have no potential conflicts of interest to disclose.

Dr Lu performed the literature review and wrote the manuscript. Dr Fischer assisted in the literature review and reviewed and edited the manuscript. Dr Plesac provided background information on the process of peer review, reviewed and edited the manuscript, and completed revisions. Dr Olson provided background information and practical advice, critically reviewed and revised the manuscript, and approved the final manuscript.


Peer review: a flawed process at the heart of science and journals

Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have.

When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new 'disease', female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). 'But,' the news editor wanted to know, 'was this paper peer reviewed?' The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)

WHAT IS PEER REVIEW?

My point is that peer review is impossible to define in operational terms (an operational definition is one whereby if 50 of us looked at the same process we could all agree most of the time whether or not it was peer review). Peer review is thus like poetry, love, or justice. But it is something to do with a grant application or a paper being scrutinized by a third party—who is neither the author nor the person making a judgement on whether a grant should be given or a paper published. But who is a peer? Somebody doing exactly the same kind of research (in which case he or she is probably a direct competitor)? Somebody in the same discipline? Somebody who is an expert on methodology? And what is review? Somebody saying 'The paper looks all right to me', which is sadly what peer review sometimes seems to be. Or somebody poring over the paper, asking for raw data, repeating analyses, checking all the references, and making detailed suggestions for improvement? Such a review is vanishingly rare.

What is clear is that the forms of peer review are protean. Probably the systems of every journal and every grant giving body are different in at least some detail; and some systems are very different. There may even be some journals using the following classic system. The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little better than you'd expect by chance. 1
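To see why such a system ends up close to a coin toss, here is a toy simulation of the pastiche; the 55% per-reviewer accuracy is an invented stand-in for 'agreement little better than chance', not a figure from the study cited:

```python
import random

def reviewer_verdict(paper_is_good: bool, accuracy: float = 0.55) -> bool:
    """One reviewer's publish/reject call, matching the paper's true
    quality only 55% of the time (an assumed stand-in for near-chance
    reviewer agreement)."""
    return paper_is_good if random.random() < accuracy else not paper_is_good

def classic_system(paper_is_good: bool) -> bool:
    """The 'classic system' above: two reviewers, and a third
    reviewer decides if the first two disagree."""
    r1 = reviewer_verdict(paper_is_good)
    r2 = reviewer_verdict(paper_is_good)
    return r1 if r1 == r2 else reviewer_verdict(paper_is_good)

trials = 100_000
correct = 0
for _ in range(trials):
    paper_is_good = random.random() < 0.5   # half the submissions are 'good'
    correct += classic_system(paper_is_good) == paper_is_good

print(f"correct publish/reject decisions: {correct / trials:.1%}")
# -> roughly 57%, only a little better than the 50% of a coin toss
```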

That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked 'publish' and 'reject'. He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal comprised only of papers that had failed peer review and see if anybody noticed. I wrote back: 'How do you know I haven't already done it?'

DOES PEER REVIEW 'WORK' AND WHAT IS IT FOR?

But does peer review 'work' at all? A systematic review of all the available evidence on peer review concluded that 'the practice of peer review is based on faith in its effects, rather than on facts'. 2 But the answer to the question of whether peer review works depends on the answer to another question: 'What is peer review for?'

One answer is that it is a method to select the best grant applications for funding and the best papers to publish in a journal. It is hard to test this aim because there is no agreed definition of what constitutes a good paper or a good research proposal. Plus what is peer review to be tested against? Chance? Or a much simpler process? Stephen Lock when editor of the BMJ conducted a study in which he alone decided which of a consecutive series of papers submitted to the journal he would publish. He then let the papers go through the usual process. There was little difference between the papers he chose and those selected after the full process of peer review. 1 This small study suggests that perhaps you do not need an elaborate process. Maybe a lone editor, thoroughly familiar with what the journal wants and knowledgeable about research methods, would be enough. But it would be a bold journal that stepped aside from the sacred path of peer review.

Another answer to the question of what is peer review for is that it is to improve the quality of papers published or research proposals that are funded. The systematic review found little evidence to support this, but again such studies are hampered by the lack of an agreed definition of a good study or a good research proposal.

Peer review might also be useful for detecting errors or fraud. At the BMJ we did several studies where we inserted major errors into papers that we then sent to many reviewers. 3,4 Nobody ever spotted all of the errors. Some reviewers did not spot any, and most reviewers spotted only about a quarter. Peer review sometimes picks up fraud by chance, but generally it is not a reliable method for detecting fraud because it works on trust. A major question, which I will return to, is whether peer review and journals should cease to work on trust.

THE DEFECTS OF PEER REVIEW

So we have little evidence on the effectiveness of peer review, but we have considerable evidence on its defects. In addition to being poor at detecting gross defects and almost useless for detecting fraud it is slow, expensive, profligate of academic time, highly subjective, something of a lottery, prone to bias, and easily abused.

Slow and expensive

Many journals, even in the age of the internet, take more than a year to review and publish a paper. It is hard to get good data on the cost of peer review, particularly because reviewers are often not paid (the same, come to that, is true of many editors). Yet there is a substantial 'opportunity cost', as economists call it, in that the time spent reviewing could be spent doing something more productive—like original research. I estimate that the average cost of peer review per paper for the BMJ (remembering that the journal rejected 60% without external review) was of the order of £100, whereas the cost of a paper that made it right through the system was closer to £1000.
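The £100 and £1000 figures fit together if roughly one submission in ten ends up published; a toy reconciliation, with the 10% acceptance rate as our assumption rather than a number given here:

```python
# Reconciling ~£100 of review cost per submission with ~£1000 per
# published paper: the gap is the acceptance rate (assumed below).
submissions = 1_000
avg_review_cost = 100       # pounds per submission, from the text
acceptance_rate = 0.10      # assumption: ~1 in 10 submissions published

total_review_cost = submissions * avg_review_cost
published_papers = submissions * acceptance_rate
print(f"cost per published paper: ~£{total_review_cost / published_papers:,.0f}")
# -> ~£1,000, matching the article's estimate
```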

The cost of peer review has become important because of the open access movement, which hopes to make research freely available to everybody. With the current publishing model peer review is usually 'free' to authors, and publishers make their money by charging institutions to access the material. One open access model is that authors will pay for peer review and the cost of posting their article on a website. So those offering or proposing this system have had to come up with a figure—which is currently between $500 and $2500 per article. Those promoting the open access system calculate that at the moment the academic community pays about $5000 for access to a peer reviewed paper. (The $5000 is obviously paying for much more than peer review: it includes other editorial costs, distribution costs—expensive with paper—and a big chunk of profit for the publisher.) So there may be substantial financial gains to be had by academics if the model for publishing science changes.

There is an obvious irony in people charging for a process that is not proved to be effective, but that is how much the scientific community values its faith in peer review.

Inconsistent

People have a great many fantasies about peer review, and one of the most powerful is that it is a highly objective, reliable, and consistent process. I regularly received letters from authors who were upset that the BMJ rejected their paper and then published what they thought to be a much inferior paper on the same subject. Always they saw something underhand. They found it hard to accept that peer review is a subjective and, therefore, inconsistent process. But it is probably unreasonable to expect it to be objective and consistent. If I ask people to rank painters like Titian, Tintoretto, Bellini, Carpaccio, and Veronese, I would never expect them to come up with the same order. A scientific study submitted to a medical journal may not be as complex a work as a Tintoretto altarpiece, but it is complex. Inevitably people will take different views on its strengths, weaknesses, and importance.

So, the evidence is that if reviewers are asked to give an opinion on whether or not a paper should be published they agree only slightly more than they would be expected to agree by chance. (I am conscious that this evidence conflicts with the study of Stephen Lock showing that he alone and the whole BMJ peer review process tended to reach the same decision on which papers should be published. The explanation may be that being the editor who had designed the BMJ process and appointed the editors and reviewers it was not surprising that they were fashioned in his image and made similar decisions.)

Sometimes the inconsistency can be laughable. Here is an example of two reviewers commenting on the same paper.

Reviewer A: 'I found this paper an extremely muddled paper with a large number of deficits.'

Reviewer B: 'It is written in a clear style and would be understood by any reader.'

This—perhaps inevitable—inconsistency can make peer review something of a lottery. You submit a study to a journal. It enters a system that is effectively a black box, and then a more or less sensible answer comes out at the other end. The black box is like the roulette wheel, and the prizes and the losses can be big. For an academic, publication in a major journal like Nature or Cell is to win the jackpot.

The evidence on whether there is bias in peer review against certain sorts of authors is conflicting, but there is strong evidence of bias against women in the process of awarding grants. 5 The most famous piece of evidence on bias against authors comes from a study by DP Peters and SJ Ceci. 6 They took 12 studies that came from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions but changed the authors' names and institutions. They invented institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realize that they had already published the paper, and eight of the remaining nine were rejected—not because of lack of originality but because of poor quality. Peters and Ceci concluded that this was evidence of bias against authors from less prestigious institutions.

This is known as the Matthew effect: 'To those who have, shall be given; to those who have not shall be taken away even the little that they have'. I remember feeling the effect strongly when as a young editor I had to consider a paper submitted to the BMJ by Karl Popper. 7 I was unimpressed and thought we should reject the paper. But we could not. The power of the name was too strong. So we published, and time has shown we were right to do so. The paper argued that we should pay much more attention to error in medicine, about 20 years before many papers appeared arguing the same.

The editorial peer review process has been strongly biased against 'negative studies', i.e. studies that find an intervention does not work. It is also clear that authors often do not even bother to write up such studies. This matters because it biases the information base of medicine. It is easy to see why journals would be biased against negative studies. Journalistic values come into play. Who wants to read that a new treatment does not work? That's boring.

We became very conscious of this bias at the BMJ; we always tried to concentrate not on the results of a study we were considering but on the question it was asking. If the question is important and the answer valid, then it must not matter whether the answer is positive or negative. I fear, however, that bias is not so easily abolished and persists.

The Lancet has tried to get round the problem by agreeing to consider the protocols (plans) for studies yet to be done. 8 If it thinks the protocol sound and if the protocol is followed, the Lancet will publish the final results regardless of whether they are positive or negative. Such a system also has the advantage of stopping resources being spent on poor studies. The main disadvantage is that it increases the sum of peer reviewing—because most protocols will need to be reviewed in order to get funding to perform the study.

Abuse of peer review

There are several ways to abuse the process of peer review. You can steal ideas and present them as your own, or produce an unjustly harsh review to block or at least slow down the publication of the ideas of a competitor. These have all happened. Drummond Rennie tells the story of a paper he sent, when deputy editor of the New England Journal of Medicine , for review to Vijay Soman. 9 Having produced a critical review of the paper, Soman copied some of the paragraphs and submitted it to another journal, the American Journal of Medicine . This journal, by coincidence, sent it for review to the boss of the author of the plagiarized paper. She realized that she had been plagiarized and objected strongly. She threatened to denounce Soman but was advised against it. Eventually, however, Soman was discovered to have invented data and patients, and left the country. Rennie learnt a lesson that he never subsequently forgot but which medical authorities seem reluctant to accept: those who behave dishonestly in one way are likely to do so in other ways as well.

HOW TO IMPROVE PEER REVIEW?

The most important question with peer review is not whether to abandon it, but how to improve it. Many ideas have been advanced to do so, and an increasing number have been tested experimentally. The options include: standardizing procedures; opening up the process; blinding reviewers to the identity of authors; reviewing protocols; training reviewers; being more rigorous in selecting and deselecting reviewers; using electronic review; rewarding reviewers; providing detailed feedback to reviewers; using more checklists; or creating professional review agencies. It might be, however, that the best response would be to adopt a very quick and light form of peer review—and then let the broader world critique the paper or even perhaps rank it in the way that Amazon asks users to rank books and CDs.

I hope that it will not seem too indulgent if I describe the far from finished journey of the BMJ to try and improve peer review. We tried as we went to conduct experiments rather than simply introduce changes.

The most important step on the journey was realizing that peer review could be studied just like anything else. This was the idea of Stephen Lock, my predecessor as editor, together with Drummond Rennie and John Bailar. At the time it was a radical idea, and still seems radical to some—rather like conducting experiments with God or love.

Blinding reviewers to the identity of authors

The next important step was hearing the results of a randomized trial that showed that blinding reviewers to the identity of authors improved the quality of reviews (as measured by a validated instrument). 10 This trial, which was conducted by Bob McNutt, A T Evans, and Bob and Suzanne Fletcher, was important not only for its results but because it provided an experimental design for investigating peer review. Studies where you intervene and experiment allow more confident conclusions than studies where you observe without intervening.

This trial was repeated on a larger scale by the BMJ and by a group in the USA who conducted the study in many different journals. 11,12 Neither study found that blinding reviewers improved the quality of reviews. These studies also showed that such blinding is difficult to achieve (because many studies include internal clues on authorship), and that reviewers could identify the authors in about a quarter to a third of cases. But even when the results were analysed by looking at only those cases where blinding was successful there was no evidence of improved quality of the review.

Opening up peer review

At this point we at the BMJ thought that we would change direction dramatically and begin to open up the process. We hoped that increasing accountability would improve the quality of review. We began by conducting a randomized trial of open review (meaning that the authors but not readers knew the identity of the reviewers) against traditional review. 13 It had no effect on the quality of reviewers' opinions. They were neither better nor worse. We went ahead and introduced the system routinely on ethical grounds: such important judgements should be open and accountable unless there were compelling reasons why they could not be, and there were not.

Our next step was to conduct a trial of our current open system against a system whereby every document associated with peer review, together with the names of everybody involved, was posted on the BMJ 's website when the paper was published. Once again this intervention had no effect on the quality of the opinion. We thus planned to make posting peer review documents the next stage in opening up our peer review process, but that has not yet happened—partly because the results of the trial have not yet been published and partly because this step required various technical developments.

The final step was, in my mind, to open up the whole process and conduct it in real time on the web in front of the eyes of anybody interested. Peer review would then be transformed from a black box into an open scientific discourse. Often I found the discourse around a study was a lot more interesting than the study itself. Now that I have left I am not sure if this system will be introduced.

Training reviewers

The BMJ also experimented with another possible way to improve peer review—by training reviewers. 4 It is perhaps extraordinary that there has been no formal training for such an important job. Reviewers learnt either by trial and error (without, it has to be said, very good feedback), or by working with an experienced reviewer (who might unfortunately be experienced but not very good).

Our randomized trial of training reviewers had three arms: one group got nothing; one group had a day's face-to-face training plus a CD-rom of the training; and the third group got just the CD-rom. The overall result was that training made little difference. 4 The groups that had training did show some evidence of improvement relative to those who had no training, but we did not think that the difference was big enough to be meaningful. We cannot conclude from this that longer or better training would not be helpful. A problem with our study was that most of the reviewers had been reviewing for a long time. Perhaps 'old dogs cannot be taught new tricks', but the possibility remains that younger ones could.

TRUST IN SCIENCE AND PEER REVIEW

One difficult question is whether peer review should continue to operate on trust. Some have taken small steps beyond trust into the world of audit. The Food and Drug Administration in the USA reserves the right to go and look at the records and raw data of those who produce studies that are used in applications for new drugs to receive licences. Sometimes it does so. Some journals, including the BMJ, make it a condition of submission that the editors can ask for the raw data behind a study. We did so once or twice, only to discover that reviewing raw data is difficult, expensive, and time-consuming. I cannot see journals moving beyond trust in any major way unless the whole scientific enterprise moves in that direction.

So peer review is a flawed process, full of easily identified defects with little evidence that it works. Nevertheless, it is likely to remain central to science and journals because there is no obvious alternative, and scientists and editors have a continuing belief in peer review. How odd that science should be rooted in belief.

Richard Smith was editor of the BMJ and chief executive of the BMJ Publishing Group for 13 years. In his last year at the journal he retreated to a 15th century palazzo in Venice to write a book. The book will be published by RSM Press [ www.rsmpress.co.uk ], and this is the second in a series of extracts that will be published in the JRSM.

  • Open access
  • Published: 16 May 2024

Promoting equality, diversity and inclusion in research and funding: reflections from a digital manufacturing research network

  • Oliver J. Fisher 1 ,
  • Debra Fearnshaw (ORCID: orcid.org/0000-0002-6498-9888) 2 ,
  • Nicholas J. Watson 3 ,
  • Peter Green 4 ,
  • Fiona Charnley 5 ,
  • Duncan McFarlane 6 &
  • Sarah Sharples 2  

Research Integrity and Peer Review, volume 9, Article number: 5 (2024)


Background

Equal, diverse, and inclusive teams lead to higher productivity, creativity, and greater problem-solving ability, resulting in more impactful research. However, there is a gap between equality, diversity, and inclusion (EDI) research and the practices needed to create an inclusive research culture. Research networks are vital to the research ecosystem, creating valuable opportunities for researchers to develop partnerships with both academics and industrialists, progress their careers, and enable new areas of scientific discovery. A feature of a network is the provision of funding to support feasibility studies – an opportunity to develop new concepts or ideas, as well as to ‘fail fast’ in a supportive environment. The work of networks can address inequalities through equitable allocation of funding and proactive consideration of inclusion in all of their activities.

Methods

This study proposes a strategy to embed EDI within research network activities and funding review processes. This paper evaluates 21 planned mitigations introduced to address known inequalities within research events and in how funding is awarded. EDI data were collected from researchers engaging in a digital manufacturing network's activities and funding calls to measure the impact of the proposed method.

Results

Quantitative analysis indicates that the network’s approach was successful in creating a more ethnically diverse network, engaging with early career researchers, and supporting researchers with care responsibilities. However, more work is required to create a gender balance across the network activities and to ensure the representation of academics who declare a disability. Preliminary findings suggest the network’s anonymous funding review process has helped address inequalities in funding award rates for women and those with care responsibilities; however, more data are required to validate these observations and to understand the impact of the different interventions, individually and in combination.

Conclusions

In summary, this study offers compelling evidence regarding the efficacy of a research network's approach in advancing EDI within research and funding. The network hopes that these findings will inform broader efforts to promote EDI in research and funding and that researchers, funders, and other stakeholders will be encouraged to adopt evidence-based strategies for advancing this important goal.


Introduction

Achieving equality, diversity, and inclusion (EDI) underpins human rights and is a society-wide responsibility [ 1 ]. Furthermore, promoting and embedding EDI within research environments is essential to make the advancements required to meet today’s research challenges [ 2 ]. This is evidenced by equal, diverse and inclusive teams leading to higher productivity, creativity and greater problem-solving ability [ 3 ], which increases the scientific impact of research outputs and researchers [ 4 ]. However, there remains a gap between EDI research and the everyday implementation of inclusive practices to achieve change [ 5 ]. This paper presents and reflects on the EDI measures trialled by the UK Engineering and Physical Sciences Research Council (EPSRC) funded digital manufacturing research network, Connected Everything (grant number: EP/S036113/1) [ 6 ]. The EPSRC is a UK research council that funds engineering and physical sciences research. By sharing these reflections, this work aims to contribute to the wider effort of creating an inclusive research culture. Perceptions of equality, diversity, and inclusion may vary among individuals; for the scope of this study, the following definitions are adopted:

Equality: Equality is about ensuring that every individual has an equal opportunity to make the most of their lives and talents. No one should have poorer life chances because of the way they were born, where they come from, what they believe, or whether they have a disability.

Diversity: Diversity concerns understanding that each individual is unique, recognising our differences, and exploring these differences in a safe, positive, and nurturing way to value each other as individuals.

Inclusion: Inclusion is an effort and practice in which groups or individuals with different backgrounds are culturally and socially accepted, welcomed and treated equally. This concerns treating each person as an individual, making them feel valued, and supported and being respectful of who they are.

Research networks have varied goals, but a common purpose is to create new interdisciplinary research communities by fostering interactions between researchers and appropriate scientific, technological and industrial groups. These networks aim to offer valuable career progression opportunities for researchers through access to research funding, the forming of academic and industrial collaborations at network events, personal and professional development, and research dissemination. However, feedback from a 2021 survey of 19 UK research networks suggests that these research networks are not always diverse, and whilst on the face of it they seem inclusive, they are perceived as less inclusive by minority groups (including non-males, those with disabilities, and ethnic minority respondents) [ 7 ]. The exclusivity of these networks further exacerbates inequality within the academic community, as it prevents certain groups from engaging with all aspects of network activities.

Research investigating the causes of inequality and exclusivity has identified several suggestions to make research culture more inclusive, including improving diverse representation within event programmes and panels [ 8 , 9 ]; ensuring events are accessible to all [ 10 ]; providing personalised resources and training to build capacity and increase engagement [ 11 ]; educating institutions and funders to understand and address the barriers to research [ 12 ]; and increasing diversity in peer review and funding panels [ 13 ]. Universities, research institutions and research funding bodies are increasingly taking responsibility to ensure the health of the research and innovation system and to foster inclusion. For example, the EPSRC has set out its own ‘Expectations for EDI’ to promote the formation of a diverse and inclusive research culture [ 14 ]. To drive change, there is an emphasis on the importance of measuring diversity and linking it to measured outcomes, to benchmark future studies on how interventions affect diversity [ 5 ]. Collecting and sharing EDI data can also drive aspirations, provide a target for actions, and allow institutions to consider common issues. However, the lack of available data regarding the impact of EDI practices on diversity presents an obstacle, impeding the realisation of these benefits and hampering progress in addressing common issues and fostering diversity and inclusion [ 5 ].

Funding acquisition is important to an academic’s career progression, yet funding may often be awarded in ways that feel unequal and/or non-transparent. Because funding matters so much to career progression, careers can be damaged if credit for obtaining funding is not recognised appropriately; moreover, without recognition of everyone involved in successful research, funding bodies lack a complete picture of the research community and are unable to deliver the best value for money [ 15 ]. Awarding funding is often a key research network activity and an area where networks can have a positive impact on the wider research community. It is therefore important that practices are established to embed EDI considerations within the funding process and to ensure that network funding is awarded without bias. Recommendations from the literature to make the funding award process fairer include: ensuring a diverse funding panel; funders instituting reviewer anti-bias training; anonymous review; and/or automatic adjustments to correct for known biases [ 16 ]. In the UK, the government organisation UK Research and Innovation (UKRI), tasked with overseeing research and innovation funding, has pledged to publish data to enhance transparency. This initiative aims to furnish an evidence base for designing interventions and evaluating their efficacy. While the data show some positive signs (e.g., the award rates for male and female PI applicants were equal at 29% in 2020–21), Ottoline Leyser (UKRI Chief Executive) highlights the ‘persistent pernicious disparities for under-represented groups in applying for and winning research funding’ [ 17 ]. This suggests that a more radical approach to rethinking the traditional funding review process may be required.

This paper describes the approach taken by the ‘Connected Everything’ EPSRC-funded Network to embed EDI in all aspects of its research funding process, and evaluates the impact of this ambition, leading to recommendations for embedding EDI in research funding allocation.

Connected Everything’s equality, diversity and inclusion strategy

Connected Everything aims to create a multidisciplinary community of researchers and industrialists to address key challenges associated with the future of digital manufacturing. The network is managed by an investigator team who are responsible for strategic planning and, working with the network manager, for overseeing the delivery of key activities. The network was first funded between 2016–2019 (grant number: EP/P001246/1) and was awarded a second grant (grant number: EP/S036113/1). The network activities are based around three goals: building partnerships, developing leadership and accelerating impact.

The Connected Everything network represents a broad range of disciplines, including manufacturing, computer science, cybersecurity, engineering, human factors, business, sociology, innovation and design. Some of the subject areas, such as computer science and engineering, tend to be male-dominated (e.g., in 2021/22, the 185,420 higher education student enrolments in engineering & technology subjects were 20.5% female and 79.5% male [ 18 ]). Research networks also face challenges in terms of accessibility for people with care responsibilities and disabilities. In 2019, Connected Everything committed to embedding EDI in all its network activities and published a guiding principle and goals for improving EDI (see Additional file 1 ). When designing the processes to deliver the second iteration of Connected Everything, the team identified several sources of potential bias or exclusion that could affect engagement with the network. Based on these identified factors, a series of mitigating interventions were implemented, as outlined in Table  1 .

Connected Everything anonymous review process

A key Connected Everything activity is the funding of feasibility studies to enable cross-disciplinary, foresight, speculative and risky early-stage research, with a focus on low technology-readiness levels. Awards are made via a short, written application followed by a pitch to a multidisciplinary diverse panel including representatives from industry. Six- to twelve-month-long projects are funded to a maximum value of £60,000.

The peer-review process currently used by funders may reveal the applicants’ identities to the reviewer. This can introduce dilemmas for the reviewer: (a) whether to rely exclusively on information present within the application or to search for additional information about the applicants, and (b) whether or not to account for institutional prestige [ 34 ]. Knowing an applicant’s identity can bias the assessment of the proposal, whereas focusing the assessment on the science rather than the researcher more frequently achieves equality between award rates (i.e., the proportion of successful applications) [ 15 ]. To progress Connected Everything’s commitment to EDI, the project team created a two-stage review process in which the applicants’ identity was kept anonymous during the peer review stage. This anonymous process, outlined in Fig.  1 , was created for the feasibility study funding calls in 2019 and used for subsequent funding calls.

Figure 1. Connected Everything’s anonymous review process [EDI: equality, diversity, and inclusion]

To facilitate the anonymous review process, the proposal was submitted in two parts: part A, the research idea, and part B, the capability-to-deliver statement. All proposals were first anonymously reviewed by a random selection of two members from the Connected Everything executive group, a diverse group of digital manufacturing experts and peers from academia, industry and research institutions who provide guidance and leadership on Connected Everything activities. The reviewers rated the proposals against the selection criteria (see Additional file 1 , Table 1) and provided overall comments alongside a recommendation on whether or not the applicant should be invited to the panel pitch. This information was summarised and shared with a moderation sift panel, made up of a minimum of two Connected Everything investigators and a minimum of one member of the executive group, which tensioned the reviewers’ comments (i.e. carefully considered and weighed the reviewers’ comments and evaluations against each other) and ultimately decided which proposals to invite to the panel. This tensioning process included using the identifying information to ensure the applicants had the capability to deliver the project. If this remained unclear, the applicants were asked to confirm expertise in an area the moderation sift panel thought was key, or to bring additional expertise into the project team for the panel pitch.
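As a rough illustration of this flow (a minimal sketch, not the network's actual tooling), the two-part submission can be modelled so that stage-1 reviewers only ever receive the anonymised research idea, while the moderation sift panel can also consult the identifying capability-to-deliver statement. The names and structure below are assumptions for illustration only.

import random
from dataclasses import dataclass

@dataclass
class Proposal:
    research_idea: str         # anonymised part, sent to stage-1 reviewers
    capability_statement: str  # identifying part, consulted at the moderation sift

def assign_reviewers(proposal: Proposal, executive_group: list[str]) -> dict[str, str]:
    """Give the anonymised research idea to two randomly selected reviewers."""
    reviewers = random.sample(executive_group, k=2)
    return {name: proposal.research_idea for name in reviewers}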

During stage two, the applicants were invited to pitch their research idea to a panel of experts selected to reflect the diversity of the community. The proposals, including applicants’ identities, were shared with the panel at least two weeks ahead of the pitch session. Individual panel members completed a summary sheet at the end of the pitch session to record how well the proposal met the selection criteria (see Additional file 1 , Table 1). Panel members did not discuss their funding decisions until all the pitches had been completed. A panel chair oversaw the process but did not declare their opinion on a specific feasibility study unless the panel could not agree on an outcome. The panel and panel chair were reminded to consider ways to manage their unconscious bias during the selection process.

Due to the positive response received regarding the anonymous review process, Connected Everything extended its use when reviewing other funded activities. As these awards were for smaller grant values (~ £5,000), it was decided that no panel pitch was required, and the researcher’s identity was kept anonymous for the entire process.

Data collection and analysis methods

Data collection

Equality, diversity and inclusion data were voluntarily collected from applicants for Connected Everything research funding and from participants who won scholarships to attend Connected Everything funded activities. Responses to the EDI data requests were collected from nine Connected Everything coordinated activities between 2019 and 2022. Data requests were sent after the applicant had applied for Connected Everything funding or had attended a Connected Everything funded activity. All data requests were completed voluntarily, with reassurance given that completion of the data request in no way affected their application. In total, 260 responses were received, of which the three feasibility study calls comprised 56.2%. Overall, there was a 73.8% response rate.
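These headline figures are internally consistent, as the following arithmetic sketch shows (the number of requests sent is derived from the stated rate rather than reported in the text):

responses_received = 260
reported_rate = 0.738
requests_sent = round(responses_received / reported_rate)   # ~352 requests (derived, not stated)
feasibility_responses = round(0.562 * responses_received)   # ~146, matching the 146 feasibility-call applicants reported below
print(requests_sent, feasibility_responses)                 # -> 352 146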

To understand the diversity of participants engaging with Connected Everything activities and funding, the data requests asked for details of specific diversity characteristics: gender, transgender, disability, ethnicity, age, and care responsibilities. Although sex and gender are terms that are often used interchangeably, they are two different concepts. To clarify, the definitions used by the UK government describe sex as a set of biological attributes that is generally limited to male or female, and typically attributed to individuals at birth. In contrast, gender identity is a social construction related to behaviours and attributes, and is self-determined based on a person’s internal perception, identification and experience. Transgender is a term used to describe people whose gender identity is not the same as the sex they were registered at birth. Respondents were first asked to identify their gender and then whether their gender was different from their birth sex.

For this study, respondents were asked to (voluntarily) self-declare whether they consider themselves to be disabled or not. Ethnicity within the data requests was based on the 2011 census classification system. When reporting ethnicity data, this study followed the Advance HE example of aggregating the census categories into six groups to enable benchmarking against the available academic ethnicity data. Advance HE is a UK charity that works to improve the higher education system for staff, students and society. However, it was acknowledged that this grouping has limitations, including the assumption that minority ethnic staff or students are a homogeneous group [ 16 ]. Therefore, this study made sure to break down these groups during the discussion of the results. The six groups are:

Asian: Asian/Asian British: Indian, Pakistani, Bangladeshi, and any other Asian background;

Black: Black/African/Caribbean/Black British: African, Caribbean, and any other Black/African/Caribbean background;

Chinese: Chinese ethnic groups;

Mixed: mixed or multiple ethnic groups;

Other: other ethnic backgrounds, including Arab;

White: all white ethnic groups.
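A compact way to express this aggregation is a simple lookup table. This is a sketch only: the census wording is abbreviated, and the exact mapping is inferred from the six groups above and the categories named in the Results.

CENSUS_TO_GROUP = {
    "Asian/Asian British: Indian": "Asian",
    "Asian/Asian British: Pakistani": "Asian",
    "Asian/Asian British: Bangladeshi": "Asian",
    # This study reports Chinese separately from the other Asian categories.
    "Asian/Asian British: Chinese": "Chinese",
    "Black/African/Caribbean/Black British: African": "Black",
    "Black/African/Caribbean/Black British: Caribbean": "Black",
    "Mixed: White and Black Caribbean": "Mixed",
    "Mixed: White and Asian": "Mixed",
    "Other ethnic group: Arab": "Other",
    "White: English/Welsh/Scottish/Northern Irish/British": "White",
}

def aggregate(census_answer: str) -> str:
    # Default any unlisted answer to "Other" rather than dropping it.
    return CENSUS_TO_GROUP.get(census_answer, "Other")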

Benchmarking data

Published data from the Higher Education Statistics Agency [ 26 ] (a UK organisation responsible for collecting, analysing, and disseminating data related to higher education institutions and students), UKRI funding data [ 19 , 35 ] and 2011 census data [ 36 ] were used to benchmark the EDI data collected within this study. The responses were compared to the engineering and technology cluster of academic disciplines, as this cluster is most representative of Connected Everything’s main funder, the EPSRC. The Higher Education Statistics Agency defines the engineering and technology cluster as including the following subject areas: general engineering; chemical engineering; mineral, metallurgy & materials engineering; civil engineering; electrical, electronic & computer engineering; mechanical, aero & production engineering; and IT, systems sciences & computer software engineering [ 37 ].

When assessing equality in funding award rates, previous studies have focused on analysing the success rates of principal investigators only [ 15 , 16 , 38 ]; however, Connected Everything recognised that writing research proposals is a collaborative task, so it requested diversity data from the whole research team. The average of the last six years of published principal investigator and co-investigator diversity data for UKRI and EPSRC funding awards (2015–2021) was used to benchmark the Connected Everything funding data [ 35 ]. The UKRI and EPSRC funding review process includes a peer review stage followed by a panel pitch and assessment stage; unlike the Connected Everything review process, however, the applicant's track record is assessed during the peer review stage.

The data collected have been used to evaluate the success of the planned mitigations to address EDI factors affecting the higher education research ecosystem, as outlined in Table  1 (" Connected Everything’s equality, diversity and inclusion strategy " section).

Dominance of a small number of research-intensive universities receiving network funding

The dominance of a small number of research-intensive universities receiving funding from a network can have implications for the field of research, including the unequal distribution of resources; a lack of diversity of research; limited collaboration opportunities; and impacts on innovation and progress. Analysis of published EPSRC funding data between 2015 and 2021 [ 19 ] shows that funding has been predominantly (74.1%, 95% CI [71.3%, 76.9%], of £3.98 billion) awarded to Russell Group universities. The Russell Group is a self-selected association of 24 research-intensive universities (out of the 174 universities in the UK), established in 1994. Evaluation of the universities that received Connected Everything feasibility study funding between 2016–2019 shows that Connected Everything awarded just over half (54.6%, 95% CI [25.1%, 84.0%], of 11 awards) to Russell Group universities. Figure  2 shows that the Connected Everything funding awarded to Russell Group universities reduced to 44.4%, 95% CI [12.0%, 76.9%], of 9 awards between 2019–2022.

Figure 2. A comparison of funding awarded by EPSRC (total = £3.98 billion) across Russell Group and non-Russell Group universities, alongside the allocations for Connected Everything I (total = £660k) and Connected Everything II (total = £540k)
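The confidence intervals quoted above and throughout these results behave like normal-approximation (Wald) intervals for a proportion. The paper does not state its method, so this is an assumption, but the sketch below reproduces the reported interval for the 4-of-9 Connected Everything II awards to Russell Group universities:

import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wald interval for a proportion: p +/- z * sqrt(p * (1 - p) / n)."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

low, high = wald_ci(4, 9)
print(f"44.4%, 95% CI [{100 * low:.1f}%, {100 * high:.1f}%]")  # -> [12.0%, 76.9%]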

Dominance of successful applications from men

The percentage-point differences in award rates between researchers who identified as female, declared a disability, identified as an ethnic minority, or had care responsibilities and their respective counterparts are plotted in Fig.  3 . Bars to the right of the axis mean that the award rate of the female/declared-disability/ethnic-minority/carer applicants is greater than that of the male/no-declared-disability/white/non-carer applicants.

Figure 3. Percentage point (PP) differences in award rate by funding provider for gender, disability status, ethnicity and care responsibilities (data not collected by UKRI and EPSRC [ 35 ]). The total numbers of applicants for each funder are: Connected Everything = 146, EPSRC = 37,960, and UKRI = 140,135. *The numbers of applicants were too small (< 5) to enable a meaningful discussion
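Under the same Wald assumption, the award-rate gaps in Fig. 3 can be computed as a difference of two independent proportions. The counts below are hypothetical, chosen only to reproduce the 19.4% and 15.6% award rates discussed next; the paper does not report the underlying numerators and denominators:

import math

def award_rate_gap(k1: int, n1: int, k2: int, n2: int, z: float = 1.96):
    """Difference p1 - p2 in percentage points, with a 95% Wald CI for the difference."""
    p1, p2 = k1 / n1, k2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return 100 * diff, 100 * (diff - z * se), 100 * (diff + z * se)

gap, lo, hi = award_rate_gap(7, 36, 17, 109)  # hypothetical female vs male counts
print(f"{gap:.1f} pp, 95% CI [{lo:.1f}, {hi:.1f}]")  # -> 3.8 pp, with a wide interval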

Figure  3 (A) shows that, between 2015 and 2021, research team applicants who identified as male had a higher award rate than those who identified as female when applying for EPSRC and wider UKRI research council funding. Of the 146 Connected Everything funding applicants, those who identified as female achieved a higher award rate (19.4%, 95% CI [6.5%, 32.4%]) than those who identified as male (15.6%, 95% CI [8.8%, 22.4%]). These data suggest that biases have been reduced by the Connected Everything review process and other mitigation strategies (e.g., visible gender diversity among panel pitch members and publishing CE principles and goals to demonstrate commitment to equality and fairness). This finding aligns with an earlier study that found gender bias during the peer review process, resulting in female investigators receiving less favourable evaluations than their male counterparts [ 15 ].

Over-representation of people identifying as male in the engineering and technology academic community

Figure  4 shows the responses to the gender question, with 24.2%, 95% CI [19.0%, 29.4%], of 260 respondents identifying as female. This aligns with the average for the engineering and technology cluster (21.4%, 95% CI [20.9%, 21.9%], female of 27,740 academic staff), which includes subject areas representative of our main funder, the EPSRC [ 22 ]. We also sought to understand the representation of transgender researchers within the network. However, following the rounding policy outlined in UK Government statistics policies and procedures [ 39 ], the number of respondents who identified as a gender different from their birth-registered sex was too low (< 5) to enable a meaningful discussion.

Figure 4. Gender question responses from a total of 260 respondents
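The small-count rule applied here (and elsewhere in these results) can be captured by a one-line disclosure-control helper. The threshold of 5 follows the rounding policy cited above, while the function name and return marker are illustrative assumptions:

def suppress_small_count(count: int, threshold: int = 5):
    """Return the count unchanged, or "<5" when it is too small to report safely."""
    return count if count >= threshold else "<5"

print(suppress_small_count(3))   # -> <5
print(suppress_small_count(12))  # -> 12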

Dominance of successful applications from white academics

Figure  3 (C) shows that researchers from a minority ethnicity consistently have a lower award rate than white researchers when applying for EPSRC and UKRI funding. Similarly, the results in Fig.  3 (C) indicate that white researchers are more successful (by 8.0 percentage points, 95% CI [-8.6%, 24.6%]) when applying for Connected Everything funding. These results indicate that more measures should be implemented to support ethnic minority researchers applying for Connected Everything funding, as well as sense-checking that there is no unconscious bias in any of the Connected Everything funding processes. The breakdown of the ethnic diversity of applicants at different stages of the Connected Everything review process (i.e. all applications, applicants invited to panel pitch, and awarded feasibility studies) is plotted in Fig.  5 to help identify where more support is needed. Figure  5 shows an increase in the proportion of white researchers from 54%, 95% CI [45.4%, 61.8%], of all 146 applicants to 66%, 95% CI [52.8%, 79.1%], of the 50 researchers invited to the panel pitch. This suggests that stage 1 of the Connected Everything review process (anonymous review of written applications) may favour white applicants and/or introduce unconscious bias into the process.

Figure 5. Ethnicity question responses from different stages of the Connected Everything anonymous review process. The total number of applicants is 146, with 50 at the panel stage and 23 ultimately awarded

Under-representation of those from black or minority ethnic backgrounds

Connected Everything appears to engage a wide range of ethnic diversity, as shown in Fig.  6 . The ethnicities Asian (18.3%, 95% CI [13.6%, 23.0%]), Black (5.1%, 95% CI [2.4%, 7.7%]), Chinese (12.5%, 95% CI [8.4%, 16.5%]), mixed (3.5%, 95% CI [1.3%, 5.7%]) and other (7.8%, 95% CI [4.5%, 11.1%]) have a higher representation among the 260 individuals engaging with the network’s activities, in contrast to both the engineering and technology academic community and the wider UK population. When separating these groups into the original ethnic diversity answers, it becomes apparent that there is no engagement with ‘Black or Black British: Caribbean’, ‘Mixed: White and Black Caribbean’ or ‘Mixed: White and Asian’ researchers within Connected Everything activities. The lack of engagement with researchers from a Caribbean heritage is symptomatic of a lack of representation within the UK research landscape [ 25 ].

Figure 6. Ethnicity question responses from a total of 260 respondents, compared to the distribution of the 13,085 UK engineering and technology (E&T) academic staff [ 22 ] and the 56 million people recorded in the UK 2011 census data [ 36 ]

Under-representation of disabilities, chronic conditions, invisible illnesses and neurodiversity in funded activities and events

Figure  7 (A) shows that 5.7%, 95% CI [2.4%, 8.9%], of 194 respondents declared a disability. This is higher than the proportion of engineering and technology academics who identify as disabled (3.4%, 95% CI [3.2%, 3.7%], of 27,730 academics). Between January and March 2022, 9.0 million people of working age (16–64) within the UK were identified as disabled by the Office for National Statistics [ 40 ], which is 21% of the working-age population [ 27 ]. Considering these statistics, there is a stark under-representation of disabilities, chronic conditions, invisible illnesses and neurodiversity amongst engineering and technology academic staff and those engaging in Connected Everything activities.

Figure 7. Responses to (A) disability and (B) care responsibilities questions, collected from a total of 194 respondents

Between 2015 and 2020, academics who declared a disability were less successful than academics without a disability in attracting UKRI and EPSRC funding, as shown in Fig.  3 (B). While Fig.  3 (B) shows that those who declared a disability had a higher Connected Everything funding award rate, the number of applicants who declared a disability was too small (< 5) to enable a meaningful discussion of this result.

Under-representation of those with care responsibilities in funded activities and events

In response to the care responsibilities question, Fig.  7 (B) shows that 27.3%, 95% CI [21.1%, 33.6%], of 194 respondents identified as carers, which is higher than the 6% of adults estimated to be providing informal care across the UK in a UK Government survey of the 2020/2021 financial year [ 41 ]. However, the ‘informal care’ definition used by the 2021 survey includes unpaid care to a friend or family member needing support, perhaps due to illness, older age, disability, a mental health condition or addiction [ 41 ]. The Connected Everything survey included care responsibilities across the spectrum of care, including partners, children, other relatives, pets, friends and kin. It is important to consider a wide spectrum of care responsibilities, as key academic events, such as conferences, have previously been demonstrably exclusionary sites for academics with care responsibilities [ 42 ]. Breakdown analysis of the responses to care responsibilities by gender in Fig.  8 reveals that 37.8%, 95% CI [25.3%, 50.3%], of 58 women respondents reported care responsibilities, compared to 22.6%, 95% CI [15.6%, 29.6%], of 136 men respondents. Our findings reinforce similar studies that conclude the burden of care falls disproportionately on female academics [ 43 ].

Figure 8. Responses to the care responsibilities question, grouped by (A) 136 male and (B) 58 female respondents

Figure  3 (D) shows that researchers with care responsibilities applying for Connected Everything funding have a higher award rate than researchers without care responsibilities. These results suggest that the Connected Everything review process is supportive of researchers with care responsibilities, who have faced barriers in other areas of academia.

Reduced opportunities for ECRs

Early-career researchers (ECRs) represent the transition stage between starting a PhD and senior academic positions. The EPSRC defines an ECR as someone who is either within eight years of their PhD award (or equivalent professional training) or within six years of their first academic appointment [ 44 ]. These periods exclude any career break, for example due to family care, health reasons, or reasons related to COVID-19 such as home schooling or an increased teaching load. The median age for starting a PhD in the UK is 24 to 25, while PhDs usually last between three and four years [ 45 ]. These data therefore imply that the median age of EPSRC-defined ECRs is between 27 and 37 years. It should be noted, however, that this definition is not ideal and excludes ECRs who may have started their research career later in life.

Connected Everything aims to support ECRs via measures that include mentoring support, workshops, summer schools and podcasts. Figure  9 shows a greater representation of researchers aged 30–44 engaging with Connected Everything activities (62.4%, 95% CI [55.6%, 69.2%], of 194 respondents) when compared to the wider engineering and technology academic community (43.7%, 95% CI [43.1%, 44.3%], of 27,780 academics) and the UK population (26.9%, 95% CI [26.9%, 26.9%]).

Figure 9. Age question responses from a total of 194 respondents compared to the distribution of the 27,780 UK engineering and technology (E&T) academic staff [ 22 ] and the 56 million people recorded in the UK 2011 census data [ 36 ]

High competition for funding has a greater impact on ECRs

Figure  10 shows that the largest age bracket applying for and winning Connected Everything funding is 31–45, whereas 72%, 95% CI [70.1%, 74.5%], of the 12,075 researchers awarded EPSRC grants between 2015 and 2021 were 40 years or older. These results suggest that the measures introduced by Connected Everything have been successful at providing funding opportunities for researchers who are likely to be at the early-to-mid career stage.

Figure 10. Age of researchers at the application and award stages for (A) Connected Everything between 2019–2022 (total of 146 applicants and 23 awarded) and (B) EPSRC funding between 2015–2021 [ 35 ] (total of 35,780 applicants and 12,075 awarded)

The results of this paper provide insights into the impact that Connected Everything’s planned mitigations have had on promoting equality, diversity, and inclusion (EDI) in research and funding. Collecting EDI data from individuals who engage with network activities and apply for research funding enabled an evaluation of whether these mitigations have been successful in achieving the intended outcomes outlined at the start of the study, as summarised in Table  2 .

The results in Table  2 indicate that Connected Everything’s approach to EDI has helped achieve the intended outcome of improving the representation of women, ECRs, those with a declared disability and those from black/minority ethnic backgrounds engaging with network events, when compared to the engineering and technology academic community. In addition, the network has helped raise awareness of the high presence of researchers with care responsibilities at network events, which can help to track progress towards making future events inclusive and accessible for these carers. The data highlight two areas for improvement: (1) ensuring a gender balance; and (2) increasing the representation of those with declared disabilities. Both discrepancies are indicative of the wider imbalances and under-representation of these groups in the engineering and technology academic community [ 26 ], yet represent areas where networks can strive to make a difference. Possible strategies include: using targeted outreach; promoting greater representation of these groups among event speakers; and going further to create a welcoming and inclusive environment. One barrier that can disproportionately affect women researchers is the need to balance care responsibilities with attending network events [ 46 ]. This was reflected in the Connected Everything data, which reported that 37.8%, 95% CI [25.3%, 50.3%], of women engaging with network activities had care responsibilities, compared to 22.6%, 95% CI [15.6%, 29.6%], of men. Providing accommodations such as on-site childcare, flexible scheduling, or virtual attendance options can therefore help to promote inclusivity and allow more women researchers to attend.

Only 5.7%, 95% CI [2.4%, 8.9%], of respondents engaging with Connected Everything declared a disability, which is higher than in the engineering and technology academic community (3.4%, 95% CI [3.2%, 3.7%]) [ 26 ], but unrepresentative of the wider UK population. It has been suggested that academics can be uncomfortable declaring disabilities because scholarly contributions and institutional citizenship are so prized that they feel they cannot be honest about their health concerns and instead keep them secret [ 47 ]. In research networks, it is important to be mindful of this hidden group within higher education and to ensure that measures are put in place to make the network’s activities inclusive to all. Future considerations for making research events more inclusive include: improving the physical accessibility of events; providing assistive technology such as screen readers, audio descriptions and captioning to help individuals with visual or hearing impairments access and participate; providing sign language interpreters; offering flexible scheduling options; and providing quiet rooms, written materials in accessible formats, and support staff trained to work with individuals with cognitive disabilities.

Connected Everything introduced measures (e.g., an anonymised reviewing process, Q&A sessions before funding calls, and inclusive design of the panel pitch) to help address inequalities in how funding is awarded. Table 2 shows success in reducing the dominance of researchers who identify as male and of research-intensive universities in winning research funding, and shows that researchers with care responsibilities were more successful at winning funding than those without. The data revealed that the proposed measures were unable to address the inequality in award rates between white and ethnic minority researchers, which is an area for improvement. The inequality appears to arise during the anonymous review stage, with a greater proportion of white researchers being invited to panel. Recommendations to make the review process fairer include: ensuring greater diversity of reviewers; reviewer anti-bias training; and automatic adjustments to correct for known biases in writing style [ 16 , 32 ].

When reflecting on the development of a strategy to embed EDI throughout the network, Connected Everything has learned several key lessons that may benefit other networks undertaking similar activities. These include:

EDI is never ‘done’: There is a constant need to review approaches to EDI to ensure they remain relevant to the network community. Connected Everything could review its principles to include the concept of justice in its approach to diversity and inclusion. Justice, in the context of EDI, refers to the removal of systemic barriers that prevent the fair and equitable distribution of resources and opportunities among all members of society, regardless of their individual characteristics or backgrounds. The principles and subsequent actions could be reviewed against the EDI expectations [ 14 ], paying particular attention to areas where barriers may still be present: for example, shifting from welcoming people into existing structures and culture to creating new structures and culture together, with specific emphasis on decision or advisory mechanisms within the network. Where removing a barrier is not within the network’s control, this activity could focus on tailored support to overcome it, thus achieving equity (justice).

Widen diversity categories: By collecting data on a broad range of characteristics, we can identify and address disparities and biases that might otherwise be overlooked. A weakness of this dataset is that it ignores the experience of those with intersectional identities across race, ethnicity, gender, class, disability and/or LGBTQI identity. The Wellcome Trust noted how little was known about the socio-economic background of scientists and researchers [ 48 ].

Collect data on whole research teams: For the first two calls for feasibility study funding, Connected Everything only asked the Principal Investigator to voluntarily provide their data. We realised that this was a limited approach and, in the third call, asked for the data regarding the whole research team to be shared anonymously. Furthermore, we do not currently measure the diversity of our event speakers, panellists or reviewers. Collecting these data in the future will help to ensure the network is accountable and will ensure that all groups are represented during our activities and in the funding decision-making process.

High response rate: Previous surveys measuring network diversity (e.g., [ 7 ]) have struggled to get responses when surveying their memberships, whereas this study achieved a response rate of 73.8%. We attribute this high response rate to sending EDI data requests at the point of contact with the network (e.g., on submission of funding proposals or after attendance at network events), rather than trying to survey the entire network membership at any one point in time.

Improve administration: The administration associated with collecting EDI data requires a commitment to transparency, inclusivity, and continuous improvement. For example, during the first feasibility funding call, Connected Everything made it clear that the review process would be anonymous, but the application form was not in separate documents. This made anonymising the application forms extremely time-consuming. For the subsequent calls, separate documents were created – Part A for identifying information (Principal Investigator contact details, Project Team and Industry collaborators) and Part B for the research idea.

Accepting that this can be uncomfortable: Trying to improve EDI can be uncomfortable because it often requires challenging our assumptions, biases, and existing systems and structures. However, it is essential if we want to make real progress towards equity and inclusivity. Creating processes to support embedding EDI takes time and Connected Everything has found it is rare to get it right the first time. Connected Everything is sharing its learning as widely as possible both to support others in their approaches and continue our learning as we reflect on how to continually improve, even when it is challenging.

Enabling individual engagement with EDI: During this work, Connected Everything recognised that methods for engaging with EDI issues in research design and delivery are lacking. Connected Everything, with support from the Future Food Beacon of Excellence at the University of Nottingham, set out to develop a card-based tool [ 49 ] to help researchers and stakeholders identify questions around how their work may promote equity and increase inclusion, or have a negative impact on one or more protected groups, and how this can be overcome. The results have been shared in conference presentations [ 50 ] and will be published later.

While this study provides insights into how EDI can be improved in research network activities and funding processes, it is essential to acknowledge several limitations that may impact the interpretation of the findings.

Sample size and generalisability: A total of 260 responses were received, which may not be representative of our overall network of 500+ members. Nevertheless, these data provide a sense of the current diversity of those engaging in Connected Everything activities and funding opportunities, which we can compare with other available data to steer action to further diversify the network.

Handling of missing data: Out of the 260 responses, 66 data points were missing for questions regarding age, disability, and caring responsibilities. These questions were mistakenly omitted from a Connected Everything summer school survey, contributing 62 of the missing data points. While we assumed during analysis that the remainder of the missing data were missing at random, it is important to acknowledge that the missingness could be related to other factors, potentially introducing bias into our results.

Emphasis on quantitative data: The study relies on using quantitative data to evaluate the impact of the EDI measures introduced by Connected Everything. However, relying solely on quantitative metrics may overlook nuanced aspects of EDI that cannot be easily quantified. For example, EDI encompasses multifaceted issues influenced by historical, cultural, and contextual factors. These nuances may not be fully captured by numbers alone. In addition, some EDI efforts may not yield immediate measurable outcomes but still contribute to a more inclusive environment.

Diversity and inclusion are not synonymous: The study proposes 21 measures to contribute towards creating an equal, diverse and inclusive research culture and collects diversity data to measure the impact of these measures. However, while diversity is simpler to monitor, increasing diversity alone does not guarantee equality or inclusion. Even with diverse research groups, individuals from underrepresented groups may still face barriers, microaggressions, or exclusion.

Balancing anonymity and rigour in grant reviews: The anonymous review process proposed by Connected Everything removes personal and organisational details from the research ideas under reviewer evaluation. However, there exists a possibility that a reviewer could discern the identity of the grant applicant from the research idea itself. Reviewers are expected to be subject-matter experts in the field relevant to the grant proposal they are evaluating. Given the specialised nature of scientific research, it is conceivable that a well-known applicant could be identified through the specifics of the work, the methodologies employed, and even the writing style.

Expanding gender identity options: A limitation of this study emerged from the restricted gender options (male, female, other, prefer not to say) provided to respondents when answering the gender identity question. This limitation reflects the context of data collection in 2018, a time when diversity monitoring guidance was still limited. As our understanding of gender identity evolves beyond binary definitions, future data collection efforts should embrace a more expansive and inclusive approach, recognising the diverse spectrum of gender identities.

In conclusion, this study provides evidence of the effectiveness of a research network's approach to promoting equality, diversity, and inclusion (EDI) in research and funding. By collecting EDI data from individuals who engage with network activities and apply for research funding, this study has shown that the network's initiatives have had a positive impact on representation and fairness in the funding process. Specifically, the analysis reveals that the network is successful at engaging ECRs and those with care responsibilities, and has a diverse range of ethnicities represented at Connected Everything events. Additionally, the network activities have a more equal gender balance and a greater representation of researchers with disabilities when compared to the engineering and technology academic community, though there is still an under-representation of these groups compared to the national population.

Connected Everything introduced measures to help address inequalities in how funding is awarded. The measures introduced helped reduce the dominance of researchers who identified as male and research-intensive universities in winning research funding. Additionally, researchers with care responsibilities were more successful at winning funding than those without care responsibilities. However, inequality persisted with white researchers achieving higher award rates than those from ethnic minority backgrounds. Recommendations to make the review process fairer include: ensuring greater diversity of reviewers; reviewer anti-bias training; and automatic adjustments to correct for known biases in writing style.

Connected Everything’s approach to embedding EDI in network activities has already been shared widely with other EPSRC-funded networks and Hubs (e.g. the UKRI Circular Economy Hub and the UK Acoustics Network Plus). The network hopes that these findings will inform broader efforts to promote EDI in research and funding and that researchers, funders, and other stakeholders will be encouraged to adopt evidence-based strategies for advancing this important goal.

Availability of data and materials

The data were collected anonymously; however, it may be possible to identify an individual by combining specific records from the data request forms. Therefore, the study data have been presented in aggregate form to protect the confidentiality of individuals, and the data utilised in this study cannot be made openly accessible due to ethical obligations to protect the privacy and confidentiality of the data providers.

Abbreviations

ECR: Early career researcher

EDI: Equality, diversity and inclusion

EPSRC: Engineering and Physical Sciences Research Council

UKRI: UK Research and Innovation

Xuan J, Ocone R. The equality, diversity and inclusion in energy and AI: call for actions. Energy AI. 2022;8:100152.


Guyan K, Oloyede FD. Equality, diversity and inclusion in research and innovation: UK review. Advance HE; 2019.  https://www.ukri.org/wp-content/uploads/2020/10/UKRI-020920-EDI-EvidenceReviewUK.pdf .

Cooke A, Kemeny T. Cities, immigrant diversity, and complex problem solving. Res Policy. 2017;46:1175–85.

AlShebli BK, Rahwan T, Woon WL. The preeminence of ethnic diversity in scientific collaboration. Nat Commun. 2018;9:5163.

Gagnon S, Augustin T, Cukier W. Interplay for change in equality, diversity and inclusion studies. Hum Relat. Epub ahead of print 23 April 2021. https://doi.org/10.1177/00187267211002239.

Connected Everything. https://connectedeverything.ac.uk/ (Accessed 27 Feb 2023).

Chandler-Wilde S, Kanza S, Fisher O, Fearnshaw D, Jones E. Reflections on an EDI Survey of UK-Government-Funded Research Networks in the UK. In: The 51st International Congress and Exposition on Noise Control Engineering. St. Albans: Institute of Acoustics; 2022. p. 930–940.


Prathivadi Bhayankaram K, Prathivadi Bhayankaram N. Conference panels: do they reflect the diversity of the NHS workforce? BMJ Lead. 2022;6:57–9.

Goodman SW, Pepinsky TB. Gender representation and strategies for panel diversity: Lessons from the APSA Annual Meeting. PS Polit Sci Polit 2019;52:669–676.

Olsen J, Griffiths M, Soorenian A, et al. Reporting from the margins: disabled academics reflections on higher education. Scand J Disabil Res. 2020;22:265–74.

Baldie D, Dickson CAW, Sixsmith J. Building an Inclusive Research Culture. In: Knowledge, Innovation, and Impact. 2021, pp. 149–157.

Sato S, Gygax PM, Randall J, et al. The leaky pipeline in research grant peer review and funding decisions: challenges and future directions. High Educ. 2020;82:145–62.

Recio-Saucedo A, Crane K, Meadmore K, et al. What works for peer review and decision-making in research funding: a realist synthesis. Res Integr Peer Rev. 2022;7:1–28.

EPSRC. Expectations for equality, diversity and inclusion – UKRI, https://www.ukri.org/about-us/epsrc/our-policies-and-standards/equality-diversity-and-inclusion/expectations-for-equality-diversity-and-inclusion/ (2022, Accessed 26 Apr 2022).

Witteman HO, Hendricks M, Straus S, et al. Are gender gaps due to evaluations of the applicant or the science? A natural experiment at a national funding agency. Lancet. 2019;393:531–40.

Li YL, Bretscher H, Oliver R, et al. Racism, equity and inclusion in research funding. Sci Parliam. 2020;76:17–9.

UKRI. UKRI publishes latest diversity data for research funding, https://www.ukri.org/news/ukri-publishes-latest-diversity-data-for-research-funding/ (Accessed 28 July 2022).

Higher Education Statistics Agency. What do HE students study? https://www.hesa.ac.uk/data-and-analysis/students/what-study (2023, Accessed 25 March 2023).

UKRI. Competitive funding decisions, https://www.ukri.org/what-we-offer/what-we-have-funded/competitive-funding-decisions/ (2023, Accessed 2 April 2023).

Santos G, Van Phu SD. Gender and academic rank in the UK. Sustainability. 2019;11:3171.

Jebsen JM, Nicoll Baines K, Oliver RA, et al. Dismantling barriers faced by women in STEM. Nat Chem. 2022;14:1203–6.

Advance HE. Equality in higher education: staff statistical report 2021 | Advance HE, https://www.advance-he.ac.uk/knowledge-hub/equality-higher-education-statistical-report-2021 (28 October 2021, Accessed 26 April 2022).

EngineeringUK. Engineering in Higher Education, https://www.engineeringuk.com/media/318874/engineering-in-higher-education_report_engineeringuk_march23_fv.pdf (2023, Accessed 25 March 2023).

Bhopal K. Academics of colour in elite universities in the UK and the USA: the ‘unspoken system of exclusion’. Stud High Educ. 2022;47:2127–37.

Williams P, Bath S, Arday J, et al. The Broken Pipeline: Barriers to Black PhD Students Accessing Research Council Funding. 2019.

HESA. Who’s working in HE? Personal characteristics, https://www.hesa.ac.uk/data-and-analysis/staff/working-in-he/characteristics (2023, Accessed 1 April 2023).

Office for National Statistics. Principal projection - UK population in age groups, https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationprojections/datasets/tablea21principalprojectionukpopulationinagegroups (2022, Accessed 3 August 2022).

HESA. Who’s studying in HE? Personal characteristics, https://www.hesa.ac.uk/data-and-analysis/students/whos-in-he/characteristics (2023, Accessed 1 April 2023).

Herman E, Nicholas D, Watkinson A, et al. The impact of the pandemic on early career researchers: what we already know from the internationally published literature. Prof Inf; 30. Epub ahead of print 11 March 2021. https://doi.org/10.3145/epi.2021.mar.08.

Moreau M-P, Robertson M. ‘Care-free at the top’? Exploring the experiences of senior academic staff who are caregivers, https://srhe.ac.uk/wp-content/uploads/2020/03/Moreau-Robertson-SRHE-Research-Report.pdf (2019).

Shillington AM, Gehlert S, Nurius PS, et al. COVID-19 and long-term impacts on tenure-line careers. J Soc Social Work Res. 2020;11:499–507.

de Winde CM, Sarabipour S, Carignano H et al. Towards inclusive funding practices for early career researchers. J Sci Policy Gov; 18. Epub ahead of print 24 March 2021. https://doi.org/10.38126/JSPG180105 .

Trust W. Grant funding data report 2018/19, https://wellcome.org/sites/default/files/grant-funding-data-2018-2019.pdf (2020).

Vallée-Tourangeau G, Wheelock A, Vandrevala T, et al. Peer reviewers’ dilemmas: a qualitative exploration of decisional conflict in the evaluation of grant applications in the medical humanities and social sciences. Humanit Soc Sci Commun. 2022;2022 91:9: 1–11.

Diversity data – UKRI. https://www.ukri.org/what-we-offer/supporting-healthy-research-and-innovation-culture/equality-diversity-and-inclusion/diversity-data/ (accessed 30 September 2022).

2011 Census - Office for National Statistics. https://www.ons.gov.uk/census/2011census (Accessed 2 August 2022).

Cost centres. (2012/13 onwards) | HESA, https://www.hesa.ac.uk/support/documentation/cost-centres/2012-13-onwards (Accessed 28 July 2022).

Viner N, Powell P, Green R. Institutionalized biases in the award of research grants: a preliminary analysis revisiting the principle of accumulative advantage. Res Policy. 2004;33:443–54.

ofqual. Rounding policy - GOV.UK, https://www.gov.uk/government/publications/ofquals-statistics-policies-and-procedures/rounding-policy (2023, Accessed 2 April 2023).

Office for National Statistics. Labour market status of disabled people, https://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/datasets/labourmarketstatusofdisabledpeoplea08 (2022, Accessed 3 August 2022).

Family Resources Survey. financial year 2020 to 2021 - GOV.UK, https://www.gov.uk/government/statistics/family-resources-survey-financial-year-2020-to-2021 (Accessed 10 Aug 2022).

Henderson E. Academics in two places at once: (not) managing caring responsibilities at conferences. 2018, p. 218.

Jolly S, Griffith KA, DeCastro R, et al. Gender differences in time spent on parenting and domestic responsibilities by high-achieving young physician-researchers. Ann Intern Med. 2014;160:344–53.

UKRI. Early career researchers, https://www.ukri.org/what-we-offer/developing-people-and-skills/esrc/early-career-researchers/ (2022, Accessed 2 April 2023).

Cornell B. PhD Life: The UK student experience , www.hepi.ac.uk (2019, Accessed 2 April 2023).

Kibbe MR, Kapadia MR. Underrepresentation of women at academic medical conferences—manels must stop. JAMA Netw Open 2020; 3:e2018676–e2018676.

Brown N, Leigh J. Ableism in academia: where are the disabled and ill academics? 2018; 33: 985–989.  https://doi.org/10.1080/0968759920181455627

Bridge Group. Diversity in Grant Awarding and Recruitment at Wellcome Summary Report. 2017.

Peter Craigon O, Fisher D, Fearnshaw et al. VERSION 1 - The Equality Diversity and Inclusion cards. Epub ahead of print 2022. https://doi.org/10.6084/m9.figshare.21222212.v3 .

Connected Everything II. EDI ideation cards for research - YouTube, https://www.youtube.com/watch?v=GdJjL6AaBbc&ab_channel=ConnectedEverythingII (2022, Accessed 7 June 2023).

Download references

Acknowledgements

The authors would like to acknowledge the support of the Engineering and Physical Sciences Research Council (EPSRC) [grant number EP/S036113/1] through Connected Everything II: Accelerating Digital Manufacturing Research Collaboration and Innovation. The authors would also like to gratefully acknowledge the Connected Everything Executive Group for their contribution towards developing Connected Everything’s equality, diversity and inclusion strategy.

Funding

This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) [grant number EP/S036113/1].

Author information

Authors and affiliations

Food, Water, Waste Research Group, Faculty of Engineering, University of Nottingham, University Park, Nottingham, UK

Oliver J. Fisher

Human Factors Research Group, Faculty of Engineering, University of Nottingham, University Park, Nottingham, UK

Debra Fearnshaw & Sarah Sharples

School of Food Science and Nutrition, University of Leeds, Leeds, UK

Nicholas J. Watson

School of Engineering, University of Liverpool, Liverpool, UK

Peter Green

Centre for Circular Economy, University of Exeter, Exeter, UK

Fiona Charnley

Institute for Manufacturing, University of Cambridge, Cambridge, UK

Duncan McFarlane


Contributions

OJF analysed and interpreted the data, and was the lead author in writing and revising the manuscript. DF led the data acquisition and supported the interpretation of the data. DF was also a major contributor to the design of the equality, diversity and inclusion (EDI) strategy proposed in this work. NJW supported the design of the EDI strategy and was a major contributor in reviewing and revising the manuscript. PG supported the design of the EDI strategy and was a major contributor in reviewing and revising the manuscript. FC supported the design of the EDI strategy and the interpretation of the data. DM supported the design of the EDI strategy. SS led the development of the EDI strategy proposed in this work, and was a major contributor in data interpretation and reviewing and revising the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Debra Fearnshaw .

Ethics declarations

Ethics approval and consent to participate

The research was considered exempt from requiring ethical approval, as it uses completely anonymous survey results that are routinely collected as part of the administration of the network plus, and informed consent was obtained at the time of original data collection.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Fisher, O.J., Fearnshaw, D., Watson, N.J. et al. Promoting equality, diversity and inclusion in research and funding: reflections from a digital manufacturing research network. Res Integr Peer Rev 9 , 5 (2024). https://doi.org/10.1186/s41073-024-00144-w


Received: 12 October 2023

Accepted: 09 April 2024

Published: 16 May 2024

DOI: https://doi.org/10.1186/s41073-024-00144-w


Keywords

  • Research integrity
  • Network policy
  • Funding reviewing
  • EDI interventions



  • Open access
  • Published: 13 May 2024

What are the strengths and limitations to utilising creative methods in public and patient involvement in health and social care research? A qualitative systematic review

  • Olivia R. Phillips,
  • Cerian Harries,
  • Jo Leonardi-Bee,
  • Holly Knight,
  • Lauren B. Sherar,
  • Veronica Varela-Mato &
  • Joanne R. Morling

Research Involvement and Engagement volume 10, Article number: 48 (2024)


There is increasing interest in using patient and public involvement (PPI) in research to improve the quality of healthcare. Traditional methods, such as interviews or focus groups, have ordinarily been used, but these tend to engage a similar demographic of people. Creative methods are therefore being developed to involve patients for whom traditional methods are inaccessible or non-engaging.

This review aimed to determine the strengths and limitations of using creative PPI methods in health and social care research.

Electronic searches were conducted over five databases (Web of Science, PubMed, ASSIA, CINAHL, Cochrane Library) on 14th April 2023. Studies that used only traditional, non-creative PPI methods were excluded. Creative PPI methods had to be used to engage with people as research advisors, rather than as study participants. Only primary data published in English from 2009 onwards were accepted. Title, abstract and full-text screening was undertaken by two independent reviewers before inductive thematic analysis was used to generate themes.

Twelve papers met the inclusion criteria. The creative methods used included songs, poems, drawings, photograph elicitation, drama performance, visualisations, social media, photography, prototype development, cultural animation, card sorting and persona development. Analysis identified four limitations and five strengths to the creative approaches. Limitations included the time and resource intensive nature of creative PPI, the lack of generalisation to wider populations and ethical issues. External factors, such as the lack of infrastructure to support creative PPI, also affected their implementation. Strengths included the disruption of power hierarchies and the creation of a safe space for people to express mundane or “taboo” topics. Creative methods are also engaging, inclusive of people who struggle to participate in traditional PPI and can also be cost and time efficient.

‘Creative PPI’ is an umbrella term encapsulating many different methods of engagement, each with its own strengths and limitations. The choice of method should be determined by the aims and requirements of the research, as well as the characteristics of the PPI group and practical limitations. Creative PPI can be advantageous over more traditional methods; however, a hybrid approach could be considered to reap the benefits of both. Creative PPI methods are not widely used; however, this could change over time as PPI becomes embedded even more into research.

Plain English Summary

It is important that patients and the public are included in the research process from initial brainstorming, through design, to delivery. This is known as public and patient involvement (PPI). Their input means that research closely aligns with their wants and needs. Traditionally, interviews and group discussions are held to get this input, but this can exclude people who find these activities non-engaging or inaccessible, for example those with language challenges, learning disabilities or memory issues. Creative methods of PPI can overcome this. This is a broad term describing different (non-traditional) ways of engaging patients and the public in research, such as through the use of art, animation or performance. This review investigated the reasons why creative approaches to PPI could be difficult (limitations) or helpful (strengths) in health and social care research. After searching 5 online databases, 12 studies were included in the review. PPI groups included adults, children and people with language and memory impairments. Creative methods included songs, poems, drawings, the use of photos and drama, visualisations, Facebook, creating prototypes, personas and card sorting. Limitations included the time, cost and effort associated with creative methods, the lack of application to other populations, ethical issues and buy-in from the wider research community. Strengths included the feeling of equality between academics and the public, the creation of a safe space for people to express themselves, inclusivity, and that creative PPI can be cost and time efficient. Overall, this review suggests that creative PPI is worthwhile; however, each method has its own strengths and limitations, and the choice of which will depend on the research project, PPI group characteristics and other practical limitations, such as time and financial constraints.

Peer Review reports

Introduction

Patient and public involvement (PPI) is the term used to describe the partnership between patients (including caregivers, potential patients, healthcare users etc.) or the public (community members with no known interest in the topic) and researchers. It describes research that is done “‘with’ or ‘by’ the public, rather than ‘to,’ ‘about’ or ‘for’ them” [1]. In 2009, it became a legislative requirement for certain health and social care organisations to include patients, families, carers and communities not only in the planning of health and social care services, but in their commissioning, delivery and evaluation too [2]. For example, funding applications to the National Institute for Health and Care Research (NIHR), a UK funding body, must demonstrate how researchers plan to include patients/service users, the public and carers at each stage of the project [3]. However, this should not simply be a tokenistic, tick-box exercise. PPI should help formulate initial ideas and should be an instrumental, continuous part of the research process. Input from PPI can provide unique insights not yet considered and can ensure that research and health services are closely aligned to the needs and requirements of service users. PPI also generally makes research more relevant, with clearer outcomes and impacts [4]. Although this review refers to both patients and the public using the umbrella term ‘PPI’, it is important to acknowledge that these are two different groups with different motivations, needs and interests when it comes to health research and service delivery [5].

Despite continuing recognition of the need for PPI to improve the quality of healthcare, researchers have also recognised that there is no ‘one size fits all’ method for involving patients [4]. Traditionally, PPI methods invite people to take part in interviews or focus groups to facilitate discussion, or in surveys and questionnaires. However, these can sometimes be inaccessible or non-engaging for certain populations. For example, someone with communication difficulties may find it difficult to engage in focus groups or interviews. If individuals lack the appropriate skills to interact in these types of scenarios, they cannot take advantage of the participation opportunities PPI can provide [6]. Creative methods, however, aim to resolve these issues. These are a relatively new concept whereby researchers use creative methods (e.g., artwork, animations, Lego) to make PPI more accessible and engaging for those whose voices would otherwise go unheard. They ensure that all populations can engage in research, regardless of their background or skills. Seminal work has previously been conducted in this area, which brought to light the use of creative methodologies in research. Leavy (2008) [7] discussed how traditional interviews had limits on what could be expressed due to their sterile, jargon-filled and formulaic structure, read by only a few specialised academics. It was this that called for more creative approaches, which included narrative enquiry, fiction-based research, poetry, music, dance, art, theatre, film and visual art. These practices, which can be used in any stage of the research cycle, supported greater empathy, self-reflection and longer-lasting learning experiences compared to interviews [7]. They also pushed traditional academic boundaries, which made the research accessible not only to researchers, but to the public too. Leavy explains that there are similarities between arts-based approaches and scientific approaches: both attempt to investigate what it means to be human through exploration, and used together, these complementary approaches can progress our understanding of the human experience [7]. Further, it is important to acknowledge the parallels and nuances between creative and inclusive methods of PPI. Although creative methods aim to be inclusive (this should underlie any PPI activity, whether creative or not), they do not incorporate all types of accessible, inclusive methodologies, e.g., using sign language for people with hearing impairments or audio recordings for people who cannot read. Given that there was not enough scope to include an evaluation of all possible inclusive methodologies, this review will focus on creative methods of PPI only.

We aimed to conduct a qualitative systematic review to highlight the strengths of creative PPI in health and social care research, as well as the limitations, which might act as a barrier to their implementation. A qualitative systematic review “brings together research on a topic, systematically searching for research evidence from primary qualitative studies and drawing the findings together” [ 8 ]. This review can then advise researchers of the best practices when designing PPI.

Public involvement

The PHIRST-LIGHT Public Advisory Group (PAG) consists of a team of experienced public contributors with a diverse range of characteristics from across the UK. The PAG was involved in the initial question setting and study design for this review.

Search strategy

For the purpose of this review, the JBI approach for conducting qualitative systematic reviews was followed [ 9 ]. The search terms were (“creativ*” OR “innovat*” OR “authentic” OR “original” OR “inclu*”) AND (“public and patient involvement” OR “patient and public involvement” OR “public and patient involvement and engagement” OR “patient and public involvement and engagement” OR “PPI” OR “PPIE” OR “co-produc*” OR “co-creat*” OR “co-design*” OR “cooperat*” OR “co-operat*”). This search string was modified according to the requirements of each database. Papers were filtered by title, abstract and keywords (see Additional file 1 for search strings). The databases searched included Web of Science (WoS), PubMed, ASSIA and CINAHL. The Cochrane Library was also searched to identify relevant reviews which could lead to the identification of primary research. The search was conducted on 14/04/23. As our aim was to report on the use of creative PPI in research, rather than more generic public engagement, we used electronic databases of scholarly peer-reviewed literature, which represent a wide range of recognised databases. These identified studies published in general international journals (WoS, PubMed), those in social sciences journals (ASSIA), those in nursing and allied health journals (CINAHL), and trials of interventions (Cochrane Library).
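To make the search reproducible, the two concept blocks can be assembled programmatically before being adapted to each database's syntax. The sketch below is illustrative only: the term lists are quoted from the Methods, but the assembly code is our own assumption, not the authors' actual tooling.

```python
# Illustrative sketch of assembling the review's Boolean search string.
# The term lists are quoted from the Methods above; the assembly logic
# is an assumption for illustration, not the authors' actual tooling.

creative_terms = ["creativ*", "innovat*", "authentic", "original", "inclu*"]

ppi_terms = [
    "public and patient involvement",
    "patient and public involvement",
    "public and patient involvement and engagement",
    "patient and public involvement and engagement",
    "PPI", "PPIE",
    "co-produc*", "co-creat*", "co-design*",
    "cooperat*", "co-operat*",
]

def or_block(terms):
    """OR-join one concept block, quoting every term."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# The base query; each database then restricts this to title, abstract
# and keyword fields using its own field syntax.
query = f"{or_block(creative_terms)} AND {or_block(ppi_terms)}"
print(query)
```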

Inclusion criteria

Only full-text, English-language, primary research papers from 2009 to 2023 were included. This timeframe was chosen because in 2009 the Health and Social Care Reform Act made it mandatory for certain health and social care organisations to involve the public and patients in planning, delivering and evaluating services [2]. Only creative methods of PPI were accepted, rather than traditional methods such as interviews or focus groups. For the purposes of this paper, creative PPI included creative art or arts-based approaches (e.g. stories, songs, drama, drawing, painting, poetry, photography) used to enhance engagement. Studies had to relate to health and social care, and the creative PPI had to be used to engage with people as research advisors, not as study participants. Meta-analyses, conference abstracts, book chapters, commentaries and reviews were excluded. There were no limits concerning study location or the demographic characteristics of the PPI groups. Only qualitative data were accepted.

Quality appraisal

Quality appraisal using the Critical Appraisal Skills Programme (CASP) checklist [ 10 ] was conducted by the primary authors (ORP and CH). This was done independently, and discrepancies were discussed and resolved. If a consensus could not be reached, a third independent reviewer was consulted (JRM). The full list of quality appraisal questions can be found in Additional file 2 .
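The dual-appraisal process, two independent reviewers per paper with discussion and a third reviewer for unresolved items, can be pictured as a small reconciliation routine. A minimal sketch, assuming each reviewer's CASP judgements are stored as per-item 'yes'/'no' answers; the function names and data shapes are hypothetical:

```python
def reconcile_casp(reviewer_a, reviewer_b, adjudicate):
    """Merge two independent CASP checklists (dicts mapping checklist
    item -> 'yes'/'no'). Agreements pass through; disagreements are
    flagged for discussion and, failing consensus, referred to a third
    reviewer via `adjudicate` (a hypothetical callback)."""
    final, flagged = {}, []
    for item in reviewer_a:
        if reviewer_a[item] == reviewer_b[item]:
            final[item] = reviewer_a[item]
        else:
            flagged.append(item)
            final[item] = adjudicate(item, reviewer_a[item], reviewer_b[item])
    return final, flagged

# The per-study score reported later (e.g. 6/10 to 10/10) is then simply
# the count of 'yes' answers across the ten checklist items:
# score = sum(1 for answer in final.values() if answer == "yes")
```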

Data extraction

ORP extracted the study characteristics and a subset of these were checked by CH. Discrepancies were discussed and amendments made. Extracted data included author, title, location, year of publication, year study was carried out, research question/aim, creative methods used, number of participants, mean age, gender, ethnicity of participants, setting, limitations and strengths of creative PPI and main findings.
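These extraction fields map naturally onto one flat record per study. Below is a minimal sketch of such a record as a Python dataclass; the field names mirror the items listed above but are otherwise illustrative, not the authors' own sheet.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """One row of the extraction sheet; fields follow the items
    listed above (names are illustrative, not the authors' own)."""
    author: str
    title: str
    location: str
    year_published: int
    year_conducted: Optional[int] = None
    research_aim: str = ""
    creative_methods: List[str] = field(default_factory=list)
    n_participants: Optional[int] = None
    mean_age: Optional[float] = None
    gender: Optional[str] = None
    ethnicity: Optional[str] = None
    setting: Optional[str] = None
    strengths: List[str] = field(default_factory=list)
    limitations: List[str] = field(default_factory=list)
    main_findings: str = ""
```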

Data analysis

The included studies were analysed using inductive thematic analysis [ 11 ], where themes were determined by the data. The familiarisation stage took place during full-text reading of the included articles. Anything identified as a strength or limitation to creative PPI methods was extracted verbatim as an initial code and inputted into the data extraction Excel sheet. Similar codes were sorted into broader themes, either under ‘strengths’ or ‘limitations’ and reviewed. Themes were then assigned a name according to the codes.
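The coding step described here, verbatim codes labelled as a strength or a limitation and then sorted into broader named themes, amounts to building a nested mapping. A hedged sketch under that reading; the theme assignment is a human judgement, represented as a hypothetical callback:

```python
from collections import defaultdict

def build_theme_map(coded_extracts, assign_theme):
    """coded_extracts: iterable of (verbatim_code, polarity) pairs,
    where polarity is 'strength' or 'limitation'. assign_theme is a
    (hypothetical) human-judgement step mapping a code to a broader
    theme name, e.g. 'time and resource intensive'."""
    themes = {"strengths": defaultdict(list), "limitations": defaultdict(list)}
    for code, polarity in coded_extracts:
        bucket = "strengths" if polarity == "strength" else "limitations"
        themes[bucket][assign_theme(code)].append(code)
    return themes
```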

Results

The search yielded 9978 titles across the 5 databases: Web of Science (1480 results), PubMed (94 results), ASSIA (2454 results), CINAHL (5948 results) and Cochrane Library (2 results), resulting in 8553 different studies after deduplication. ORP and CH independently screened the titles and abstracts, excluding those that did not meet the criteria. After assessment, 12 studies were included (see Fig. 1).

Fig. 1 PRISMA flowchart of the study selection process
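The drop from 9978 records to 8553 after deduplication is typically achieved by matching DOIs where present and normalised titles otherwise. The sketch below shows one common approach; it is an assumption about the mechanics, not the authors' documented procedure.

```python
import re

def normalise_title(title: str) -> str:
    """Lowercase and strip non-alphanumerics so near-identical titles
    exported by different databases compare equal."""
    return re.sub(r"[^a-z0-9]", "", title.lower())

def deduplicate(records):
    """Keep the first occurrence of each record, keyed on DOI when
    available, otherwise on the normalised title. Records are assumed
    to be dicts with optional 'doi' and 'title' keys."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalise_title(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```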

Study characteristics

The included studies were published between 2018 and 2022. Seven were conducted in the UK [12, 14, 15, 17, 18, 19, 23], two in Canada [21, 22], one in Australia [13], one in Norway [16] and one in Ireland [20]. The PPI activities occurred across various settings, including a school [12], social club [12], hospital [17], university [22], theatre [19], hotel [20] or online [15, 21]; however, this information was omitted in five studies [13, 14, 16, 18, 23]. The number of people attending the PPI sessions varied, ranging from 6 to 289; however, the majority (ten studies) had fewer than 70 participants [13, 14, 16, 17, 18, 19, 20, 21, 22, 23]. Seven studies did not provide information on the age or gender of the PPI groups. Of those that did, ages ranged from 8 to 76 and participants were mostly female. The ethnicities of the PPI group members were also rarely recorded (see Additional file 3 for the data extraction table).

Types of creative methods

The type of creative methods used to engage the PPI groups were varied. These included songs, poems, drawings, photograph elicitation, drama performance, visualisations, Facebook, photography, prototype development, cultural animation, card sorting and creating personas (see Table  1 ). These were sometimes accompanied by traditional methods of PPI such as interviews and focus group discussions.

The 12 included studies were all deemed to be of good methodological quality, with CASP critical appraisal scores [10] ranging from 6/10 to 10/10 (Table 2).

Thematic analysis

Analysis identified four limitations and five strengths to creative PPI (see Fig. 2). Limitations included the time and resource intensity of creative PPI methods, their lack of generalisation, ethical issues and external factors. Strengths included the disruption of power hierarchies, the engaging and inclusive nature of the methods and their long-term cost and time efficiency. Creative PPI methods also allowed mundane and “taboo” topics to be discussed within a safe space.

Fig. 2 Theme map of strengths and limitations

Limitations of creative PPI

Creative PPI methods are time and resource intensive

The time and resource intensive nature of creative PPI methods is a limitation, most notably for the persona-scenario methodology. Valaitis et al. [22] used 14 persona-scenario workshops with 70 participants to co-design a healthcare intervention, which aimed to promote optimal aging in Canada. Using the persona method, pairs composed of patients, healthcare providers, community service providers and volunteers developed a fictional character which they believed represented an ‘end-user’ of the healthcare intervention. Due to the depth and richness of the data produced, the authors reported that it was time-consuming to analyse. Further, they commented that the amount of information was difficult to disseminate to scientific leads and present at team meetings. Additionally, highly skilled facilitators were needed to ensure the production of high-quality data, to probe for details and to lead group discussion. The resource intensive nature of creative co-production was also noted in a study using persona scenarios and creative worksheets to develop a prototype decision support tool for individuals with malignant pleural effusion [17]. With approximately 50 people involved, this was also likely to yield a high volume of data to consider.

Preparing materials for populations who cannot engage in traditional methods of PPI was also time-consuming. Kearns et al. [18] developed a feedback questionnaire for people with aphasia to evaluate ICT-delivered rehabilitation. To ensure people could participate effectively, the resources used during the workshops, such as PowerPoints, online images and photographs, had to be aphasia-accessible, which was labour- and time-intensive. The authors warned that this time commitment should not be underestimated.

There are further practical limitations to implementing creative PPI, such as the costs of materials for activities as well as hiring a space for workshops. For example, the included studies in this review utilised pens, paper, worksheets, laptops, arts and craft supplies and magazines, and took place in venues such as universities, a social club and a hotel. Further, although this applies to most studies involving the public rather than to creative PPI exclusively, a financial incentive was often offered for participation, as well as food, parking, transport and accommodation [21, 22].

Creative PPI lacks generalisation

Another barrier to the use of creative PPI methods in health and social care research was the individual nature of their output. Those who participate, usually small in number, produce unique creative outputs specific to their own experiences, opinions and location. Craven et al. [13], who used arts-based visualisations to develop a toolbox for adults with mental health difficulties, commented that “such an approach might still not be worthwhile”, as the visualisations were individualised and highly personal. This indicates that the output may fail to meet the needs of its end-users. Further, these creative PPI groups were based in particular geographical regions, such as Stoke-on-Trent [19], Sheffield [23], South Wales [12] or Ireland [20], which limits the extent to which the findings can be applied to wider populations, even within the same area, due to individual nuances. Further, the study by Galler et al. [16] is specific to the Norwegian context, and even then perhaps only to a sub-group of the Norwegian population, as the sample was of higher socioeconomic status.

However, Grindell et al. [ 17 ], who used persona scenarios, creative worksheets and prototype development, pointed out that the purpose of this type of research is to improve a certain place, rather than apply findings across other populations and locations. Individualised output may, therefore, only be a limitation to research wanting to conduct PPI on a large scale.

If, however, greater generalisation within PPI is deemed necessary, then social media may offer a resolution. Fedorowicz et al. [15] used Facebook to gain feedback from the public on the use of video-recording methodology for an upcoming project. This had the benefit of including a more diverse range of people (289 people joined the closed group), who were spread geographically around the UK, as well as seven people from overseas.

Creative PPI has ethical issues

As with other research, ethical issues must be taken into consideration. Due to the nature of creative approaches, as well as the personal effort put into them, people often want to be recognised for their work. However, this compromises principles heavily instilled in research, such as anonymity and confidentiality. With the aim of exploring issues related to health and well-being in a town in South Wales, Byrne et al. [12] asked year 4/5 and year 10 pupils to create poems, songs, drawings and photographs. Community members also created a performance, mainly of monologues, to explore how poverty and inequalities are dealt with. Byrne noted the risks of these arts-based approaches, namely the possibility of over-disclosure and consequent emotional distress, as well as people’s desire to be named for their work. On one hand, anonymity reduces the sense of ownership of the output, as it no longer portrays a particular individual’s lived experience. On the other hand, it could promote a more honest account of lived experience. Supporting this, Webber et al. [23], who used the persona method to co-design a back pain educational resource prototype, claimed that the anonymity provided by this creative technique allowed individuals to externalise and anonymise their own personal experience, thus creating a more authentic and genuine resource for future users. This implies that anonymity can be both a limitation and a strength here.

The use of creative PPI methods is impeded by external factors

Despite the above limitations influencing the implementation of creative PPI techniques, perhaps the most influential is that creative methodologies are simply not mainstream [19]. This could be linked to the issues above, like time and resource intensity, generalisation and ethical issues, but it is also likely to involve more systemic factors within the research community. Micsinszki et al. [21], who co-designed a hub for the health and well-being of vulnerable populations, commented that there is insufficient infrastructure to conduct meaningful co-design, as well as a dominant medical model. Through a more holistic lens, there are “sociopolitical environments that privilege individualism over collectivism, self-sufficiency over collaboration, and scientific expertise over other ways of knowing based on lived experience” [21]. This, it could be suggested, renders creative co-design methodologies, which are based on the foundations of collectivism, collaboration and imagination, an invalid technique in a research field heavily dominated by more scientific methods offering reproducibility, objectivity and reliability.

Although we acknowledge that creative PPI techniques are not always appropriate, it may be that their main limitation is the lack of awareness of these methods or lack of willingness to use them. Further, there is always the risk that PPI, despite being a mandatory part of research, is used in a tokenistic or tick-box fashion [ 20 ], without considering the contribution that meaningful PPI could make to enhancing the research. It may be that PPI, let alone creative PPI, is not at the forefront of researchers’ minds when planning research.

Strengths of creative PPI

Creative PPI disrupts power hierarchies

One of the main strengths of creative PPI techniques, cited most frequently in the included literature, was that they disrupt traditional power hierarchies [12, 13, 17, 19, 23]. For example, the use of theatre performance blurred the lines between professional and lay roles between the community and policy makers [12]. Individuals created a monologue to portray how poverty and inequality impact daily life and presented this to representatives of the National Assembly of Wales, Welsh Government, the Local Authority, Arts Council and Westminster. Byrne et al. [12] state how this medium allowed the community to engage with the people who make decisions about their lives in an environment of respect and understanding, where the hierarchies are not as visible as in other settings, e.g., political surgeries. Creative PPI methods have also removed traditional power hierarchies between researchers and adolescents. Cook et al. [13] used arts-based approaches to explore adolescents’ ideas about the “perfect” condom. They utilised the “Life Happens” resource, where adolescents drew and then decorated a person with their thoughts about sexual relationships, not too dissimilar from the persona-scenario method. This was then combined with hypothetical scenarios about sexuality. A condom-mapping exercise was then implemented, where groups shared the characteristics that make a condom “perfect” on large pieces of paper. Cook et al. [13] noted that power imbalances usually make it difficult to elicit information from adolescents; however, these imbalances were reduced by the use of creative co-design techniques.

The same reduction in power hierarchies was noted by Grindell et al. [17], who used the persona-scenario method and creative worksheets with individuals with malignant pleural effusion. This was with the aim of developing a prototype of a decision support tool for patients to help with treatment options. Although this process involved a variety of stakeholders, such as patients, carers and healthcare professionals, creative co-design was cited as a mechanism that worked to reduce power imbalances – a limitation of more traditional methods of research. Creative co-design blurred boundaries between end-users and clinical staff and enabled the sharing of ideas from multiple, valuable perspectives, meaning the prototype was able to suit user needs whilst addressing clinical problems.

Similarly, a specific creative method named cultural animation was also cited to dissolve hierarchies and encourage equal contributions from participants. Within this arts-based approach, Kelemen et al. [19] explored the concept of “good health” with individuals from Stoke-on-Trent. Members of the group created art installations using ribbons, buttons, cardboard and straws to depict their idea of a “healthy community”, which was accompanied by a poem. They also created a 3D Facebook page and produced another poem or song addressing the government to communicate their version of a “picture of health”. Public participants said that they found the process empowering, honest, democratic, valuable and practical.

This dissolving of hierarchies and levelling of power is beneficial as it increases the sense of ownership experienced by the creators/producers of the output [ 12 , 17 , 23 ]. This is advantageous as it has been suggested to improve its quality [ 23 ].

Creative PPI allows the unsayable to be said

Creative PPI fosters a safe space for mundane or taboo topics to be shared, which may be difficult to communicate using traditional methods of PPI. For example, the hypothetical nature of condom mapping and persona-scenarios meant that adolescents could discuss a personal topic without fear of discrimination, judgement or personal disclosure [13]. The safe space allowed a greater volume of ideas to be generated amongst peers where they might not have otherwise. Similarly, Webber et al. [23], who used the persona method to co-design the prototype back pain educational resource, also noted how this method creates anonymity whilst allowing people the opportunity to externalise personal experiences, thoughts and feelings. Other creative methods were also used, such as drawing, collaging, role play and creating mood boards. A cardboard cube (labelled a “magic box”) was used as a physical representation of their final prototype. These creative methods levelled the playing field and made personal experiences accessible in a safe, open environment that fostered trust, as well as understanding from the researchers.

It is not only sensitive subjects that were made easier to articulate through creative PPI. The communication of mundane everyday experiences, typically deemed ‘unsayable’, was also facilitated. This was specifically noted in the context of describing intangible aspects of everyday health and wellbeing [11]. Graphic designers can also be used to visually represent the outputs of creative PPI. These captured the movement and fluidity of people, as well as the relationships between them – things that cannot be spoken but can be depicted [21].

Creative PPI methods are inclusive

Another strength of creative PPI was that it is inclusive and accessible [ 17 , 19 , 21 ]. The safe space it fosters, as well as the dismantling of hierarchies, welcomed people from a diverse range of backgrounds and provided equal opportunities [ 21 ], especially for those with communication and memory difficulties who might be otherwise excluded from PPI. Kelemen et al. [ 19 ], who used creative methods to explore health and well-being in Stoke-on-Trent, discussed how people from different backgrounds came together and connected, discussed and reached a consensus over a topic which evoked strong emotions, that they all have in common. Individuals said that the techniques used “sets people to open up as they are not overwhelmed by words”. Similarly, creative activities, such as the persona method, have been stated to allow people to express themselves in an inclusive environment using a common language. Kearns et al. [ 18 ], who used aphasia-accessible material to develop a questionnaire with aphasic individuals, described how they felt comfortable in contributing to workshops (although this material was time-consuming to make, see ‘Limitations of creative PPI’ ).

Despite the general inclusivity of creative PPI, it can also be exclusive, particularly if online mediums are used. Fedorowicz et al. [15] used Facebook to create a PPI group, and although this may rectify previous drawbacks about the lack of generalisation of creative methods (as Facebook can reach a greater number of people, globally), it excluded those who are not digitally active or have limited internet access or knowledge of technology. Online methods have other issues too: maintaining the online group was cited as challenging, and the volume of responses required researchers to interact outside of their working hours. Despite this, online methods like Facebook are very accessible for people who are physically disabled.

Creative PPI methods are engaging

The process of creative PPI is typically more engaging and produces more colourful data than traditional methods [13]. Individuals are permitted and encouraged to explore a creative self [19], which can lead to the exploration of new ideas and an overall increased enjoyment of the process. This increased engagement is particularly beneficial for younger PPI groups. For example, to involve children in the development of healthy food products, Galler et al. [16] asked 9–12-year-olds to take photos of their food and present them to other children in a “show and tell” fashion. They then created a newspaper article describing a new healthy snack. In this creative focus group, children were given lab coats to further their identity as inventors. Galler et al. [16] note that the methods were highly engaging and facilitated teamwork and group learning. This collaborative nature of problem-solving was also observed in adults who used personas and creative worksheets to develop the resource for lower back pain [23]. Dementia patients too have been reported to enjoy the creative and informal approach to idea generation [20].

The use of cultural animation allowed people to connect with each other in a way that traditional methods do not [ 19 , 21 ]. These connections were held in place by boundary objects, such as ribbons, buttons, fabric and picture frames, which symbolised a shared meaning between people and an exchange of knowledge and emotion. Asking groups to create an art installation using these objects further fostered teamwork and collaboration, both at an individual and collective level. The exploration of a creative self increased energy levels and encouraged productive discussions and problem-solving [ 19 ]. Objects also encouraged a solution-focused approach and permitted people to think beyond their usual everyday scope [ 17 ]. They also allowed facilitators to probe deeper about the greater meanings carried by the object, which acted as a metaphor [ 21 ].

From the researchers’ point of view, co-creative methods gave rise to ideas they might not have initially considered. Valaitis et al. [22] found that over 40% of the creative outputs were novel ideas brought to light by patients, healthcare/community care providers, community service providers and volunteers. One researcher commented, “It [the creative methods] took me on a journey, in a way that when we do other pieces of research it can feel disconnected” [23]. Another researcher also stated they could not return to the way they used to do research, as they had learnt so much about their own health and community and how they are perceived [19]. This demonstrates that creative processes not only benefit the project outcomes and the PPI group, but also facilitators and researchers. However, although engaging, creative methods have been criticised for not demonstrating academic rigour [17]. Moreover, creative PPI may also exclude people who do not like or enjoy creative activities.

Creative PPI methods are cost and time efficient

Creative PPI workshops can often produce output that is visible and tangible. This can save time and money in the long run, as the output is either ready to be implemented in a healthcare setting or a first iteration has already been developed. This may also offset the time and costs it takes to implement creative PPI. For example, the prototype of the decision support tool for people with malignant pleural effusion was developed using personas and creative worksheets. The end result was two tangible prototypes to drive the initial idea forward as something to be used in practice [17]. The use of creative co-design in this case saved clinician time, as well as the time it would take to develop this product without the help of its end-users. In the development of this particular prototype, analysis was iterative and informed the next stage of development, which again saved time. The same applies to the feedback questionnaire for the assessment of ICT-delivered aphasia rehabilitation: the co-created questionnaire, designed with people with aphasia, was ready to be used in practice [18]. This suggests that to overcome time and resource barriers to creative PPI, researchers should aim for it to be engaging whilst also producing usable output.

That useable products are generated during creative workshops signals to participating patients and public members that they have been listened to and their thoughts and opinions acted upon [ 23 ]. For example, the development of the back pain resource based on patient experiences implies that their suggestions were valid and valuable. Further, those who participated in the cultural animation workshop reported that the process visualises change, and that it already feels as though the process of change has started [ 19 ].

The most cost and time efficient method of creative PPI in this review is most likely the use of Facebook to gather feedback on project methodology [ 15 ]. Although there were drawbacks to this, researchers could involve more people from a range of geographical areas at little to no cost. Feedback was instantaneous and no training was required. From the perspective of the PPI group, they could interact however much or little they wish with no time commitment.

Discussion

This systematic review identified four limitations and five strengths of the use of creative PPI in health and social care research. Creative PPI is time and resource intensive, can raise ethical issues and lacks generalisability. It is also not accepted by the mainstream. These factors may act as barriers to the implementation of creative PPI. However, creative PPI disrupts traditional power hierarchies and creates a safe space for taboo or mundane topics. It is also engaging, inclusive and can be time and cost efficient in the long term.

Something that became apparent during data analysis was that these are not blanket strengths and limitations of creative PPI as a whole. The umbrella term ‘creative PPI’ is broad and encapsulates a wide range of activities, from music and poems to prototype development and persona-scenarios, to simpler activities like the use of sticky notes and ordering cards. Many different activities can be deemed ‘creative’, and the strengths and limitations of one do not necessarily apply to another. For example, cultural animation takes greater effort to prepare than the use of sticky notes and sorting cards, and the use of Facebook is cheaper and wider-reaching than persona development. Researchers should use their discretion and weigh up the benefits and drawbacks of each method to decide on a technique which suits the project. What might be a limitation to creative PPI in one project may not be in another. In some cases, creative PPI may not be suitable at all.

Furthermore, the choice of creative PPI method also depends on the needs and characteristics of the PPI group. Children, adults and people living with dementia or language difficulties all have different engagement needs and capabilities. This indicates that creative PPI is not one size fits all and that the most appropriate method will change depending on the composition of the group. The choice of method will also be determined by the constraints of the research project, namely time, money and the research aim. For example, if there are time constraints, then a method which yields a lot of data and requires a lot of preparation may not be appropriate. If generalisation is important, then an online method is more suitable. Together this indicates that the choice of creative PPI method is highly individualised and dependent on multiple factors.

Although the limitations discussed in this review apply to creative PPI, they are not exclusive to creative PPI. Ethical issues are a consideration within general PPI research, especially when working with more vulnerable populations, such as children or adults living with a disability. It can also be the case that traditional PPI methods lack generalisability, as people who volunteer to be part of such a group are more likely to be older, middle class and retired [24]. Most research is vulnerable to this type of bias; however, it is worth noting that generalisation is not always a goal, and research remains valid and meaningful in its absence. Although online methods may somewhat combat issues related to generalisability, these methods still exclude people who do not have access to the internet/technology or who choose not to use it, implying that online PPI methods may not be wholly representative of the general population. That said, the accessibility of creative PPI techniques differs from person to person: for some, online mediums may be more accessible (for example, for those with a physical disability), and for others, face-to-face methods may be. To combat this, a range of methods should be implemented. Planning multiple focus groups and interviews for traditional PPI is also time and resource intensive; however, the extra resources required to make PPI creative may be even greater. Even so, the rich data provided may be worth the preparation and analysis time, which is also likely to depend on the number of participants and workshop sessions required. PPI, not just creative PPI, often requires the provision of a financial incentive, refreshments, parking and accommodation, which increase costs. These, however, are imperative and non-negotiable, as they increase the accessibility of research, especially to minority and lower-income groups less likely to participate. Adequate funding is also important for co-design studies where repeated engagement is required. One barrier to implementation which does appear to be exclusive to creative methods, however, is that creative methods are not mainstream. This cannot be said for traditional PPI, as this is often a mandatory part of research applications.

Regarding the strengths of creative PPI, it could be argued that most appear to be exclusive to creative methodologies. These are inclusive by nature, as multiple approaches can be taken to evoke ideas from different populations – approaches that do not necessarily rely on verbal or written communication as interviews and focus groups do. Given the anonymity provided by some creative methods, such as personas, people may be more likely to discuss their personal experiences under the guise of a general end-user, which might be more difficult to maintain when an interviewer is asking an individual questions directly. Additionally, creative methods are by nature more engaging and interactive than traditional methods, although this is a blanket statement and there may be people who find the question-and-answer/group discussion format more engaging. Creative methods have also been cited to eliminate the power imbalances which exist in traditional research [12, 13, 17, 19, 23]. These imbalances exist between researchers or policy makers on one side and adolescents, adults and the community on the other. Lastly, although this may occur to a greater extent in creative methods like prototype development, it could be suggested that PPI in general – regardless of whether it is creative – is more time and cost efficient in the long term than not using any PPI to guide or refine the research process. It must be noted that these are observations based on the literature. To be certain these differences exist between creative and traditional methods of PPI, direct empirical evaluation of both should be conducted.

To the best of our knowledge, this is the first review to identify the strengths and limitations of creative PPI; however, similar literature has identified barriers and facilitators to PPI in general. In the context of clinical trials, recruitment difficulties were cited as a barrier, as was finding public contributors who were free during work/school hours. Trial managers reported finding group dynamics difficult to manage, and the academic environment also made some public contributors feel nervous and lacking the confidence to speak. Facilitators, however, included the shared ownership of the research – something that has been identified in the current review too. In addition, planning and the provision of knowledge, information and communication were also identified as facilitators [25]. Other research on the barriers to meaningful PPI in trial oversight committees included trialist confusion or scepticism over the PPI role and the difficulties in finding PPI members who had a basic understanding of research [26]. However, it could be argued that this is not representative of the average patient or public member. The formality of oversight meetings and the technical language used also acted as a barrier, which may imply that the informal nature of creative methods and their lack of dependency on literacy skills could overcome this. Further, a review of 42 reviews on PPI in health and social care identified financial compensation, resources, training and general support as necessary to conduct PPI, much as the resource intensiveness of creative PPI was identified as a limitation in the current review. However, other factors were identified too, such as recruitment and the representativeness of public contributors [27]. As in the current review, power imbalances were also noted; however, these were included as both a barrier and a facilitator. Collaboration seemed to diminish hierarchies, but not always, as sometimes imbalances remained between public contributors and healthcare staff, described as a ‘them and us’ culture [27]. Although these studies complement the findings of the current review, a direct comparison cannot be made, as they do not concern creative methods. However, it does suggest that some strengths and weaknesses are shared between creative and traditional methods of PPI.

Strengths and limitations of this review

Although a general definition of creative PPI exists, it was up to our discretion to decide exactly which activities were deemed as such for this review. For example, we included sorting cards, the use of interactive whiteboards and sticky notes. Other researchers may have applied more or less stringent criteria. However, two reviewers were involved in this decision, which aids the reliability of the included articles. Further, it may be that some of the strengths and limitations cannot fully be attributed to the creative nature of the PPI process, but rather to its co-created nature; however, this is hard to disentangle, as the included papers involved both these aspects.

During screening, it was difficult to decide whether an article was utilising creative qualitative methodology or creative PPI, as this was often not explicitly labelled. Regardless, both approaches involved the public/patients refining a healthcare product/service. This implies that if this review were to be replicated, others may do it differently, which may call for greater standardisation in the reporting of the public’s involvement in research. For example, the NIHR outlines different approaches to PPI, namely “consultation”, “collaboration”, “co-production” and “user-controlled”, each of which signifies an increased level of public power and influence [28]. Papers with elements of PPI could use these labels to clarify the extent of public involvement, or even explicitly state that there was no PPI. Further, given our decision to include only scholarly peer-reviewed literature, it is possible that data were missed within the grey literature. Similarly, the literature search will not have identified all papers relating to different types of accessible inclusion. However, the intent of the review was to focus solely on methods within the definition of creative.

This review fills a gap in the literature and helps circulate and promote the concept of creative PPI. Each stage of this review, namely screening and quality appraisal, was conducted by two independent reviewers. However, four full texts could not be accessed during the full text reading stage, meaning there are missing data that could have altered or contributed to the findings of this review.

Research recommendations

Given that creative PPI can require effort to prepare, perform and analyse, sufficient time and funding should be allocated in the research protocol to enable meaningful and continuous PPI. This is worthwhile, as PPI can significantly change the research output so that it aligns closely with the needs of the group it is intended to benefit. Researchers should also consider prototype development as a creative PPI activity, as this might reduce future time/resource constraints. Shifting from a top-down approach within research to a bottom-up one can be advantageous to all stakeholders and can help move creative PPI towards the mainstream. This, however, is the collective responsibility of funding bodies, universities and researchers, as well as the committees who approve research bids.

A few of the included studies used creative techniques alongside traditional methods such as interviews. This hybrid approach could also be adopted by researchers who are unfamiliar with creative techniques, or by those who wish to reap the benefits of both. Often the characteristics of the PPI group, such as age, gender and ethnicity, were not reported. It would be useful to include such information to assess how representative the PPI group is of the population of interest.

Creative PPI is a relatively novel approach to engaging the public and patients in research, with both advantages and disadvantages compared to more traditional methods. There are many ways to implement creative PPI, and the choice of technique will be unique to each piece of research, depending on several factors, including the age and ability of the PPI group and the resource limitations of the project. Each method has benefits and drawbacks, which should be considered at the protocol-writing stage. Given adequate funding, time and planning, however, creative PPI is a worthwhile and engaging way of generating ideas with the end-users of research, ideas that might not otherwise be generated using traditional methods.

Data availability

No datasets were generated or analysed during the current study.

Abbreviations

CASP: Critical Appraisal Skills Programme

JBI: The Joanna Briggs Institute

NIHR: National Institute for Health and Care Research

PAG: Public Advisory Group

PPI: Public and Patient Involvement

WoS: Web of Science

References

National Institute for Health and Care Research. What Is Patient and Public Involvement and Public Engagement? https://www.spcr.nihr.ac.uk/PPI/what-is-patient-and-public-involvement-and-engagement Accessed 01 Sept 2023.

Department of Health. Personal and Public Involvement (PPI). https://www.health-ni.gov.uk/topics/safety-and-quality-standards/personal-and-public-involvement-ppi Accessed 01 Sept 2023.

National Institute for Health and Care Research. Policy Research Programme – Guidance for Stage 1 Applications https://www.nihr.ac.uk/documents/policy-research-programme-guidance-for-stage-1-applications-updated/26398 Accessed 01 Sept 2023.

Greenhalgh T, Hinton L, Finlay T, Macfarlane A, Fahy N, Clyde B, Chant A. Frameworks for supporting patient and public involvement in research: systematic review and co-design pilot. Health Expect. 2019. https://doi.org/10.1111/hex.12888

Street JM, Stafinski T, Lopes E, Menon D. Defining the role of the public in health technology assessment (HTA) and HTA-informed decision-making processes. Int J Technol Assess Health Care. 2020. https://doi.org/10.1017/S0266462320000094

Morrison C, Dearden A. Beyond tokenistic participation: using representational artefacts to enable meaningful public participation in health service design. Health Policy. 2013. https://doi.org/10.1016/j.healthpol.2013.05.008

Leavy P. Method meets art: arts-Based Research Practice. New York: Guilford; 2020.

Seers K. Qualitative systematic reviews: their importance for our understanding of research relevant to pain. Br J Pain. 2015. https://doi.org/10.1177/2049463714549777

Lockwood C, Porritt K, Munn Z, Rittenmeyer L, Salmond S, Bjerrum M, Loveday H, Carrier J, Stannard D. Chapter 2: Systematic reviews of qualitative evidence. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis. JBI; 2020. https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-03

CASP. CASP Checklists https://casp-uk.net/images/checklist/documents/CASP-Qualitative-Studies-Checklist/CASP-Qualitative-Checklist-2018_fillable_form.pdf (2022).

Braun V, Clarke V. Using thematic analysis in psychology. Qualitative Res Psychol. 2006. https://doi.org/10.1191/1478088706qp063oa

Byrne E, Elliott E, Saltus R, Angharad J. The creative turn in evidence for public health: community and arts-based methodologies. J Public Health. 2018. https://doi.org/10.1093/pubmed/fdx151

Cook S, Grozdanovski L, Renda G, Santoso D, Gorkin R, Senior K. Can you design the perfect condom? Engaging young people to inform safe sexual health practice and innovation. Sex Educ. 2022. https://doi.org/10.1080/14681811.2021.1891040

Craven MP, Goodwin R, Rawsthorne M, Butler D, Waddingham P, Brown S, Jamieson M. Try to see it my way: exploring the co-design of visual presentations of wellbeing through a workshop process. Perspect Public Health. 2019. https://doi.org/10.1177/1757913919835231

Fedorowicz S, Riley V, Cowap L, Ellis NJ, Chambers R, Grogan S, Crone D, Cottrell E, Clark-Carter D, Roberts L, Gidlow CJ. Using social media for patient and public involvement and engagement in health research: the process and impact of a closed Facebook group. Health Expect. 2022. https://doi.org/10.1111/hex.13515

Galler M, Myhrer K, Ares G, Varela P. Listening to children voices in early stages of new product development through co-creation – creative focus group and online platform. Food Res Int. 2022. https://doi.org/10.1016/j.foodres.2022.111000

Grindell C, Tod A, Bec R, Wolstenholme D, Bhatnagar R, Sivakumar P, Morley A, Holme J, Lyons J, Ahmed M, Jackson S, Wallace D, Noorzad F, Kamalanathan M, Ahmed L, Evison M. Using creative co-design to develop a decision support tool for people with malignant pleural effusion. BMC Med Inf Decis Mak. 2020. https://doi.org/10.1186/s12911-020-01200-3

Kearns Á, Kelly H, Pitt I. Rating experience of ICT-delivered aphasia rehabilitation: co-design of a feedback questionnaire. Aphasiology. 2020. https://doi.org/10.1080/02687038.2019.1649913

Kelemen M, Surman E, Dikomitis L. Cultural animation in health research: an innovative methodology for patient and public involvement and engagement. Health Expect. 2018. https://doi.org/10.1111/hex.12677

Keogh F, Carney P, O’Shea E. Innovative methods for involving people with dementia and carers in the policymaking process. Health Expect. 2021. https://doi.org/10.1111/hex.13213

Micsinszki SK, Buettgen A, Mulvale G, Moll S, Wyndham-West M, Bruce E, Rogerson K, Murray-Leung L, Fleisig R, Park S, Phoenix M. Creative processes in co-designing a co-design hub: towards system change in health and social services in collaboration with structurally vulnerable populations. Evid Policy. 2022. https://doi.org/10.1332/174426421X16366319768599

Valaitis R, Longaphy J, Ploeg J, Agarwal G, Oliver D, Nair K, Kastner M, Avilla E, Dolovich L. Health TAPESTRY: co-designing interprofessional primary care programs for older adults using the persona-scenario method. BMC Fam Pract. 2019. https://doi.org/10.1186/s12875-019-1013-9

Webber R, Partridge R, Grindell C. The creative co-design of low back pain education resources. Evid Policy. 2022. https://doi.org/10.1332/174426421X16437342906266

National Institute for Health and Care Research. A Researcher’s Guide to Patient and Public Involvement. https://oxfordbrc.nihr.ac.uk/wp-content/uploads/2017/03/A-Researchers-Guide-to-PPI.pdf Accessed 01 Nov 2023.

Selman L, Clement C, Douglas M, Douglas K, Taylor J, Metcalfe C, Lane J, Horwood J. Patient and public involvement in randomised clinical trials: a mixed-methods study of a clinical trials unit to identify good practice, barriers and facilitators. Trials. 2021 https://doi.org/10.1186/s13063-021-05701-y

Coulman K, Nicholson A, Shaw A, Daykin A, Selman L, Macefield R, Shorter G, Cramer H, Sydes M, Gamble C, Pick M, Taylor G, Lane J. Understanding and optimising patient and public involvement in trial oversight: an ethnographic study of eight clinical trials. Trials. 2020. https://doi.org/10.1186/s13063-020-04495-9

Ocloo J, Garfield S, Franklin B, Dawson S. Exploring the theory, barriers and enablers for patient and public involvement across health, social care and patient safety: a systematic review of reviews. Health Res Policy Sys. 2021. https://doi.org/10.1186/s12961-020-00644-3

National Institute for Health and Care Research. Briefing notes for researchers - public involvement in NHS, health and social care research. https://www.nihr.ac.uk/documents/briefing-notes-for-researchers-public-involvement-in-nhs-health-and-social-care-research/27371 Accessed 01 Nov 2023.

Acknowledgements

With thanks to the PHIRST-LIGHT public advisory group and consortium for their thoughts and contributions to the design of this work.

The research team is supported by a National Institute for Health and Care Research grant (PHIRST-LIGHT Reference NIHR 135190).

Author information

Olivia R. Phillips and Cerian Harries share joint first authorship.

Authors and Affiliations

Nottingham Centre for Public Health and Epidemiology, Lifespan and Population Health, School of Medicine, University of Nottingham, Clinical Sciences Building, City Hospital Campus, Hucknall Road, Nottingham, NG5 1PB, UK

Olivia R. Phillips, Jo Leonardi-Bee, Holly Knight & Joanne R. Morling

National Institute for Health and Care Research (NIHR) PHIRST-LIGHT, Nottingham, UK

Olivia R. Phillips, Cerian Harries, Jo Leonardi-Bee, Holly Knight, Lauren B. Sherar, Veronica Varela-Mato & Joanne R. Morling

School of Sport, Exercise and Health Sciences, Loughborough University, Epinal Way, Loughborough, Leicestershire, LE11 3TU, UK

Cerian Harries, Lauren B. Sherar & Veronica Varela-Mato

Nottingham Centre for Evidence Based Healthcare, School of Medicine, University of Nottingham, Nottingham, UK

Jo Leonardi-Bee

NIHR Nottingham Biomedical Research Centre (BRC), Nottingham University Hospitals NHS Trust, University of Nottingham, Nottingham, NG7 2UH, UK

Joanne R. Morling

Contributions

Author contributions: study design: ORP, CH, JRM, JLB, HK, LBS, VVM; literature searching and screening: ORP, CH, JRM; data curation: ORP, CH; analysis: ORP, CH, JRM; manuscript draft: ORP, CH, JRM; Plain English Summary: ORP; manuscript critical review and editing: ORP, CH, JRM, JLB, HK, LBS, VVM.

Corresponding author

Correspondence to Olivia R. Phillips.

Ethics declarations

Ethics approval and consent to participate

The Ethics Committee of the Faculty of Medicine and Health Sciences, University of Nottingham advised that approval from the ethics committee and consent to participate was not required for systematic review studies.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Additional file 1: Search strings. Description of data: the search strings and filters used in each of the 5 databases in this review.

Additional file 2: Quality appraisal questions. Description of data: CASP quality appraisal questions.

Additional file 3: Table 1. Description of data: elements of the data extraction table that are not in the main manuscript.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Phillips, O.R., Harries, C., Leonardi-Bee, J. et al. What are the strengths and limitations to utilising creative methods in public and patient involvement in health and social care research? A qualitative systematic review. Res Involv Engagem 10, 48 (2024). https://doi.org/10.1186/s40900-024-00580-4

Received: 28 November 2023

Accepted: 25 April 2024

Published: 13 May 2024

DOI: https://doi.org/10.1186/s40900-024-00580-4


Keywords

  • Public and patient involvement
  • Creative PPI
  • Qualitative systematic review

Research Involvement and Engagement

ISSN: 2056-7529


Systematic review article

Effectiveness of intervention programs in reducing plagiarism by university students: a systematic review

  • Facultad de Estudios Superiores Zaragoza, Universidad Nacional Autónoma de México, Mexico City, Mexico

Introduction: Plagiarism in universities is a problem with potential academic, social, ethical, and legal implications. Systematic reviews of academic integrity programs, including plagiarism, have been conducted, but few studies have assessed plagiarism itself. Therefore, this review synthesizes knowledge on the effect of educational interventions designed to prevent or reduce plagiarism by university students.

Method: A systematic review was performed using the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) criteria to analyze experimental or quasi-experimental studies aimed at reducing plagiarism through objective assessments. The search strategy was implemented in Web of Science, PubMed, Scopus, PsycArticles, ProQuest, ERIC, Redalyc, SciELO, and Tesiunam.

Results: Six interventions involving a total of 1,631 undergraduate students pursuing different majors at different universities were evaluated. The intervention and assessment strategies varied considerably between studies; 5 of the 6 studies reported a lower plagiarism frequency in the intervention group than in the control group.

Conclusion: The results suggest that interventions with practical elements, such as plagiarism detection, paraphrasing, and citation skills, in addition to the use of software to identify similarities, may reduce plagiarism. However, few studies include an objective evaluation, so more research is needed.

Systematic review registration: https://inplasy.com/inplasy-2023-7-0104/ .

1 Introduction

University education involves multiple challenges related to students’ professional development. In addition to providing students with theoretical and practical knowledge of a discipline, universities must promote ethical principles ( Mason, 2001 ; Illingworth, 2004 ). For this purpose, ethics should be included in professional training to prevent university students from using their skills and knowledge to place their own interests above their professional codes of conduct ( Moore, 2006 ; Mion and Bonfanti, 2019 ). In this context, authors such as McCabe and Stephens have highlighted how fundamental it is to preserve academic integrity through honor codes, accompanied by the commitments and responsibilities of the various people who promote quality professional training, so that problems of academic dishonesty are prevented ( McCabe and Trevino, 1993 ; McCabe et al., 2006 ; Stephens et al., 2021 ). Unfortunately, plagiarism is one of the most prevalent such problems and entails a breach of professional ethics.

Plagiarism is the appropriation of the words or ideas of other authors without giving them due credit, and it has academic, social, and legal repercussions ( Park, 2003 ; Awasthi, 2019 ). Recent research on university education highlights the growing risk of plagiarism as an easy way to “solve” academic tasks, given how quickly information can be copied from one document and pasted into another ( Kampa et al., 2024 ; Zhang, 2024 ), mainly in stressful school situations ( Tindall et al., 2021 ). In addition, the ethical debate around plagiarism has been exacerbated by the arrival of artificial intelligence, so the phenomenon must continue to be studied ( Eaton, 2023 ; King, 2023 ). Plagiarism is a multifactorial phenomenon comprising cognitive, affective, contextual, sociocultural, and institutional variables ( Husain et al., 2017 ; Moss et al., 2018 ), and it is also found in professional scientific research ( Pupovac and Fanelli, 2015 ). Systematic review (SR) studies on plagiarism have shown a wide range of computer tools for understanding this issue in depth ( Moss et al., 2018 ; Awasthi, 2019 ). These SRs have gathered evidence to describe and explain plagiarism, albeit without investigating educational interventions aimed at preventing or reducing it.

Interventions aimed at avoiding or reducing the incidence of plagiarism by university students primarily consist of conceptually raising awareness of the phenomenon and developing academic writing skills ( Marusic et al., 2016 ). Some studies measured the effectiveness of their interventions in terms of increases in students’ unfavorable attitudes toward plagiarism, knowledge about plagiarism, and plagiarism detection skills ( Curtis et al., 2013 ; Rathore et al., 2018 ; Giuliano, 2022 ). However, these evaluations measured the variables indirectly, which may entail self-report or social desirability biases. Therefore, evidence on plagiarism prevention or reduction must be based on objective measurement criteria ( Martin et al., 2009 ).

Objectively assessing plagiarism should involve detecting coincidences between paragraphs and words in texts prepared by participants and published documents, primarily using software specialized in this task, such as Turnitin ( Dahl, 2007 ; Halgamuge, 2017 ; Meo and Talha, 2019 ). Experts should also identify paraphrasing problems, a lack of citations, and mosaic plagiarism (directly copying and pasting text and replacing only some words with synonyms, also known as patchwriting) ( Vieyra et al., 2013 ; Rogerson and McCarthy, 2017 ; Memon, 2020 ). These approaches make it easier to objectively assess the effectiveness of interventions aimed at reducing plagiarism.

Marusic et al. (2016) performed an SR assessing the effectiveness of interventions aimed at preventing research misconduct and promoting academic integrity in scientific publishing. Among the misconduct topics they evaluated, plagiarism was examined in university students, especially undergraduates. Their results showed that interventions based on information defining plagiarism and its consequences, academic integrity modules, feedback from plagiarism detection software, and practical academic writing exercises promoting citation and paraphrasing skills help to mitigate this problem. However, the authors also noted the low quality of the evidence, which derives from the lack of homogeneity in intervention techniques and from the use of self-report assessments in multiple studies.

In addition to the lack of objective evaluations and the heterogeneity of interventions to prevent or reduce plagiarism, recent technological advances have had a considerable impact on education in general and university education in particular, in terms of both academic integrity and misconduct ( Turnbull et al., 2021 ). The increase in plagiarism has even been associated with the lack of direct supervision under the distance-education conditions of recent years ( Eshet, 2023 ). Nevertheless, currently available computer tools facilitate efforts to objectively assess the effect of interventions. Accordingly, an SR should be conducted to synthesize knowledge on the effect of educational interventions aimed at preventing or reducing plagiarism by university students. Specifically, we also set out to review how the effectiveness of such interventions has been evaluated objectively and which intervention strategies can be considered the most appropriate.

This review was organized according to the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria ( Page et al., 2021 ) and a protocol registered on the International Platform of Registered Systematic Review and Meta-analysis Protocols (registration number INPLASY202370104).

2 Method

2.1 Search strategy

The search strategy was based on the research question “What are the effects of interventions to reduce plagiarism in university students?” No publication-year interval was specified for the search, which was conducted on July 27, 2023. The Web of Science, PubMed, Scopus, PsycArticles, ProQuest, and ERIC databases were searched using the following strategy: (plagiarism OR misconduct OR cheating OR academic dishonesty) AND (student OR university students) AND (intervention OR training). On July 28, 2023, the Redalyc, SciELO, and Tesiunam databases were searched using the following strategies: plagio AND estudiantes universitarios AND (intervención OR entrenamiento) [plagiarism AND university students AND (intervention OR training)] in Redalyc; Plagio AND estudiante [Plagiarism AND student] in SciELO; and Plagio [Plagiarism] in Tesiunam. The last search was performed to identify gray literature.
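As a minimal illustration (not part of the authors' reported workflow), the Boolean strategy quoted above can be composed programmatically from its keyword groups, so the same string is reused consistently across databases:

```python
# Compose the Boolean search string quoted above from its keyword groups.
# The groups mirror the text; the composition itself is illustrative.
groups = [
    ["plagiarism", "misconduct", "cheating", "academic dishonesty"],
    ["student", "university students"],
    ["intervention", "training"],
]
query = " AND ".join("(" + " OR ".join(group) + ")" for group in groups)
print(query)
# (plagiarism OR misconduct OR cheating OR academic dishonesty) AND
# (student OR university students) AND (intervention OR training)
```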

2.2 Eligibility criteria

The studies were selected based on the population, interventions, comparators, outcomes, and study designs (PICOS) framework. Population: undergraduate university students pursuing any major at any university; intervention: direct strategies aimed at preventing or reducing plagiarism, encompassing academic integrity modules, training and instruction on plagiarism, Turnitin use, referencing tasks, preventive tutorials, and warnings about plagiarism detection in assignment evaluation, among others; comparators: no plagiarism intervention, normal lessons, or any other intervention not directly related to plagiarism; outcomes: plagiarism in writing assignments, assessed objectively using software or expert review; study designs: randomized controlled trials or studies with quasi-experimental designs. Articles in English, Spanish, or Portuguese were included in this SR. Studies were excluded if they involved graduate students, academics, or professional researchers; used interventions unrelated to plagiarism or did not specify the type of intervention conducted; lacked a comparison group; measured plagiarism only through self-report tests of attitudes or knowledge; or were observational.

2.3 Data collection

The studies were independently reviewed by two researchers (RAMR and JMSN) based on the inclusion and exclusion criteria. Using Microsoft Excel tools, a database was constructed, organizing the articles by title and abstract and identifying duplicates. Once the list of articles was complete, the duplicates and studies that failed to meet the eligibility criteria based on their title and abstract were excluded. After reviewing the full text of the selected articles, the two researchers selected those that met the eligibility criteria for qualitative review. Meta-analysis was not performed given the high variability among plagiarism criteria and strategies used in intervention programs.
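A minimal pandas sketch of the duplicate-removal step described above (the authors worked in Microsoft Excel; the DataFrame and column names here are illustrative stand-ins):

```python
import pandas as pd

# Toy stand-in for the database of retrieved records.
records = pd.DataFrame({
    "title": ["Reducing plagiarism", "Reducing Plagiarism ", "Citation training"],
    "abstract": ["...", "...", "..."],
})

# Normalise titles so trivially different duplicates collapse together,
# then keep the first occurrence of each.
records["title_key"] = records["title"].str.lower().str.strip()
deduplicated = records.drop_duplicates(subset="title_key").drop(columns="title_key")
print(f"{len(records) - len(deduplicated)} duplicate(s) removed")
```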

2.4 Data analysis and synthesis

The data from the studies selected in this SR (authors, year, design, sample size (n), participants’ sex, age and major, type of intervention, comparator, plagiarism assessment strategy, and main outcomes) were entered into a Microsoft Excel spreadsheet and subsequently transferred to Microsoft Word.

The quality of the studies was evaluated using the tool for assessing risk of bias in randomized trials (RoB 2; Sterne et al., 2019 ) and the tool for assessing risk of bias in non-randomized studies of interventions (ROBINS-I; Sterne et al., 2016 ) criteria. More specifically, we analyzed the studies for selection, performance, detection, attrition, and reporting biases (RoB 2 criteria) and for biases due to confounding, due to selection of participants, in classification of interventions, due to deviations from intended interventions, due to lack of data, in measurement of outcomes and in selection of reported outcomes (ROBINS-I criteria). Risk of bias was illustrated using RevMan software version 5.4.1 for RoB 2 criteria and the robvis digital tool ( McGuinness and Higgins, 2021 ) for ROBINS-I criteria.
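The authors generated their figures with RevMan and the R package robvis. As a rough Python stand-in, a "traffic light" risk-of-bias plot can be drawn with matplotlib; the study names, domains, and judgements below are made-up placeholders, not the review's assessments:

```python
# Rough sketch of a robvis-style "traffic light" risk-of-bias figure.
import matplotlib.pyplot as plt

domains = ["Confounding", "Selection", "Classification",
           "Deviations", "Missing data", "Measurement", "Reporting"]
studies = {
    "Study A": ["low", "moderate", "low", "low", "low", "moderate", "low"],
    "Study B": ["moderate", "low", "low", "moderate", "low", "low", "low"],
}
colors = {"low": "#2ca25f", "moderate": "#fec44f", "serious": "#de2d26"}

fig, ax = plt.subplots(figsize=(8, 2.5))
for y, (study, judgements) in enumerate(studies.items()):
    for x, judgement in enumerate(judgements):
        ax.scatter(x, y, s=400, color=colors[judgement])
ax.set_xticks(range(len(domains)))
ax.set_xticklabels(domains, rotation=45, ha="right")
ax.set_yticks(range(len(studies)))
ax.set_yticklabels(list(studies))
ax.set_xlim(-0.5, len(domains) - 0.5)
ax.set_ylim(len(studies) - 0.5, -0.5)  # first study at the top
plt.tight_layout()
plt.show()
```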

3 Results

3.1 Study selection process

The initial search yielded 3,098 articles; after duplicates were removed, 2,619 remained. Upon title and abstract review, 2,577 articles were excluded with 97% agreement (Kappa = 0.579, p < 0.001); therefore, 42 articles were selected for full-text review. After disagreements were discussed, six articles met the eligibility criteria for the systematic review, but the considerable heterogeneity observed between these studies precluded a meta-analysis (see Figure 1 ). There was no need for a third reviewer.
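For reference, the inter-rater agreement statistic reported above (Cohen's kappa) can be computed as follows; the two rating vectors are invented stand-ins for the reviewers' include/exclude decisions, not the review's data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical screening decisions from the two reviewers.
reviewer_1 = ["exclude", "exclude", "include", "exclude", "include"]
reviewer_2 = ["exclude", "include", "include", "exclude", "include"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa = {kappa:.3f}")
```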

Figure 1. PRISMA flow diagram of the process of the systematic review on plagiarism.

3.2 Synthesis of the selected studies

Three of the selected studies had an experimental design, and the other three had a quasi-experimental design ( Belter and du Pré, 2009 ; Dee and Jacob, 2012 ; Newton et al., 2014 ; Henslee et al., 2015 ; Obeid and Hill, 2017 ; Yang et al., 2019 ). All six studies included only post-test evaluation. The studies included a total of 1,631 undergraduate university students, ranging from 33 participants in the smallest sample to 697 in the largest. The majors included Social Sciences, Psychology, Business, and Biology (see Table 1 ).

Table 1. Synthesis of selected studies.

All study interventions included a definition of plagiarism (the appropriation of words or ideas of other authors without giving them due credit) but varied in methodology. The interventions were based on citation tasks and the review of common writing problems ( Yang et al., 2019 ); plagiarism detection strategies ( Obeid and Hill, 2017 ); examples of plagiarism and specific tips on how to avoid it (for example, paraphrasing, using quotation marks, recording group members’ contributions, and not procrastinating) ( Newton et al., 2014 ; Henslee et al., 2015 ); tutorials with examples of plagiarism, proper citation, and general strategies (e.g., not procrastinating and careful notetaking), plus questionnaires with examples ( Dee and Jacob, 2012 ); and general discussions on academic integrity, including strategies to avoid plagiarism, sanctions for misconduct, and the evaluation of academic integrity ( Belter and du Pré, 2009 ). Most studies did not specify the length of the intervention; only Newton et al. (2014) indicated that theirs lasted 1 h.

As for comparators, no intervention was conducted in three of the studies ( Dee and Jacob, 2012 ; Newton et al., 2014 ; Yang et al., 2019 ). In the other three studies, the participants in the control group attended the usual classes ( Obeid and Hill, 2017 ), watched pre-recorded lectures on academic integrity ( Henslee et al., 2015 ), or either did not complete the intervention or had attended the course in the previous academic year ( Belter and du Pré, 2009 ).

3.3 Effectiveness evaluation

To assess plagiarism, five studies used specialized software, particularly Turnitin or SafeAssign, although the plagiarism assessment tasks and strategies varied among studies. Yang et al. (2019) asked students to write two research reports at 3 and 6 months after the course and classified the type (copying with and without referencing, and patchwriting) and severity of plagiarism and the corresponding section of the document. Belter and du Pré (2009) required each student to discuss a clinical psychology case, referencing sources, and measured the number of times students committed plagiarism. Dee and Jacob (2012) compared written reports, setting the plagiarism threshold at 11% similarity. In two studies, the documents used to analyze plagiarism were not clearly identified, with one study reporting the percentage of plagiarism ( Obeid and Hill, 2017 ) and the other the number of cases ( Henslee et al., 2015 ). In the only study without specialized plagiarism detection software, the authors relied on a pen-and-paper survey including a paraphrasing task with a 174-word text ( Newton et al., 2014 ). Five studies reported a lower percentage or number of cases of plagiarism in the experimental group than in the control group; only one failed to find significant differences between the groups ( Henslee et al., 2015 ).
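As a toy illustration of a percentage-similarity threshold like the 11% used by Dee and Jacob (2012), difflib can score the overlap between two short texts. This is a crude stand-in: Turnitin and SafeAssign match submissions against large corpora, not a single source text:

```python
from difflib import SequenceMatcher

source = "Plagiarism is the appropriation of words or ideas of other authors."
submission = "Plagiarism is the appropriation of ideas of other writers."

# Character-level similarity ratio, expressed as a percentage.
similarity = SequenceMatcher(None, source, submission).ratio() * 100
flagged = similarity > 11  # hypothetical threshold, after Dee and Jacob (2012)
print(f"similarity = {similarity:.1f}%, flagged = {flagged}")
```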

3.4 The most appropriate strategies

As three studies randomly assigned the participants ( Dee and Jacob, 2012 ; Newton et al., 2014 ; Henslee et al., 2015 ) and the other three were quasi-experimental ( Belter and du Pré, 2009 ; Obeid and Hill, 2017 ; Yang et al., 2019 ), their risks of bias were assessed with the RoB 2 and ROBINS-I criteria, respectively. Broadly speaking, the risk of bias was moderate: the studies used appropriate methodological strategies to avoid biases in data collection and interpretation, but they also reported difficulties in assigning participants to the control and experimental groups and limitations in the intervention procedures. Table 2 shows the risks of bias of each study and their explanations, and Figures 2 and 3 show the corresponding risk-of-bias graphs. Based on the analysis of these six studies, interventions show considerable effectiveness in reducing and preventing plagiarism in university students; the strategies that can be considered most appropriate include a definition of plagiarism, practical components such as citation training and plagiarism and paraphrasing exercises, and similarity detection tools. However, greater homogeneity should be sought both in how interventions are implemented and in how the incidence of plagiarism among students is evaluated.

Table 2. Description of risks of bias.

Figure 2. Risk of bias graph (RoB 2).

Figure 3. Risk of bias graph (ROBINS-I).

4 Discussion

We performed an SR to synthesize knowledge on the effectiveness of intervention programs aimed at reducing plagiarism by university students. In most studies, the intervention programs decreased the frequency of plagiarism, as shown by objective evaluation. However, the assessment and intervention procedures varied considerably across studies. Thus far, no SR had specifically examined, using objective evaluation, the effect of programs intended to reduce plagiarism by university students. The closest reviews concerned academic dishonesty more broadly and included behaviors such as cheating, data fabrication, and facilitation in addition to plagiarism ( Marusic et al., 2016 ; Chiang et al., 2022 ), and the studies they covered also differed in design and evaluation procedures. Our results corroborate the findings of those two reviews in showing that interventions can reduce plagiarism, albeit with considerable heterogeneity.

We identified Turnitin and SafeAssign as the most commonly used software programs in the studies reviewed. Other programs are available to detect plagiarism, such as Viper, Grammarly, Plagiarisma, and Copygator, several of which recommend a similarity threshold below 15 or 20%. However, similarity between texts does not necessarily equate to plagiarism. Several classifications and types of plagiarism have been proposed, including copying and pasting, patchwriting, failing to add references, and misattribution ( Meo and Talha, 2019 ; Vrbanec and Meštrović, 2021 ). Additionally, the frequency of similarity may vary across sections of a document; for example, more similarity to other documents is expected in the Introduction and Methods than in the Results and Discussion. While no consensus on the elements of plagiarism has been reached yet, researchers should use objective measures to avoid self-report bias, thereby improving the quality of research ( Martin et al., 2009 ), and should describe in detail the type and severity of plagiarism as well as the section of the document under analysis ( Belyy et al., 2018 ).

The intervention programs reviewed here similarly specified the definition of plagiarism and used specific, practical prevention strategies, such as training in paraphrasing and referencing and providing examples, as reported in previous reviews. Those reviews indicated that the most effective strategies are based on practical ( Marusic et al., 2016 ), motivational, and environmental ( Chiang et al., 2022 ) elements.

In contrast, intervention programs based on the theory of reasoned action advocate that fostering attitudes and subjective norms that frame plagiarism as negative behavior, and decreasing self-perceived control over the ease of plagiarizing, can reduce both the intention to plagiarize and plagiarism itself. Nevertheless, these strategies may be inefficient given the complexity of the phenomenon. Students may plagiarize because they feel anxious, value other considerations (for example, time) more than avoiding plagiarism, overestimate their ability to plagiarize undetected, work to tight deadlines, or disregard the usefulness of the academic activity, among other reasons. Moreover, environmental conditions may promote plagiarism, such as extenuating circumstances (for example, a sick family member), cultural factors (for example, deeming it disrespectful to paraphrase an author’s words), implied consent even where the conduct is prohibited, and the argument that others plagiarize as well. Plagiarism may even occur unconsciously, for instance, when someone reads a document and later believes its ideas are their own ( Moss et al., 2018 ).

Sorea et al. (2021) summarized five categories of solutions to the problem of plagiarism: improving student training, empowering more engaged teachers, using anti-plagiarism software, enforcing clear anti-plagiarism policies, and educating young people on ethics. These solutions translate into general elements of the academic field, such as improving learning and teaching strategies, valuing activities that promote personal and professional development, and encouraging collaboration while reducing competition, as well as specific elements to reduce plagiarism, such as conceptually defining plagiarism, teaching students appropriate referencing and paraphrasing strategies, encouraging students to review their work to avoid plagiarism, and using similarity detection software. Establishing the minimum set of elements that an intervention should include to reduce plagiarism requires further research detailing intervention procedures and length as well as using objective evaluation measures ( Lendrum and Humphrey, 2012 ; Schultes, 2023 ).

In terms of limitations, although the six studies were carefully selected, the evidence derived from this SR may need to be complemented with further studies that broaden its empirical foundations; we therefore recommend contrasting these results with those of future research so that this knowledge can be strengthened or expanded. Another limitation of the present study lies in including only undergraduate students. We restricted the review to this population to provide the most precise evidence possible for a single target group, but the effects of these interventions on graduate students and professional researchers should also be assessed, because plagiarism has been reported in these populations too ( Pupovac and Fanelli, 2015 ). We also recommend assessing the effect of universities’ educational policies and sanctions and differentiating between voluntary and involuntary plagiarism ( Bruton and Childers, 2016 ). Additionally, a key factor is the increasing incursion of artificial intelligence (AI) into education in recent years ( Mijwil et al., 2023 ). Using AI, anyone can produce an apparently genuine document that has not been previously published, albeit with a striking overlap with similar documents produced using the same AI for the same purpose ( Misra and Chandwar, 2023 ). Therefore, future studies should also assess the impact of AI on plagiarism.

5 Conclusion

The results of the present review suggest that university education programs that share information about the characteristics and consequences of plagiarism; include academic integrity modules; promote plagiarism detection, citation, and paraphrasing skills; and use similarity detection tools can reduce the frequency of plagiarism from literary sources by undergraduate university students. However, little research evaluates plagiarism objectively, and the interventions are highly heterogeneous, so more research is needed before firm conclusions can be drawn about the effectiveness of interventions to prevent or reduce plagiarism.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

RM-R: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Writing – original draft, Writing – review & editing. JS-N: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing. AR-R: Conceptualization, Methodology, Supervision, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Acknowledgments

The authors thank the Academic Advisory Network for Systematic Reviews ( Red Académica Asesora de Revisiones Sistemáticas – RAARS), FES Zaragoza, UNAM (DGAPA Proyecto PAPIME PE210523), for the lessons and methodological advice that helped to complete this study.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Awasthi, S. (2019). Plagiarism and academic misconduct: a systematic review. J. Libr. Inf. Technol. 39, 94–100. doi: 10.14429/djlit.39.2.13622

Belter, R. W., and du Pré, A. (2009). A strategy to reduce plagiarism in an undergraduate course. Teach. Psychol. 36, 257–261. doi: 10.1080/00986280903173165

Belyy, A., Dubova, M., and Nekrasov, D. (2018). Improved evaluation framework for complex plagiarism detection. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Melbourne, Australia: Association for Computational Linguistics. 157–162.

Bruton, S., and Childers, D. (2016). The ethics and politics of policing plagiarism: a qualitative study of faculty views on student plagiarism and Turnitin ®. Assess. Eval. High. Educ. 41, 316–330. doi: 10.1080/02602938.2015.1008981

Chiang, F., Zhu, D., and Yu, W. (2022). A systematic review of academic dishonesty in online learning environments. J. Comput. Assist. Learn. 38, 907–928. doi: 10.1111/jcal.12656

Curtis, G. J., Gouldthorp, B., Thomas, E. F., O’Brien, G. M., and Correia, H. M. (2013). Online academic-integrity mastery training may improve students’ awareness of, and attitudes toward, plagiarism. Psychol. Learn. Teach. 12, 282–289. doi: 10.2304/plat.2013.12.3.282

Dahl, S. (2007). Turnitin®. Act. Learn. High. Educ. 8, 173–191. doi: 10.1177/1469787407074110

Dee, T. S., and Jacob, B. A. (2012). Rational ignorance in education. J. Hum. Resour. 47, 397–434. doi: 10.3368/jhr.47.2.397

Eaton, S. E. (2023). Postplagiarism: transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. Int. J. Educ. Integr. 19:23. doi: 10.1007/s40979-023-00144-1

Eshet, Y. (2023). The plagiarism pandemic: inspection of academic dishonesty during the COVID-19 outbreak using originality software. Educ. Inf. Technol. 29, 3279–3299. doi: 10.1007/s10639-023-11967-3

Giuliano, T. A. (2022). A 3-pronged approach for teaching psychology students to understand and avoid plagiarism. Teach. Psychol. :009862832211168. doi: 10.1177/00986283221116882

Halgamuge, M. N. (2017). The use and analysis of anti-plagiarism software: Turnitin tool for formative assessment and feedback. Comput. Appl. Eng. Educ. 25, 895–909. doi: 10.1002/cae.21842

Henslee, A. M., Goldsmith, J., Stone, N. J., and Krueger, M. (2015). An online tutorial vs. pre-recorded lecture for reducing incidents of plagiarism. Am. J. Eng. Educ. 6:1. doi: 10.19030/ajee.v6i1.9249

Husain, F. M., Al-Shaibani, G. K. S., and Mahfoodh, O. H. A. (2017). Perceptions of and attitudes toward plagiarism and factors contributing to plagiarism: a review of studies. J. Acad. Ethics. 15, 167–195. doi: 10.1007/s10805-017-9274-1

Illingworth, S. (2004). Approaches to ethics in higher education. Teaching ethics across the curriculum . Leeds, UK: Philosophical and Religious Studies Subject Centre, Learning and Teaching Support Network (PRS-LTSN).

Kampa, R. K., Padhan, D. K., Karna, N., and Gouda, J. (2024). Identifying the factors influencing plagiarism in higher education: an evidence-based review of the literature. Account. Res. 30, 1–16. doi: 10.1080/08989621.2024.2311212

King, M. R. (2023). A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cell. Mol. Bioeng. 16, 1–2. doi: 10.1007/s12195-022-00754-8

Lendrum, A., and Humphrey, N. (2012). The importance of studying the implementation of interventions in school settings. Oxf. Rev. Educ. 38, 635–652. doi: 10.1080/03054985.2012.734800

Martin, D. E., Rao, A., and Sloan, L. R. (2009). Plagiarism, integrity, and workplace deviance: a criterion study. Ethics Behav. 19, 36–50. doi: 10.1080/10508420802623666

Marusic, A., Wager, E., Utrobicic, A., Rothstein, H. R., and Sambunjak, D. (2016). Interventions to prevent misconduct and promote integrity in research and publication. Cochrane Database Syst. Rev. 2016:MR000038. doi: 10.1002/14651858.MR000038.pub2

Mason, M. (2001). The ethics of integrity: educational values beyond postmodern ethics. J. Philos. Educ. 35, 47–69. doi: 10.1111/1467-9752.00209

McCabe, D. L., Butterfield, K. D., and Treviño, L. K. (2006). Academic dishonesty in graduate business programs: prevalence, causes, and proposed action. Acad. Manag. Learn. Edu. 5, 294–305. doi: 10.5465/amle.2006.22697018

McCabe, D. L., and Trevino, L. K. (1993). Academic dishonesty. J. High. Educ. 64, 522–538. doi: 10.1080/00221546.1993.11778446

McGuinness, L. A., and Higgins, J. P. T. (2021). Risk-of-bias VISualization (robvis): an R package and shiny web app for visualizing risk-of-bias assessments. Res. Synth. Methods 12, 55–61. doi: 10.1002/jrsm.1411

Memon, A. R. (2020). Similarity and plagiarism in scholarly journal submissions: bringing clarity to the concept for authors, reviewers and editors. J. Korean Med. Sci. 35:27. doi: 10.3346/jkms.2020.35.e217

Meo, S., and Talha, M. (2019). Turnitin: is it a text matching or plagiarism detection tool? Saudi J Anaesth 13:48. doi: 10.4103/sja.SJA_772_18

Mijwil, M., Kant Hiran, K., Doshi, R., Dadhich, M., Al-Mistarehi, A., and Bala, I. (2023). ChatGPT and the future of academic integrity in the artificial intelligence era: a new frontier. Al-Salam. J. Eng. Technol. 2, 116–127. doi: 10.55145/ajest.2023.02.02.015

Mion, G., and Bonfanti, A. (2019). Drawing up codes of ethics of higher education institutions: evidence from Italian universities. Int. J. Educ. Manag. 33, 1526–1538. doi: 10.1108/IJEM-08-2018-0264

Misra, D. P., and Chandwar, K. (2023). ChatGPT, artificial intelligence and scientific writing: what authors, peer reviewers and editors should know. J. R. Coll. Physicians Edinb. 53, 90–93. doi: 10.1177/14782715231181023

Moore, G. (2006). Managing ethics in higher education: implementing a code or embedding virtue? Bus. Ethics. Eur. Rev. 15, 407–418. doi: 10.1111/j.1467-8608.2006.00462.x

Moss, S. A., White, B., and Lee, J. (2018). A systematic review into the psychological causes and correlates of plagiarism. Ethics Behav. 28, 261–283. doi: 10.1080/10508422.2017.1341837

Newton, F. J., Wright, J. D., and Newton, J. D. (2014). Skills training to avoid inadvertent plagiarism: results from a randomised control study. High. Educ. Res. Dev. 33, 1180–1193. doi: 10.1080/07294360.2014.911257

Obeid, R., and Hill, D. B. (2017). An intervention designed to reduce plagiarism in a research methods classroom. Teach. Psychol. 44, 155–159. doi: 10.1177/0098628317692620

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., et al. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int. J. Surg. 88:105906. doi: 10.1016/j.ijsu.2021.105906

Park, C. (2003). In other (people’s) words: plagiarism by university students—literature and lessons. Assess. Eval. High. Educ. 28, 471–488. doi: 10.1080/02602930301677

Pupovac, V., and Fanelli, D. (2015). Scientists admitting to plagiarism: a meta-analysis of surveys. Sci. Eng. Ethics 21, 1331–1352. doi: 10.1007/s11948-014-9600-6

Rathore, F. A., Fatima, N. E., Farooq, F., and Mansoor, S. N. (2018). Combating scientific misconduct: the role of focused workshops in changing attitudes towards plagiarism. Cureus 10:e2698. doi: 10.7759/cureus.2698

Rogerson, A. M., and McCarthy, G. (2017). Using internet based paraphrasing tools: original work, patchwriting or facilitated plagiarism? Int. J. Educ. Integr. 13:1. doi: 10.1007/s40979-016-0013-y

Schultes, M. T. (2023). An introduction to implementation evaluation of school-based interventions. Eur. J. Dev. Psychol. 20, 189–201. doi: 10.1080/17405629.2021.1976633

Sorea, D., Roșculeț, G., and Bolborici, A. M. (2021). Readymade solutions and students’ appetite for plagiarism as challenges for online learning. Sustain. For. 13:7. doi: 10.3390/su13073861

Stephens, J. M., Watson, P. W. S. J., Alansari, M., Lee, G., and Turnbull, S. M. (2021). Can online academic integrity instruction affect university students’ perceptions of and engagement in academic dishonesty? Results from a natural experiment in New Zealand. Front. Psychol. 12:569133. doi: 10.3389/fpsyg.2021.569133

Sterne, J. A., Hernán, M. A., Reeves, B. C., Savović, J., Berkman, N. D., Viswanathan, M., et al. (2016). ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. Br. Med. J. 355:i4919. doi: 10.1136/bmj.i4919

Sterne, J. A. C., Savović, J., Page, M. J., Elbers, R. G., Blencowe, N. S., Boutron, I., et al. (2019). RoB 2: a revised tool for assessing risk of bias in randomised trials. Br. Med. J. 366:l4898. doi: 10.1136/bmj.l4898

Tindall, I. K., Fu, K. W., Tremayne, K., and Curtis, G. J. (2021). Can negative emotions increase students’ plagiarism and cheating? Int. J. Educ. Integr. 17:25. doi: 10.1007/s40979-021-00093-7

Turnbull, D., Chugh, R., and Luck, J. (2021). Transitioning to e-learning during the COVID-19 pandemic: how have higher education institutions responded to the challenge? Educ. Inform. Technol. 26, 6401–6419. doi: 10.1007/s10639-021-10633-w

Vieyra, M., Strickland, D., and Timmerman, B. (2013). Patterns in plagiarism and patchwriting in science and engineering graduate students’ research proposals. Int. J. Educ. Integr. 9:1. doi: 10.21913/IJEI.v9i1.846

Vrbanec, T., and Meštrović, A. (2021). Taxonomy of academic plagiarism methods. Zbornik Veleučilišta u Rijeci. 9, 283–300. doi: 10.31784/zvr.9.1.17

Yang, A., Stockwell, S., and McDonnell, L. (2019). Writing in your own voice: an intervention that reduces plagiarism and common writing problems in students’ scientific writing. Biochem. Mol. Biol. Educ. 47, 589–598. doi: 10.1002/bmb.21282

Zhang, Y. (2024). “Plagiarism issues in higher education” in Understanding-oriented pedagogy to strengthen plagiarism-free academic writing (Singapore: Springer Nature), 11–20.

Keywords: plagiarism, university students, academic dishonesty, academic integrity, cheating

Citation: Miranda-Rodríguez RA, Sánchez-Nieto JM and Ruiz-Rodríguez AK (2024) Effectiveness of intervention programs in reducing plagiarism by university students: a systematic review. Front. Educ. 9:1357853. doi: 10.3389/feduc.2024.1357853

Received: 19 December 2023; Accepted: 29 April 2024; Published: 14 May 2024.

Copyright © 2024 Miranda-Rodríguez, Sánchez-Nieto and Ruiz-Rodríguez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Rubén Andrés Miranda-Rodríguez, [email protected]; José Miguel Sánchez-Nieto, [email protected]; Ana Karen Ruiz-Rodríguez, [email protected]

  • Open access
  • Published: 05 May 2024

The quality of life of men experiencing infertility: a systematic review

  • Zahra Kiani   ORCID: orcid.org/0000-0002-4548-7305 1 ,
  • Masoumeh Simbar   ORCID: orcid.org/0000-0003-2843-3150 2 ,
  • Farzaneh Rashidi   ORCID: orcid.org/0000-0001-7497-4180 3 ,
  • Farid Zayeri   ORCID: orcid.org/0000-0002-7791-8122 4 &
  • Homayoon Banaderakhsh   ORCID: orcid.org/0000-0001-8982-9381 5  

BMC Public Health volume 24, Article number: 1236 (2024)

Background

Men experiencing infertility encounter numerous problems at the individual, family, and social levels, as well as reduced quality of life (QOL). This study was designed to investigate the QOL of men experiencing infertility through a systematic review.

Materials and methods

This systematic review was conducted without any time limitation (retrieval date: July 1, 2023) in international databases including Scopus, Web of Science, PubMed, and Google Scholar. The search was performed separately by two reviewers using keywords such as QOL, infertility, and men. Studies were selected based on inclusion and exclusion criteria, and the quality of the articles was evaluated with the Newcastle-Ottawa Scale. In the initial search, 308 studies were retrieved; after duplicates were removed and titles and abstracts checked, the full texts of 87 studies were evaluated.

Results

Finally, 24 studies were included in the final review based on the research objectives. Men’s QOL scores varied across studies from 55.15 ± 13.52 to 91.45 ± 13.66%. Of the reviewed articles, the lowest and highest scores related to the mental health and physical dimensions, respectively.

Conclusion

The reported findings vary across studies conducted in different countries. The factors driving these differences need analysis, and we recommend designing a standard tool for assessing the QOL of infertile men. Given the importance of QOL in men experiencing infertility, it is crucial for health systems to consider it. Moreover, a plan to improve the QOL of infertile men should be designed, implemented, and evaluated according to each country’s context.

Peer Review reports

Introduction

Defined as the absence of pregnancy after one or two years of unprotected sexual intercourse (without the use of contraceptive methods) [ 1 ], infertility is recognized as both a medical and a social issue [ 2 ]. Based on the latest World Health Organization (WHO) report in 2023, the pooled lifetime and period prevalence of infertility are 17.5% and 12.6%, respectively [ 3 ]. Male factors play a role in 50% of infertility cases [ 4 ].

Complicated treatment protocols, difficult treatment processes, semen analysis, multiple ultrasounds, invasive treatments, long waiting lists, and high financial costs have been described as psychological stressors for couples seeking assisted reproductive techniques [ 5 , 6 ]. Moreover, the diagnosis and treatment of infertility can have a negative impact on the frequency of sexual intercourse, self-esteem, and body image [ 5 ]. Men, however, usually tend to suppress or deny their problems, which may diminish their quality of life (QOL) over time [ 7 ]. This decreased QOL, in turn, can have a detrimental effect on their response to treatment [ 8 ].

The functioning of infertile people is influenced by society, family, and the surrounding culture. In many societies, infertility is primarily viewed as a medical problem, and its individual and social dimensions are often neglected [ 9 ]. In other words, even with an accurate attitude toward infertility, infertile people sometimes cannot adapt to the problem; non-compliance during the behavioral process may then lead to additional problems and impair their QOL [ 10 ].

The WHO describes QOL as people’s perception of their life circumstances in terms of the cultural systems and standards of their environment, and how these perceptions relate to their objectives, prospects, ideals, and apprehensions [ 11 ]. Recently, the QOL of men experiencing infertility has received careful attention from health researchers. Furthermore, because of men’s essential role in later phases of life, their QOL can significantly affect their health at both the individual and societal levels [ 12 ].

Given the significance of QOL, its precise measurement is substantially important, and various tools have been designed and used to examine this concept. One systematic review included studies using the World Health Organization Quality of Life (WHOQOL) instrument, the 36-Item Short Form Survey (SF-36), and general QOL questionnaires. Based on its results, the QOL of men experiencing infertility was low in two studies that used the SF-36 but high in a study that used the WHOQOL. That review noted that although infertility has a negative effect on the mental health and sexual relationships of couples, there is no consensus regarding its effect on the QOL of infertile couples [ 13 ].

Almutawa et al.’s 2023 systematic review and meta-analysis showed that psychological disturbances are greater in infertile women than in infertile men, a difference within couples that needs further investigation [ 14 ]. Chachamovich et al.’s 2010 systematic review likewise showed that women’s QOL is more affected by infertility than men’s [ 12 ]; however, that study was conducted 14 years ago and, given the growth of the literature in this field, the question needs to be re-examined. Given that no systematic review had addressed the QOL of men experiencing infertility, and considering the significance of this issue for therapeutic responses, this study examined the QOL of men experiencing infertility in the form of a systematic review.

Materials and methods

Search strategy

To search for and review the studies, reputable international databases including Scopus, Web of Science, PubMed, and Google Scholar were used. The search was performed using keywords such as QOL, infertility, and men (Table  1 ), without time limitation (retrieval date: July 1, 2023), combining terms with the AND and OR operators; a specific search strategy was used for each database.

The search strategy of PubMed, Web of Science, and Scopus databases is as follows:

Pubmed (retrieval date: July 1, 2023)

(Male[tiab] OR Males[tiab] OR Men[tiab] OR Man[tiab] OR Boy[tiab] OR Boys[tiab]) AND (Quality of Life[tiab] OR Health-Related Quality of Life[tiab]) AND (Infertility[tiab] OR Sterility[tiab] OR Reproductive[tiab] OR Reproductive Sterility[tiab] OR Subfertility[tiab] OR Sub-Fertility[tiab]).

Web of science (retrieval date: July 1, 2023)

((TI=(male OR males OR man OR men OR boy OR boys)) AND TI=(Quality of Life OR Health-Related Quality of Life)) AND TI=(Infertility OR Sterility OR Reproductive OR Reproductive Sterility).

Scopus (retrieval date: July 1, 2023)

TITLE ( male OR males OR men OR man OR boy OR boys ) AND TITLE (quality AND of AND life OR health-related AND quality AND of AND life ) AND TITLE ( infertility OR sterility OR reproductive).
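For readers who want to check or rerun the PubMed arm of this search, the sketch below submits the title/abstract query to NCBI's public E-utilities esearch endpoint. It is an illustration rather than part of the review protocol: the use of the requests library, the retmax value, and the error handling are our own choices, and hit counts will drift as PubMed grows.

```python
# Minimal sketch: running the PubMed title/abstract query through the
# NCBI E-utilities "esearch" endpoint and printing the hit count.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = (
    "(Male[tiab] OR Males[tiab] OR Men[tiab] OR Man[tiab] OR Boy[tiab] OR Boys[tiab]) "
    "AND (Quality of Life[tiab] OR Health-Related Quality of Life[tiab]) "
    "AND (Infertility[tiab] OR Sterility[tiab] OR Reproductive[tiab] "
    "OR Reproductive Sterility[tiab] OR Subfertility[tiab] OR Sub-Fertility[tiab])"
)

response = requests.get(
    ESEARCH_URL,
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 200},
    timeout=30,
)
response.raise_for_status()
result = response.json()["esearchresult"]

print("Records found:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```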

The presentation of the article, description of the problem, data collection, data analysis, discussion, and conclusions followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement [15]. Screening was conducted independently by two reviewers, and a third reviewer was consulted in cases of disagreement.

Inclusion and exclusion criteria

Studies meeting the following criteria were included in the review: (1) observational studies; (2) cross-sectional data from longitudinal studies; (3) use of valid tools for measuring QOL; (4) studies conducted on men of infertile couples (by men experiencing infertility we mean those whose unprotected sexual intercourse during the past year did not lead to pregnancy); (5) a minimum sample size of 30 subjects; (6) subjects with no chronic disease; and (7) men of infertile couples who were within the diagnostic process and had not yet started infertility treatment. The search and review were conducted in English, and inclusion was not restricted to open-access studies.

Exclusion criteria were: (1) case reports; (2) review studies; (3) animal studies; (4) studies on mental syndromes; (5) studies not written in English; (6) lack of access to the full text; and (7) unrelated reports.

The patient, intervention, comparison, outcome, and study design (PICOS)

The PICOS model was used to break the research question into searchable elements: (P) participants: men experiencing infertility (primary or secondary); (I) intervention/exposure: not applicable; (C) comparison: not applicable; (O) outcomes: infertile men's QOL, measured using standard tools such as general or infertility-specific QOL questionnaires; and (S) study design: observational studies and cross-sectional data from longitudinal studies.

Data extraction

Two reviewers independently screened the titles and abstracts of the articles against the inclusion criteria, and studies that did not meet them were excluded. The full texts of the remaining articles were then reviewed and, where appropriate, included in the study.

The required information, including authors' names, year of publication, research location, sample size, QOL score, type of tool, type of infertility, mean age of men, and duration of infertility, was extracted from the studies.
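Purely as an illustration, the extracted fields map naturally onto a small typed record. The dataclass below is a hypothetical sketch of such an extraction form; the field names and sample values are invented and do not come from the review's actual extraction sheet.

```python
# Hypothetical sketch of a per-study extraction record mirroring the
# fields named in the text; the values below are invented.
from dataclasses import dataclass

@dataclass
class ExtractionRecord:
    authors: str
    year: int
    location: str
    sample_size: int
    qol_score: float               # reported on a 0-100 scale
    tool: str                      # e.g. "FertiQoL", "SF-36", "WHOQOL-BREF"
    infertility_type: str          # "primary" or "secondary"
    mean_age_years: float
    infertility_duration_years: float

example = ExtractionRecord(
    authors="Doe et al.",
    year=2020,
    location="Example country",
    sample_size=120,
    qol_score=72.4,
    tool="FertiQoL",
    infertility_type="primary",
    mean_age_years=34.5,
    infertility_duration_years=4.2,
)
print(example.tool, example.qol_score)
```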

Outcome measurement

The main outcome of this study was the QOL of men experiencing infertility, measured using standard tools such as general or infertility-specific QOL questionnaires.

Quality evaluation

The Newcastle-Ottawa Scale (NOS) checklist was used to assess the quality of the nonrandomized studies [16]. This checklist consists of five parts: representativeness of the sample, sample size, non-respondents, ascertainment of the outcome, and quality of descriptive statistics reporting. Each part is scored 0 or 1, so total scores range from 0 to 5. Studies scoring ≤ 3 were assigned to the high-risk group and those scoring more than 3 to the low-risk group [16]. Quality assessment was performed independently by two reviewers, with a third reviewer consulted in case of disagreement. An agreement coefficient of 0.7 or higher between reviewers was considered acceptable.
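To make the thresholds concrete, here is a minimal sketch of the scoring and agreement logic, assuming hypothetical item ratings. Only the cut-offs (total ≤ 3 meaning high risk; agreement of at least 0.7 acceptable) come from the text; in practice kappa would be computed over all rated studies rather than the five items of a single study.

```python
# Minimal sketch of the quality-scoring and agreement logic described
# above; the two reviewers' item ratings are hypothetical.
from sklearn.metrics import cohen_kappa_score

def nos_risk_group(item_scores):
    """Sum five 0/1 NOS items and classify the study by total score."""
    total = sum(item_scores)
    return "high risk" if total <= 3 else "low risk"

reviewer_a = [1, 1, 0, 1, 1]  # hypothetical ratings for one study
reviewer_b = [1, 1, 0, 1, 0]

print(nos_risk_group(reviewer_a))  # -> low risk (total = 4)
print(nos_risk_group(reviewer_b))  # -> high risk (total = 3)

# Cohen's kappa over the paired ratings; >= 0.7 is treated as acceptable.
kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"kappa = {kappa:.2f}")
```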

Ethical consideration

Ethics approval was obtained from the Ethics Committee, Faculty of Pharmacy and Nursing and Midwifery, Shahid Beheshti University (ethical code: IR.SBMU.PHARMACY.REC.1400.214). All methods were carried out in accordance with relevant guidelines and regulations.

Results

After reviewing the titles, abstracts, and full texts of the articles in successive stages (Fig. 1), 24 articles were finally included based on the inclusion criteria and research objectives; the agreement coefficient among the reviewers was K = 0.81 (Table 2).

Figure 1. Flowchart for selection of studies.

The smallest and largest sample sizes were 30 [19] and 1,000 [40], respectively. Seven studies were conducted in low- and middle-income countries, two in upper-middle-income countries, and 15 in high-income countries. High-income countries had higher quality of life scores than low- and middle-income countries. In all studies, QOL scores were calculated on a 100-point basis, and the highest score (91.45 ± 13.66%) was obtained with the Fertility Quality of Life (FertiQoL) questionnaire in South Korea, a high-income country [25]. Most of the studies showed that education, family income, and good marital relations improved the quality of life of infertile men. Of the 24 reviewed articles, 12 used the FertiQoL questionnaire, 7 the SF-36, and 6 the WHOQOL-BREF; one study [36] used the SF-36 and WHOQOL-BREF simultaneously.

Across the reviewed articles, the lowest scores were attributed to different domains: in 11 articles the lowest score related to mental health problems, in 8 to social problems, and in 3 to communication problems. Some articles did not report scores by dimension. Men's QOL scores varied across studies from 55.15 ± 13.52 to 91.45 ± 13.66%. Over all reviewed articles, the lowest and highest scores related to mental health problems and physical dimensions, respectively.

In most of the studies using the FertiQoL questionnaire, the lowest scores belonged to the social and relational (communication) dimensions. The FertiQoL questionnaire was developed and psychometrically evaluated in a survey study conducted in the United States. FertiQoL is a 36-item scale with six dimensions: (1) emotional; (2) mind-body; (3) relational; (4) social; (5) environment; and (6) treatment tolerability. Items are rated on a 5-point Likert scale (0–4), and the total score ranges from 0 to 100, with higher scores indicating better QOL [41]. The questionnaire has been translated into many languages and has demonstrated validity (content, face, and construct) and reliability (Cronbach's alpha of 0.7–0.9) in different populations [42, 43, 44, 45].
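As a worked illustration of that scoring rule: with items scored 0–4 and totals reported on 0–100, the natural transformation is to scale the item mean by 25. The function below is a sketch under that assumption, with invented responses; the published FertiQoL scoring manual remains the authoritative procedure.

```python
# Sketch: scaling 0-4 Likert responses onto a 0-100 score via mean x 25.
# The rule is an assumption consistent with the stated 0-4 item range and
# 0-100 total; consult the FertiQoL scoring manual for the official method.
def scale_to_100(item_responses, item_max=4):
    """Scale the mean item response (0..item_max) onto a 0-100 range."""
    if not item_responses:
        raise ValueError("no item responses supplied")
    mean = sum(item_responses) / len(item_responses)
    return 100 * mean / item_max

# Hypothetical responses to six items of one subscale:
print(round(scale_to_100([3, 2, 4, 3, 3, 2]), 2))  # -> 70.83
```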

In the studies using the SF-36 and WHOQOL-BREF questionnaires, the lowest scores belonged to the dimensions of limitations in usual role activities because of emotional problems and of social relationships, while the highest scores related to the physical dimensions. The SF-36 questionnaire is intended for clinical investigation, health policy assessment, and surveys. Its eight dimensions are: restrictions in physical activities; restrictions in social activities; restrictions in usual role activities because of physical health problems; bodily pain; general mental health; restrictions in usual role activities because of emotional problems; vitality; and general health perceptions. The final scores are standardized to a 0–100 scale [46]. The questionnaire has been translated into many languages and has demonstrated validity (content and face) and reliability (Cronbach's alpha of 0.8–0.95) in different populations [47, 48, 49, 50, 51, 52]. The 26-item WHOQOL-BREF comprises four dimensions (physical health, mental health, social relationships, and environmental health) plus two items on overall QOL and general health [53]. It has been translated into many languages and has demonstrated validity (content and face) and reliability (Cronbach's alpha of 0.74–0.88) in different populations [54, 55, 56, 57].
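The phrase "standardized to a 0–100 scale" conventionally denotes a linear transformation of the raw scale score by its possible range. The sketch below illustrates that convention with made-up bounds; it does not reproduce the actual item ranges of any SF-36 or WHOQOL-BREF domain.

```python
# Sketch of the generic linear 0-100 standardization: the raw score is
# mapped onto 0-100 using its lowest and highest possible values. The
# example bounds are illustrative, not taken from any specific domain.
def standardize_0_100(raw, lowest_possible, highest_possible):
    """Linearly transform a raw scale score onto a 0-100 range."""
    span = highest_possible - lowest_possible
    if span <= 0:
        raise ValueError("highest_possible must exceed lowest_possible")
    return 100 * (raw - lowest_possible) / span

# Example: raw domain score 18 on a 4-20 scale -> 87.5
print(standardize_0_100(18, 4, 20))
```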

Discussion

This systematic review investigated the quality of life of infertile men. Based on the results, men's quality of life scores in different studies varied from 55.15 ± 13.52 to 91.45 ± 13.66%, though in the majority of studies the scores fell between 70 and 80%. As a health indicator combining each person's perception of different aspects of life with their functioning in personal, work, and social relations, quality of life is essential to sustaining an optimal life and individual well-being. Moreover, quality of life is strongly influenced by demographic, social, economic, and cultural variables, as well as variables related to health and disease, which makes its measurement substantially important [58]. Quality of life reflects individuals' desires, hopes, and expectations regarding their current and future life situation, and is influenced by factors such as age, personal and family characteristics, socio-economic status, and time [59].

In this systematic review, the lowest scores of men's quality of life belonged to the psychological and emotional dimensions, followed by the social and communication dimensions. Although the reviewed studies used different tools, these tools were essentially similar in these dimensions, indicating men's problems in these areas. Fertility is highly valued in most cultures, and the desire to have a child is a basic human motivation. When efforts at fertility do not succeed, they can have adverse effects on mental health as well as on family and social relationships [60].

The reviewed studies indicated that education has a significantly positive effect on the quality of life of infertile men. Higher levels of education are associated with increased awareness and better decision-making abilities [25], and with improved coping strategies for dealing with infertility-related challenges [38]. Infertile men with higher education are also more likely to seek treatment and to remain hopeful that treatment will improve their quality of life [28].

The results of most studies showed a positive and significant relationship between family income and quality of life. The costs of infertility treatment, and the potential need for repeated treatment, can cause concern and anxiety among men and reduce their quality of life [61]. Men who are less concerned about the cost of treatment are more inclined to pursue infertility treatment. At the International Conference on Population and Development held in Cairo in 1994, addressing infertility was emphasized as an important health priority; nevertheless, infertility problems have been overlooked not only in developing countries but also at various levels of international health management [62].

Analysis by country income showed that the quality of life of men in infertile couples residing in low-income countries was lower than that of men in high-income countries. Current infertility policies in the treatment and distribution sector are uncoordinated, leading to improper distribution of public and private centers in low- and middle-income countries [63]. This perspective reflects a simplistic accounting of the infertility problem that excuses the lack of public centers, inadequate financial resources, specialists, and affordable treatment options [64], and it requires serious attention and careful planning, especially in low- and middle-income countries.

The results of the studies showed that marital relationships have a positive and significant impact on the quality of life of infertile men. Infertile men may sometimes experience a lack of sexual attraction and, owing to irrational thoughts, may abstain from sexual relations with their partners or try to suppress their sexual desires. Sexual desire is a significant aspect of life that can affect quality of life [65]. Some studies have indicated that the quality of marital relations is higher among infertile couples than among fertile ones, and that infertility can bring couples closer together and encourage more open communication about their concerns and plans for the future [33, 66]. Further research is recommended to deepen understanding in this area.

Infertility presents people with a new and challenging world [28]. It is characterized as a long-term process involving time-consuming treatments, fluctuations between hope and disappointment, loss of control over reproductive outcomes, inability to plan for the future, and significant shifts in personal identity and worldview [28, 32, 63]. Long working hours and work-related exhaustion, combined with infertility, can exacerbate men's problems. These problems affect their quality of life, even though men may deny them [67].

Given the significance of quality of life, its accurate measurement is essentially important, and various tools have been designed and used in several studies to investigate the concept. A noteworthy point in this systematic review was the use of different measurement tools across studies. In the majority of the studies, Boivin's FertiQoL [41] was used as a specific tool for measuring the quality of life of infertile couples. Covering emotional, mind-body, relational, social, environmental, and treatment tolerability dimensions, this questionnaire was designed for infertile couples and does not specifically assess the quality of life of infertile men. Other studies used general quality-of-life questionnaires (SF-36 and WHOQOL-BREF). The WHOQOL questionnaire covers four dimensions: physical health, psychological health, social relationships, and environmental health [53]. The SF-36 questionnaire has eight dimensions: 1) limitations in physical activities because of health problems; 2) limitations in social activities because of physical or emotional problems; 3) limitations in usual role activities because of physical health problems; 4) bodily pain; 5) general mental health (psychological distress and well-being); 6) limitations in usual role activities because of emotional problems; 7) vitality (energy and fatigue); and 8) general health perceptions [46]. The main drawback of these tools is that they ignore dimensions, such as the sexual and socio-economic dimensions, that are important for certain groups, including infertile men. Additionally, their other dimensions are not sensitive enough to measure changes in the quality of life of people with various diseases [68].

Health researchers have recently paid considerable attention to examining quality of life and to designing questionnaires to measure it. Such measurement can improve clinical decision-making, estimate healthcare needs in a particular population, illuminate different health causes and consequences, and, ultimately, inform health policy. All of these objectives depend on having a specific tool for the purpose. According to this review, however, no questionnaire has yet been designed to measure quality of life in infertile men; studies have used either questionnaires specific to infertile couples or general quality-of-life questionnaires. Given that the concept of quality of life changes over time, and given advances in instrument development, there is a need to design specific tools for measuring the quality of life of infertile men using mixed methods. We hope more attention will be given to this significant issue in the future. Polit and Beck argue that one of the main applications of exploratory mixed methods is in instrument development: when a new tool is developed to capture a health-related concept, the complexity of that concept must be carefully explicated [69].

Furthermore, the concept of men's quality of life needs more investigation, as it may change over time and affect their lives. The studies also demonstrated specific concerns among infertile men, such as decreased self-esteem, fertility-related stress, threats to masculine identity, hiding the infertility problem, resistance to treatment, and the cost of treatment [70, 71]. These concerns could form specific items for a quality of life questionnaire dedicated to infertile men.

Research limitations

A meta-analysis was not possible because of several limitations: (1) the variety of tools and small sample sizes in each subgroup; (2) inaccurate reporting of information; and (3) heterogeneity of the studies. Another limitation of this systematic review is that the reviewed papers were confined to the English-language literature; some relevant non-English studies may therefore have been missed.

The systematic review strategies and solutions

Quality of life is a fundamental issue for men experiencing infertility. It should be assessed during the initial infertility evaluation and, where necessary, interventions should be made to improve it. Researchers are encouraged to first explicate the concept of QOL in men with infertility using qualitative-quantitative methods and then to design and psychometrically evaluate a QOL instrument for this group. Each country should design a context-appropriate program to improve the quality of life of these men.

Data availability

All data related to this review are included in the results section of the manuscript. Further data are available from the corresponding author on request.

Abbreviations

WHO: World Health Organization

NOS: The Newcastle-Ottawa Scale

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

NR: Not reported

SF-36: The Health Survey Short Form

WHOQOL: World Health Organization Quality of Life Instruments

FertiQoL: The Fertility Quality of Life tool

Berek JS. Berek & Novak’s gynecology. Lippincott Williams & Wilkins; 2019.

Kiani Z, Simbar M, Hajian S, Zayeri F. The prevalence of depression symptoms among infertile women: a systematic review and meta-analysis. Fertility Res Pract. 2021;7(1):1–10.

World Health Organization. Infertility prevalence estimates: 1990–2021. 2023.

Santi D, Granata A, Simoni M. FSH treatment of male idiopathic infertility improves pregnancy rate: a meta-analysis. Endocr Connections. 2015;4(3):R46–58.

Hayden RP, Flannigan R, Schlegel PN. The role of lifestyle in male infertility: diet, physical activity, and body habitus. Curr Urol Rep. 2018;19(7):1–10.

Kiani Z, Simbar M, Hajian S, Zayeri F, Shahidi M, Saei Ghare Naz M, et al. The prevalence of anxiety symptoms in infertile women: a systematic review and meta-analysis. Fertility Res Pract. 2020;6(1):1–10.

Ilacqua A, Izzo G, Emerenziani GP, Baldari C, Aversa A. Lifestyle and fertility: the influence of stress and quality of life on male fertility. Reproductive Biology Endocrinol. 2018;16(1):1–11.

https://www.skillsyouneed.com/ips/relationship-skills.html. Accessed 14 September 2023.

Hasanpoor-Azghady SB, Simbar M, Abou Ali Vedadhir SAA, Amiri-Farahani L. The social construction of infertility among Iranian infertile women: a qualitative study. J Reprod Infertility. 2019;20(3):178.

Lawshe CH. A quantitative approach to content validity. Pers Psychol. 1975;28(4):563–75.

World Health Organization. The World Health Organization quality of life assessment (WHOQOL): development and general psychometric properties. Soc Sci Med. 1998;46(12):1569–85.

Chachamovich JR, Chachamovich E, Ezer H, Fleck MP, Knauth D, Passos EP. Investigating quality of life and health-related quality of life in infertility: a systematic review. J Psychosom Obstet Gynecol. 2010;31(2):101–10.

Luk BH-K, Loke AY. The impact of infertility on the psychological well-being, marital relationships, sexual relationships, and quality of life of couples: a systematic review. J Sex Marital Ther. 2015;41(6):610–25.

Almutawa YM, AlGhareeb M, Daraj LR, Karaidi N, Jahrami H, Karaidi NA. A systematic review and meta-analysis of the psychiatric morbidities and quality of life differences between men and women in infertile couples. Cureus. 2023;15(4).

Page MJ, Moher D, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD et al. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ. 2021;372.

Zhang L, Fu T, Yin R, Zhang Q, Shen B. Prevalence of depression and anxiety in systemic lupus erythematosus: a systematic review and meta-analysis. BMC Psychiatry. 2017;17(1):1–14.

Andrei F, Salvatori P, Cipriani L, Damiano G, Dirodi M, Trombini E, et al. Self-efficacy, coping strategies and quality of life in women and men requiring assisted reproductive technology treatments for anatomical or non-anatomical infertility. Eur J Obstet Gynecol Reproductive Biology. 2021;264:241–6.

Warchol-Biedermann K. The etiology of infertility affects fertility quality of life of males undergoing fertility workup and treatment. Am J Men’s Health. 2021;15(2):1557988320982167.

Asazawa K, Jitsuzaki M, Mori A, Ichikawa T, Shinozaki K. Effectiveness of a spousal support program in improving the quality of life of male patients undergoing infertility treatment: a pilot study. Int J Community Based Nurs Midwifery. 2020;8(1):23.

Cusatis R, Fergestrom N, Cooper A, Schoyer KD, Kruper A, Sandlow J, et al. Too much time? Time use and fertility-specific quality of life among men and women seeking specialty care for infertility. BMC Psychol. 2019;7(1):1–9.

Asazawa K, Jitsuzaki M, Mori A, Ichikawa T, Shinozaki K, Porter SE. Quality-of‐life predictors for men undergoing infertility treatment in Japan. Japan J Nurs Sci. 2019;16(3):329–41.

Shahraki Z, Afshari M, Ghajarzadeh M, Tanha FD. How different are men with infertility-related problems from fertile men in prevalence of depression, anxiety and quality of life? Maedica. 2019;14(1):26.

Jahromi BN, Mansouri M, Forouhari S, Poordast T, Salehi A. Quality of life and its influencing factors of couples referred to an infertility center in Shiraz, Iran. Int J Fertil Steril. 2018;11(4):293.

Goker A, Yanikkerem E, Birge O, Kuscu NK. Quality of life in Turkish infertile couples and related factors. Hum Fertility. 2018;21(3):195–203.

Kim JH, Shin HS, Yun EK. A dyadic approach to infertility stress, marital adjustment, and depression on quality of life in infertile couples. J Holist Nurs. 2018;36(1):6–14.

Casu G, Ulivi G, Zaia V, Fernandes Martins MdC, Parente Barbosa C, Gremigni P. Spirituality, infertility-related stress, and quality of life in Brazilian infertile couples: analysis using the actor‐partner interdependence mediation model. Res Nurs Health. 2018;41(2):156–65.

Maroufizadeh S, Hosseini M, Foroushani AR, Omani-Samani R, Amini P. The effect of depression on quality of life in infertile couples: an actor-partner interdependence model approach. Health Qual Life Outcomes. 2018;16(1):1–7.

Zurlo MC, Della Volta MFC, Vallone F. Predictors of quality of life and psychological health in infertile couples: the moderating role of duration of infertility. Qual Life Res. 2018;27(4):945–54.

Madero S, Gameiro S, García D, Cirera D, Vassena R, Rodríguez A. Quality of life, anxiety and depression of German, Italian and French couples undergoing cross-border oocyte donation in Spain. Hum Reprod. 2017;32(9):1862–70.

Agostini F, Monti F, Andrei F, Paterlini M, Palomba S, La Sala GB. Assisted reproductive technology treatments and quality of life: a longitudinal study among subfertile women and men. J Assist Reprod Genet. 2017;34:1307–15.

El Kissi Y, Amamou B, Hidar S, Idrissi KA, Khairi H, Ali BBH. Quality of life of infertile Tunisian couples and differences according to gender. Int J Gynecol Obstet. 2014;125(2):134–7.

Huppelschoten AG, Van Dongen A, Verhaak C, Smeenk J, Kremer J, Nelen W. Differences in quality of life and emotional status between infertile women and their partners. Hum Reprod. 2013;28(8):2168–76.

Onat G, Beji NK. Effects of infertility on gender differences in marital relationship and quality of life: a case-control study of Turkish couples. Eur J Obstet Gynecol Reproductive Biology. 2012;165(2):243–8.

Herrmann D, Scherg H, Verres R, Von Hagens C, Strowitzki T, Wischmann T. Resilience in infertile couples acts as a protective factor against infertility-specific distress and impaired quality of life. J Assist Reprod Genet. 2011;28(11):1111–7.

Bolsoy N, Taspinar A, Kavlak O, Sirin A. Differences in quality of life between infertile women and men in Turkey. J Obstetric Gynecologic Neonatal Nurs. 2010;39(2):191–8.

Chachamovich JL, Chachamovich E, Ezer H, Cordova FP, Fleck MM, Knauth DR, et al. Psychological distress as predictor of quality of life in men experiencing infertility: a cross-sectional survey. Reproductive Health. 2010;7(1):1–9.

Chachamovich J, Chachamovich E, Fleck M, Cordova FP, Knauth D, Passos E. Congruence of quality of life among infertile men and women: findings from a couple-based study. Hum Reprod. 2009;24(9):2151–7.

Drosdzol A, Skrzypulec V. Quality of life and sexual functioning of Polish infertile couples. Eur J Contracept Reproductive Health Care. 2008;13(3):271–81.

Rashidi B, Montazeri A, Ramezanzadeh F, Shariat M, Abedinia N, Ashrafi M. Health-related quality of life in infertile couples receiving IVF or ICSI treatment. BMC Health Serv Res. 2008;8(1):1–6.

Ragni G, Mosconi P, Baldini MP, Somigliana E, Vegetti W, Caliari I, et al. Health-related quality of life and need for IVF in 1000 Italian infertile couples. Hum Reprod. 2005;20(5):1286–91.

Boivin J, Takefman J, Braverman A. The fertility quality of life (FertiQoL) tool: development and general psychometric properties. Hum Reprod. 2011;26(8):2084–91.

Hsu P-Y, Lin M-W, Hwang J-L, Lee M-S, Wu M-H. The fertility quality of life (FertiQoL) questionnaire in Taiwanese infertile couples. Taiwan J Obstet Gynecol. 2013;52(2):204–9.

Maroufizadeh S, Ghaheri A, Amini P, Samani RO. Psychometric properties of the fertility quality of life instrument in infertile Iranian women. Int J Fertility Steril. 2017;10(4):371.

Asazawa K, Jitsuzaki M, Mori A, Ichikawa T, Shinozaki K, Yoshida A, et al. Validity and reliability of the Japanese version of the fertility quality of life (FertiQoL) tool for couples undergoing fertility treatment. Open J Nurs. 2018;8(9):616–28.

Gao M, Ji X, Zhou L, Zhang Z. AB084. The fertility quality of life (FertiQol) in Chinese infertile women. Translational Androl Urol. 2016;5(Suppl 1).

Ware JE Jr, Sherbourne CD. The MOS 36-item short-form health survey (SF-36): I. Conceptual framework and item selection. Med Care. 1992;30(6):473–83.

Jenkinson C, Stewart-Brown S, Petersen S, Paice C. Assessment of the SF-36 version 2 in the United Kingdom. J Epidemiol Community Health. 1999;53(1):46–50.

Alonso J, Prieto L, Anto J. The Spanish version of the SF-36 Health Survey (the SF-36 health questionnaire): an instrument for measuring clinical results. Medicina Clínica. 1995;104(20):771–6.

Shayan NA, Arslan UE, Hooshmand AM, Arshad MZ, Ozcebe H. The short form health survey (SF-36): translation and validation study in Afghanistan. East Mediterr Health J. 2020;26(8):899–908.

Loge JH, Kaasa S. Short form 36 (SF-36) health survey: normative data from the general Norwegian population. Scand J Soc Med. 1998;26(4):250–8.

Bjorner JB, Thunedborg K, Kristensen TS, Modvig J, Bech P. The Danish SF-36 Health Survey: translation and preliminary validity studies. J Clin Epidemiol. 1998;51(11):991–9.

Apolone G, Mosconi P. The Italian SF-36 Health Survey: translation, validation and norming. J Clin Epidemiol. 1998;51(11):1025–36.

World Health Organization. WHOQOL-BREF: introduction, administration, scoring and generic version of the assessment: field trial version, December 1996. World Health Organization; 1996.

Kim WH, Hahn SJ, Im HJ, Yang KS. Reliability and validity of the Korean World Health Organization Quality of Life (WHOQOL)-BREF in people with physical impairments. Annals Rehabilitation Med. 2013;37(4):488.

Yao G, Chung C-W, Yu C-F, Wang J-D. Development and verification of validity and reliability of the WHOQOL-BREF Taiwan version. J Formos Med Assoc. 2002;101(5):342–51.

Usefy A, Ghassemi GR, Sarrafzadegan N, Mallik S, Baghaei A, Rabiei K. Psychometric properties of the WHOQOL-BREF in an Iranian adult sample. Commun Ment Health J. 2010;46(2):139–47.

Jaracz K, Kalfoss M, Górna K, Bączyk G. Quality of life in Polish respondents: psychometric properties of the Polish WHOQOL–Bref. Scand J Caring Sci. 2006;20(3):251–60.

Khayata G, Rizk D, Hasan M, Ghazal-Aswad S, Asaad M. Factors influencing the quality of life of infertile women in United Arab Emirates. Int J Gynecol Obstet. 2003;80(2):183–8.

Li Y, Zhang X, Shi M, Guo S, Wang L. Resilience acts as a moderator in the relationship between infertility-related stress and fertility quality of life among women with infertility: a cross-sectional study. Health Qual Life Outcomes. 2019;17(1):1–9.

Dyer S, Chambers GM, Adamson GD, Banker M, De Mouzon J, Ishihara O, et al. ART utilization: an indicator of access to infertility care. Reprod Biomed Online. 2020;41(1):6–9.

Kiani Z, Fakari FR, Hakimzadeh A, Hajian S, Fakari FR, Nasiri M. Prevalence of depression in infertile men: a systematic review and meta-analysis. BMC Public Health. 2023;23(1):1972.

Widge A, Cleland J. The public sector’s role in infertility management in India. Health Policy Plann. 2009;24(2):108–15.

Kiani Z, Simbar M, Hajian S, Zayeri F. Quality of life among infertile women living in a paradox of concerns and dealing strategies: a qualitative study. Nurs Open. 2021;8(1):251–61.

De Berardis D, Mazza M, Marini S, Del Nibletto L, Serroni N, Pino M, et al. Psychopathology, emotional aspects and psychological counselling in infertility: a review. Clin Ter. 2014;165(3):163–9.

Starc A, Trampuš M, Pavan Jukić D, Grgas-Bile C, Jukić T, Polona Mivšek A. Infertility and sexual dysfunctions: a systematic literature review. Acta Clin Croatica. 2019;58(3):508–15.

Drosdzol A, Skrzypulec V. Evaluation of marital and sexual interactions of Polish infertile couples. J Sex Med. 2009;6(12):3335–46.

Wischmann T, Thorn P. (Male) infertility: what does it mean to men? New evidence from quantitative and qualitative studies. Reprod Biomed Online. 2013;27(3):236–43.

Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. Oxford University Press; 2015.

Polit D, Beck C. Essentials of nursing research: appraising evidence for nursing practice. Lippincott Williams & Wilkins; 2020.

Wu W, La J, Schubach KM, Lantsberg D, Katz DJ. Psychological, social, and sexual challenges affecting men receiving male infertility treatment: a systematic review and implications for clinical care. Asian J Androl. 2023;25(4):448–53.

Biggs SN, Halliday J, Hammarberg K. Psychological consequences of a diagnosis of infertility in men: a systematic analysis. Asian J Androl. 2024;26(1):10–9.

Acknowledgements

The authors would like to express their gratitude for the cooperation and assistance of the officials of the faculty, library, and computer unit at Shahid Beheshti University of Medical Sciences.

Funding

The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Author information

Authors and affiliations.

Midwifery and Reproductive Health Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Zahra Kiani

Midwifery and Reproductive Health Research Center, Department of Midwifery, School of Nursing and Midwifery, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Masoumeh Simbar

Department of Midwifery, School of Medicine, North Khorasan University of Medical Sciences, Bojnurd, Iran

Farzaneh Rashidi

Proteomics Research Center, Department of Biostatistics, Faculty of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Farid Zayeri

Department of Anesthesia and Operating Room, School of Nursing and Midwifery, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Homayoon Banaderakhsh

Contributions

ZK: project development, data collection, manuscript writing. MS: project administration, writing, review and editing, supervision. FR: project administration, writing, review and editing, supervision. FZ: project development, data collection, manuscript writing. HB: project development, data collection, manuscript writing. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Masoumeh Simbar .

Ethics declarations

Ethics approval and consent to participate, consent for publication.

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Kiani, Z., Simbar, M., Rashidi, F. et al. The quality of life of men experiencing infertility: a systematic review. BMC Public Health 24 , 1236 (2024). https://doi.org/10.1186/s12889-024-18758-6

Received : 21 September 2023

Accepted : 02 May 2024

Published : 05 May 2024

DOI : https://doi.org/10.1186/s12889-024-18758-6


Keywords

  • Quality of life
  • Infertility
  • Systematic review
