
What Is Peer Review? | Types & Examples

Published on December 17, 2021 by Tegan George. Revised on June 22, 2023.

Peer review, sometimes referred to as refereeing, is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is the extent to which the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process in which your peers review something you’ve written against a set of criteria or benchmarks from an instructor, then offer constructive feedback, compliments, or guidance to help you improve your draft.

Table of contents

  • What is the purpose of peer review?
  • Types of peer review
  • The peer review process
  • Providing feedback to your peers
  • Peer review example
  • Advantages of peer review
  • Criticisms of peer review
  • Other interesting articles
  • Frequently asked questions about peer review

What is the purpose of peer review?

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Types of peer review

Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymized) review. Here, the names of the reviewers are not known to the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymized comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymized) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that it mitigates the risk of prejudice on the reviewer’s side while protecting the integrity of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymized) review, where the identities of the author, reviewers, and editors are all anonymized, does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimizes potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymize everyone involved in the process.

Collaborative review

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimize back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Open review

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

The peer review process

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor, who can either:
      • Reject the manuscript and send it back to the author, or
      • Send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.


In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

Providing feedback to your peers

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarize the argument in your own words

Summarizing the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organized. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

Tip: Try not to focus too much on the minor issues. If the manuscript has a lot of typos, consider making a note that the author should address spelling and grammar issues, rather than going through and fixing each one.

The best feedback you can provide is anything that helps the author strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticized, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the “compliment sandwich,” where you “sandwich” your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive


Peer review example

Below is a brief annotated research example.

Influence of phone use on sleep

Studies show that teens in the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used to compare Group 1 and Group 2, and Group 1 and Group 3. The first t test showed no significant difference (p > .05) between the average number of hours of sleep for Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between the averages for Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).
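For readers curious how such an analysis is run in practice, here is a minimal sketch in Python using scipy. The data are randomly simulated to match the reported group means and standard deviations; the group size of 100 (300 teens split across 3 groups) follows from the example, but everything else is an illustrative assumption, and the simulated p-values will not necessarily reproduce those reported above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Simulated nightly sleep hours for the three groups, matching the
# reported means and standard deviations (n = 100 per group is assumed
# from the 300 recruited teens; real data would be used in practice).
group1 = rng.normal(loc=7.8, scale=0.6, size=100)  # no phone before bedtime
group2 = rng.normal(loc=7.0, scale=0.8, size=100)  # 1 hour of phone use
group3 = rng.normal(loc=6.1, scale=1.5, size=100)  # 3 hours of phone use

# Two independent-samples t tests, as described in the example write-up.
t12, p12 = stats.ttest_ind(group1, group2)
t13, p13 = stats.ttest_ind(group1, group3)

print(f"Group 1 vs Group 2: t = {t12:.2f}, p = {p12:.4g}")
print(f"Group 1 vs Group 3: t = {t13:.2f}, p = {p13:.4g}")
```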

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.

Advantages of peer review

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarized or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

Criticisms of peer review

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The more transparent double-blind system is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published. There is also a high risk of publication bias, where journals are more likely to publish studies with positive findings than studies with negative findings.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Discourse analysis
  • Cohort study
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Frequently asked questions about peer review

Peer review is a process of evaluating submissions to an academic journal. Using rigorous criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication. For this reason, academic journals are often considered among the most credible sources you can use in a research project, provided that the journal itself is trustworthy and well-regarded.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor, who can either:
      • Reject the manuscript and send it back to the author, or
      • Send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.


How to Write a Peer Review


When you write a peer review for a manuscript, what should you include in your comments? What should you leave out? And how should the review be formatted?

This guide provides quick tips for writing and organizing your reviewer report.

Review Outline

Use an outline for your reviewer report so it’s easy for the editors and author to follow. This will also help you keep your comments organized.

Think about structuring your review like an inverted pyramid. Put the most important information at the top, followed by details and examples in the center, and any additional points at the very bottom.


Here’s how your outline might look:

1. Summary of the research and your overall impression

In your own words, summarize what the manuscript claims to report. This shows the editor how you interpreted the manuscript and will highlight any major differences in perspective between you and the other reviewers. Give an overview of the manuscript’s strengths and weaknesses. Think about this as your “take-home” message for the editors. End this section with your recommended course of action.

2. Discussion of specific areas for improvement

It’s helpful to divide this section into two parts: one for major issues and one for minor issues. Within each section, you can talk about the biggest issues first or go systematically figure-by-figure or claim-by-claim. Number each item so that your points are easy to follow (this will also make it easier for the authors to respond to each point). Refer to specific lines, pages, sections, or figure and table numbers so the authors (and editors) know exactly what you’re talking about.

Major vs. minor issues

What’s the difference between a major and a minor issue? Major issues should consist of the essential points the authors need to address before the manuscript can proceed. Make sure you focus on what is fundamental for the current study. In other words, it’s not helpful to recommend additional work that would be considered the “next step” in the study. Minor issues are still important but typically will not affect the overall conclusions of the manuscript. Here are some examples of what might go in the “minor” category:

  • Missing references (but depending on what is missing, this could also be a major issue)
  • Technical clarifications (e.g., the authors should clarify how a reagent works)
  • Data presentation (e.g., the authors should present p-values differently)
  • Typos, spelling, grammar, and phrasing issues

3. Any other points

Confidential comments for the editors.

Some journals have a space for reviewers to enter confidential comments about the manuscript. Use this space to mention concerns about the submission that you’d want the editors to consider before sharing your feedback with the authors, such as concerns about ethical guidelines or language quality. Any serious issues should be raised directly and immediately with the journal as well.

This section is also where you will disclose any potentially competing interests, and mention whether you’re willing to look at a revised version of the manuscript.

Do not use this space to critique the manuscript, since comments entered here will not be passed along to the authors. If you’re not sure what should go in the confidential comments, read the reviewer instructions or check with the journal first before submitting your review. If you are reviewing for a journal that does not offer a space for confidential comments, consider writing to the editorial office directly with your concerns.


Giving Feedback

Giving feedback is hard. Giving effective feedback can be even more challenging. Remember that your ultimate goal is to discuss what the authors would need to do in order to qualify for publication. The point is not to nitpick every piece of the manuscript. Your focus should be on providing constructive and critical feedback that the authors can use to improve their study.

If you’ve ever had your own work reviewed, you already know that it’s not always easy to receive feedback. Follow the golden rule: Write the type of review you’d want to receive if you were the author. Even if you decide not to identify yourself in the review, you should write comments that you would be comfortable signing your name to.

In your comments, use phrases like “the authors’ discussion of X” instead of “your discussion of X.” This will depersonalize the feedback and keep the focus on the manuscript instead of the authors.

General guidelines for effective feedback

Do

  • Justify your recommendation with concrete evidence and specific examples.
  • Be specific so the authors know what they need to do to improve.
  • Be thorough. This might be the only time you read the manuscript.
  • Be professional and respectful. The authors will be reading these comments too.
  • Remember to say what you liked about the manuscript!


Don’t

  • Recommend additional experiments or unnecessary elements that are out of scope for the study or for the journal criteria.
  • Tell the authors exactly how to revise their manuscript—you don’t need to do their work for them.
  • Use the review to promote your own research or hypotheses.
  • Focus on typos and grammar. If the manuscript needs significant editing for language and writing quality, just mention this in your comments.
  • Submit your review without proofreading it and checking everything one more time.

Before and After: Sample Reviewer Comments

Keeping in mind the guidelines above, how do you put your thoughts into words? Here are some sample “before” and “after” reviewer comments:

✗ Before

“The authors appear to have no idea what they are talking about. I don’t think they have read any of the literature on this topic.”

✓ After

“The study fails to address how the findings relate to previous research in this area. The authors should rewrite their Introduction and Discussion to reference the related literature, especially recently published work such as Darwin et al.”

✗ Before

“The writing is so bad, it is practically unreadable. I could barely bring myself to finish it.”

✓ After

“While the study appears to be sound, the language is unclear, making it difficult to follow. I advise the authors to work with a writing coach or copyeditor to improve the flow and readability of the text.”

✗ Before

“It’s obvious that this type of experiment should have been included. I have no idea why the authors didn’t use it. This is a big mistake.”

✓ After

“The authors are off to a good start; however, this study requires additional experiments, particularly [type of experiment]. Alternatively, the authors should include more information that clarifies and justifies their choice of methods.”

Suggested Language for Tricky Situations

You might find yourself in a situation where you’re not sure how to explain the problem or provide feedback in a constructive and respectful way. Here is some suggested language for common issues you might experience.

What you think: The manuscript is fatally flawed. What you could say: “The study does not appear to be sound” or “the authors have missed something crucial.”

What you think: You don’t completely understand the manuscript. What you could say: “The authors should clarify the following sections to avoid confusion…”

What you think: The technical details don’t make sense. What you could say: “The technical details should be expanded and clarified to ensure that readers understand exactly what the researchers studied.”

What you think: The writing is terrible. What you could say: “The authors should revise the language to improve readability.”

What you think: The authors have over-interpreted the findings. What you could say: “The authors aim to demonstrate [XYZ]; however, the data does not fully support this conclusion. Specifically…”

What does a good review look like?

Check out the peer review examples at F1000 Research to see how other reviewers write up their reports and give constructive feedback to authors.

Time to Submit the Review!

Be sure you turn in your report on time. Need an extension? Tell the journal so that they know what to expect. If you need a lot of extra time, the journal might need to contact other reviewers or notify the author about the delay.

Tip: Building a relationship with an editor

You’ll be more likely to be asked to review again if you provide high-quality feedback and if you turn in the review on time. Especially if it’s your first review for a journal, it’s important to show that you are reliable. Prove yourself once and you’ll get asked to review again!



Peer Review in Academia

Eva Forsberg, Lars Geschwind, Sara Levander & Wieland Wermke

First Online: 03 January 2022 (Open Access)

In this chapter, we outline the notion of peer review and its relation to the autonomy of the academic profession and the contract between science and society. This is followed by an introduction of some key themes regarding the practices of peer review. Next, we specify some reasons to further explore different practices of peer review. Briefly, the state of the art is presented. Finally, the structure of this volume and its individual contributions are presented.



Keywords: peer review, scientific communication

Introduction

Over the past few decades, peer review has become an object of great professional and managerial interest (Oancea, 2019) and, increasingly, academic scrutiny (Bornmann, 2011; Grimaldo et al., 2018). Nevertheless, calls for further research are numerous (Tennant & Ross-Hellauer, 2020). This volume is in answer to such interest and appeals. We aim to present a variety of peer-review practices in contemporary academic life as well as the principled foundation of peer review in scientific communication and authorship. This volume is unique in that it covers many different practices of peer review and their theoretical foundations, providing both an introduction into the very complex field and new empirical and conceptual accounts of peer review for the interested reader. The contributions are produced by internationally recognized scholars, almost all of whom participated in the conference ‘Scientific Communication and Gatekeeping in Academia in the 21st Century’, held in 2018 at Uppsala University, Sweden. The overall objective of this volume is explorative; framings relevant to the specific contexts, practices and discourses examined are set by the authors of each chapter. However, some common conceptual points of departure may be laid down at the outset.

Peer review is a context-dependent, relational concept that is increasingly used to denote a vast number of evaluative activities engaged in by a wide variety of actors both inside and outside of academia. By peer review, we refer to peers’ assessments and valuations of the merits and performances of academics, higher education institutions, research organizations and higher education systems. Mostly, these activities are part of more encompassing social evaluation practices, such as reviews of manuscripts, grant proposals, tenure and promotion and quality evaluations of institutions and their research and educational programmes. Thus, scholarly peer review comprises evaluation practices within both the wider international scientific community and higher education systems. Depending on differences related to scientific communities and national cultures, these evaluations may include additional gatekeepers, internal as well as external to academia, and thus the role of the peer may vary.

The roots of peer review can be found in the assessment practices of reviewers and editors of scholarly journals in deciding on the acceptance of papers submitted for publishing. Traditionally, only peers (also known as referees) with recognized scholarly standing in a relevant field of research were acknowledged as experts (Merton, 1942/1973). Due to the differentiation and increased use of peer review, the notion of a peer employed in various evaluation practices may be extended. Who qualifies as an expert in different peer-review practices and with what implications are empirical issues.

Even though peer review is a familiar phenomenon in most scholarly evaluations, there is a paucity of studies on peer review within the research field of evaluation. Peer review has, however, been described as the most familiar collegial evaluation model, with academic research and higher education as its paradigm area of application and with an ability to capture and judge qualities as its main advantage (Vedung, 2002). Following Scriven (2003), we define evaluation as a practice ‘determining the merit, worth or significance of things’ (p. 15). Scriven (1980) identifies four steps involved in evaluation practices, which are also frequently used in peer review, either implicitly enacted and negotiated or explicitly stated (Ozeki, 2016). These steps concern (1) the criteria of merit, that is, the dimensions of an object being evaluated; (2) the standards of merit, that is, the level of performance in a given dimension; (3) the measuring of performance relative to standards; and (4) a value judgement of the overall worth.
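To make Scriven’s four steps concrete, here is a minimal illustrative sketch in Python. The rubric, criterion names, standards and scores are invented for illustration only; they come neither from Scriven nor from this chapter, and real peer judgement is far more holistic than the mechanical rule shown here.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str        # (1) a criterion of merit: a dimension of the object evaluated
    standard: float  # (2) a standard of merit: required performance on that dimension

# Hypothetical criteria and standards for judging a manuscript (scores out of 5).
criteria = [
    Criterion("originality", standard=3.0),
    Criterion("rigour", standard=4.0),
    Criterion("clarity", standard=3.0),
]

# (3) Measured performance of one manuscript relative to the standards.
scores = {"originality": 4.0, "rigour": 3.5, "clarity": 4.5}

# (4) An overall value judgement. Here a deliberately crude rule: every
# standard must be met for a positive recommendation.
meets_all = all(scores[c.name] >= c.standard for c in criteria)
print("Overall judgement:", "meets the standards" if meets_all
      else "falls short on at least one dimension")
```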

Consequently, the notion of peer review refers to evaluative activities in academia conducted by equals that distribute merit, value and worth. In these processes of selection and legitimation, issues referring to criteria, standards, rating and ranking are significant. Often, peer reviews are embedded in wider evaluation practices of research, education and public outreach. To capture contemporary evaluations of academic work, we will include a number of different review practices, including some in which the term peer is employed in a more extended sense.

The Many Face(t)s of Peer-Review Practices

Depending on the site in which peer review is used, the actors involved differ, as do their roles. The same applies to potential guidelines, purposes, discourses, use of professional judgement and metrics, processes and outcome of the specific peer-review practice. These are all relative to the site in which the review is used and will briefly be commented upon below.

The Interplay of Primary and Secondary Peer Review

It is possible to make a distinction between primary and secondary peer reviews (British Academy, 2007). As stated, the primary role of peer review is to assess manuscripts for publishing, followed by the examination and judgement of grant applications. Typically, many other peer-review practices, so-called secondary peer review, involve summaries of outcomes of primary reviews. Thus, we might view primary and secondary reviews as folded into each other, where, for example, reviews of journal articles are prerequisite to later evaluation of the research quality of an institution, in recruitment and promotion, and so forth (Helgesson, 2016). Hence, the consequences of primary reviews can hardly be overstated.

Traditionally, both forms of primary peer review (assessment of manuscripts and grant applications) are ex ante evaluations; that is, they are conducted prior to the activity (e.g. publishing and research). With open science, open access journals and changes in the transparency of peer review, open and public peer reviews have partly opened the black box of reviews and the secrecy of the process and its actors (Sabaj Meruane et al., 2016). Accordingly, publishing may include both ex ante and ex post evaluations. These forms of evaluation can also be found among secondary reviews, with degree-awarding accreditation an example of the former and reviews of disciplines an example of the latter.

Sites and Reviewer Tasks and Roles

Without being exhaustive, we can list a number of sites where peer review is conducted as part of more comprehensive evaluations: international, regional and national higher education agencies conduct accreditation, quality audits and evaluations of higher education institutions; funding agencies distribute grants for projects and fellowships; higher education institutions evaluate their research, education and public outreach at different levels and assess applications for recruitment, tenure and promotion; the scientific community assesses manuscripts for publication, evaluates doctoral theses and conference papers and allocates awards. The evaluation roles are concerned with the provision of human and financial resources, the evaluation of research products and the assessment of future strategies as a basis for policy and priorities. All of these activities are regularly performed by researchers and interlinked in an evaluation spiral in which the same research may be reviewed more than once (Langfeldt & Kyvik, 2015). If we consider valuation and assessment more generally, the list can be extended almost infinitely, with supervision and seminar discussions being typical activities in which valuation plays a central part. Hence, scholars are accustomed to being assessed and to evaluating others.

The role and the task of the reviewer differ also in relation to whether the act of reviewing is performed individually, in teams or in a blending of the two forms. In the evaluation of research grants, the latter is often the case, with reviewers first individually rating or ranking the applications, followed by panel discussions and joint rankings as bases for the final decision made by a committee. In peer review for publishing, there might be a desk rejection by the editor, but if not, two or more external reviewers assess a manuscript and recommend that the editor accept, revise or reject it. It is then up to the editor to decide what to do next and to make the final decision. The process and the expected roles of the involved editor, reviewer and authors may vary depending on whether it is a private publisher or a journal linked to a scientific association, for example. Whether the reviewer should be considered an advisor, an independent assessor, a juror or a judge depends on the context and the task set for the reviewer within the specific site and its policies and practices as well as on various praxes developed over time (Tennant & Ross-Hellauer, 2020).

Power-making in the Selection of Expertise

The selection process is at the heart of peer review. Through valuations and judgements, peers are participants in decisions on inclusion and exclusion: What project has the right qualities to be allocated funding? Which paper is good enough to be published? And who has the right track record to be promoted or offered a fellowship? When higher education institutions and scholars increasingly depend on external funding, peer review becomes key in who gets an opportunity to conduct research and enter or continue a career trajectory as a researcher and, in many systems, a higher education teacher. In other words, peer review is a cornerstone of the academic career system (Merton, 1968; Boyer, 1990) and heavily influences what kinds of scientific knowledge will be furthered (Lamont, 2009; Aagaard et al., 2015).

The interaction involved in peer review may be remote, online or local, including face-to-face collaboration, and it may involve actors with different interests. Moreover, interaction may be extended to the whole evaluation enterprise. For example, evaluations of higher education institutions and their research and education often include members of national agencies, scholarly experts and external stakeholders. Scholarly experts may be internal or external to the higher education institutions and of lower, comparable or higher rank than the subjects of evaluation, and reviewers may be blind or known to those being evaluated and vice versa. Scholarly expertise may also refer to a variety of specialists, for example, to scholars with expertise in a specific research topic, in evaluation technology, in pedagogy or public outreach. A more elaborated list of features to be considered in the allocation of experts to various review practices can be found in a peer-review guide by the European Science Foundation (2011). At times the notion of peer is extended beyond the classical idea to one with demonstrated competence to make judgements within a particular research field. Who qualifies as a reviewer is contingent on who has the authority to regulate the activity in which the evaluation takes place and who is in the position to suggest and, not least, to select reviewers. This is a delicate issue, imbued with power, and one that we need to further explore, preferably through comparative studies involving different peer-review practices in varying contexts.

Acting as a peer reviewer has become a valuable asset in the scholarly track record. This makes participating as a reviewer important for junior researchers. Therefore, such participation not only is a question of being selected but also increasingly involves self-election. More opportunities are provided by ever more review activities and the prevalence of evaluation fatigue among senior researchers. The limited credit, recognition and rewards for reviewers may also contribute to limited enthusiasm amongst seniors (Research Information Network CIC, 2015). Moreover, several tensions embedded in review practices can add to the complexity of the process and influence the readiness to review. The tensions involve potential conflicts between the role of the reviewer or evaluator and the researcher’s role: time conflict (research or evaluate); peer expertise versus impartiality (especially qualified colleagues are often excluded under conflict-of-interest rules); neutral judge versus promoter of research interests (a double expectation); deviant assessments versus unanimous conclusions; peer review versus quantitative indicators; and scientific autonomy versus social responsibility (Langfeldt & Kyvik, 2015). Despite noted challenges, classical peer review is still the key mechanism by which professional autonomy and the guarding of research quality are achieved. Thus, it is argued that it is an academic duty and an obligation, in particular for senior scholars, to accept tasks as reviewers (Caputo, 2019). Nevertheless, the scholarly exchange value should be addressed in future discussions of gatekeeping in academia.

The Academic Genres of Peer Review

Peer reviews are rooted in more encompassing discourses, such as those concerning norms of science, involving notions of quality and excellence founded in different sites endogenous or exogenous to science. Texts subject to or employed or produced in peer-review practices represent a variety of academic genres, including review reports, editors’ letters, applicants’ proposals, submitted manuscripts, guidelines, applicant dossiers and curriculum vitae (CVs), testimonials, portfolios and so on. Different genres are interlinked in chains, creating systems of genres. A significant aspect of systems is intertextuality, or the fact that texts within a specific system refer to, anticipate and shape each other. The interdependence of texts is about how they relate to situational and formal expectations, in this case, of the specific peer-review practice. It is also about how one text makes references to another text; for example, review reports often refer to guidelines, calls, announcements or texts in application dossiers. The interdependence can also be seen in how the texts interact in academic communities (Chen & Hyon, 2005): who the intended readers of a given text are, what the purpose of the text is, how the text is used in the review and decision process, and so on. Conclusively, the genre systems of peer review vary depending on epistemic traditions, national culture and regulations of higher education systems and institutions.

Given this diversity, we are dealing with a great number of genre systems involving different kinds of texts and interrelations embedded in power and hierarchies. A significant feature of peer-review texts as a category is the occluded genres, that is, genres that are more or less closed to the public (Swales, 1996). Depending on the context, the list of occluded genres varies. For example, the submission letters, submitted manuscripts, review reports and editor–author correspondence involved in the eventual publication of articles in academic journals are not made publicly available, while in the context of recruitment and promotion, occluded genres include application letters, testimonials and evaluation letters to committees. And for research grants, the research proposals, individual review reports and panel reports tend to remain entirely internal to the grant-making process. However, in some countries (e.g. in Sweden, due to the principle of openness, or offentlighetsprincipen), several of these types of texts may be publicly available.

The request for open science has also initiated changes to the occluded genres of peer review. After a systematic examination, Ross-Hellauer (2017) proposed ‘open peer review’ as an umbrella term for a variety of review models in line with open science, ‘including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process’ (p. 1). From 2005 onwards, there has been a marked upswing in such definitions. This correlates with the rise of the openness agenda, most visible in the review of journal articles and within STEM and interdisciplinary research.

Time and space are central categories in most peer-review genres and the systems to which they belong. While review practices often look to the past, imagined futures also form the background for valuation. The future orientation is definitely present in audits, in assessments of grant proposals and in reviews of candidates’ track records. The CV, a key text in many review practices, may be interpreted in terms of an applicant’s career trajectory, thus emphasizing how temporality and spatiality interact within a narrative infrastructure, for example how scholars move between different academic institutions over time (Hammarfelt et al., 2020). Texts may also feed both backwards and forwards in the peer-review process. For example, guidelines and policy on grant evaluations and distribution may be negotiated and acted upon by both applicants and reviewers. Candidates may also address reviewers as significant others in anticipating the forthcoming reviewer report (Serrano Velarde, 2018). These expectations on the part of the applicant can include prior experiences and perceptions of specific review practices, processes and outcomes in specific circumstances.

Turning to reviewer reports, it is worth noting that they are often written in English, especially those assessing manuscripts, and frequently those on research proposals and recruitment applications as well. Indirect speech is common within this genre and can be linked to the report’s bearing on the identity of the person being evaluated (Paltridge, 2017). Two key notions, politeness and face, have been used to describe the evaluative language of review reports and how reviewers interact with evaluees. There are differences related to content and to whether a report is positive or negative in its overall evaluation. For example, reviewers of manuscripts invoke certain structures of knowledge, using different structures when suggesting rejection, revision or acceptance and when asking for changes. To maintain social relationships, reviewers draw on different politeness strategies to save an author’s face. Strategies employed may include ‘apologizing (‘I am sorry to have to’) and impersonalizing an issue (‘It is generally not acceptable to’)’ (Paltridge, 2017, p. 91). Largely, requests for changes are made as directions, suggestions, clarifications and recommendations. Thus, for both evaluees and researchers of peer review, particular genre competences are required to decode and act upon the reports. For beginning scholars unfamiliar with the world of peer review, or for scholars from a different language or cultural background than the reviewer, it may be challenging to interpret, negotiate and act upon reviewer reports.

Criteria and the Professional Judgement of Quality

According to the classical idea of peer review, only a peer can properly recognize quality within a given field. Although shortcomings have been emphasized in both research and scholarly debate regarding the trustworthiness, efficacy, expense, burden and delay of peer review (Bornmann, 2013; Research Information Network CIC, 2015), many critics still regard peer review as the least-bad system in the absence of viable alternatives. Overall, scholars stand behind the idea of peer review even though they often have concerns about its various practices (Publons, 2018).

Calls for accountability and social relevance have been made, and there have been requests for formalization, standardization, transparency and openness (Tennant & Ross-Hellauer, 2020). While formalization of peer review refers to rules, including the development of policy and guidelines for different forms of peer review, standardization emphasizes the setting of standards through the employment of specific tools for evaluation (i.e. criteria and indicators used for assessment, rating or ranking and decision-making). An interesting question is whether standardization will affect the extent to which, and the way, peers are used in different sites of evaluation (Westerheijden et al., 2007). We may add: who will be considered a peer, and what will the matching between the evaluator and the evaluation object or evaluee look like?

It is widely acknowledged that criteria are an essential element of any procedure for judging merit (Scriven, 1980; Hug & Aeschbach, 2020). This is the case regardless of whether criteria are determined in advance, and whether they are explicitly expressed or implicitly manifested in the process of assessment. The notion of peer review has been supplemented in various ways, implying changes to the practice and records of peer review. Increasingly, review reports combine classical peer review with metrics of different kinds. Accordingly, quantitative measures, taken as proxies for quality, have entered practices of peer review. Today, blended forms are rather common, especially in evaluations of higher education institutions, where narrative and metric summaries often supplement each other and inform a judgement.

In general, quantitative indicators (e.g. number of publications, journal impact factors, citations) are increasingly applied, even though their capacity to capture quality is questioned, especially within the social sciences, humanities and the arts. Among the main reasons given for the rapid growth of demands for metrics is that classical peer review alone cannot meet the quest for accountability and transparency, while bibliometric evaluations may appear cheaper, more objective and more legitimate. Moreover, metrics may give an impression of accessibility for policy and management (Gläser & Laudel, 2007; Söderlind & Geschwind, 2019). However, tensions between classical peer review and quantitative indicators have been identified and are hotly debated (Langfeldt & Kyvik, 2011). The dramatic expansion of the use of metrics has brought with it gaming and manipulation practices to enhance reputation and status, ‘including coercive citation, forced joint authorship, ghostwriting, h-index manipulation, and many others’ (Oravec, 2019, p. 859). Warnings are also issued against the use of bibliometric indicators at the individual level. A combination of peer narratives and metrics is, however, considered a possible way to improve an overall evaluation, given due awareness of the limitations of quantitative data as proxies for quality.

The literature on peer review has focused more on the weighting of criteria than on the meaning referees assign to the criteria they use (Lamont, 2009 ). Even though some criteria, such as originality, trustworthiness and relevance, are frequently used in the assessment of academic work and proposals, our knowledge of how reviewers ascribe value to, assess and negotiate them remains limited (Hug & Aeschbach, 2020 ). However, Joshua Guetzkow, Michèle Lamont and Grégoire Mallard ( 2004 ) show that panellists in the humanities, history and the social sciences define originality much more broadly than what is usually the case in the natural sciences.

Criteria, indicators and comparisons are unstable: they are situational and dependent on context and a referee’s personal experience of scientific work (Kaltenbrunner & de Rijcke, 2020). We are dealing here with assessments made in situations of uncertainty, of entities not easily judged or compared. The concept of judgement devices has been used to capture how reviewers delegate the judgement of quality to proxies, reducing the complexity of comparison. For example, the employment of central categories in a CV, which reference both temporal and spatial aspects of scholars’ trajectories, makes comparison possible (Hammarfelt, 2017). In a similar way, the theory of anchoring effects has been used to explore reviewers’ abilities to discern, assess, compare and communicate what scientific quality is or may be (Roumbanis, 2017). Anchoring effects have their roots in heuristic principles used as shortcuts in everyday problem solving, especially when a judgement involves intuition. Reduction of complexity is also visible in how reviewers proceed: they first apply criteria with an eliminatory function, and then search for positive signs of evidence in order to make a final judgement (Musselin, 2002). Depending on context and situation, reviewers tend to select different criteria from a repertoire of criteria (Hug & Aeschbach, 2020).

On the one hand, the complexity of academic evaluations requires professional judgement: scholars sufficiently grounded in a field of research and higher education are entrusted with interpreting and negotiating criteria, indicators and merits. Still, the practice of peer review has to be safeguarded against the risk of conservatism as well as epistemic and social biases (Kaltenbrunner & de Rijcke, 2020 ). On the other hand, changes in the governance of higher education institutions and research, as well as marketization, managerialism, digitalization and calls for accountability, have increased the diversity of peer review and introduced new ways to capture and employ criteria and indicators. The long-term consequences of these changes need to be monitored, not least because of how they challenge the self-regulation and autonomy of the academic profession (Oancea, 2019 ).

How to understand, assess, measure and value quality in research, in the career of a scholar or in the performance of a higher education institution are complex issues. Turning to the notion of quality in a general sense will not solve the problem, since it has many facets and has been perceived in many different ways, including as fitness for purpose, as eligibility, as excellence and as value for money (Westerheijden et al., 2007), all notions in need of contextualization and further elaboration to be meaningful (see also Elken & Wollscheid, 2016).

When presenting a framework for studying research quality, Langfeldt et al. (2020) identify three key dimensions: (1) quality notions originating in research fields and in research policy spaces; (2) three attributes of good research, drawn from existing studies, namely originality/novelty, plausibility/reliability and value or usefulness; and (3) five sites where notions of research quality emerge, are contested and are institutionalized, comprising researchers, knowledge communities, research organizations, funding agencies and national policy arenas. This multidimensional framework and its components highlight issues that are especially relevant to studies of peer review. The sites identify arenas where peer review functions as a mechanism through which notions of research quality are negotiated and established. The consideration of notions of quality endogenous and exogenous to scientific communities, and of the various attributes of good research, can also be directly linked to referees’ distribution of merit, value and worth in peer-review practices under changing circumstances.

The Autonomy of a Profession and a Challenged Contract

Historical analyses link peer review to the distribution of authority and the negotiations and reformulations of the public status of science (Csiszar, 2016 ). At stake in renegotiations of the contract between science and society are the professional autonomy of scholars and their work. Peer review is contingent on the prevailing contract and is critical in maintaining the credibility and legitimacy of research and higher education (Bornmann, 2011 ). The professional autonomy of scholars raises the issue of self-regulation. Its legitimacy ultimately comes down to who decides what, particularly concerning issues of research quality and scientific communication (Clark, 1989 ).

Over the past 40 years, major changes have taken place in many OECD (Organisation for Economic Co-operation and Development) countries in the governance of public science and higher education, changes which have altered the relative authority of different groups and organizations (Whitley, 2011). The former ability of scientific elites to exercise endogenous control over science has, particularly since the 1960s, become more contested and subject to public policy priorities. A more heterogeneous and complex higher education system has been accompanied by exogenous governance mechanisms, formal rules and procedures, and the institutionalization of quality assurance procedures and performance monitoring. Expectations of excellence, competition for resources and reputation, and the coordination of research priorities and intellectual judgement have changed across disciplinary and national boundaries to varying degrees (Whitley, 2011). These developments can be seen as expressions of the evaluative state (Neave, 1998), the audit society (Power, 1997) and as part of an institutionalized evaluation machinery (Dahler Larsen, 2012).

Changes in the principles of governance are underpinned by persistent tensions around accountability, evaluation, measurement, demarcation, legitimation, agency and identity in research (Oancea, 2019). The weakened autonomy of academic fields has added new evaluative procedures and institutions alongside peer review, the primary form of recognition. Academic evaluations, such as accreditations, audits and quality assurance, and evaluations of research performance and social impact, now exist alongside more traditional forms (Hansen et al., 2019).

Higher education institutions worldwide have experienced the emergence and manifestations of the quality movement, which is part of interrelated processes such as massification, marketization and managerialism. Through organizations at international, national and institutional levels, a variety of technologies have been introduced to identify, measure and compare the performance of higher education institutions (Westerheijden et al., 2007). These developments have emphasized external standards and the use of bibliometrics and citation indexes, which have been criticized for rendering evaluations more mechanical (Hamann & Beljean, 2017). In most cases, peer review, often in combination with self-evaluation, is also employed in the more recently introduced forms of evaluation (Musselin, 2013). Accordingly, peer review, in one form or another, remains a key mechanism monitoring the flow of scientific knowledge, ideas and people through the gates of the scientific community and higher education institutions (Lamont, 2009).

Autonomy may be defined as ‘the quality or state of being self-governing’ (Ballou, 1998, p. 105). Autonomy is thus the capacity of an agent to determine their own actions through independent choice, in this case within a system of principles and laws to which the agent is dedicated. The academic profession governs itself by controlling its members. Academics control academics, peers control peers, in order to maintain the status and indeed the autonomy of the profession. Fundamentally, professionals are licensed to act within a valuable knowledge domain. By training, examination and acknowledgement, professionals are legitimated (at least politically) as experts in their domain. The rationale of licence and the esotericism of professional knowledge raise the question of how professionals and their work can be evaluated and by which standards. There are rules of conduct and ethical norms, but these are ultimately owned and controlled by the academic profession. From this perspective, we can understand peer review as the structural element that holds academia together.

The increase of peer-review practices in academia can be compared with developments in other professions, which also must work harder than before to maintain their status and autonomy. In many cases, their competence and quality must be displayed much more visibly today. Pluralism and individualism in society have also resulted in a plurality of expertise and a decline of mono-vocational functional systems. A mystique of academic knowledge (as in ‘the research says’) is not as acceptable to public opinion today as it once was. The term ‘postmodern professionals’ has been suggested to describe experts who expend more effort on the dramaturgy of their competences than people in their positions might have in the past, in order to generate trust among clients and in society (Pfadenhauer, 2003). The media make professional competences, performances and failures much more visible and contribute to trust or mistrust in professions. In a pluralist society, extensive use of peer review may indeed function as a strategy to make quality visible and secure the autonomy of the academic profession, which owns the practice of peer review and knows how to adjust it to its needs.

While most academic evaluations exist across scientific communities and disciplines, the criteria of evaluation can differ substantially between and within communities (Hamann & Beljean, 2017). Thus, research on peer review needs to take disciplinary and interdisciplinary similarities and differences seriously. Obviously, the impact of the intellectual and social organization of the sciences (Whitley, 1984), the mode of research (Nowotny et al., 2001), the tribes and territories (Becher, 1989; Becher & Trowler, 2001; Trowler et al., 2014) and the epistemic cultures (Knorr Cetina, 1999) needs to be better represented in future research. Examinations of peer review may then also contribute to a fuller understanding of the contract between science and society and of the challenges directed towards the professional autonomy of academics.

Why Study Peer Review?

As an ideal, peer review has been described as ‘the linchpin of science’ (Ziman, 1968, p. 148) and a key mechanism in the distribution of status and recognition (Merton, 1968), as well as part and parcel of collegiality and meritocracy (Cole & Cole, 1973). Above all, peer review is considered a gatekeeper of the quality of science, both in various specialized knowledge communities and in research policy spaces (Langfeldt et al., 2020). Peer review is often taken as a hallmark of quality, expected both to guard and to enhance quality. Early on, peer review, or refereeing, was linked to institutionalized moral imperatives. Perhaps best known are those formulated in the Ethos of Science by Merton (1942/1973): communism, universalism, disinterestedness and organized scepticism, or CUDOS. These norms and their counter-norms (individualism, particularism, interestedness and dogmatism) have frequently been the focus of peer-review studies. Norms on how scientific work is or should be carried out and how researchers should behave reflect the purpose of science and ideas of how science should be governed, and are thus directly linked to the autonomy of the academic profession (Panofski, 2010). In short, research into peer review goes to the very heart of academia and its relation to society. This calls for scrutiny.

With changing circumstances, peer review is employed more often, and its purposes, forms and functions are increasingly diversified. Today, academic evaluations permeate every corner of the scientific enterprise, and the traditional form of peer review, rooted in scientific communication, has migrated. Thus, peer review has come to be undertaken in all key aspects of academic life: research, teaching, service and collaboration with society (Tennant & Ross-Hellauer, 2020). Increasingly, peer review is regarded as the standard, not only for published scholarship but also for academic evaluations in general. Ideally, peer review is considered to guarantee quality in research and education while upholding the norms of science and preserving the contract between science and society. The diversity and migration of review practices and their consequences should be followed closely.

In the course of a career, scholars are recurrently involved as both reviewers and reviewees, and this is becoming more and more frequent. As stated in a report on peer review by the British Academy (2007), the principle of ‘judge not, that ye be not judged’ is impossible to follow in academic life. On the contrary, the selection of work for publishing, the allocation of grants and fellowships, decisions on tenure and promotion, and quality evaluations all depend upon the exercise of judgement. ‘The distinctive feature of this academic judgement is that it is reciprocal. Its guiding motto is: judge only if you in turn are prepared to be judged’ (British Academy, 2007, p. vii).

Indeed, we lack comprehensive statistics on peer review and the involvement of scholars in its diverse practices. However, investigations like the Wiley study (Warne, 2016) and Publons’ (2018) Global State of Peer Review, both focused on reviews of manuscripts, indicate the widespread and increasing use of peer review. In 2016, roughly 2.9 million peer-reviewed articles were indexed in Web of Science, and a total of 2.5 million manuscripts were rejected. The number of reviews carried out each year is estimated at 13.7 million. Together, the continuous rise of submissions and the increase in evaluations using peer review expose the system and its actors to ever more pressure.
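To put these figures in proportion, a back-of-envelope check of our own (not a calculation reported by Publons): the roughly 2.9 million published and 2.5 million rejected manuscripts together represent some 5.4 million reviewed submissions, so an estimate of 13.7 million reviews implies an average of

$$ \frac{13.7}{2.9 + 2.5} \approx 2.5 \ \text{reviews per submission}, $$

assuming, simplifying, that every counted submission was peer reviewed at least once.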

Peer-review activities produce an incredible amount of talk and gossip in academia. In particular, academic appointments have contributed to the organizational ‘sagas’ described by Clark (1972). In systems where fierce competition for a limited number of chairs (professorships) is the norm, much is at stake. A single decision, one way or another, can make or break an academic career, and the same is true of recurring judgements and decisions on tenure and promotion (Gunneriusson, 2002). Research on the emotional and socio-psychological consequences of peer rejection or low ratings and rankings is seldom conducted. While rejection may function as either a threat or a challenge to scholarly identities, Horn (2016) argues that rejection is a source of stigmatization pervading the entire academic community. In a similar vein, scholars have to adjust to the maxim of ‘publish or perish’ and the demands of reviewers, even when these go against their own convictions. Some researchers consider this a form of ‘intellectual prostitution’ (Frey, 2003), and reviewer fatigue is spreading through the scientific community. For example, it is widely recognized that editors sometimes have trouble finding reviewers. Clearly, peer review has become a concern for scholars of all kinds, affecting their identities, everyday practices and careers.

The mundane reality of peer-review practice is quite different from the ideology of peer review, and our knowledge of it is rather restricted and fragmented (Grimaldo et al., 2018). The roots of peer review can be traced through seventeenth-century book censorship, the development of academic journals in the eighteenth century and the gatekeeping of scientific communication. As a regular activity, however, peer review is a latecomer in the scientific community, and it is unevenly distributed across nations and disciplines (Biagioli, 2002). For example, publication practices, discourses and the lingua franca differ between knowledge communities. Traditional peer review is a more prominent feature of the natural sciences and medicine than of the humanities, the social sciences and the arts. This is also reflected in research on peer review. In a similar way, data show that US researchers supply by far the most reviews of manuscripts for journals, while researchers in China supply substantially fewer. Nevertheless, review output is increasing in all regions, especially in emerging regions (Publons, 2018).

Even though there are differences, peer review is a fundamental tool in the negotiation and establishment of a scholar’s merits and research, of higher education quality and of excellence. Peer review is also considered a tool to prevent misconduct, such as the fraudulent presentation of findings or plagiarism. Thus, peer review may fulfil functions of gatekeeping, maintenance and enhancement. Peer reviews can also be linked to struggles over which form of capital should be the gold standard and over gaining as much capital as possible (Maton, 2005). At stake is, on the one hand, scholastic capital and, on the other hand, academic capital linked to administrative power and control over resources (Bourdieu, 1996).

The introduction of ever new sites for peer review, the changing qualifications of reviewers and calls for open science, as well as the increased use of metrics, all increase the need for further research. Moreover, the cost of and the amount of time spent on different kinds of reviews, and their potential impact on the identity, recognition and status of scholars and higher education institutions, make peer review especially worthy of systematic study beyond professional narratives and anecdotes. Peer review has both advocates and critics, although the great majority of researchers are positive towards the idea of peer review. Many critics find peer review costly, time consuming, conservative and socially and epistemically biased. In sum, there are numerous reasons to study peer review. It is almost impossible to overstate the central role of peer review in the academic enterprise, yet the empirical evidence is inconclusive and the research field emergent and fragmented (Bornmann, 2011; Batagelj et al., 2017).

State of the Art of Research on Peer Review

There is a lack of consensus on what peer review is and on its purposes, practices, outcomes and impact on the academic enterprise (Tennant & Ross-Hellauer, 2020 ). The term peer review was relatively unknown before 1970. Referee was the more commonly applied notion, used primarily in relation to the evaluation of manuscripts and scientific communication (Batagelj et al., 2017 ). This lack of clarity has affected how the research field of peer review has been identified and described.

During the past few decades, a number of researchers have provided syntheses of research on peer review in the forms of quantitative meta- and network analyses as well as qualitative configurative analyses. Some are more general in character (Sabaj Meruane et al., 2016 ; Batagelj et al., 2017 ; Grimaldo et al., 2018 ), though the main focus is often research in the natural and medical sciences and peer review for publishing and, to some extent, for grant funding. Others are more concerned with either a specific practice of peer review or different critical topics. Below, we mainly use these recent systematic reviews to depict the research field of peer review, to identify the limits of our knowledge on the subject and to elaborate why we need to study it further.

Academic evaluations, like peer reviews, have been examined from a number of perspectives (Hamann & Beljean, 2017 ). From a functionalist approach, we can explore how well evaluative procedures serve their purposes—especially those of validity, reliability and fairness—and how well they handle various potential biases. The power-analytical perspective makes critical inquiries into dysfunctional effects of structural inequalities like nepotism and unequal opportunities for resource accumulation. The perspective on the performativity of evaluations and evaluative devices focuses on the organizational impact of the devices, on ranking and on the ways indicators incite strategic behaviour. The social-constructive perspective on evaluation emphasizes that ideas such as merits and originality are socially and historically context dependent. There is also a pragmatist perspective that stresses the situatedness of evaluative practices and interactions (e.g. how panellists reach consensus). More and more frequently used are analytical tools from the field of the sociology of valuation and evaluation, which emphasizes knowledge production as contextualization and the existence and impact of insecurities in the performative situations (Lamont, 2012 ; Mallard et al., 2009 ; Serrano Velarde, 2018 ). Some researchers highlight the variety of academic communities and the intradisciplinary, interdisciplinary and transdisciplinary aspects of research today as significant explanatory factors for evaluative practices (Hamann & Beljean, 2017 ). We may add changes in the governance of higher education institutions and research and the introduction of new evaluation practices as equally important (Whitley, 2011 ; Oancea, 2019 ).

In a network analysis of research on peer review from 1950 to 2016, Batagelj et al. (2017) identified 23,000 indexed records in Web of Science and, above all, a main corpus of 47 articles and books. These texts, which were cited in the most influential publications on peer review, focus on science, scholarship, systematic reviews, peers, peer reviews and quantitative and qualitative analysis. The most cited article allows for an expansion of this list to include the institutionalization of evaluation in science, open peer review, bias and the effects of peer review on the quality of research. Most items belonging to the corpus were published relatively early, with only a few published after the year 2000. Overview papers, however, were published more recently, mainly in the past decade.

The research field of peer review has been described as an emergent field marked by three development stages (Batagelj et al., 2017 ). The first stage, before 1983, includes seminal work mostly presented in social science and philosophy journals. Main topics include scientific productivity, bibliographies, knowledge, citation measures as measures of scientific accomplishment, scientific output and recognition, evaluations in science, referee systems, journal evaluations, the peer-evaluation system, review processes and peer-review practices. During the second stage, 1983–2002, biomedical journals were influential. Key topics focused on the effects of blinding on review quality, research into peer review, guidelines for peer reviewing, monitoring peer-review performance, open peer review, bias in the peer-review system, measuring the quality of editorial peer review, and the development of meta-analysis and systematic reviews approaches. Finally, in the third stage, 2003–2016, we find research on peer review mainly in specialized science studies journals such as Scientometrics . The most frequent topics include peer review of grant proposals, bias, referee selection and links between editors, referees and authors.

Another quantitative analysis (Grimaldo et al., 2018) of articles published in English from 1969 to 2015 and indexed in the citation database Scopus found very few publications before 1970, and fewer than around 100 per year until 2004. From 2004 to 2015, the numbers then increased rapidly, by 12% per year on average. Half the records were journal articles, books, chapters and conference papers; the rest were mostly editorial notes, commentaries, letters and literature reviews. Scholars from English-speaking countries, especially the United States, predominated, but authors from prominent European institutions were also found. The analysis identified a fragmented, potentially interdisciplinary research field, dominated by medicine, sociology and the behavioural sciences, with signs of uneven sharing of knowledge. The research was typically pursued in small collaborative networks. Articles on peer review were published mostly in JAMA , Behavioral and Brain Sciences and Scientometrics . Among the authors of the top five most influential articles we find Merton, Zuckermann, Horrobin, Bornmann and Siegelmann. Grimaldo et al.’s (2018) analysis revealed structural problems, such as difficulties in accessing data, partly due to confidentiality and a lack of interest from editorial boards, administrative bodies and funding agencies. More positively, the analysis pointed to digitalization and open science as favourable tools for increasing research, cooperation and knowledge sharing.
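To make the reported growth rate concrete, an illustrative calculation of our own (not a figure given by Grimaldo et al.): compounding 12% annual growth over the eleven years from 2004 to 2015 multiplies yearly output by

$$ 1.12^{11} \approx 3.5, $$

so a baseline of around 100 publications per year in 2004 would correspond to roughly 350 per year by 2015.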

In an overview (Sabaj Meruane et al., 2016 ) of empirical studies on peer-review processes, almost two thirds of the first-named authors had doctoral backgrounds in medicine, psychology, bibliometrics or scientometrics, and around one fifth in sociology of science or science and technology studies. There is definitely a lack of integration of other fields, such as those within the social sciences, the humanities and the arts and education in the study of peer-review processes. The following topics were empirically researched, in descending order: sociodemographic variables (83%), sociometric or scientometric data (47%), evaluation criteria (36%), bias (31%), rates of acceptance/rejection/revision (25%), predictive validity (24%), consensus among reviewers (17%) and discourse analysis of isolated or related texts (14%). The analysis indicates that ‘the texts interchanged by the actors in the process are not prominent objects of study in the field’ (Sabaj Meruane et al., 2016 , p. 188). Further, the authors identified a number of gaps in the research: The field conceives of peer review more as a system than as a process. Moreover, bibliometric studies constitute an independent field of empirical research on peer review. Only a few studies combine analysis of indicators with content or functional analysis. In a similar way, research on science production, reward systems and evaluation patterns rarely includes actual texts that are interchanged in the peer-review process. Discourse analysis, in turn, rarely uses data other than the reviewer report and socio-demographics. Due to ethical issues and confidentiality, discourse studies and text analyses of reviewer reports are less frequent.

It might be risky to state that peer review is an under-studied object of research, considering the vast number of publications devoted to the topic. Nevertheless, it appears that the field of peer-review research has yet to be fully defined, and empirical research in the field needs to be conducted more comprehensively. A common problem the authors consider important to examine is the consequences of the same actor being able to fulfil different roles (e.g. author, reviewer, editor) across individual reviews. Above all, the field requires not only further but also more comprehensive approaches, and the black box of peer review needs to be fully opened (Sabaj Meruane et al., 2016).

Among syntheses focusing on specific topics, relatively common are those on trustworthiness and bias, as well as on how criteria are employed, negotiated and ascribed meaning in various evaluation practices or in different disciplines. One review of the literature on peer review analyses the state of research on journal, fellowship and grant peer review, focusing on three quality criteria: reliability, fairness and predictive validity (Bornmann, 2011). The interest was directed towards the norms of science: ensuring that results were not incidental, that certain groups or individuals were not favoured or disadvantaged, and that the selection of publications and scholars was aligned with scientific performance. Predictive validity was far less studied in primary research than reliability and fairness. Another overview articulates and critiques conceptions and normative claims of bias (Lee et al., 2013). The authors raise questions about existing norms and conclude that peer review is social and that a diversity of norms and opinions among communities and referees may be desirable and beneficial. Bias is also studied in research on who gets tenure, with respect to both meritocratic and non-meritocratic factors such as ascription and social and academic capital (Lutter & Schröder, 2016). These authors show that network size, individual reputation and gender matter.

Epistemic differences point to the necessity of studying peer review within a variety of disciplines and in transdisciplinary contexts. An interview study of panellists evaluating fellowship applications within the social sciences and humanities shows that evaluators generally draw on four epistemological styles: constructivist, comprehensive, positivist and utilitarian (Mallard et al., 2009). Moreover, peer reviewers employ the epistemological style most appropriate to the field of the proposal under review. In the future, more attention has to be paid to procedural fairness, including from a comparative perspective. Another systematic review, of the criteria used to assess grant applications, suggests that forthcoming research should also focus on the applicant, include data from non-Western countries and examine a broad spectrum of research fields (Hug & Aeschbach, 2020).

As shown in this introductory chapter, the research field devoted to peer review covers a great number of evaluation practices embedded in different contexts. As an emergent and fragmented field in need of integration, it certainly offers many ways to contribute. On the agenda we find issues related to the foundation of science: the ethos of science and the ideology of peer review, the production and dissemination of knowledge, professional self-regulation and open science. There are also questions concerning the development of theoretical framing and methodological tools adapted to the study of diverse review practices in shifting contexts and at various interacting levels. Not least, in response to calls for more comprehensive and integrated research, it is necessary to open the black boxes of peer review and analyse, in empirical studies, the different purposes, discourses, genres, relations and processes involved.

A single book cannot take on all the above-mentioned challenges ahead of us. However, following this brief introduction to the field, the volume brings together research on review practices often studied in isolation. We include studies ranging from the practice of assessing manuscripts submitted for publication to the more recent practice of open review. In addition, more encompassing and general issues are considered, as well as specificities of different peer-review practices. This is further developed below, where the structure of the volume and the contributions of each chapter are presented.

The Structure and Content of the Volume

The volume falls into three main parts. In the first part, Rudolf Stichweh and Raf Vanderstraeten continue the introduction begun in this chapter. They discuss the term peer review and the contexts of its emergence. In Chap. 2, Rudolf Stichweh explains the genesis of inequalities and hierarchies in modern science. He illuminates the forms and mechanisms of scientific communication on the basis of which the social structures of science are built: publications, co-authorships and multiple authorships, citations as units of information and as social rewards, and peer review as an evaluation of publications (and of projects and careers). Stichweh demonstrates how, in all institutional dimensions of higher education, differences arise between successful and less successful participations. Success generates influence and social attractiveness (e.g. as a co-author). Influential and attractive participants are recruited into positions where they assess the achievements of others and thereby limit and control inclusion in publications, funding and careers.

Vanderstraeten, in Chap. 3 , puts forward that with the expansion of educational research in the twentieth century, interested ‘amateurs’ have been driven out of the field, and the scientific community of peers has become the dominant point of orientation. Authorship and authority became more widely distributed; peer review was institutionalized to monitor the flow of ideas within scientific literature. Reference lists in journals demonstrated the adoption of cumulative ideals about science. Vanderstraeten’s historical analysis of education journals shows the social changes that contributed to the ascent of an ‘imagined’ community of expert peers in the course of the twentieth century.

Part II of this volume focuses mainly on how peer-review practices have emerged in many parts of higher education institutions. From its origins in scholarly publication, peer review has arguably become internationally the most significant performative practice in higher education and research. In this part, the contributors provide insight into such processes. Don F. Westerheijden, in Chap. 4, revisits the policy issue of the balance between peer review and performance indicators as means to assess quality in higher education. He shows the paradoxes and unintended effects that emerge when peer review is the main method in the quality assurance procedures of higher education institutions as a whole. Westerheijden argues that attempted solutions, such as using self-assessments and performance indicators as well as specifically trained assessors, increase complaints about bureaucracy from within the academic community.

In Chap. 5, Hanne Foss Hansen sheds light on how peer review as an evaluation concept has developed over time and discusses the roles peer review plays today. She presents a typology distinguishing between classical peer review, informed and standard-based peer review, modified peer review and extended peer review. Peer review today can be found in all these guises. Peter Dahler Larsen argues in Chap. 6 that gatekeepers in institutional review processes who anticipate the future and use this knowledge in a pre-emptive or precautionary way play a key role in the construction of the reality that emerges from Bibliometric Research Indicators, which are widely used internationally. By showing that human judgement sometimes enhances or multiplies the effects of ‘evaluation machineries’, this chapter contributes to an understanding of the mechanisms that lead to constitutive effects of evaluation systems in research.

In Chap. 7 , Agnes Ers and Kristina Tegler Jerselius explore a national framework for quality assurance in higher education and argue that such systems’ forms are dynamic, since they change over time. Ers and Tegler Jerselius show how the method of peer review has evolved over time and in what way it has been affected by changes made in the system. Gustaf Nelhans engages in Chap. 8 with the performative nature of bibliometric indicators and explores how they influence scholarly practice at macro levels (in national funding systems), meso levels (within universities) and individual levels (in the university employees’ practice). Nelhans puts forward that the common-sense ‘representational model of bibliometric indicators’ is questionable in practice, since it cannot capture the qualities of research in any unambiguous way.

In Chap. 9, Lars Geschwind and Kristina Edström discuss the loyalty of academic staff to their disciplines or scientific fields. They show how this loyalty is reflected in evaluation practices. They elaborate on the extent to which peer reviewers act as advocates for those they evaluate, and in doing so problematize potential evaluator roles. In Chap. 10, Malcolm Tight closes Part II of this book. Drawing on his extensive review experience in various areas of higher education, he assesses how ‘fit for purpose’ peer review is in twenty-first-century academe. He focuses on different practices of peer review in the contemporary higher education system and asks how well they work, how they might be improved and what the alternatives are.

Whereas Part II of this volume focuses on higher education institutions, their educational quality and their research output, Part III illuminates particular peer-review practices. Eva Forsberg, Sara Levander and Maja Elmgren examine in Chap. 11 peer-review practices in the promotion of so-called ‘excellent’ or ‘distinguished’ university teachers. While research merits have long been the prioritized criteria in the recognition of institutions and scholars, teaching is often downplayed. To counteract this tendency, higher education institutions on a global scale have introduced various systems to upgrade the value of education and to promote teaching excellence. The authors show that the intersection between promotion, peer review and excellent teaching affects not only the peer-review process but also the notion of the excellent or distinguished university teacher.

In Chap. 12 , Tine S. Prøitz discusses how the role of scholarly peers in systematic review is analysed and presented. Peer evaluation is an essential element of quality assurance of the strictly defined methods of systematic review. The involvement of scholarly peers in the systematic review processes has similarities with traditional peer-review processes in academic publishing, but there are also important differences. In systematic review, peers are not only re-judging already reviewed and published research, but also gatekeeping the given standards, guidelines and procedures of the review method.

Liv Langfeldt presents in Chap. 13 processes of grant peer review. There are no clear norms for assessments, and there may be large variation in which criteria reviewers emphasize and how they emphasize them. Langfeldt argues that rating scales and budget restrictions can be more important than review guidelines for the kinds of criteria applied by reviewers. The decision-making methods applied by review panels when ranking proposals are found to have substantial effects on the outcome. Chapters 14 and 15 focus on peer-review practices in the recruitment of professors. First, Sara Levander, Eva Forsberg, Sverker Lindblad and Gustav Jansson Bjurhammer analyse the initial step of the typecasting process in the recruitment of full professors. They show that the field of professorial recruitment is characterized by heterogeneity and no longer has a basis in a single discipline. New relations between research, teaching and society have emerged. Moreover, the authority of the professorship has narrowed while its range of responsibilities has increased. Then, Björn Hammarfelt focuses on discipline-specific practices for evaluating publication oeuvres. He examines how ‘value’ is enacted, with special attention to the kinds of tools, judgements, indicators and metrics that are used. Value is indeed enacted differently in the various disciplines.

In the last chapter of the book, Chap. 16, Tea Vellamo, Jonna Kosonen, Taru Siekkinen and Elias Pekkola investigate practices of tenure track recruitment. They show that the criteria of this process can exceed notions of individual merit and include assessments of the strategic visions of universities and departments. The use of the tenure track model can be seen as a shift both towards identity building tied to a university’s strategy and towards the use of more managerial power in recruitment more generally.

We dedicate this book to our beloved colleague and friend, Professor Rita Foss Lindblad, who was involved in the project but passed away in 2018.

Funded by Riksbankens Jubileumsfond (F17-1350:1). The keynotes of the conference are accessible on video at https://media.medfarm.uu.se/play/kanal/417 . For more information on the conference, see www.konferens.edu.uu.se/scga2018-en .

Aagaard, K., Bloch, C., & Schneider, J. W. (2015). Impacts of performance-based research funding systems: The case of the Norwegian Publication Indicator. Research Evaluation, 24 (2), 106–117.


Ballou, K. A. (1998). A concept analysis of autonomy. Journal of Professional Nursing, 14 (2), 102–110.

Batagelj, V., Ferligoj, A., & Squazzoni, F. (2017). The emergence of a field: A network analysis of research on peer review. Scientometrics, 113 (1), 503–532. https://doi.org/10.1007/s11192-017-2522-8

Becher, T. (1989). Academic tribes and territories: Intellectual inquiry and the cultures of disciplines . Society for Research into Higher Education.


Becher, T., & Trowler, P. R. (2001). Academic tribes and territories. Intellectual inquiry and the culture of disciplines . Open University Press.

Biagioli, M. (2002). From book censorship to academic peer review. Emergences: Journal for the Study of Media & Composite Cultures, 12(1), 11–45. https://doi.org/10.1080/1045722022000003435

Bornmann, L. (2011). Scientific peer review. Annual Review of Information Science and Technology, 45, 197–245. https://doi.org/10.1002/aris.2011.1440450112

Bornmann, L. (2013). Evaluations by peer review in science. Springer Science Reviews, 1 (1–4). https://doi.org/10.1007/s40362-012-0002-3

Bourdieu, P. (1996). Homo academicus . Polity.

Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate . The Carnegie Foundation for the Advancement of Teaching.

British Academy. (2007). Peer review: The challenges for the humanities and social sciences. Retrieved December 1, 2020, from https://www.thebritishacademy.ac.uk/documents/197/Peer-review-challenges-for-humanities-social-sciences.pdf

Caputo, R. K. (2019). Peer review: A vital gatekeeping function and obligation of professional scholarly practice. Families in Society: The Journal of Contemporary Social Services, 100 (1), 6–16. https://doi.org/10.1177/1044389418808155

Chen, R., & Hyon, S. (2005). Faculty evaluation as a genre system: Negotiating intertextuality and interpersonality. Journal of Applied Linguistics, 2(2), 153–184. https://doi.org/10.1558/japl.v2i2.153

Clark, B. R. (1972). The organizational saga in higher education. Administrative Science Quarterly, 17 , 178–184.

Clark, B. R. (1989). The academic life: Small worlds, different worlds. Educational Researcher, 18 (5), 4–8. https://doi.org/10.2307/1176126

Cole, J. R., & Cole, S. (1973). Social stratification in science . University of Chicago Press.

Csiszar, A. (2016). Peer review: Troubled from the start. Nature, 532 (7599), 306–308. https://doi.org/10.1038/532306a

Dahler Larsen, P. (2012). The evaluation society . Stanford University Press.

Elken, M., & Wollscheid, S. (2016). The relationship between research and education: Typologies and indicators. A literature review . Nordic Institute for Innovative Studies in Research and Education (NIFU).

European Science Foundation. (2011). European peer review guide. Integrating policies and practices into coherent procedures .

Frey, B. S. (2003). Publishing as prostitution? Choosing between one’s own ideas and academic success. Public Choice, 116 (1/2), 205–223. https://doi.org/10.1023/A:1024208701874

Gläser, J., & Laudel, G. (2007). The social construction of bibliometric evaluations. In R. Whitley & J. Gläser (Eds.), The changing governance of the sciences. The advent of research evaluation systems . Springer.

Grimaldo, F., Marušić, A., & Squazzoni, F. (2018). Fragments of peer review: A quantitative analysis of the literature (1969–2015). PLOS ONE, 13 (2), e0193148. https://doi.org/10.1371/journal.pone.0193148

Guetzkow, J., Lamont, M., & Mallard, G. (2004). What is originality in the humanities and the social sciences? American Sociological Review, 69(2), 190–212. https://doi.org/10.1177/000312240406900203

Gunneriusson, H. (2002). Det historiska fältet: Svensk historievetenskap från 1920-tal till 1957 [The historical field: Swedish historical scholarship from the 1920s to 1957]. Acta Universitatis Upsaliensis.

Hamann, J., & Beljean, S. (2017). Academic evaluation in higher education. In J. C. Shin & P. Teixeira (Eds.), Encyclopedia of international higher education systems and institutions . https://doi.org/10.1007/978-94-017-9553-1_295-1


Hammarfelt, B. (2017). Recognition and reward in the academy: Valuing publication oeuvres in biomedicine, economics and history. Aslib Journal of Information Management, 69 (5), 607–623. https://doi.org/10.1108/AJIM-01-2017-0006

Hammarfelt, B., Rushforth, D., & de Rijcke, S. (2020). Temporality in academic evaluation: ‘Trajectoral thinking’ in the assessment of biomedical researchers. Valuation Studies, 7 (1), 33–63. https://doi.org/10.3384/VS.2001-5992.2020.7.1.33

Hansen, H. F., Aarrevaara, T., Geschwind, L., & Stensaker, B. (2019). Evaluation practices and impact: Overload? In R. Pinheiro, L. Geschwind, H. Foss Hansen, & K. Pulkkinen (Eds.), Reforms, organizational change and performance in higher education: A comparative account from the Nordic countries . Palgrave Macmillan.

Helgesson, C.-F. (2016). Folded valuations? Valuation Studies, 4 (2), 93–102. https://doi.org/10.3384/VS.2001-5992.164293

Horn, S. A. (2016). The social and psychological costs of peer review: Stress and coping with manuscript rejection. Journal of Management Inquiry, 25 (1), 11–26. https://doi.org/10.1177/1056492615586597

Hug, S. E., & Aeschbach, M. (2020). Criteria for assessing grant applications: A systematic review. Palgrave Communications, 6 (30). https://doi.org/10.1057/s41599-020-0412-9

Kaltenbrunner, W., & de Rijcke, S. (2020). Filling in the gaps: The interpretation of curricula vitae in peer review. Social Studies of Science, 49 (6), 863–883. https://doi.org/10.1177/0306312719864164

Knorr Cetina, K. (1999). Epistemic cultures . Harvard University Press.


Lamont, M. (2009). How professors think. Inside the curious world of academic judgment . Harvard University Press.

Lamont, M. (2012). Toward a comparative sociology of valuation and evaluation. Annual Review of Sociology, 38 (21), 201–221. https://doi.org/10.1146/annurev-soc-070308-120022

Langfeldt, L., & Kyvik, S. (2011). Researchers as evaluators: Tasks, tensions and politics. Higher Education, 62 (2), 199–212. https://doi.org/10.1007/s10734-010-9382-y

Langfeldt, L., & Kyvik, S. (2015). Intrinsic tensions and future challenges of peer review. In RJ Yearbook 2015/2016 . Riksbankens Jubileumsfond & Makadam Publishers.

Langfeldt, L., Nedeva, M., Sörlin, S., & Thomas, D. A. (2020). Co-existing notions of research quality: A framework to study context-specific understandings of good research. Minerva, 58 , 115–137. https://doi.org/10.1007/s11024-019-09385-2

Lee, C. J., Sugimoto, G. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64 (1), 2–17. https://doi.org/10.1002/asi.22784

Lutter, M., & Schröder, M. (2016). Who becomes a tenured professor, and why? Panel data evidence from German sociology, 1980–2013. Research Policy, 45 , 999–1013. https://doi.org/10.1016/j.respol.2016.01.019

Mallard, G., Lamont, M., & Guetzkow, J. (2009). Fairness as appropriateness: Negotiating epistemological differences in peer review. Science, Technology, & Human Values. https://doi.org/10.1177/0162243908329381

Maton, K. (2005). A question of autonomy: Bourdieu’s field approach and higher education policy. Journal of Education Policy, 20 (6), 687–704. https://doi.org/10.1080/02680930500238861

Merton, R. K. (1968). The Matthew effect in science. Science, 159 (3810), 56–63. https://doi.org/10.1126/science.159.3810.56

Merton R. K. (1973). The sociology of science: Theoretical and empirical investigations (Norman W. Storer, Ed.). University of Chicago Press. (Original work published 1942)

Musselin, C. (2002). Diversity around the profile of the ‘good’ candidate within French and German universities. Tertiary Education and Management, 8 (3), 243–258. https://doi.org/10.1080/13583883.2002.9967082

Musselin, C. (2013). How peer review empowers the academic profession and university managers: Changes in relationships between the state, universities and the professoriate. Research Policy, 42 (5), 1165–1173. https://doi.org/10.1016/j.respol.2013.02.002

Neave, G. (1998). The evaluative state reconsidered. European Journal of Education, 33 (3), 265–284. https://www.jstor.org/stable/1503583

Nowotny, H., Scott, P. B., & Gibbons, M. T. (2001). Re-thinking science: Knowledge and the public in an age of uncertainty . Polity Press.

Oancea, A. (2019). Research governance and the future(s) of research assessment. Palgrave Communications, 5 , 27. https://doi.org/10.1057/s41599-018-0213-6

Oravec, J. A. (2019). Academic metrics and the community engagement of tertiary education institutions: Emerging issues in gaming, manipulation, and trust. Tertiary Education and Management. https://doi.org/10.1007/s11233-019-09026-z

Ozeki, S. (2016). Three empirical investigations into the logic of evaluation and valuing practices (Doctoral dissertation, Western Michigan University). https://scholarworks.wmich.edu/dissertations/2470

Paltridge, B. (2017). The discourse of peer review: Reviewing submissions to academic journals. Palgrave Macmillan.

Panofski, A. L. (2010). In C. J. Calhoun (Ed.), Robert K. Merton: Sociology of science and sociology as science . Columbia University Press.

Pfadenhauer, M. (2003). Professionalität. Eine wissenssoziologische Rekonstruktion institutionalisierter Kompetenzdarstellungskompetenz [Professionalism. A reconstruction of institutionalized proficiency in displaying competence]. Springer.

Power, M. (1997). The Audit Society. Rituals of verification . Oxford University Press.

Publons. (2018). Global state of peer review. Online.

Research Information Network CIC. (2015). Scholarly communication and peer review. The current landscape and future trends. A report commissioned by the Wellcome Trust. Retrieved May 2015, from https://wellcome.org/sites/default/files/scholarly-communication-and-peer-review-mar15.pdf

Ross-Hellauer, T. (2017). What is open peer review? A systematic review [version 2; peer review: 4 approved]. F1000Research, 6, 588. https://doi.org/10.12688/f1000research.11369.2

Roumbanis, L. (2017). Academic judgments under uncertainty: A study of collective anchoring effects in Swedish research council panel groups. Social Studies of Science, 47 (1), 95–116. https://doi.org/10.1177/0306312716659789

Sabaj Meruane, O., González Vergara, C., & Pina-Stranger, Á. (2016). What we still don’t know about peer review. Journal of Scholarly Publishing, 47 (2), 180–212. https://doi.org/10.3138/jsp.47.2.180

Scriven, M. (1980). The logic of evaluation . Edgepress.

Scriven, M. (2003). Evaluation theory and metatheory. In T. Kellaghan, D. L. Stufflebeam, & L. A. Wingate (Eds.), International handbook of educational evaluation (pp. 15–30). Kluwer Academic Publishers.

Serrano Velarde, K. (2018). The way we ask for money… The emergence and institutionalization of grant writing practices in academia. Minerva, 56 (1), 85–107. https://doi.org/10.1007/s11024-018-9346-4

Söderlind, J., & Geschwind, L. (2019). Making sense of academic work: The influence of performance measurement in Swedish universities. Policy Reviews in Higher Education, 3 (1), 75–93. https://doi.org/10.1080/23322969.2018.1564354

Swales, J. M. (1996). Occluded genres in the academy. The case of the submission letter. In E. Ventola & A. Mauranen (Eds.), Academic writing: Intercultural and textual issues . ProQuest Ebook Central. http://ebookcentral.proquest.com/lib/uu/detail.action?docID=680373

Tennant, J. P., & Ross-Hellauer, T. (2020). The limitations to our understanding of peer review. Research Integrity and Peer Review, 5 (6). https://doi.org/10.1186/s41073-020-00092-1

Trowler, P., Saunders, M., & Bamber, V. (Eds.). (2014). Tribes and territories in the 21st century. Rethinking the significance of disciplines in higher education . Routledge.

Vedung, E. (2002). Utvärderingsmodeller [Evaluation models]. Socialvetenskaplig tidskrift, 9 (2–3), 118–143.

Warne, V. (2016). Rewarding reviewers—sense or sensibility? A Wiley study explained. Learned Publishing, 29 , 41–50. https://doi.org/10.1002/leap.1002

Westerheijden, D. F., Stensaker, B., & Joao Rosa, M. (Eds.). (2007). Quality assurance in higher education. Trends in regulation, translation and transformation . Springer.

Whitley, R. (1984). The intellectual and social organization of the sciences . Clarendon Press.

Whitley, R. (2011). Changing governance and authority relationships in the public sciences. Minerva, 49 , 359–385. https://doi.org/10.1007/s11024-011-9182-2

Ziman, J. M. (1968). Public knowledge . The University of Chicago Press.

Download references


About this chapter

Forsberg, E., Geschwind, L., Levander, S., & Wermke, W. (2022). Peer Review in Academia. In E. Forsberg, L. Geschwind, S. Levander, & W. Wermke (Eds.), Peer Review in an Era of Evaluation. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-75263-7_1


What is peer review?

Peer review is ‘a process where scientists (“peers”) evaluate the quality of other scientists’ work. By doing this, they aim to ensure the work is rigorous, coherent, uses past research and adds to what we already know.’ You can learn more in this explainer from the Social Science Space.  


Peer review brings academic research to publication in the following ways:

  • Evaluation – Peer review is an effective form of research evaluation to help select the highest quality articles for publication.
  • Integrity – Peer review ensures the integrity of the publishing process and the scholarly record. Reviewers are independent of the journal and of the research being conducted.
  • Quality – The filtering process and revision advice improve the quality of the final research article and offer the author new insights into their research methods and the results they have compiled. Peer review gives authors access to the opinions of experts in the field who can provide support and insight.

Types of peer review

  • Single-anonymized – the name of the reviewer is hidden from the author.
  • Double-anonymized – the names of the reviewers and the authors are hidden from each other.
  • Triple-anonymized – names are hidden from authors, reviewers, and the editor.
  • Open peer review comes in many forms. At Sage we offer a form of open peer review on some journals via our Transparent Peer Review program, whereby the reviews are published alongside the article. The names of the reviewers may also be published, depending on the reviewers’ preference.
  • Post-publication peer review can offer useful interaction and a discussion forum for the research community. This form of peer review is not usual or appropriate in all fields.

To learn more about the different types of peer review, see page 14 of ‘The Nuts and Bolts of Peer Review’ from Sense about Science.

Please double-check the manuscript submission guidelines of the journal you are reviewing to ensure that you understand the method of peer review being used.
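The practical difference between these models is which identities are withheld from whom. As a minimal illustration (a summary of the list above, not part of Sage's documentation, with invented names), the variants can be captured in a small lookup table:

```python
# Which identities are hidden under each peer-review model
# (an illustrative summary of the list above, not Sage's own data).
HIDDEN = {
    "single-anonymized": {"reviewer names hidden from authors"},
    "double-anonymized": {"reviewer names hidden from authors",
                          "author names hidden from reviewers"},
    "triple-anonymized": {"reviewer names hidden from authors",
                          "author names hidden from reviewers",
                          "author names hidden from the editor"},
    "open": set(),  # identities known; reviews may be published alongside the article
}

def reviewer_anonymous(model: str) -> bool:
    """True if the reviewer's name is withheld from the author under this model."""
    return "reviewer names hidden from authors" in HIDDEN.get(model, set())

assert reviewer_anonymous("double-anonymized")
assert not reviewer_anonymous("open")
```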


16 April 2024

Structure peer review to make it more robust


Mario Malički

Mario Malički is associate director of the Stanford Program on Research Rigor and Reproducibility (SPORR) and co-editor-in-chief of the Research Integrity and Peer Review journal.


In February, I received two peer-review reports for a manuscript I’d submitted to a journal. One report contained 3 comments, the other 11. Apart from one point, all the feedback was different. It focused on expanding the discussion and some methodological details — there were no remarks about the study’s objectives, analyses or limitations.

My co-authors and I duly replied, working under two assumptions that are common in scholarly publishing: first, that anything the reviewers didn’t comment on they had found acceptable for publication; second, that they had the expertise to assess all aspects of our manuscript. But, as history has shown, those assumptions are not always accurate (see Lancet 396, 1056; 2020). And through the cracks, inaccurate, sloppy and falsified research can slip.

As co-editor-in-chief of the journal Research Integrity and Peer Review (an open-access journal published by BMC, which is part of Springer Nature), I’m invested in ensuring that the scholarly peer-review system is as trustworthy as possible. And I think that to be robust, peer review needs to be more structured. By that, I mean that journals should provide reviewers with a transparent set of questions to answer that focus on methodological, analytical and interpretative aspects of a paper.

For example, editors might ask peer reviewers to consider whether the methods are described in sufficient detail to allow another researcher to reproduce the work, whether extra statistical analyses are needed, and whether the authors’ interpretation of the results is supported by the data and the study methods. Should a reviewer find anything unsatisfactory, they should provide constructive criticism to the authors. And if reviewers lack the expertise to assess any part of the manuscript, they should be asked to declare this.
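To make this concrete, such a question set could be represented as a simple machine-readable review form. The sketch below is purely illustrative: the question wording paraphrases the examples above, and all field and class names are hypothetical rather than taken from any journal's actual submission system.

```python
# Illustrative sketch of a structured peer-review form, assuming a journal
# asks every reviewer the same core methodological questions.
from dataclasses import dataclass, field

STRUCTURED_QUESTIONS = [
    "Are the methods described in sufficient detail for another researcher to reproduce the work?",
    "Are extra statistical analyses needed?",
    "Is the authors' interpretation of the results supported by the data and the study methods?",
]

@dataclass
class StructuredReview:
    reviewer_id: str
    answers: dict[str, str] = field(default_factory=dict)      # question -> assessment
    out_of_expertise: list[str] = field(default_factory=list)  # questions the reviewer cannot judge

    def declare_no_expertise(self, question: str) -> None:
        # Reviewers lacking expertise for a question should say so explicitly.
        self.out_of_expertise.append(question)

review = StructuredReview(reviewer_id="R1")
review.answers[STRUCTURED_QUESTIONS[0]] = "Mostly, but the sampling procedure needs more detail."
review.declare_no_expertise(STRUCTURED_QUESTIONS[1])
```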


Other aspects of a study, such as novelty, potential impact, language and formatting, should be handled by editors, journal staff or even machines, reducing the workload for reviewers.

The list of questions reviewers will be asked should be published on the journal’s website, allowing authors to prepare their manuscripts with this process in mind. And, as others have argued before, review reports should be published in full. This would allow readers to judge for themselves how a paper was assessed, and would enable researchers to study peer-review practices.

To see how this works in practice, since 2022 I’ve been working with the publisher Elsevier on a pilot study of structured peer review in 23 of its journals, covering the health, life, physical and social sciences. The preliminary results indicate that, when guided by the same questions, reviewers made the same initial recommendation about whether to accept, revise or reject a paper 41% of the time, compared with 31% before these journals implemented structured peer review. Moreover, reviewers’ comments were in agreement about specific parts of a manuscript up to 72% of the time (M. Malički and B. Mehmani, preprint at bioRxiv, https://doi.org/mrdv; 2024). In my opinion, reaching such agreement is important for science, which proceeds mainly through consensus.
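To illustrate what such an agreement rate measures, the following sketch computes the share of manuscripts on which two reviewers give the same initial recommendation. The data are invented for demonstration; this is not the pilot study's analysis code.

```python
# Toy calculation of reviewer agreement on initial recommendations.
# Invented data, for illustration only.
RECOMMENDATIONS = {  # manuscript -> (reviewer 1, reviewer 2)
    "ms-001": ("revise", "revise"),
    "ms-002": ("accept", "revise"),
    "ms-003": ("reject", "reject"),
    "ms-004": ("revise", "reject"),
    "ms-005": ("revise", "revise"),
}

agreed = sum(1 for r1, r2 in RECOMMENDATIONS.values() if r1 == r2)
rate = agreed / len(RECOMMENDATIONS)
print(f"Agreement: {rate:.0%}")  # 3 of 5 manuscripts -> 60%
```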


I invite editors and publishers to follow in our footsteps and experiment with structured peer reviews. Anyone can trial our template questions (see go.nature.com/4ab2ppc), or tailor them to suit specific fields or study types. For instance, mathematics journals might also ask whether referees agree with the logic or completeness of a proof. Some journals might ask reviewers if they have checked the raw data or the study code. Publications that employ editors who are less embedded in the research they handle than are academics might need to include questions about a paper’s novelty or impact.

Scientists can also use these questions, either as a checklist when writing papers or when they are reviewing for journals that don’t apply structured peer review.

Some journals — including Proceedings of the National Academy of Sciences , the PLOS family of journals, F1000 journals and some Springer Nature journals — already have their own sets of structured questions for peer reviewers. But, in general, these journals do not disclose the questions they ask, and do not make their questions consistent. This means that core peer-review checks are still not standardized, and reviewers are tasked with different questions when working for different journals.

Some might argue that, because different journals have different thresholds for publication, they should adhere to different standards of quality control. I disagree. Not every study is groundbreaking, but scientists should view quality control of the scientific literature in the same way as quality control in other sectors: as a way to ensure that a product is safe for use by the public. People should be able to see what types of check were done, and when, before an aeroplane was approved as safe for flying. We should apply the same rigour to scientific research.

Ultimately, I hope for a future in which all journals use the same core set of questions for specific study types and make all of their review reports public. I fear that a lack of standard practice in this area is delaying the progress of science.

Nature 628, 476 (2024)

doi: https://doi.org/10.1038/d41586-024-01101-9


Competing Interests

M.M. is co-editor-in-chief of the Research Integrity and Peer Review journal that publishes signed peer review reports alongside published articles. He is also the chair of the European Association of Science Editors Peer Review Committee.



Evaluating Resources: Peer Review

What is peer review?

The term peer review can be confusing, since in some of your courses you may be asked to review the work of your peers. When we talk about peer-reviewed journal articles, this has nothing to do with your peers!

Peer-reviewed journals, also called refereed journals, are journals that use a specific scholarly review process to try to ensure the accuracy and reliability of published articles. When an article is submitted to a peer-reviewed journal for publication, the journal sends the article to other scholars/experts in that field and has them review the article for accuracy and reliability.

Find out more about peer review with our Peer Review Guide:

  • Peer Review Guide

Types of peer review

Single blind

In this process, the names of the reviewers are not known to the author(s). The reviewers do know the name of the author(s).

Double blind

Here, neither the reviewers nor the authors know each other's names.

Open review

In the open review process, both reviewers and authors know each other's names.

What about editorial review?

Journals also use an editorial review process. This is not the same as peer review. In an editorial review process an article is evaluated for style guidelines and for clarity. Reviewers here do not look at technical accuracy or errors in data or methodology, but instead look at grammar, style, and whether an article is well written.

What is the difference between scholarly and peer review?

Not all scholarly journals are peer reviewed, but all peer-reviewed journals are scholarly.

  • Things that are written for a scholarly or academic audience are considered scholarly writing.
  • Peer-reviewed journals are a part of the larger category of scholarly writing.
  • Scholarly writing includes many resources that are not peer reviewed, such as books, textbooks, and dissertations.

Scholarly writing does not come with a label that says scholarly. You will need to evaluate the resource to see if it is

  • aimed at a scholarly audience
  • reporting research, theories or other types of information important to scholars
  • documenting and citing sources used to help authenticate the research done

The standard peer review process only applies to journals. While scholarly writing has certainly been edited and reviewed, peer review is a specific process only used by peer-reviewed journals. Books and dissertations may be scholarly, but are not considered peer reviewed.

Check out Select the Right Source for help with what kinds of resources are appropriate for discussion posts, assignments, projects, and more:

  • Select the Right Source

How do I locate or verify peer-reviewed articles?

The peer review process is initiated by the journal publisher before an article is published, and the article itself will not usually tell you whether it has gone through peer review.

You can locate peer-reviewed articles in the Library databases, typically by checking a limiter box.

  • Quick Answer: How do I find scholarly, peer reviewed journal articles?

You can verify whether a journal uses a peer review process by using Ulrich's Periodicals Directory.

  • Quick Answer: How do I verify that my article is peer reviewed?

What about resources that are not peer-reviewed?

Limiting your search to peer review is a way to ensure that you're looking at scholarly journal articles rather than popular or trade publications. Because peer-reviewed articles have been vetted by experts in the field, they are held to a higher standard and are considered high-quality sources, which is why professors often prefer them.

There are times, though, when the information you need may not be available in a peer-reviewed article.

  • You may need to find original work on a theory that was first published in a book.
  • You may need to find very current statistical data that comes from a government website.
  • You may need background information that comes from a scholarly encyclopedia.

You will want to evaluate these resources to make sure that they are the best source for the information you need.

Note: If you are required for an assignment to find information from a peer-reviewed journal, then you will not be able to use non-peer-reviewed sources such as books, dissertations, or government websites. It's always best to clarify any questions over assignments with your professor.


Research Methods: How to Perform an Effective Peer Review


Elise Peterson Lu, Brett G. Fischer, Melissa A. Plesac, Andrew P.J. Olson; Research Methods: How to Perform an Effective Peer Review. Hosp Pediatr November 2022; 12(11): e409–e413. https://doi.org/10.1542/hpeds.2022-006764


Scientific peer review has existed for centuries and is a cornerstone of the scientific publication process. Because the number of scientific publications has rapidly increased over the past decades, so has the number of peer reviews and peer reviewers. In this paper, drawing on the relevant medical literature and our collective experience as peer reviewers, we provide a user guide to the peer review process, including discussion of the purpose and limitations of peer review, the qualities of a good peer reviewer, and a step-by-step process of how to conduct an effective peer review.

Peer review has been a part of scientific publications since 1665, when the Philosophical Transactions of the Royal Society became the first publication to formalize a system of expert review.1,2 It became an institutionalized part of science in the latter half of the 20th century and is now the standard in scientific research publications.3 In 2012, there were more than 28 000 scholarly peer-reviewed journals, and more than 3 million peer-reviewed articles are now published annually.3,4 However, even with this volume, most peer reviewers learn to review “on the (unpaid) job” and no standard training system exists to ensure quality and consistency.5 Expectations and format vary between journals and most, but not all, provide basic instructions for reviewers. In this paper, we provide a general introduction to the peer review process and identify common strategies for success as well as pitfalls to avoid.

What is the Purpose of Peer Review?

Modern peer review serves 2 primary purposes: (1) as “a screen before the diffusion of new knowledge”6 and (2) as a method to improve the quality of published work.1,5

As screeners, peer reviewers evaluate the quality, validity, relevance, and significance of research before publication to maintain the credibility of the publications they serve and their fields of study.1,2,7 Although peer reviewers are not the final decision makers on publication (that role belongs to the editor), their recommendations affect editorial decisions and thoughtful comments influence an article’s fate.6,8

As advisors and evaluators of manuscripts, reviewers have an opportunity and responsibility to give authors an outside expert’s perspective on their work.9 They provide feedback that can improve methodology, enhance rigor, improve clarity, and redefine the scope of articles.5,8,10 This often happens even if a paper is not ultimately accepted at the reviewer’s journal, because peer reviewers’ comments are incorporated into revised drafts that are submitted to another journal. In a 2019 survey of authors, reviewers, and editors, 83% said that peer review helps science communication and 90% of authors reported that peer review improved their last paper.11

What Makes a Good Peer Reviewer?

Expertise: Peer reviewers should be up to date with current literature, practice guidelines, and methodology within their subject area. However, academic rank and seniority do not define expertise and are not actually correlated with performance in peer review.13

Professionalism: Reviewers should be reliable and objective, aware of their own biases, and respectful of the confidentiality of the peer review process.

Critical skill: Reviewers should be organized, thorough, and detailed in their critique with the goal of improving the manuscript under their review, regardless of disposition. They should provide constructive comments that are specific and addressable, referencing literature when possible. A peer reviewer should leave a paper better than he or she found it.

How Do You Decide Whether to Review a Paper?

Is the manuscript within your area of expertise? Generally, if you are asked to review a paper, it is because an editor felt that you were a qualified expert. In a 2019 survey, 74% of requested reviews were within the reviewer’s area of expertise.11 This, of course, does not mean that you must be widely published in the area, only that you have enough expertise and comfort with the topic to critique and add to the paper.

Do you have any biases that may affect your review? Are there elements of the methodology, content area, or theory with which you disagree? Some disagreements between authors and reviewers are common, expected, and even helpful. However, if a reviewer fundamentally disagrees with an author’s premise such that he or she cannot be constructive, the review invitation should be declined.

Do you have the time? The average review for a clinical journal takes 5 to 6 hours, though many take longer depending on the complexity of the research and the experience of the reviewer.1,14 Journals vary on the requested timeline for return of reviews, though it is usually 1 to 4 weeks. Peer review is often the longest part of the publication process and delays contribute to slower dissemination of important work and decreased author satisfaction.15 Be mindful of your schedule and only accept a review invitation if you can reasonably return the review in the requested time.

Once you have determined that you are the right person and decided to take on the review, reply to the inviting e-mail or click the associated link to accept (or decline) the invitation. Journal editors invite a limited number of reviewers at a time and wait for responses before inviting others. A common complaint among journal editors surveyed was that reviewers would often take days to weeks to respond to requests, or not respond at all, making it difficult to find appropriate reviewers and prolonging an already long process.5

How Do You Complete a Peer Review?

Now that you have decided to take on the review, it is best to have a systematic way of both evaluating the manuscript and writing the review. Various suggestions exist in the literature, but we will describe our standard procedure for review, incorporating specific dos and don’ts summarized in Table 1.

Table 1. Dos and Don’ts of Peer Review

First, read the manuscript once without making notes or forming opinions, to get a sense of the paper as a whole. Assess the overall tone and flow and define what the authors identify as the main point of their work. Does the work overall make sense? Do the authors tell the story effectively?

Next, read the manuscript again with an eye toward review, taking notes and formulating thoughts on strengths and weaknesses. Consider the methodology and identify the specific type of research described. Refer to the corresponding reporting guideline if applicable (CONSORT for randomized control trials, STROBE for observational studies, PRISMA for systematic reviews). Reporting guidelines often include a checklist, flow diagram, or structured text giving a minimum list of information needed in a manuscript based on the type of research done.16 This allows the reviewer to formulate a more nuanced and specific assessment of the manuscript.

Next, review the main findings, the significance of the work, and what contribution it makes to the field. Examine the presentation and flow of the manuscript but do not copy edit the text. At this point, you should start to write your review. Some journals provide a format for their reviews, but often it is up to the reviewer. In surveys of journal editors and reviewers, a review organized by manuscript section was the most favored,5,6 so that is what we will describe here.

As you write your review, consider starting with a brief summary of the work that identifies the main topic, explains the basic approach, and describes the findings and conclusions.12,17 Though not included in all reviews, we have found this step helpful in ensuring that the work is conveyed clearly enough for the reviewer to summarize it. Include brief notes on the significance of the work and what it adds to current knowledge. Critique the presentation of the work: is it clearly written? Is its length appropriate? List any major concerns with the work overall, such as major methodological flaws or inaccurate conclusions that should disqualify it from publication, though do not comment directly on disposition. Then perform your review by section:

Abstract: Is it consistent with the rest of the paper? Does it adequately describe the major points?

Introduction: This section should provide adequate background to explain the need for the study. Generally, classic or highly relevant studies should be cited, but citations do not have to be exhaustive. The research question and hypothesis should be clearly stated.

Methods: Evaluate both the methods themselves and the way in which they are explained. Does the methodology used meet the needs of the questions proposed? Is there sufficient detail to explain what the authors did and, if not, what needs to be added? For clinical research, examine the inclusion/exclusion criteria, control populations, and possible sources of bias. Reporting guidelines can be particularly helpful in determining the appropriateness of the methods and how they are reported.

Some journals will expect an evaluation of the statistics used, whereas others will have a separate statistician evaluate, and the reviewers are generally not expected to have an exhaustive knowledge of statistical methods. Clarify expectations if needed and, if you do not feel qualified to evaluate the statistics, make this clear in your review.

Results: Evaluate the presentation of the results. Is information given in sufficient detail to assess credibility? Are the results consistent with the methodology reported? Are the figures and tables consistent with the text, easy to interpret, and relevant to the work? Make note of data that could be better detailed in figures or tables, rather than included in the text. Make note of inappropriate interpretation in the results section (this should be in discussion) or rehashing of methods.

Discussion: Evaluate the authors’ interpretation of their results, how they address limitations, and the implications of their work. How does the work contribute to the field, and do the authors adequately describe those contributions? Make note of overinterpretation or conclusions not supported by the data.
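A reviewer who wants to operationalize this section-by-section approach could keep the prompts in a simple checklist structure. The sketch below condenses the guidance above into such a template; the wording is paraphrased from this article, and the structure itself is an illustration rather than a prescribed format.

```python
# A reviewer's section-by-section checklist, condensing the guidance above.
SECTION_PROMPTS = {
    "Abstract": [
        "Consistent with the rest of the paper?",
        "Adequately describes the major points?",
    ],
    "Introduction": [
        "Provides adequate background to explain the need for the study?",
        "Research question and hypothesis clearly stated?",
    ],
    "Methods": [
        "Does the methodology meet the needs of the questions proposed?",
        "Sufficient detail to reproduce? (check CONSORT/STROBE/PRISMA as applicable)",
        "Inclusion/exclusion criteria, controls, and sources of bias examined?",
    ],
    "Results": [
        "Sufficient detail to assess credibility?",
        "Consistent with the reported methodology?",
        "Figures and tables consistent with the text and easy to interpret?",
    ],
    "Discussion": [
        "Interpretation supported by the data?",
        "Limitations and contributions to the field addressed?",
        "Any overinterpretation flagged?",
    ],
}

def blank_review() -> dict[str, list[str]]:
    """Return an empty notes container keyed by manuscript section."""
    return {section: [] for section in SECTION_PROMPTS}
```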

The length of your review often correlates with your opinion of the quality of the work. If an article has major flaws that you think preclude publication, write a brief review that focuses on the big picture. Articles that may not be accepted but still represent quality work merit longer reviews aimed at helping the author improve the work for resubmission elsewhere.

Generally, do not include your recommendation on disposition in the body of the review itself. Acceptance or rejection is ultimately determined by the editor and including your recommendation in your comments to the authors can be confusing. A journal editor’s decision on acceptance or rejection may depend on more factors than just the quality of the work, including the subject area, journal priorities, other contemporaneous submissions, and page constraints.

Many submission sites include a separate question asking whether to accept, accept with major revision, or reject. If this specific format is not included, then add your recommendation in the “confidential notes to the editor.” Your recommendation should be consistent with the content of your review: don’t give a glowing review but recommend rejection or harshly criticize a manuscript but recommend publication. Last, regardless of your ultimate recommendation on disposition, it is imperative to use respectful and professional language and tone in your written review.

Limitations of Peer Review

Although peer review is often described as the “gatekeeper” of science and characterized as a quality control measure, peer review is not ideally designed to detect fundamental errors, plagiarism, or fraud. In multiple studies, peer reviewers detected only 20% to 33% of intentionally inserted errors in scientific manuscripts.18,19 Plagiarism similarly is not detected in peer review, largely because of the huge volume of literature available to plagiarize. Most journals now use computer software to identify plagiarism before a manuscript goes to peer review. Finally, outright fraud often goes undetected in peer review. Reviewers start from a position of respect for the authors and trust the data they are given, barring obvious inconsistencies. Ultimately, reviewers are “gatekeepers, not detectives.”7

Peer review is also limited by bias. Even with the best of intentions, reviewers bring biases including but not limited to prestige bias, affiliation bias, nationality bias, language bias, gender bias, content bias, confirmation bias, bias against interdisciplinary research, publication bias, conservatism, and bias of conflict of interest.3,4,6 For example, peer reviewers score methodology higher and are more likely to recommend publication when prestigious author names or institutions are visible.20 Although bias can be mitigated both by the reviewer and by the journal, it cannot be eliminated. Reviewers should be mindful of their own biases while performing reviews and work to actively mitigate them. For example, if English language editing is necessary, state this with specific examples rather than suggesting the authors seek editing by a “native English speaker.”

Conclusions

Peer review is an essential, though imperfect, part of the forward movement of science. Peer review can function as both a gatekeeper to protect the published record of science and a mechanism to improve research at the level of individual manuscripts. Here, we have described our strategy, summarized in Table 2, for performing a thorough peer review, with a focus on organization, objectivity, and constructiveness. By using a systematized strategy to evaluate manuscripts and an organized format for writing reviews, you can provide a relatively objective perspective in editorial decision-making. By providing specific and constructive feedback to authors, you contribute to the quality of the published literature.

FUNDING: No external funding.

CONFLICT OF INTEREST DISCLOSURES: The authors have indicated they have no potential conflicts of interest to disclose.

Dr Lu performed the literature review and wrote the manuscript. Dr Fischer assisted in the literature review and reviewed and edited the manuscript. Dr Plesac provided background information on the process of peer review, reviewed and edited the manuscript, and completed revisions. Dr Olson provided background information and practical advice, critically reviewed and revised the manuscript, and approved the final manuscript.



Peer Reviewed Literature


What is peer review?

Research findings are communicated in many ways. One of the most important is through publication in scholarly, peer-reviewed journals.

Research published in scholarly journals is held to a high standard: it must make a credible and significant contribution to the discipline. To ensure this level of quality, articles submitted to scholarly journals undergo a process called peer review.

Once an article has been submitted for publication, it is reviewed by at least two independent academic experts in the same field as the authors. These are the peers. The peers evaluate the research and decide if it is good enough and important enough to publish. Usually there is a back-and-forth exchange between the reviewers and the authors, including requests for revisions, before an article is published.

Peer review is a rigorous process, but the intensity varies by journal. Some journals are very prestigious and receive many submissions for publication; they publish only the very best, most highly regarded research.

Terminology

The terms scholarly, academic, peer-reviewed, and refereed are sometimes used interchangeably, although there are slight differences.

Scholarly and academic may refer to peer-reviewed articles, but not all scholarly and academic journals are peer-reviewed (although most are). For example, the Harvard Business Review is an academic journal, but it is editorially reviewed, not peer-reviewed.

Peer-reviewed and refereed are identical terms.

From Peer Review in 3 Minutes [Video], by the North Carolina State University Library, 2014, YouTube (https://youtu.be/rOCQZ7QnoN0).

What types of articles are peer-reviewed?

Peer-reviewed articles can include:

  • Original research (empirical studies)
  • Review articles
  • Systematic reviews
  • Meta-analyses

What information is not peer-reviewed?

There is much excellent, credible information in existence that is NOT peer-reviewed. Peer review is simply ONE MEASURE of quality.

Much of this information is referred to as "gray literature."

Government Agencies

Government websites such as that of the Centers for Disease Control and Prevention (CDC) publish high-level, trustworthy information. However, most of it is not peer-reviewed. (Some of their publications are peer-reviewed, however; the journal Emerging Infectious Diseases, published by the CDC, is one example.)

Conference Proceedings

Papers from conference proceedings are not usually peer-reviewed.  They may go on to become published articles in a peer-reviewed journal. 

Dissertations

Dissertations are written by doctoral candidates, and while they are academic, they are not peer-reviewed.

What about Google Scholar?

Many students like Google Scholar because it is easy to use. While the results from Google Scholar are generally academic, they are not necessarily peer-reviewed. Typically, you will find:

  • Peer-reviewed journal articles (although they are not identified as peer-reviewed)
  • Unpublished scholarly articles (not peer-reviewed)
  • Master's theses, doctoral dissertations, and other degree publications (not peer-reviewed)
  • Book citations and links to some books (not necessarily peer-reviewed)

A modified action framework to develop and evaluate academic-policy engagement interventions

Petra Mäkelä (orcid.org/0000-0002-0938-1175), Annette Boaz (orcid.org/0000-0003-0557-1294) & Kathryn Oliver (orcid.org/0000-0002-4326-5258)

Implementation Science, volume 19, Article number: 31 (2024)


There has been a proliferation of frameworks with a common goal of bridging the gap between evidence, policy, and practice, but few aim to specifically guide evaluations of academic-policy engagement. We present the modification of an action framework for the purpose of selecting, developing and evaluating interventions for academic-policy engagement.

We build on the conceptual work of an existing framework known as SPIRIT (Supporting Policy In Health with Research: an Intervention Trial), developed for the evaluation of strategies intended to increase the use of research in health policy. Our aim was to modify SPIRIT, (i) to be applicable beyond health policy contexts, for example encompassing social, environmental, and economic policy impacts and (ii) to address broader dynamics of academic-policy engagement. We used an iterative approach through literature reviews and consultation with multiple stakeholders from Higher Education Institutions (HEIs) and policy professionals working at different levels of government and across geographical contexts in England, alongside our evaluation activities in the Capabilities in Academic Policy Engagement (CAPE) programme.

Our modifications expand upon Redman et al.’s original framework, for example adding a domain of ‘Impacts and Sustainability’ to capture continued activities required in the achievement of desirable outcomes. The modified framework fulfils the criteria for a useful action framework, having a clear purpose, being informed by existing understandings, being capable of guiding targeted interventions, and providing a structure to build further knowledge.

The modified SPIRIT framework is designed to be meaningful and accessible for people working across varied contexts in the evidence-policy ecosystem. It has potential applications in how academic-policy engagement interventions might be developed, evaluated, facilitated and improved, to ultimately support the use of evidence in decision-making.


Contributions to the literature

There has been a proliferation of theories, models and frameworks relating to translation of research into practice. Few specifically relate to engagement between academia and policy.

Challenges of evidence-informed policy-making are receiving increasing attention globally. There is a growing number of academic-policy engagement interventions but a lack of published evaluations.

This article contributes a modified action framework that can be used to guide how academic-policy engagement interventions might be developed, evaluated, facilitated, and improved, to support the use of evidence in policy decision-making.

Our contribution demonstrates the potential for modification of existing, useful frameworks instead of creating brand-new frameworks. It provides an exemplar for others who are considering when and how to modify existing frameworks to address new or expanded purposes while respecting the conceptual underpinnings of the original work.

Academic-policy engagement refers to ways that Higher Education Institutions (HEIs) and their staff engage with institutions responsible for policy at national, regional, county or local levels. Academic-policy engagement is intended to support the use of evidence in decision-making and in turn, improve its effectiveness, and inform the identification of barriers and facilitators in policy implementation [ 1 , 2 , 3 ]. Challenges of evidence-informed policy-making are receiving increasing attention globally, including the implications of differences in cultural norms and mechanisms across national contexts [ 4 , 5 ]. Although challenges faced by researchers and policy-makers have been well documented [ 6 , 7 ], there has been less focus on actions at the engagement interface. Pragmatic guidance for the development, evaluation or comparison of structured responses to the challenges of academic-policy engagement is currently lacking [ 8 , 9 ].

Academic-policy engagement exists along a continuum of approaches from linear (pushing evidence out from academia or pulling evidence into policy), relational (promoting mutual understandings and partnerships), and systems approaches (addressing identified barriers and facilitators) [ 4 ]. Each approach is underpinned by sets of beliefs, assumptions and expectations, and each raises questions for implementation and evaluation. Little is known about which academic-policy engagement interventions work in which settings, with scarce empirical evidence to inform decisions about which interventions to use, when, with whom, or why, and how organisational contexts can affect motivation and capabilities for such engagement [ 10 ]. A deeper understanding through the evaluation of engagement interventions will help to identify inhibitory and facilitatory factors, which may or may not transfer across contexts [ 11 ].

The intellectual technologies [12] of implementation science have proliferated in recent decades, including models, frameworks and theories that address research translation and acknowledge difficulties in closing the gap between research, policy and practice [13]. Frameworks may serve overlapping purposes of describing or guiding processes of translating knowledge into practice (e.g. the Quality Implementation Framework [14]); or helping to explain influences on implementation outcomes (e.g. the Theoretical Domains Framework [15]); or guiding evaluation (e.g. the RE-AIM framework [16, 17]). Frameworks can offer an efficient way to look across diverse settings and to identify implementation differences [18, 19]. However, the abundance of options raises its own challenges when seeking a framework for a particular purpose, and the use of a framework may mean that more weight is placed on certain aspects, leading to a partial understanding [13, 17].

‘Action frameworks’ are predictive models that intend to organise existing knowledge and enable a logical approach for the selection, implementation and evaluation of intervention strategies, thereby facilitating the expansion of that knowledge [ 20 ]. They can guide change by informing and clarifying practical steps to follow. As flexible entities, they can be adapted to accommodate new purposes. Framework modification may include the addition of constructs or changes in language to expand applicability to a broader range of settings [ 21 ].

We sought to identify one organising framework for evaluation activities in the Capabilities in Academic-Policy Engagement (CAPE) programme (2021–2023), funded by Research England. The CAPE programme aimed to understand how best to support effective and sustained engagement between academics and policy professionals across the higher education sector in England [ 22 ]. We first searched the literature and identified an action framework that was originally developed between 2011 and 2013, to underpin a trial known as SPIRIT (Supporting Policy In health with Research: an Intervention Trial) [ 20 , 23 ]. This trial evaluated strategies intended to increase the use of research in health policy and to identify modifiable points for intervention.

We selected the SPIRIT framework due to its potential suitability as an initial ‘road map’ for our evaluation of academic-policy interventions in the CAPE programme. The key elements of the original framework are catalysts, organisational capacity, engagement actions, and research use. We wished to build on the framework’s embedded conceptual work, derived from literature reviews and semi-structured interviews, to identify policymakers’ views on factors that assist policy agencies’ use of research [20]. The SPIRIT framework developers defined its “locus for change” as the policy organisation ([20], p. 151). They proposed that it could offer the beginning of a process to identify and test pathways in policy agencies’ use of evidence.

Our goal was to modify SPIRIT to accommodate a different locus for change: the engagement interface between academia and policy. Instead of imagining a linear process in which knowledge comes from researchers and is transmitted to policy professionals, we intended to extend the framework to multidirectional relational and system interfaces. We wished to include processes and influences at individual, organisational and system levels, to be relevant for HEIs and their staff, policy bodies and professionals, funders of engagement activities, and facilitatory bodies. Ultimately, we seek to address a gap in understanding how engagement strategies work, for whom, how they are facilitated, and to improve the evaluation of academic-policy engagement.

We aimed to produce a conceptually guided action framework to enable systematic evaluation of interventions intending to support academic-policy engagement.

We used a pragmatic combination of processes for framework modification during our evaluation activities in the CAPE programme [22]. The CAPE programme included a range of interventions: seed funding for academic and policy professional collaboration in policy-focused projects, fellowships for academic placements in policy settings, or for policy professionals with HEI staff, training for policy professionals, and a range of knowledge exchange events for HEI staff and policy professionals. We modified the SPIRIT framework through iterative processes shown in Table 1, including reviews of literature; consultations with HEI staff and policy professionals across a range of policy contexts and geographic settings in England, through the CAPE programme; and piloting, refining and seeking feedback from stakeholders in academic-policy engagement.

A number of characteristics of the original SPIRIT framework could be applied to academic-policy engagement. While keeping the core domains, we modified the framework to capture dynamics of engagement at multiple academic and policy levels (individuals, organisations and system), extending beyond the original unidirectional focus on policy agencies’ use of research. Components of the original framework, the need for modifications, and their corresponding action-oriented implications are shown in Table 2. We added a new domain, ‘Impacts and Sustainability’, to consider transforming and enduring aspects at the engagement interface. The modified action framework is shown in Fig. 1.

Figure 1. SPIRIT Action Framework Modified for Academic-Policy Engagement Interventions (SPIRIT-ME), adapted with permission from the Sax Institute. Legend: The framework acknowledges that elements in each domain may influence other elements through mechanisms of action and that these do not necessarily flow through the framework in a ‘pipeline’ sequence. Mechanisms of action are processes through which engagement strategies operate to achieve desired outcomes. They might rely on influencing factors, catalysts, an aspect of an intervention action, or a combination of elements.

Identifying relevant theories or models for missing elements

Catalysts and capacity

Within our evaluation of academic-policy interventions, we identified a need to develop the original domain of catalysts beyond ‘policy/programme need for research’ and ‘new research with potential policy relevance’. Redman et al. characterised a catalyst as “a need for information to answer a particular problem in policy or program design, or to assist in supporting a case for funding” in the original framework (p. 149). We expanded this “need for information” to a perceived need for engagement, by either HEI staff or policy professionals, linking to the potential value they perceived in engaging. Specifically, there was a need to consider catalysts at the level of individual engagement, for example HEI staff wanting research to have real-world impact, or policy professionals’ desires to improve decision-making in policy, where productive interactions between academic and policy stakeholders are “necessary interim steps in the process that lead to societal impact” ([24], p. 214). The catalyst domain expands the original emphasis on a need for research, to take account of challenges to be overcome by both the academic and policy communities in knowing how, and with whom, to engage and collaborate [25].

We used a model proposing that there are three components for any behaviour: capability, opportunity and motivation, which is known as the COM-B model [ 26 ]. Informed by CAPE evaluation activities and our discussions with stakeholders, we mapped the opportunity and motivation constructs into the ‘catalysts’ domain of the original framework. Opportunity is an attribute of the system that can facilitate engagement. It may be a tangible factor such as the availability of seed funding, or a perceived social opportunity such as institutional support for engagement activities. Opportunity can act at the macro level of systems and organisational structures. Motivation acts at the micro level, deriving from an individual’s mental processes that stimulate and direct their behaviours; in this case, taking part in academic-policy engagement actions. The COM-B model distinguishes between reflective motivation through conscious planning and automatic motivation that may be instinctive or affective [ 26 ].

We presented an early application of the COM-B model to catalysts for engagement at an academic conference, enabling an informal exploration of attendees’ subjective views on the clarity and appropriateness, when developing the framework. This application introduces possibilities for intervention development and support by highlighting ‘opportunities’ and ‘motivations’ as key catalysts in the modified framework.

Within the ‘capacity’ domain, we retained the original levels of individuals, organisations and systems. We introduced individual capability as a construct from the COM-B model, describing knowledge, skills and abilities to generate behaviour change as a precursor of academic-policy engagement. This reframing extends the applicability to HEI staff as well as policy professionals. It brings attention to different starting conditions for individuals, such as capabilities developed through previous experience, which can link with social opportunity (for example, through training or support) as a catalyst.
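To make the mapping explicit, the COM-B constructs as used here could be recorded in a minimal data structure. The sketch below is an illustration of the mapping described above, with invented example values; it is not part of the published framework.

```python
# Minimal sketch of the COM-B constructs as mapped into the modified
# framework's domains (illustrative only; not part of the published framework).
from dataclasses import dataclass

@dataclass
class ComBAssessment:
    capability: str   # knowledge, skills, abilities -> 'capacity' domain
    opportunity: str  # system-level enabler, e.g. seed funding -> 'catalysts' domain
    motivation: str   # reflective or automatic -> 'catalysts' domain

example = ComBAssessment(
    capability="prior experience of policy placements",
    opportunity="institutional support for engagement activities",
    motivation="reflective: desire for real-world research impact",
)
```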

Engagement actions

We identified a need to modify the original domain ‘engagement actions’ to extend the focus beyond the use of research. We added three categories of engagement actions described by Best and Holmes [27]: linear, relational, and systems. These categories were further specified through a systematic mapping of international organisations’ academic-policy engagement activities [5]. This framework modification expands the domain to encompass: (i) linear ‘push’ of evidence from academia or ‘pull’ of evidence into policy agencies; (ii) relational approaches focused on academic-policy-maker collaboration; and (iii) systems strategies to facilitate engagement, for example through strategic leadership, rewards or incentives [5].

We retained the elements in the original framework’s ‘outcomes’ domain (instrumental, tactical, conceptual and imposed), which we found could apply to outcomes of engagement as well as research use. For example, discussions between a policy professional and a range of academics could lead to a conceptual outcome by considering an issue through different disciplinary lenses. We expanded these elements by drawing on literature on engagement outcomes [28] and through sense-checking with stakeholders in CAPE. We added capacity-building (changes to skills and expertise), connectivity (changes to the number and quality of relationships), and changes in organisational culture or attitudes towards engagement.

Impacts and sustainability

The original framework contained endpoints described as ‘Better health system and health outcomes’ and ‘Research-informed health policy and policy documents’. For modification beyond health contexts, and to encompass broader intentions of academic-policy engagement, we replaced these elements with a new domain of ‘Impacts and sustainability’. This domain captures the continued activities required for the achievement of desirable outcomes [29]. The modification allows consideration of sustainability in relation to previous stages of engagement interventions, through the identification of beneficial effects that are sustained (or not), in which ways, and for whom. Following Borst et al. [30], we propose a shift away from the expectation that ‘sustainability’ will be a fixed endpoint. Instead, we emphasise the maintenance work needed over time to sustain productive engagement.

Influences and facilitators

We modified the overarching ‘Policy influences’ (such as public opinion and media) in the original framework, to align with factors influencing academic-policy engagement beyond policy agencies’ use of research. We included influences at the level of the individual (for example, individual moral discretion [ 31 ]), the organisation (for example, managerial practices [ 31 ]) and the system (for example, career incentives [ 32 ]). Each of these processes takes place in the broader context of social, policy and financial environments (that is, potential sources of funding for engagement actions) [ 29 ].

We modified the domain ‘Reservoir of relevant and reliable research’ underpinning the original framework, replacing it with ‘Reservoir of people skills’, to emphasise intangible facilitatory work at the engagement interface, in place of concrete research outputs. We used the ‘Promoting Action on Research Implementation in Health Services’ (PARiHS) framework [33, 34], which gives explicit consideration to facilitation mechanisms for researchers and policy-makers [13]. Here, facilitation expertise includes mechanisms that focus on particular goals (task-oriented facilitation) or enable changes in ways of working (holistic-oriented facilitation). Task-oriented facilitation skills might include, for example, the provision of contacts, practical help or project management skills, while holistic-oriented facilitation involves building and sustaining partnerships or supporting the development of skills across a range of capabilities. These conceptualisations aligned with our consultations with facilitators of engagement in CAPE. We further extended them to include aspects identified in our evaluation activities: strategic planning, contextual awareness and entrepreneurial orientation.

Piloting and refining the modified framework through stakeholder engagement

We piloted an early version of the modified framework to develop a survey for all CAPE programme participants. During this pilot stage, we sought feedback from the CAPE delivery team members across HEI and policy contexts in England. CAPE delivery team members are based at five collaborating universities, with partners in the Parliamentary Office of Science and Technology (POST), the Government Office for Science (GO-Science) and Nesta (a British foundation that supports innovation). The HEI members include academics and professional services knowledge mobilisation staff, responsible for leading and coordinating CAPE activities. The delivery team comprised approximately 15–20 individuals (with some fluctuation according to individuals’ availability).

We assessed appropriateness and utility, refined terminology, added domain elements and explored nuances. For example, stakeholders considered the multi-layered possibilities within the domain ‘capacity’, where some HEI or policy departments may demonstrate a belief that it is important to use research in policy, but this might not be the perception of the organisation as a whole. We also sought stakeholders’ views on the utility of the new domains: for example, the identification of facilitator expertise (such as acting as a knowledge broker or intermediary; providing training, advice or guidance; facilitating engagement opportunities; and creating engagement programmes) and the sustainability of engagement, which could be conceptualised at multiple levels: personally, in processes or through systems.

Testing against criteria for a useful action framework

The modified framework fulfils the properties of a useful action framework [ 20 ]:

It has a clearly articulated purpose: development and evaluation of academic-policy engagement interventions through linear, relational and/or systems approaches. It has identified loci for change at the level of the individual, the organisation or the system.

It has been informed by existing understandings, including conceptual work of the original SPIRIT framework, conceptual models identified from the literature, published empirical findings, understandings from consultation with stakeholders, and evaluation activities in CAPE.

It can be applied to the development, implementation and evaluation of targeted academic-policy engagement actions, the selection of points for intervention and identification of potential outcomes, including the work of sustaining them and unanticipated consequences.

It provides a structure to build knowledge by guiding the generation of hypotheses about mechanisms of action in academic-policy engagement interventions, or by adapting the framework further through application in practice.

The proliferation of frameworks to articulate processes of research translation reveals a need for their adaptation when applied in specific contexts. The majority of models in implementation science relate to translation of research into practice. By contrast, our focus was on engagement between academia and policy. There are a growing number of academic-policy engagement interventions but a lack of published evaluations [ 10 ].

Our framework modification provides an exemplar for others who are considering how to adapt existing conceptual frameworks to address new or expanded purposes. Field et al. identified the multiple, idiosyncratic ways that the Knowledge to Action Framework has been applied in practice, demonstrating its ‘informal’ adaptability to different healthcare settings and topics [35]. Others have reported on specific processes for framework refinement or extension. Wiltsey Stirman et al. adapted a framework that characterised forms of intervention modification, using a “pragmatic, multifaceted approach” ([36], p. 2). The authors later used the modified version as a foundation to build a further framework to encompass implementation strategies in a range of settings [21]. Ouimet et al. took the approach of borrowing from a different disciplinary field for framework adaptation, using a model of absorptive capacity from management science to develop a conceptual framework for civil servants’ absorption of research knowledge [37].

We also took the approach of “adapting the tools we think with” ([38], p. 305) during our evaluation activities on the CAPE programme. Our conceptual modifications align with the literature on motivation and entrepreneurial orientation in determining policy-makers’ and researchers’ intentions to carry out engagement in addition to their ‘usual’ roles [39, 40]. Our framework supports academic-policy engagement endeavours by providing a structure for approaches beyond the linear transfer of information, emphasising the role of multidirectional relational activities and the importance of their facilitation and maintenance. The framework emphasises the relationship between individuals’ and groups’ actions and the social contexts in which these are embedded. It offers additional value by capturing the organisational and systems-level factors that influence evidence-informed policymaking, incorporating the dynamic features of the contexts shaping engagement and research use.

Conclusions

Our modifications extend the original SPIRIT framework’s focus on policy agencies’ use of research, to encompass dynamic academic-policy engagement at the levels of individuals, organisations and systems. Informed by the knowledge and experiences of policy professionals, HEI staff and knowledge mobilisers, it is designed to be meaningful and accessible for people working across varied contexts and functions in the evidence-policy ecosystem. It has potential applications in how academic-policy engagement interventions might be developed, evaluated, facilitated and improved, and it fulfils Redman et al.’s criteria as a useful action framework [ 20 ].

We are testing the ‘SPIRIT-Modified for Engagement’ framework (SPIRIT-ME) through our ongoing evaluation of academic-policy engagement activities. Further empirical research is needed to explore how the framework may capture ‘additionality’, that is, to identify what is achieved through engagement actions in addition to what would have happened anyway, including long-term changes in strategic behaviours or capabilities [ 41 , 42 , 43 ]. Application of the modified framework in practice will highlight its strengths and limitations, to inform further iterative development and adaptation.

Availability of data and materials

Not applicable.

Stewart R, Dayal H, Langer L, van Rooyen C. Transforming evidence for policy: do we have the evidence generation house in order? Humanit Soc Sci Commun. 2022;9(1):1–5.


Sanderson I. Complexity, ‘practical rationality’ and evidence-based policy making. Policy Polit. 2006;34(1):115–32.

Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gülmezoglu M, et al. Using Qualitative Evidence in Decision Making for Health and Social Interventions: An Approach to Assess Confidence in Findings from Qualitative Evidence Syntheses (GRADE-CERQual). PLOS Med. 2015;12(10):e1001895.


Bonell C, Meiksin R, Mays N, Petticrew M, McKee M. Defending evidence informed policy making from ideological attack. BMJ. 2018;10(362):k3827.

Hopkins A, Oliver K, Boaz A, Guillot-Wright S, Cairney P. Are research-policy engagement activities informed by policy theory and evidence? 7 challenges to the UK impact agenda. Policy Des Pract. 2021;4(3):341–56.


Head BW. Toward More “Evidence-Informed” Policy Making? Public Adm Rev. 2016;76(3):472–84.

Walker LA, Lawrence NS, Chambers CD, Wood M, Barnett J, Durrant H, et al. Supporting evidence-informed policy and scrutiny: A consultation of UK research professionals. PLoS ONE. 2019;14(3):e0214136.


Graham ID, Tetroe J, and the KT Theories Research Group. Planned action theories. In: Knowledge Translation in Health Care. John Wiley and Sons, Ltd; 2013. p. 277–87. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1002/9781118413555.ch26 Cited 2023 Nov 1

Davies HT, Powell AE, Nutley SM. Mobilising knowledge to improve UK health care: learning from other countries and other sectors – a multimethod mapping study. Southampton (UK): NIHR Journals Library; 2015. (Health Services and Delivery Research). Available from: http://www.ncbi.nlm.nih.gov/books/NBK299400/ Cited 2023 Nov 1

Oliver K, Hopkins A, Boaz A, Guillot-Wright S, Cairney P. What works to promote research-policy engagement? Evid Policy. 2022;18(4):691–713.

Nelson JP, Lindsay S, Bozeman B. The last 20 years of empirical research on government utilization of academic social science research: a state-of-the-art literature review. Adm Soc. 2023;28:00953997231172923.

Bell D. Technology, nature and society: the vicissitudes of three world views and the confusion of realms. Am Sch. 1973;42:385–404.

Milat AJ, Li B. Narrative review of frameworks for translating research evidence into policy and practice. Public Health Res Pract. 2017; Available from: https://apo.org.au/sites/default/files/resource-files/2017-02/apo-nid74420.pdf Cited 2023 Nov 1

Meyers DC, Durlak JA, Wandersman A. The quality implementation framework: a synthesis of critical steps in the implementation process. Am J Community Psychol. 2012;50(3–4):462–80.


Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7(1):37.

Glasgow RE, Battaglia C, McCreight M, Ayele RA, Rabin BA. Making implementation science more rapid: use of the RE-AIM framework for mid-course adaptations across five health services research projects in the veterans health administration. Front Public Health. 2020;8. Available from: https://www.frontiersin.org/articles/10.3389/fpubh.2020.00194 Cited 2023 Jun 13

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4406164/ Cited 2020 May 4

Sheth A, Sinfield JV. An analytical framework to compare innovation strategies and identify simple rules. Technovation. 2022;1(115):102534.

Birken SA, Powell BJ, Shea CM, Haines ER, Alexis Kirk M, Leeman J, et al. Criteria for selecting implementation science theories and frameworks: results from an international survey. Implement Sci. 2017;12(1):124.

Redman S, Turner T, Davies H, Williamson A, Haynes A, Brennan S, et al. The SPIRIT Action Framework: A structured approach to selecting and testing strategies to increase the use of research in policy. Soc Sci Med. 2015;136:147–55.

Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci. 2021;16(1):36.

CAPE: Capabilities in Academic Policy Engagement. 2021. Available from: https://www.cape.ac.uk/ Cited 2021 Aug 3

CIPHER Investigators. Supporting policy in health with research: an intervention trial (SPIRIT)—protocol for a stepped wedge trial. BMJ Open. 2014;4(7):e005293.

Spaapen J, Van Drooge L. Introducing ‘productive interactions’ in social impact assessment. Res Eval. 2011;20(3):211–8.

Williams C, Pettman T, Goodwin-Smith I, Tefera YM, Hanifie S, Baldock K. Experiences of research-policy engagement in policymaking processes. Public Health Res Pract. 2023. Online early publication. https://doi.org/10.17061/phrp33232308 .

Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(1):42.

Best A, Holmes B. Systems thinking, knowledge and action: towards better models and methods. Evid Policy J Res Debate Pract. 2010;6(2):145–59.

Edwards DM, Meagher LR. A framework to evaluate the impacts of research on policy and practice: A forestry pilot study. For Policy Econ. 2020;1(114):101975.

Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. 2011;101(11):2059–67.

Borst RAJ, Wehrens R, Bal R, Kok MO. From sustainability to sustaining work: What do actors do to sustain knowledge translation platforms? Soc Sci Med. 2022;1(296):114735.

Zacka B. When the state meets the street: public service and moral agency. Harvard University Press; 2017. Available from: https://books.google.co.uk/books?hl=en&lr=&id=3KdFDwAAQBAJ&oi=fnd&pg=PP1&dq=zacka+when+the+street&ots=x93YEHPKhl&sig=9yXKlQiFZ0XblHrbYKzvAMwNWT4 Cited 2023 Nov 28

Torrance H. The research excellence framework in the United Kingdom: processes, consequences, and incentives to engage. Qual Inq. 2020;26(7):771–9.

Rycroft-Malone J. The PARIHS framework—a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19(4):297–304.

Stetler CB, Damschroder LJ, Helfrich CD, Hagedorn HJ. A guide for applying a revised version of the PARIHS framework for implementation. Implement Sci. 2011;6(1):99.

Field B, Booth A, Ilott I, Gerrish K. Using the knowledge to action framework in practice: a citation analysis and systematic review. Implement Sci. 2014;9(1):172.

Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58.

Ouimet M, Landry R, Ziam S, Bédard PO. The absorption of research knowledge by public civil servants. Evid Policy. 2009;5(4):331–50.

Martin D, Spink MJ, Pereira PPG. Multiple bodies, political ontologies and the logic of care: an interview with Annemarie Mol. Interface - Comun Saúde Educ. 2018;22:295–305.

Sajadi HS, Majdzadeh R, Ehsani-Chimeh E, Yazdizadeh B, Nikooee S, Pourabbasi A, et al. Policy options to increase motivation for improving evidence-informed health policy-making in Iran. Health Res Policy Syst. 2021;19(1):91.

Athreye S, Sengupta A, Odetunde OJ. Academic entrepreneurial engagement with weak institutional support: roles of motivation, intention and perceptions. Stud High Educ. 2023;48(5):683–94.

Bamford D, Reid I, Forrester P, Dehe B, Bamford J, Papalexi M. An empirical investigation into UK university–industry collaboration: the development of an impact framework. J Technol Transf. 2023 Nov 13; Available from: https://doi.org/10.1007/s10961-023-10043-9 Cited 2023 Dec 20

McPherson AH, McDonald SM. Measuring the outcomes and impacts of innovation interventions assessing the role of additionality. Int J Technol Policy Manag. 2010;10(1–2):137–56.

Hind J. Additionality: a useful way to construct the counterfactual qualitatively? Eval J Australas. 2010;10(1):28–35.


Acknowledgements

We are very grateful to the CAPE Programme Delivery Group members for many discussions throughout this work. Our thanks also go to the Sax Institute, Australia (where the original SPIRIT framework was developed), for reviewing and providing helpful feedback on the article. We also thank our reviewers, who made very constructive suggestions that have strengthened and clarified our article.

Funding

The evaluation of the CAPE programme, referred to in this report, was funded by Research England. The funding body had no role in the design of the study, analysis, interpretation or writing of the manuscript.

Author information

Authors and affiliations

Department of Health Services Research and Policy, Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, 15-17 Tavistock Place, Kings Cross, London, WC1H 9SH, UK

Petra Mäkelä & Kathryn Oliver

Health and Social Care Workforce Research Unit, The Policy Institute, Virginia Woolf Building, Kings College London, 22 Kingsway, London, WC2B 6LE, UK

Annette Boaz


Contributions

PM conceptualised the modification of the framework reported in this work. All authors made substantial contributions to the design of the work. PM drafted the initial manuscript. AB and KO contributed to revisions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Petra Mäkelä.

Ethics declarations

Ethics approval and consent to participate

Ethics approval was granted for the overarching CAPE evaluation by the London School of Hygiene and Tropical Medicine Research Ethics Committee (reference 26347).

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Mäkelä, P., Boaz, A. & Oliver, K. A modified action framework to develop and evaluate academic-policy engagement interventions. Implementation Science 19, 31 (2024). https://doi.org/10.1186/s13012-024-01359-7


Received: 09 January 2024

Accepted: 20 March 2024

Published: 12 April 2024

DOI: https://doi.org/10.1186/s13012-024-01359-7


Keywords

  • Evidence-informed policy
  • Academic-policy engagement
  • Framework modification



Alternative routes into clinical research: a guide for early career doctors

  • Phillip LR Nicolson , consultant haematologist and associate professor of cardiovascular science 1 2 3 ,
  • Martha Belete , registrar in anaesthetics 4 5 ,
  • Rebecca Hawes , clinical fellow in anaesthetics 5 6 ,
  • Nicole Fowler , haematology clinical research fellow 7 ,
  • Cheng Hock Toh , professor of haematology and consultant haematologist 8 9
  • 1 Institute of Cardiovascular Sciences, University of Birmingham, UK
  • 2 Department of Haemostasis, Liaison Haematology and Transfusion, University Hospitals Birmingham NHS Foundation Trust, Birmingham
  • 3 HaemSTAR, UK
  • 4 Department of Anaesthesia, Plymouth Hospitals NHS Trust, Plymouth, UK
  • 5 Research and Audit Federation of Trainees, UK
  • 6 Department of Anaesthesia, The Rotherham NHS Foundation Trust, Rotherham Hospital, Rotherham
  • 7 Department of Haematology, Royal Cornwall Hospitals NHS Trust, Treliske, Truro
  • 8 Liverpool University Hospitals NHS Foundation Trust, Prescott Street, Liverpool
  • 9 Institute of Infection, Veterinary and Ecological Sciences, University of Liverpool
  • Correspondence to P Nicolson, C H Toh: p.nicolson@bham.ac.uk; c.h.toh@liverpool.ac.uk

Working in clinical research alongside clinical practice can make for a rewarding and worthwhile career. 1 2 3 Building research into a clinical career starts with research training for early and mid-career doctors. Traditional research training typically involves a dedicated period within an integrated clinical academic training programme or as part of an externally funded MD or PhD degree. Informal training opportunities, such as journal clubs and principal investigator (PI) mentorship, are available (box 1), but in recent years several other initiatives have launched in the UK, meaning there are more ways to obtain research experience and embark on a career in clinical research.

Examples of in-person and online research training opportunities

These are available either informally or formally, free of charge or paid, and via local employing hospital trusts, allied health organisations, royal colleges, or universities

Acute medicine

No national trainee research network

Anaesthesia

Research and Audit Federation of Trainees (RAFT). www.raftrainees.org

Cardiothoracic surgery

No national trainee-specific research network. National research network does exist: Cardiothoracic Interdisciplinary Research Network (CIRN). www.scts.org/professionals/research/cirn.aspx

Emergency medicine

Trainee Emergency Medicine Research Network (TERN). www.ternresearch.co.uk

Ear, nose, and throat

UK ENT Trainee Research Network (INTEGRATE). www.entintegrate.co.uk

Gastroenterology

No national trainee research network. Many regional trainee research networks

General practice

No national trainee-specific research network, although national research networks exist: Society for Academic Primary Care (SAPC) and Primary Care Academic Collaborative (PACT). www.sapc.ac.uk ; www.gppact.org

General surgery

Student Audit and Research in Surgery (STARSurg). www.starsurg.org . Many regional trainee research networks

Geriatric medicine

Geriatric Medicine Research Collaborative (GeMRC). www.gemresearchuk.com

Haematology (non-malignant)

Haematology Specialty Training Audit and Research (HaemSTAR). www.haemstar.org

Haematology (malignant)

Hepatology

Trainee Collaborative for Research and Audit in Hepatology UK (ToRcH-UK). www.twitter.com/uk_torch

Histopathology

Pathsoc Research Trainee Initiative (PARTI). www.pathsoc.org/parti.aspx

Intensive care medicine

Trainee Research in Intensive Care Network (TRIC). www.tricnetwork.co.uk

Internal medicine

No trainee-led research network. www.rcp.ac.uk/trainee-research-collaboratives

Interventional radiology

UK National Interventional Radiology Trainee Research (UNITE) Collaborative. https://www.unitecollaborative.com

Maxillofacial surgery

Maxillofacial Trainee Research Collaborative (MTReC). www.maxfaxtrainee.co.uk/

Nephrology

UK & Ireland Renal Trainee Network (NEPHwork). www.ukkidney.org/audit-research/projects/nephwork

Neurosurgery

British Neurosurgical Trainee Research Collaborative (BNTRC). www.bntrc.org.uk

Obstetrics and gynaecology

UK Audit and Research Collaborative in Obstetrics and Gynaecology (UKARCOG). www.ukarcog.org

Oncology

The National Oncology Trainee Collaborative for Healthcare Research (NOTCH). www.uknotch.com

Breast Cancer Trainee Research Collaborative Group (BCTRCG). https://bctrcguk.wixsite.com/bctrcg

Ophthalmology

The Ophthalmology Clinical Trials Network (OCTN). www.ophthalmologytrials.net

Paediatrics

RCPCH Trainee Research Network. www.rcpch.ac.uk/resources/rcpch-trainee-research-network

Paediatric anaesthesia

Paediatric Anaesthesia Trainee Research Network (PATRN). www.apagbi.org.uk/education-and-training/trainee-information/research-network-patrn

Paediatric haematology

Paediatric Haematology Trainee Research Network (PHTN). No website

Paediatric surgery

Paediatric Surgical Trainees Research Network (PSTRN). www.pstrnuk.org

Pain medicine

Network of Pain Trainees Interested in Research & Audit (PAIN-TRAIN). www.paintrainuk.com

Palliative care

UK Palliative Care Trainee Research Collaborative (UKPRC). www.twitter.com/uk_prc

Plastic surgery

Reconstructive Surgery Trials Network (RSTN). www.reconstructivesurgerytrials.net/trainees/

Pre-hospital medicine

Pre-Hospital Trainee Operated Research Network (PHOTON). www.facebook.com/PHOTONPHEM

Psychiatry

Information from Royal College of Psychiatrists. www.rcpsych.ac.uk/members/your-faculties/academic-psychiatry/research

Radiology

Radiology Academic Network for Trainees (RADIANT). www.radiantuk.com

Respiratory

Integrated Respiratory Research collaborative (INSPIRE). www.inspirerespiratory.co.uk

Urology

British Urology Researchers in Surgical Training (BURST). www.bursturology.com

Vascular surgery

Vascular & Endovascular Research Network (VERN). www.vascular-research.net

This article outlines these formal but “non-traditional” routes available to early and mid-career doctors that can successfully increase research involvement and enable research-active careers.

Trainee research networks

Trainee research networks are a recent phenomenon within most medical specialties. They are formalised regional or national groups led by early and mid-career doctors who work together to perform clinical research and create research training opportunities. The first of these groups started in the early 2010s within anaesthetics, but such networks now exist for nearly every specialty (box 2). 4 Trainee research networks provide research training with the aim of increasing doctors’ future research involvement. 5

A non-exhaustive list of UK national trainee-led research networks*

Research training opportunities

Mentorship by PIs at local hospital

Taking on a formal role as a sub-investigator

Journal clubs

Trainee representation on regional/national NIHR specialty group

API Scheme: https://www.nihr.ac.uk/health-and-care-professionals/training/associate-principal-investigator-scheme.htm

eLearning courses available at https://learn.nihr.ac.uk (free): Good clinical practice, fundamentals of clinical research delivery, informed consent, leadership, future of health, central portfolio management system.

eLearning courses available from the Royal College of Physicians. Research in Practice programme (free). www.rcplondon.ac.uk

eLearning courses available from the Medical Research Council (free). https://bygsystems.net/mrcrsc-lms/

eLearning courses available from Nature (both free and for variable cost via employing institution): many and varied including research integrity and publication ethics, persuasive grant writing, publishing a research paper. https://masterclasses.nature.com

University courses. Examples include novel clinical trial design in translational medicine from the University of Cambridge ( https://advanceonline.cam.ac.uk/courses/ ) or introduction to randomised controlled trials in healthcare from the University of Birmingham ( https://www.birmingham.ac.uk/university/colleges/mds/cpd/ )

*Limited to those with formal websites and/or active Twitter accounts. Correct as of 5 January 2024. For regional trainee-led specialty research networks, see www.rcp.ac.uk/trainee-research-collaboratives for medical specialties, www.asit.org/resources/trainee-research-collaboratives/national-trainee-research-collaboratives/res1137 for surgical specialties, and www.rcoa.ac.uk/research/research-bodies/trainee-research-networks for anaesthetics.

Networks vary widely in structure and function. Most have senior mentorship to guide personal development and career trajectory. Projects are usually highly collaborative and include doctors and allied healthcare professionals working together.

Observational studies and large-scale audits are common projects because they can be delivered rapidly and with minimal funding. Some networks do, however, carry out interventional research. The benefits of increasing interventional research studies are self-evident, but observational projects are also important: they provide data useful for hypothesis generation, defining clinical equipoise, and establishing incidence and event rates, all of which are necessary steps in the development of randomised controlled studies.

These networks offer a supportive learning environment and research experience, and can match experience with expectations and responsibilities. Early and mid-career doctors are given opportunities to be involved and receive training in research at every phase from inception to publication. This develops experience in research methodology such as statistics, scientific writing, and peer review. As well as research skills training, an important reward for involvement in a study is manuscript authorship. Many groups give “citable collaborator” status to all project contributors, whatever their input. 6 7 This recognises the essential role everyone plays in the delivery of whole projects, counts towards publication metrics, and is important for future job applications.

Case study—Pip Nicolson (HaemSTAR)

Haematology Specialist Training, Audit and Research (HaemSTAR) is a trainee research network founded because of a lack of principal investigator training and clinical trial activity in non-malignant haematology. It has led and supported national audits and research projects in various subspecialty areas such as immune thrombocytopenia, thrombotic thrombocytopenic purpura, venous thrombosis, and transfusion. 8 9 10 Through involvement in this network as a registrar, I have acted as a sub-investigator and supported the principal investigator on observational and interventional portfolio-adopted studies by the National Institute for Health and Care Research. These experiences gave me valuable insight into the national and local processes involved in research delivery. I was introduced to national leaders in non-malignant haematology who not only provided mentorship and advice on career development, but also gave me opportunities to lead national audits and become involved in HaemSTAR’s committee. 10 11 These experiences in leadership have increased my confidence in management situations as I have transitioned to being a consultant, and have given me skills in balancing clinical and academic roles. Importantly, I have also developed long term friendships with peers across the country as a result of my involvement in HaemSTAR.

Associate Principal Investigator scheme

The Associate Principal Investigator (API) scheme is a training programme run by NIHR to develop research skills and contribute to clinical study delivery at a local level. It is available throughout England, Scotland, Wales, and Northern Ireland for NIHR portfolio-adopted studies. The programme runs for six months and, upon completion, APIs receive formal recognition endorsed by the NIHR and a large number of royal colleges. The scheme is free and open to medical and allied healthcare professionals at all career grades. It is designed to allow those who would not normally take part in clinical research to do so under the mentorship of a local PI. Currently there are more than 1500 accredited APIs and over 600 affiliated studies across 28 specialties. 12 It is a good way to show evidence of training and involvement in research and get more involved in research conduct. APIs have been shown to increase patient recruitment and most people completing the scheme continue to be involved in research. 12 13

Case study—Rebecca Hawes

I completed the API scheme as a senior house officer in 2021. A local PI introduced me to the Quality of Recovery after Obstetric Anaesthesia NIHR portfolio study, 14 which I saw as a training opportunity and useful experience ahead of specialist training applications. It was easy to apply for and straightforward to navigate. I was guided through the six month process in a step-by-step manner and completed eLearning modules and video based training on fundamental aspects of running research projects. All this training was evidenced on the online API platform and I had monthly supervision meetings with the PI and wider research team. As well as the experience of patient recruitment and data collection, other important aspects of training were study set-up and sponsor communications. Key to my successful API scheme was having a supportive and enthusiastic PI and developing good organisational skills. I really enjoyed the experience, and I have since done more research and have become a committee member on a national trainee research network in anaesthesia called RAFT (Research and Audit Federation of Trainees). I’ve seen great enthusiasm among anaesthetists to take part in the API scheme, with over 150 signing up to the most recent RAFT national research project.

Clinical research posts

Dedicated clinical research posts (sometimes termed “clinical research fellow” posts) allow clinicians to explore and develop research skills without committing to a formal academic pathway. They can be undertaken at any stage of a medical career but are generally performed between training posts, or during them with permission from local training committees to go temporarily “out of programme.” These positions vary widely in how they are advertised and funded, and in the balance between research and clinical time. Look out for opportunities with royal colleges, local and national research networks, and on the NHS Jobs website. Research fellowships are a good way to broaden skills that will have long term impact across one’s clinical career.

Case study—Nicole Fowler

After completing the Foundation Programme, I took up a 12 month clinical trials fellow position. This gave me early career exposure to clinical research and allowed me to act as a sub-investigator in a range of clinical trials. I received practical experience in all stages of clinical research while retaining a patient facing role, which included obtaining consent and reviewing patients at all subsequent visits until study completion. Many of the skills I developed in this post, such as good organisation and effective teamwork, are transferable to all areas of medicine. I have thoroughly enjoyed the experience and it is something I hope to talk about at interview as it is an effective way of showing commitment to a specialty. Furthermore, having a dedicated research doctor has been beneficial to my department in increasing patient involvement in research.

Acknowledgments

We would like to thank Holly Speight and Clare Shaw from the NIHR for information on the API scheme.

*These authors contributed equally to this work

Patient and public involvement: No patients were directly involved in the creation of this article.

PLRN, MB, and CHT conceived the article and are guarantors. All authors wrote and edited the manuscript.

Competing interests: PLRN was the chair of HaemSTAR from 2017 to 2023. MB is the current chair of the Research and Audit Federation of Trainees (RAFT). RH is the current secretary of RAFT. CHT conceived HaemSTAR.

Provenance and peer review: Commissioned; externally peer reviewed.

RAFT. The start of RAFT. https://www.raftrainees.org/about

National Institute for Health and Care Research. Associate Principal Investigator (PI) Scheme. 2023. https://www.nihr.ac.uk/health-and-care-professionals/career-development/associate-principal-investigator-scheme.htm


Now More Than Ever: Reflections on the State and Importance of Peer Review

The process of peer review is a long-upheld ritual practiced across academic disciplines, intended to enforce standards of scholarship and rigor in what work is reported, and what gets to count as knowledge. As John Saultz noted, peer review is the “epistemological foundation standing between authors and readers of scientific papers.” 1 It is certainly a time-consuming effort on the part of reviewers, and when performed specifically for scholarly journals, it is generally performed without compensation. As a recent study by Anderson and Ledford demonstrated, however, a world without peer review would be harmful; the rapid diffusion of withdrawn or refuted hypotheses infiltrating the social and professional world could have life-and-death implications. 2 The purpose of this editorial is to appeal to each reader with the importance of serving as a peer reviewer.

Before highlighting its virtues, however, let us first frankly acknowledge that peer review is an imperfect process in need of improvement. It is rare that an active scholar or practitioner in a field has spare time to respond to voluntary and unplanned invitations to review manuscripts. Additionally, the peer review process has been criticized from a variety of perspectives over the years, 3, 4 for being too obstructionist; for preserving the status quo in academia and science, enforcing existing (and fraught) hierarchies and prejudices; for slowing the dissemination of new knowledge and scholarship; and even for being essentially flawed, sometimes allowing deceitful results into final publications. Finally, we must also acknowledge that the traditional peer review process has no immunity to systemic racism and inequality; more must be done to realize greater diversity of perspectives in research, and this includes peer review.

In spite of these flaws, the peer review process offers tremendous benefits, as many have noted. 3 – 7 To the list of recognized benefits, we add our own observation: the act of reviewing and considering a raw manuscript is instructive to each of us as writers. Considering the work of another in prepublished form affords the opportunity to consider the perspective of the reader—what information is needed, and what is superfluous; what descriptive styles are effective, as opposed to occluding; and so forth. We often tell our learners that one of the best ways of improving their own writing is to critically appraise that of others, and to recognize their own habits and assumptions that produce the same mistakes. Reviewing also pulls the curtain back, if ever so slightly, on the unspoken or emic view of a discipline, replete with implicit meanings understood by veteran practitioners. Participating in peer review helps reveal the culture of the discipline to the observant scholar. In short, your own research experiences will improve if you regularly allow yourself the editorial view of unpublished manuscripts. To gain this beneficial experience, there is no barrier; you need only jump in and do it.

There are also broader duties to the field that transcend the benefit to the individual. From an ethical perspective, it can be argued that the person holding expertise in a field is bound to apply that expertise in the service of the common good. Furthermore, individuals who benefit from publishing peer-reviewed manuscripts should consider their own obligation to reciprocate.

For journal editors, a major challenge is the slow recruitment of voluntary peer reviewers for manuscripts. We face competition, as new journals are constantly coming into existence. Consequently, many in academia face an influx of invitations to serve as reviewers, and editors personally face the same influx from other journals, even as we send out our own. When it takes a long time for an author to receive a decision on a submitted manuscript, it is frequently the result of a backlog in the reviewer recruitment process. The crisis in peer review is real, but you can help.

In short, there are both practical and philosophical reasons to actively participate in the peer review process when invited. It keeps the journals you read and in which you publish healthy, and the process can be instructive for the reviewer. Active and vigorous peer review keeps the fields in which you work intellectually honest and rigorous, and the act of serving as a reviewer returns the service from which authors have benefitted. Finally, every person who accepts an invitation to review a manuscript, and goes on to submit a high-quality peer review, keeps the peer review process in motion. That is frankly good for all of us.

Acknowledgments

The authors thank Kristen Bene, PhD, Amy Lee, MD, and Andrea Wendling, MD, for their suggestions to improve the manuscript.

University Faculty and Students Release Peer-Reviewed LASER Journal

Posted in: Faculty and Student Research, Publications

LASER Journal covers for volumes one and two

Faculty and students in the Montclair Mathematics Department recently started a new scholarly journal! The LASER Journal, standing for Linking Art and Science through Education and Research, is an open-access, peer-reviewed, online journal dedicated to research activities at the interface of mathematics and the arts. Our very own Drs. Bogdan Nita and Ashwin Vaidya are the editors-in-chief, and several other faculty and students at MSU serve on the editorial board. The LASER Journal provides a forum for mathematical, scientific and artistic discussions about the interconnections between math and art. Papers submitted to the journal should focus on the aesthetics of mathematical patterns in our world, with art being broadly interpreted to include music, the fine arts, performing arts or anything else, provided it is well articulated. Artists, scientists and mathematicians are all equally welcome to submit their work. We are also very interested in hearing from educators about the value of such interdisciplinary thinking and its impact on learning. Research articles, case studies and book reviews are acceptable. Students are welcome to co-author papers with their mentors; however, we request that the mentors serve as the corresponding authors for such collaborative submissions.

LASER is currently accepting submissions for Issue 1 of the second volume (2024).

