
Systematic Review Service: Classes, Consultations, Software, and Databases

Interested in conducting a literature review? There are many types (narrative, rapid, scoping, and systematic), each with its own methodology. The NIH Library’s Systematic Review Service is available to guide you through the entire process. We offer classes, consultations, and resources to help you with selecting the most appropriate type of review, writing the protocol, conducting the search, managing the results, and writing and publishing the review.

Classes

The NIH Library is offering a series of webinars on conducting various types of reviews. Click on the links below for more information about the classes and to register.

  • Introduction to the Systematic Review Process   May 3, 12:00‒1:00 PM
  • Developing and Publishing Your Protocol   May 4, 11:00 AM–12:00 PM
  • Selecting the Most Appropriate Type of Literature Review for Your Research   May 5, 10:00–11:00 AM
  • Developing the Research Question and Conducting the Literature Search   May 9, 12:00‒1:00 PM
  • Introduction to Scoping Reviews   May 10, 1:00‒2:00 PM
  • Establishing Your Eligibility Criteria and Conducting the Screening and Risk of Bias Steps in Your Review   May 11, 12:00‒1:00 PM
  • Exploring the Cochrane Library: Systematic Reviews, Clinical Trials, and More   May 18, 11:00 AM‒12:00 PM
  • Collecting and Cleaning Data for Your Review   May 18, 12:00‒1:30 PM
  • Gray Literature: Searching Beyond the Databases   May 19, 10:00‒11:00 AM
  • Writing and Publishing Your Review   May 22, 12:00‒1:00 PM
  • Introduction to Rapid Reviews   May 23, 1:00‒2:00 PM
  • Using Covidence for Conducting Your Review   May 24, 12:00‒1:00 PM
  • Introduction to Umbrella Reviews: Conducting a Review of Reviews   May 25, 1:00‒2:00 PM
  • Meta-Analysis: Quantifying a Systematic Review   June 6, 1:00–3:00 PM

Consultations

NIH Librarians are available to help you select the most appropriate type of review for your research project, identify and complete the steps of your review, conduct the literature search, and edit the final manuscript. Schedule a consultation to get started today.

Covidence: Systematic Review Software

The NIH Library provides access to Covidence, an online tool for managing and streamlining your systematic review. Covidence can help you screen citations, conduct data extraction, and perform critical appraisal. For access to Covidence, please use our Software Registration form.

Databases

The NIH Library provides access to three primary databases used for most systematic reviews, in addition to many others:

  • Cochrane Library   Contains high-quality, independent evidence to inform health care decision making, including the Cochrane Database of Systematic Reviews and the Cochrane Central Register of Controlled Trials (CENTRAL). CENTRAL is a curated registry of randomized and quasi-randomized controlled trials conducted worldwide. Search using keywords or controlled vocabulary terms, and export results to a citation management tool. 
  • Embase   Allows users to build comprehensive literature searches through its extensive, deeply indexed database and flexible search options. By applying the PICO (Patient or Problem; Intervention; Comparison or Control; and Outcome) framework, users can structure searches that address clinical questions. Users can search Embase by keywords or controlled vocabulary terms, or use special search features to find literature on drugs, medical devices, pharmacovigilance, and more.
  • PubMed/MEDLINE   Features advanced search functions and filters to find literature for your systematic review. Search using keywords and controlled vocabulary terms from MeSH (Medical Subject Headings) to focus your search and find relevant information. 
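The databases above can be searched through their web interfaces, and PubMed can also be queried programmatically through the NCBI E-utilities. As a minimal sketch (the topic, terms, and result limit below are purely illustrative, not a recommended strategy), a combined MeSH and title/abstract keyword query might look like this:

    import requests

    # Hypothetical topic: exercise and type 2 diabetes. The MeSH headings and
    # free-text keywords here are illustrative only.
    query = (
        '("diabetes mellitus, type 2"[MeSH Terms] OR "type 2 diabetes"[tiab]) '
        'AND ("exercise"[MeSH Terms] OR "physical activity"[tiab])'
    )

    # esearch returns the PubMed IDs of matching records; retmax caps how many IDs come back.
    response = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 20},
        timeout=30,
    )
    response.raise_for_status()
    result = response.json()["esearchresult"]
    print(result["count"], "matching records; first IDs:", result["idlist"])

The same query string can be pasted directly into the PubMed search box; for an actual systematic review, the full strategy should be developed, peer reviewed, and documented for every database searched.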

____________________________

The NIH Library is part of the Office of Research Services (ORS) in the Office of the Director (OD), and serves the information needs of staff at NIH and select HHS agencies.

NIH Library | 301-496-1080 | [email protected]


Systematic Review Services

Review by experts, research services.

  • Literature Review & Gap
  • Meta-Analysis
  • Case Report Writing

Systematic Review

  • Experimental Design
  • Biostatistics
  • Grant Writing
  • Product Development

A systematic review is designed to summarise the results of available studies and provide a high level of evidence-based findings on the effectiveness of interventions. Our experts can handle any type of research, whether you need a systematic review based on controlled clinical trials, on observational study designs, or on community (e.g., psychology) interventions. Our experts at Pubrica perform a rigorous systematic review by following a multi-step process, which includes:

(a) Identifying a well-focused, clinically relevant research question while following suitable frameworks such as PICO, SPICE, SPIDER, and ECLIPSE.

(b) Developing a detailed review protocol with strict inclusion and exclusion criteria and registering the protocol with registries such as The Campbell Collaboration, The Cochrane Collaboration, OSF Preregistration, SYREAF (Systematic Reviews for Animals and Food), Research Registry, the Joanna Briggs Institute (JBI), and PROSPERO.

(c) Conducting a systematic literature search of multiple databases (including PubMed, Embase, MEDLINE, Web of Science, and Google Scholar) to find relevant references, which requires extensive searching and study. A number of other electronic databases and bibliographic sources will also be searched. Sources we propose for use in the project include:

  • Scientific literature databases as described above and others
  • Cochrane Library
  • Database of Abstracts of Reviews of Effectiveness (DARE)
  • NHS Economic Evaluation database
  • Material referenced in Publications obtained in the course of research on the topic
  • International Network of Agencies for Health Technology Assessment (INAHTA) documents
  • Clinical trials databases, including clinicaltrialsregister.eu (EU), clinicaltrials.gov (US) and others.

However, searching multiple databases for relevant studies is laborious and time-consuming, so a well-designed search strategy will be developed.

(d) Meticulous study identification using a variety of search terms, checking for clearly defined primary and secondary outcomes.

(e) Systematic data abstraction by at least two sets of investigators working independently.

(f) Risk of bias assessment using existing assessment tools (e.g., STROBE), covering domains such as allocation concealment (selection bias), incomplete outcome data (attrition bias), and selective reporting (reporting bias).

(g) Thoughtful quantitative synthesis through meta-analysis where relevant. Besides informing guidelines, credible systematic reviews and quality-of-evidence assessments can help identify key knowledge gaps for future studies.

We can help you with the most widely used qualitative systematic reviews, as well as quantitative reviews, health policy and management reviews, and meta-analyses.

Our systematic reviews at Pubrica are structured at every stage of writing, and we critically check their rigour using standard methodological checklists such as PRISMA, CASP, AMSTAR, and ARIF, depending on the checklist required:

  • Formulate the research question
  • Search for studies
  • Select studies
  • Collect data
  • Assess methodological quality
  • Interpret the findings
  • The time required for producing a quality systematic review

Our Experts:

Pubrica's healthcare and medical research experts provide custom scientific research writing and analytics (data science and biostatistics) services, with a team of experienced researchers and writers who are available around the clock and ready to assist you with systematic review writing. We have PhD-level domain experts who also have decades of scientific writing experience, so we have the capability to deliver a high-calibre written literature review. Our experts are very facile with scientific literature databases, including PubMed and MEDLINE, and bring cancer domain expertise to selectively identify impactful journal articles to draw upon for the review. Written status reports will be shared at every milestone, including (1) an overview of the status of deliverables, with specifics of the literature search and environmental scans; (2) a description of progress on the agreed components of the report; and (3) a description of challenges encountered, potential risks, and associated mitigation strategies.

Pubrica

Recent trends

Pubrica has done a plethora of work in the area of systematic reviews for authors, medical device and pharmaceutical companies, and policy makers. Our SR experts collate empirical evidence that fits prespecified eligibility criteria, assess its validity through risk-of-bias appraisal, and present and synthesise the attributes and findings of the included studies. All tasks comply with the PRISMA guidelines and other reporting standards.

We deliver study designs that balance your business needs and expectations with current scientific understanding and all applicable regulatory requirements.

Allow us to help propel your product forward.

Expert Assistance: “Moving from individual, informal tracking, Pubrica’s systematic review service has saved me innumerable hours and costs. They gave me the tools, support, and control assistance I needed to ensure thorough screening by analysts and clinicians at various locations.” - Rory K., PhD Student, Tulsa.

Quality delivery: “With Pubrica, I was able to produce high-quality, accurate work in a much more timely fashion. I really liked what I saw in the final draft, and I was able to get up and running right away with access to live support anytime I need it.” - Melba R., PhD Student, South Hadley.

Client Satisfaction: “I can’t think of a way to do reviews faster than with Pubrica. Being able to monitor progress and collaborate with colleagues makes my life a lot easier.” - Fred A., Springfield.

High Level Experts: “Pubrica is an indispensable part of my systematic review and has allowed me to complete my research effectively and efficiently. It’s enabled me to better manage my increased volume of work.” - Mary V., PhD Student, Lexington.

Trust: “With Pubrica’s systematic review service, I could get my research’s systematic reviews done and approved quickly and efficiently. The feedback from the notified bodies about the quality and presentation of my systematic review has been extremely positive.” - Glen G., PhD Student, Lexington.

On time delivery: “Pubrica has been instrumental in improving my ability to review full-text articles in an efficient and consistent manner. My guide loves the PRISMA diagrams and process outputs, which helped me deliver compliant results that meet their expectations.” - Laura S., PhD Student, Lincoln.

We’ll scale up as your needs grow.

No compromising on integrity and quality. Our processes are well defined and flexible to ramp up as per your requirements.

Partnering with you until the project ends.

We come with you all the way, from design to market support.


Pubrica Offerings

Pubrica offers you complete publishing support across a variety of publications, journals, and books. You can now morph your concepts into incisive reports with our array of writing services: regulatory writing, case report forms (CRFs), biostatistics, manuscripts, business writing, physician reports, medical writing, and more. Our team includes experts in Science, Technology, Engineering and Mathematics (STEM) and specialists with deep therapeutic-area experience. Publishing that medical paper or obtaining regulatory drug approval is now easy. Save time and money through Pubrica's support.

Download brochure on our offerings (PDF).

Frequently asked questions

We are with you the whole nine yards. In this section, we answer the tough questions. For any information, contact us at +91-9884350006; meanwhile, here are some of those queries.

We provide a wide variety of services, such as identifying a well-defined, focused, clinically relevant question; developing a detailed review protocol with strict inclusion and exclusion criteria; systematic literature searching; meticulous study identification; systematic data abstraction; risk of bias assessment; and thoughtful quantitative synthesis through meta-analysis.

Delivery depends on the order type. However, regardless of the order type, if you require a literature survey chapter, we will provide extensive and critical writing, identification of controversies in the literature, referenced documents, a fully formatted document, and a plagiarism check. In addition, under the Elite plan, we also link the problem gap with the current literature and provide you with a clear problem statement.

We develop a well-written scientific and academic research article and use appropriate citation styles (e.g., Oxford, APA, and MLA) as necessary. For more about detailed research area and plan selection, please visit.

To choose our Systematic Review Services, we need a clear and precise domain area (e.g., medical, biomedical, or clinical research), your area of interest, the target country (e.g., the UK) and target state, if any, or a generalised population, the research question, clearly defined eligibility criteria, any previously completed systematic review work, and your university guidelines. We also need the following information: your qualification, specialisation, university, country, experience, possible areas of interest, your supervisor's and university's interests, and any new methodology related to your research and area of interest.

Pubrica hires only experienced and certified professionals based in Europe and the UK. All of our medical writers hold master's or PhD degrees and have at least five years of writing experience. Each medical writer has their own specialisation, which helps us allocate the most appropriate writer for your discipline. You will get only subject expertise; that is our assurance: every thesis order is handled by a writer with a relevant research background.

After your order is confirmed, work is assigned to a Project Associate (PA), who checks the order against your requirements. The order is then assigned to a specific subject expert after they sign a non-disclosure agreement, and they start working on the project as per the agreed deliverables. The order is delivered after a thorough quality check and assurance by the Quality Assurance Department (QAD) and a plagiarism check, after which you will receive the QAD and plagiarism reports.

Our work is based entirely on your order and requirements. We promise the following guarantees: (1) on-time delivery; (2) plagiarism-free, unique content (with less than 5-10% acceptable similarity); (3) an exact match with your requirements; and (4) subject or domain experts engaged for your project. If there is any deviation from these guarantees, we take 100% responsibility to compensate. However, the quality of the delivered work may suffer when there is no precise requirement; in that case, you will need to place a fresh order.

We promise the following guarantees: (1) on-time delivery; (2) plagiarism-free, unique content (with less than 5-10% acceptable similarity); (3) an exact match with your order requirements; and (4) subject or domain experts engaged for your project. If there is any deviation from the above guarantees, we take 100% responsibility to compensate.

Yes. At Scientific Writing & Publishing Support, our motto is to work hands-on with clients. We guarantee 100% project satisfaction, so we aim to exceed expectations. We offer full-fledged writing services across all domains; moreover, we also provide animation, regulatory writing, medical writing, research, and biostatistical programming services. Call us now to get a quote.



Claude Moore Health Sciences Library


Introduction


Systematic Review Services and Resources

Need help with a review? Health Sciences Library experts and resources are here for UVA faculty, staff, and students for all types of reviews, from critical reviews to mapping reviews to scoping studies and more. Both the Cochrane Collaboration and the Institute of Medicine recommend that authors of systematic reviews work with librarians to identify the best possible evidence. Let us help you prepare your review with the best methods possible.

We fully support UVA faculty, students, and staff in their roles related to health and biomedical research and education, and in patient care. However, due to capacity and licensing limitations, we are unable to provide literature search services for professional society committee members and other professional organizational commitments of faculty. We applaud those professional medical societies that employ librarians to support these types of activities.

Librarian Participation Models

We offer two models for librarian participation in systematic and other review types, such as scoping and narrative reviews. Services below are generally limited to UVA Health faculty, staff, and students.

1. Consult model:  A librarian will discuss your topic, review any terms you have or show you how to develop search terms, advise on database selection, and give you an overview of the review process. Review teams then run their own searches.

2. Collaboration model: A librarian is part of the review team and, due to their contributions, co-authorship is expected. Librarian contributions may include the following:

  • formulate research question
  • investigate whether there is already a published systematic or scoping review on your topic or whether there is one currently under development
  • assist with protocol registration
  • recommend databases to be searched, and run the search
  • de-duplicate search results
  • advise on (or manage) choice of screening software/platforms
  • manage PDF availability for full-text screening
  • complete the PRISMA flow diagram 
  • write the search methods section of the review manuscript and provide appropriate documentation (e.g. full search strategy for one database)
  • approve the final manuscript

To request a librarian to participate in your systematic review, please fill out our Systematic Review Request form. If you have questions about our services, please use our Ask Us form.

Review Resources

Working on a systematic or other type of review? These guides and tools may be useful:

What Type of Review?

To determine what review is most appropriate for your question, timeframe, or resources, consult this  decision tree graphic  from U Maryland Health Sciences and Human Services Library

Also consult  Systematic Reviews & Other Review Types  from Temple University Libraries

  • Scroll down to see the process organized into steps/stages
  • Helpful links to Critical Appraisal Checklists (i.e. CASP) and Grading the strength of evidence (i.e. GRADE)

A) The Process as a Whole

At-a-Glance

Systematic Reviews: A simplified, step-by-step process  (UNC Health Sciences Library)

Think about where you would want to publish your review. What types of reviews does that journal publish? Check out the journal's website or use PubMed's Citation Matcher to search on your journal title, limiting your results to reviews to see what's been done.

In-Depth Guidance

Useful guides and articles on the basics (and more!) of systematic reviews

New to reviews? This multi-page guide does an excellent job detailing the many steps involved in a systematic review.

Systematic Reviews - Duke University Medical Library

Excellent overview; includes a helpful grid of Types of Reviews.

Evidence Synthesis & Literature Reviews - Harvey Cushing/John Hay Whitney Medical Library

Includes tutorials

A Guide to Evidence Synthesis: Steps in a Systematic Review - Cornell University Library

Text and videos provide an overview of the steps in a review with links to useful tools

Systematic Reviews - U Kentucky Libraries

Well-designed layout to lead you through the needed steps

Systematic Reviews and Meta-Analysis — Open & Free: a self-paced, asynchronous full tutorial with exercises, developed for the Campbell Collaboration (social sciences).

"Analysing data and undertaking meta-analyses"  (Cochrane Handbook, Ch. 10) covers the principles and various methods for conducting meta-analyses for the main types of data encountered. 

Review Workflow

Guidelines and tools are available to assist you with the planning and workflow of your review.

  • PROSPERO  is a prospective registry of health-related systematic reviews
  • Scoping review protocol  from JBI Evidence Synthesis 
  • PRISMA  provides a checklist, flow diagram, and other guidance for reporting
  • AHRQ Methods Guide for Effectiveness and Comparative Effectiveness Reviews

B) Specific Stages

Managing References

Collecting your citations is an important step in any review. Software and web-based tools assist with this process. All of the following tools have features to help with both formatting your in-text citations and your bibliography.

  • EndNote is a powerful software tool for Windows or Mac. EndNote 20 is available at the discounted price of $249.95 ($149.95 for students) via Cavalier Computers . It helps with collecting references as well as PDFs.
  • Zotero is a free product and is especially feature-rich in terms of capturing citation information for web pages and other document types. 

Want help comparing these tools? See our Citation Managers guide.

  Screening and Study Selection

Much of the work in a review involves managing the process of title and abstract screening and study selection. Fortunately there are tools that facilitate this process with features to import citations, screen titles and abstracts, etc.

  • Rayyan is a free, Web-based tool
  • Covidence  is fee-based, but allows one free trial (with two reviewers and up to 500 citations)
  • DistillerSR  is very feature-rich. It's fee-based (and pricey), but does provide a free student version .

Want to learn more? Check out these resources:

  • Inclusion and Exclusion criteria  - University of Melbourne Libraries
  • SR Toolbox - look up a tool to learn more, or, search by features you want the tool to support (e.g. data extraction).
  • Goldet G, Howick J. Understanding GRADE: an introduction . J Evid Based Med. 2013 Feb;6(1):50-54. doi: 10.1111/jebm.12018. PMID: 23557528.

Data Extraction

  • Consider consulting similar published reviews and their completed data tables
  • Data Extraction - UNC Health Sciences Library
  • Sample form for a small-scale review (this example addresses interventions to increase ED throughput)
  • Cochrane Data Collection form for RCTs

Quality Assessment

  • Quality Assessment - UNC Health Sciences Library
  • Spreadsheet of Quality Assessment or Risk of Bias tool choices  (Duke Medical Library)

Tools for Creating Risk of Bias Figures

Web app designed for visualizing risk-of-bias assessments to create “traffic light” plots 
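As a rough illustration of what such a figure encodes, the following sketch draws a basic traffic-light grid with matplotlib; the studies, domains, and judgments are invented, and the dedicated web app produces far more polished output:

    import matplotlib.pyplot as plt

    # Hypothetical risk-of-bias judgments: rows are studies, columns are domains.
    studies = ["Study A", "Study B", "Study C"]
    domains = ["Randomization", "Missing data", "Outcome measurement", "Selective reporting"]
    judgments = [  # "low", "some", or "high" for each study x domain
        ["low", "some", "low", "low"],
        ["high", "low", "some", "low"],
        ["low", "low", "low", "high"],
    ]
    colors = {"low": "#2e7d32", "some": "#f9a825", "high": "#c62828"}

    fig, ax = plt.subplots(figsize=(7, 2.5))
    for row, study_judgments in enumerate(judgments):
        for col, judgment in enumerate(study_judgments):
            ax.scatter(col, row, s=500, color=colors[judgment])
    ax.set_xticks(range(len(domains)))
    ax.set_xticklabels(domains, rotation=30, ha="right")
    ax.set_yticks(range(len(studies)))
    ax.set_yticklabels(studies)
    ax.invert_yaxis()  # list studies top to bottom
    ax.set_title("Risk of bias (illustrative data)")
    fig.tight_layout()
    fig.savefig("risk_of_bias_traffic_light.png", dpi=150)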


  • URL: https://guides.hsl.virginia.edu/sys-review-resources

Systematic Reviews: Home

Created by health science librarians.


  • Systematic review resources

What is a Systematic Review?

This page covers: a simplified process map; how the library can help; publications by HSL librarians; systematic reviews in non-health disciplines; and resources for performing systematic reviews.

  • Step 1: Complete Pre-Review Tasks
  • Step 2: Develop a Protocol
  • Step 3: Conduct Literature Searches
  • Step 4: Manage Citations
  • Step 5: Screen Citations
  • Step 6: Assess Quality of Included Studies
  • Step 7: Extract Data from Included Studies
  • Step 8: Write the Review

  • Check our FAQs
  • Email us
  • Chat with us (during business hours)
  • Call (919) 962-0800
  • Make an appointment with a librarian
  • Request a systematic or scoping review consultation
  • Sign up for a systematic review workshop or watch a recording

A systematic review is a literature review that gathers all of the available evidence matching pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods, documented in a protocol, to minimize bias, provide reliable findings, and inform decision-making.¹

There are many types of literature reviews.

Before beginning a systematic review, consider whether it is the best type of review for your question, goals, and resources. The table below compares a few different types of reviews to help you decide which is best for you. 

  • Scoping Review Guide: For more information about scoping reviews, refer to the UNC HSL Scoping Review Guide.

Systematic Reviews: A Simplified, Step-by-Step Process Map

  • UNC HSL's Simplified, Step-by-Step Process Map: a PDF file of the HSL's Systematic Review Process Map.
  • Text-Only: UNC HSL's Systematic Reviews - A Simplified, Step-by-Step Process: a text-only PDF file of HSL's Systematic Review Process Map.

The Creative Commons license applied to the systematic review process map image requires that reusers give credit to the creator. It allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, for noncommercial purposes only.

The average systematic review takes 1,168 hours to complete.¹ A librarian can help you speed up the process.

Systematic reviews follow established guidelines and best practices to produce high-quality research. Librarian involvement in systematic reviews is offered at two levels. In Tier 1, your research team can consult with the librarian as needed. The librarian will answer questions and give you recommendations for tools to use. In Tier 2, the librarian will be an active member of your research team and a co-author on your review. Roles and expectations of librarians vary based on the level of involvement desired. Examples of these differences are outlined in the table below.

  • Request a systematic or scoping review consultation


Researchers conduct systematic reviews in a variety of disciplines.  If your focus is on a topic outside of the health sciences, you may want to also consult the resources below to learn how systematic reviews may vary in your field.  You can also contact a librarian for your discipline with questions.

  • EPPI-Centre methods for conducting systematic reviews The EPPI-Centre develops methods and tools for conducting systematic reviews, including reviews for education, public and social policy.


Environmental Topics

  • Collaboration for Environmental Evidence (CEE) CEE seeks to promote and deliver evidence syntheses on issues of greatest concern to environmental policy and practice as a public service

Social Sciences


  • Siddaway AP, Wood AM, Hedges LV. How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses. Annu Rev Psychol. 2019 Jan 4;70:747-770. doi: 10.1146/annurev-psych-010418-102803. A resource for psychology systematic reviews, which also covers qualitative meta-syntheses or meta-ethnographies
  • The Campbell Collaboration

Social Work


Software engineering

  • Guidelines for Performing Systematic Literature Reviews in Software Engineering The objective of this report is to propose comprehensive guidelines for systematic literature reviews appropriate for software engineering researchers, including PhD students.


Sport, Exercise, & Nutrition


  • Application of systematic review methodology to the field of nutrition by Tufts Evidence-based Practice Center Publication Date: 2009
  • Systematic Reviews and Meta-Analysis — Open & Free (Open Learning Initiative) The course follows guidelines and standards developed by the Campbell Collaboration, based on empirical evidence about how to produce the most comprehensive and accurate reviews of research


  • Systematic Reviews by David Gough, Sandy Oliver & James Thomas Publication Date: 2020


Updating reviews

  • Updating systematic reviews by University of Ottawa Evidence-based Practice Center Publication Date: 2007

Looking for our previous Systematic Review guide?

Our legacy guide was used June 2020 to August 2022

  • Systematic Review Legacy Guide
  • URL: https://guides.lib.unc.edu/systematic-reviews


University Libraries, University of Nevada, Reno


Systematic, Scoping, and Other Literature Reviews: Overview

  • Project Planning

What Is a Systematic Review?

Regular literature reviews are simply summaries of the literature on a particular topic. A systematic review, however, is a comprehensive literature review conducted to answer a specific research question. Authors of a systematic review aim to find, code, appraise, and synthesize all of the previous research on their question in an unbiased and well-documented manner. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) outline the minimum amount of information that needs to be reported at the conclusion of a systematic review project. 

Other types of what are known as "evidence syntheses," such as scoping, rapid, and integrative reviews, have varying methodologies. While systematic reviews originated with and continue to be a popular publication type in medicine and other health sciences fields, more and more researchers in other disciplines are choosing to conduct evidence syntheses. 

This guide will walk you through the major steps of a systematic review and point you to key resources including Covidence, a systematic review project management tool. For help with systematic reviews and other major literature review projects, please send us an email at [email protected].

Getting Help with Reviews

Organizations such as the Institute of Medicine recommend that you consult a librarian when conducting a systematic review. Librarians at the University of Nevada, Reno can help you:

  • Understand best practices for conducting systematic reviews and other evidence syntheses in your discipline
  • Choose and formulate a research question
  • Decide which review type (e.g., systematic, scoping, rapid, etc.) is the best fit for your project
  • Determine what to include and where to register a systematic review protocol
  • Select search terms and develop a search strategy
  • Identify databases and platforms to search
  • Find the full text of articles and other sources
  • Become familiar with citation management tools (e.g., EndNote, Zotero)
  • Get access to and help using Covidence, a systematic review project management tool

Doing a Systematic Review

  • Plan - This is the project planning stage. You and your team will need to develop a good research question, determine the type of review you will conduct (systematic, scoping, rapid, etc.), and establish the inclusion and exclusion criteria (e.g., you're only going to look at studies that use a certain methodology). All of this information needs to be included in your protocol. You'll also need to ensure that the project is viable - has someone already done a systematic review on this topic? Do some searches and check the various protocol registries to find out. 
  • Identify - Next, a comprehensive search of the literature is undertaken to ensure all studies that meet the predetermined criteria are identified. Each research question is different, so the number and types of databases you'll search - as well as other online publication venues - will vary. Some standards and guidelines specify that certain databases (e.g., MEDLINE, EMBASE) should be searched regardless. Your subject librarian can help you select appropriate databases to search and develop search strings for each of those databases.  
  • Evaluate - In this step, retrieved articles are screened and sorted using the predetermined inclusion and exclusion criteria. The risk of bias for each included study is also assessed around this time. It's best if you import search results into a citation management tool (see below) to clean up the citations and remove any duplicates (a minimal deduplication sketch follows this list). You can then use a tool like Rayyan (see below) to screen the results. You should begin by screening titles and abstracts only, and then you'll examine the full text of any remaining articles. Each study should be reviewed by a minimum of two people on the project team.
  • Collect - Each included study is coded and the quantitative or qualitative data contained in these studies is then synthesized. You'll have to either find or develop a coding strategy or form that meets your needs. 
  • Explain - The synthesized results are articulated and contextualized. What do the results mean? How have they answered your research question?
  • Summarize - The final report provides a complete description of the methods and results in a clear, transparent fashion. 
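As a minimal illustration of the duplicate-removal step described under "Evaluate" above, the sketch below drops records whose normalized title and year match. The records are invented, and real citation managers and screening tools use more robust matching (DOIs, fuzzy title comparison):

    import re

    # Hypothetical records exported from two database searches.
    records = [
        {"title": "Exercise for Type 2 Diabetes: A Randomized Trial", "year": 2021, "source": "PubMed"},
        {"title": "Exercise for type 2 diabetes - a randomized trial.", "year": 2021, "source": "Embase"},
        {"title": "Dietary Interventions in Prediabetes", "year": 2020, "source": "PubMed"},
    ]

    def dedup_key(record):
        """Build a crude matching key: lowercased title letters/digits plus year."""
        normalized_title = re.sub(r"[^a-z0-9]+", " ", record["title"].lower()).strip()
        return (normalized_title, record["year"])

    seen = set()
    unique_records = []
    for record in records:
        key = dedup_key(record)
        if key not in seen:
            seen.add(key)
            unique_records.append(record)

    print(f"{len(records)} records imported, {len(unique_records)} after deduplication")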

Adapted from

Types of Reviews

Systematic Review

These types of studies employ a systematic method to analyze and synthesize the results of numerous studies. "Systematic" in this case means following a strict set of steps - as outlined by entities like PRISMA and the Institute of Medicine - so as to make the review more reproducible and less biased. Consistent, thorough documentation is also key. Reviews of this type are not meant to be conducted by an individual but rather a (small) team of researchers. Systematic reviews are widely used in the health sciences, often to find a generalized conclusion from multiple evidence-based studies. 

Meta-Analysis

A systematic method that uses statistics to analyze the data from numerous studies. The researchers combine the data from studies with similar data types and analyze them as a single, expanded dataset. Meta-analyses are a type of systematic review.
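As a minimal numerical sketch of how pooling works, the snippet below combines invented study results using fixed-effect inverse-variance weighting, one common meta-analytic approach; a real analysis would use dedicated statistical packages and consider heterogeneity and random-effects models:

    import math

    # Hypothetical study results: effect estimate (e.g., mean difference) and its standard error.
    studies = [
        {"effect": 0.42, "se": 0.15},
        {"effect": 0.30, "se": 0.10},
        {"effect": 0.55, "se": 0.25},
    ]

    # Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2.
    weights = [1.0 / (s["se"] ** 2) for s in studies]
    pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    # 95% confidence interval for the pooled estimate.
    lower, upper = pooled_effect - 1.96 * pooled_se, pooled_effect + 1.96 * pooled_se
    print(f"Pooled effect: {pooled_effect:.2f} (95% CI {lower:.2f} to {upper:.2f})")

Each study's weight is the inverse of its variance, so more precise studies contribute more to the pooled estimate.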

Scoping Review

A scoping review employs the systematic review methodology to explore a broader topic or question rather than a specific and answerable one, as is generally the case with a systematic review. Authors of these types of reviews seek to collect and categorize the existing literature so as to identify any gaps.

Rapid Review

Rapid reviews are systematic reviews conducted under a time constraint. Researchers make use of workarounds to complete the review quickly (e.g., only looking at English-language publications), which can lead to a less thorough and more biased review. 

Narrative Review

A traditional literature review that summarizes and synthesizes the findings of numerous original research articles. The purpose and scope of narrative literature reviews vary widely and do not follow a set protocol. Most literature reviews are narrative reviews. 

Umbrella Review

Umbrella reviews are, essentially, systematic reviews of systematic reviews. These compile evidence from multiple review studies into one usable document. 

Grant, Maria J., and Andrew Booth. “A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies.” Health Information & Libraries Journal , vol. 26, no. 2, 2009, pp. 91-108. doi: 10.1111/j.1471-1842.2009.00848.x .


Evidence Synthesis, Systematic Review Services: Literature Review Types, Taxonomies

  • Develop a Protocol
  • Develop Your Research Question
  • Select Databases
  • Select Gray Literature Sources
  • Write a Search Strategy
  • Manage Your Search Process
  • Register Your Protocol
  • Citation Management
  • Article Screening
  • Risk of Bias Assessment
  • Synthesize, Map, or Describe the Results
  • Find Guidance by Discipline
  • Manage Your Research Data
  • Browse Evidence Portals by Discipline
  • Automate the Process, Tools & Technologies
  • Additional Resources

Choosing a Literature Review Methodology

Growing interest in evidence-based practice has driven an increase in review methodologies. Your choice of review methodology (or literature review type) will be informed by the intent (purpose, function) of your research project and the time and resources of your team. 

  • Decision Tree (What Type of Review is Right for You?) Developed by Cornell University Library staff, this "decision-tree" guides the user to a handful of review guides given time and intent.

Types of Evidence Synthesis*

Critical Review - Aims to demonstrate writer has extensively researched literature and critically evaluated its quality. Goes beyond mere description to include degree of analysis and conceptual innovation. Typically results in hypothesis or model.

Mapping Review (Systematic Map) - Map out and categorize existing literature from which to commission further reviews and/or primary research by identifying gaps in research literature.

Meta-Analysis - Technique that statistically combines the results of quantitative studies to provide a more precise effect of the results.

Mixed Studies Review (Mixed Methods Review) - Refers to any combination of methods where one significant component is a literature review (usually systematic). Within a review context it refers to a combination of review approaches for example combining quantitative with qualitative research or outcome with process studies.

Narrative (Literature) Review - Generic term: published materials that provide examination of recent or current literature. Can cover wide range of subjects at various levels of completeness and comprehensiveness.

Overview - Generic term: summary of the [medical] literature that attempts to survey the literature and describe its characteristics.

Qualitative Systematic Review or Qualitative Evidence Synthesis - Method for integrating or comparing the findings from qualitative studies. It looks for ‘themes’ or ‘constructs’ that lie in or across individual qualitative studies.

Rapid Review - Assessment of what is already known about a policy or practice issue, by using systematic review methods to search and critically appraise existing research.

Scoping Review or Evidence Map - Preliminary assessment of potential size and scope of available research literature. Aims to identify nature and extent of research.

State-of-the-art Review - Tend to address more current matters in contrast to other combined retrospective and current approaches. May offer new perspectives on issue or point out area for further research.

Systematic Review - Seeks to systematically search for, appraise and synthesise research evidence, often adhering to guidelines on the conduct of a review. (An emerging subset includes Living Reviews or Living Systematic Reviews - A [review or] systematic review which is continually updated, incorporating relevant new evidence as it becomes available.)

Systematic Search and Review - Combines strengths of critical review with a comprehensive search process. Typically addresses broad questions to produce ‘best evidence synthesis.’

Umbrella Review - Specifically refers to review compiling evidence from multiple reviews into one accessible and usable document. Focuses on broad condition or problem for which there are competing interventions and highlights reviews that address these interventions and their results.

*These definitions are in Grant & Booth's "A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies."

Literature Review Types/Typologies, Taxonomies

Grant, M. J., and A. Booth. "A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies." Health Information and Libraries Journal 26.2 (2009): 91-108. DOI: 10.1111/j.1471-1842.2009.00848.x

Munn, Zachary, et al. “Systematic Review or Scoping Review? Guidance for Authors When Choosing between a Systematic or Scoping Review Approach.” BMC Medical Research Methodology, vol. 18, no. 1, Nov. 2018, p. 143. DOI: 10.1186/s12874-018-0611-x

Sutton, A., et al. "Meeting the Review Family: Exploring Review Types and Associated Information Retrieval Requirements." Health Information and Libraries Journal 36.3 (2019): 202-22. DOI: 10.1111/hir.12276

Dissertation Research (Capstones, Theses)

While a full systematic review may not necessarily satisfy criteria for dissertation research in a discipline (as independent scholarship), the methods described in this guide--from developing a protocol to searching and synthesizing the literature--can help to ensure that your review of the literature is comprehensive, transparent, and reproducible.

In this context, your review type, then, may be better described as a 'structured literature review', a 'systematized search and review', or a 'systematized scoping (or integrative or mapping) review'.

  • Planning Worksheet for Structured, Systematized Literature Reviews
  • URL: https://researchguides.library.wisc.edu/literature_review



Guidance to best tools and practices for systematic reviews

Kat Kolaski

1 Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, NC USA

Lynne Romeiser Logan

2 Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY USA

John P. A. Ioannidis

3 Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, CA USA


Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy.

A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work.

Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13643-023-02255-9.

Part 1. The state of evidence synthesis

Evidence syntheses are commonly regarded as the foundation of evidence-based medicine (EBM). They are widely accredited for providing reliable evidence and, as such, they have significantly influenced medical research and clinical practice. Despite their uptake throughout health care and ubiquity in contemporary medical literature, some important aspects of evidence syntheses are generally overlooked or not well recognized. Evidence syntheses are mostly retrospective exercises, they often depend on weak or irreparably flawed data, and they may use tools that have acknowledged or yet unrecognized limitations. They are complicated and time-consuming undertakings prone to bias and errors. Production of a good evidence synthesis requires careful preparation and high levels of organization in order to limit potential pitfalls [ 1 ]. Many authors do not recognize the complexity of such an endeavor and the many methodological challenges they may encounter. Failure to do so is likely to result in research and resource waste.

Given their potential impact on people’s lives, it is crucial for evidence syntheses to correctly report on the current knowledge base. In order to be perceived as trustworthy, reliable demonstration of the accuracy of evidence syntheses is equally imperative [ 2 ]. Concerns about the trustworthiness of evidence syntheses are not recent developments. From the early years when EBM first began to gain traction until recent times when thousands of systematic reviews are published monthly [ 3 ], the rigor of evidence syntheses has always varied. Many systematic reviews and meta-analyses had obvious deficiencies because original methods and processes had gaps, lacked precision, and/or were not widely known. The situation has improved with empirical research concerning which methods to use and standardization of appraisal tools. However, given the geometrical increase in the number of evidence syntheses being published, a relatively larger pool of unreliable evidence syntheses is being published today.

Publication of methodological studies that critically appraise the methods used in evidence syntheses is increasing at a fast pace. This reflects the availability of tools specifically developed for this purpose [ 4 – 6 ]. Yet many clinical specialties report that alarming numbers of evidence syntheses fail on these assessments. The syntheses identified report on a broad range of common conditions including, but not limited to, cancer, [ 7 ] chronic obstructive pulmonary disease, [ 8 ] osteoporosis, [ 9 ] stroke, [ 10 ] cerebral palsy, [ 11 ] chronic low back pain, [ 12 ] refractive error, [ 13 ] major depression, [ 14 ] pain, [ 15 ] and obesity [ 16 , 17 ]. The situation is even more concerning with regard to evidence syntheses included in clinical practice guidelines (CPGs) [ 18 – 20 ]. Astonishingly, in a sample of CPGs published in 2017–18, more than half did not apply even basic systematic methods in the evidence syntheses used to inform their recommendations [ 21 ].

These reports, while not widely acknowledged, suggest there are pervasive problems not limited to evidence syntheses that evaluate specific kinds of interventions or include primary research of a particular study design (eg, randomized versus non-randomized) [ 22 ]. Similar concerns about the reliability of evidence syntheses have been expressed by proponents of EBM in highly circulated medical journals [ 23 – 26 ]. These publications have also raised awareness about redundancy, inadequate input of statistical expertise, and deficient reporting. These issues plague primary research as well; however, there is heightened concern for the impact of these deficiencies given the critical role of evidence syntheses in policy and clinical decision-making.

Methods and guidance to produce a reliable evidence synthesis

Several international consortiums of EBM experts and national health care organizations currently provide detailed guidance (Table 1). They draw criteria from the reporting and methodological standards of currently recommended appraisal tools, and regularly review and update their methods to reflect new information and changing needs. In addition, they endorse the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system for rating the overall quality of a body of evidence [ 27 ]. These groups typically certify or commission systematic reviews that are published in exclusive databases (eg, Cochrane, JBI) or are used to develop government or agency sponsored guidelines or health technology assessments (eg, National Institute for Health and Care Excellence [NICE], Scottish Intercollegiate Guidelines Network [SIGN], Agency for Healthcare Research and Quality [AHRQ]). They offer developers of evidence syntheses various levels of methodological advice, technical and administrative support, and editorial assistance. Use of specific protocols and checklists is required for development teams within these groups, but their online methodological resources are accessible to any potential author.

Guidance for development of evidence syntheses

Notably, Cochrane is the largest single producer of evidence syntheses in biomedical research; however, these only account for 15% of the total [ 28 ]. The World Health Organization requires Cochrane standards be used to develop evidence syntheses that inform their CPGs [ 29 ]. Authors investigating questions of intervention effectiveness in syntheses developed for Cochrane follow the Methodological Expectations of Cochrane Intervention Reviews [ 30 ] and undergo multi-tiered peer review [ 31 , 32 ]. Several empirical evaluations have shown that Cochrane systematic reviews are of higher methodological quality compared with non-Cochrane reviews [ 4 , 7 , 9 , 11 , 14 , 32 – 35 ]. However, some of these assessments have biases: they may be conducted by Cochrane-affiliated authors, and they sometimes use scales and tools developed and used in the Cochrane environment and by its partners. In addition, evidence syntheses published in the Cochrane database are not subject to space or word restrictions, while non-Cochrane syntheses are often limited. As a result, information that may be relevant to the critical appraisal of non-Cochrane reviews is often removed or is relegated to online-only supplements that may not be readily or fully accessible [ 28 ].

Influences on the state of evidence synthesis

Many authors are familiar with the evidence syntheses produced by the leading EBM organizations but can be intimidated by the time and effort necessary to apply their standards. Instead of following their guidance, authors may employ methods that are discouraged or outdated [ 28 ]. Suboptimal methods described in the literature may then be taken up by others. For example, the Newcastle–Ottawa Scale (NOS) is a commonly used tool for appraising non-randomized studies [ 36 ]. Many authors justify their selection of this tool with reference to a publication that describes the unreliability of the NOS and recommends against its use [ 37 ]. Obviously, the authors who cite this report for that purpose have not read it. Authors and peer reviewers have a responsibility to use reliable and accurate methods and not copycat previous citations or substandard work [ 38 , 39 ]. Similar cautions may potentially extend to automation tools. These have concentrated on evidence searching [ 40 ] and selection given how demanding it is for humans to maintain truly up-to-date evidence [ 2 , 41 ]. Cochrane has deployed machine learning to identify randomized controlled trials (RCTs) and studies related to COVID-19, [ 2 , 42 ] but such tools are not yet commonly used [ 43 ]. The routine integration of automation tools in the development of future evidence syntheses should not displace the interpretive part of the process.

Editorials about unreliable or misleading systematic reviews highlight several of the intertwining factors that may contribute to continued publication of unreliable evidence syntheses: shortcomings and inconsistencies of the peer review process, lack of endorsement of current standards on the part of journal editors, the incentive structure of academia, industry influences, publication bias, and the lure of “predatory” journals [ 44 – 48 ]. At this juncture, clarification of the extent to which each of these factors contribute remains speculative, but their impact is likely to be synergistic.

Over time, the generalized acceptance of the conclusions of systematic reviews as incontrovertible has affected trends in the dissemination and uptake of evidence. Reporting of the results of evidence syntheses and recommendations of CPGs has shifted beyond medical journals to press releases and news headlines and, more recently, to the realm of social media and influencers. The lay public and policy makers may depend on these outlets for interpreting evidence syntheses and CPGs. Unfortunately, communication to the general public often reflects intentional or non-intentional misrepresentation or “spin” of the research findings [ 49 – 52 ]. News and social media outlets also tend to reduce conclusions on a body of evidence and recommendations for treatment to binary choices (eg, “do it” versus “don’t do it”) that may be assigned an actionable symbol (eg, red/green traffic lights, smiley/frowning face emoji).

Strategies for improvement

Many authors and peer reviewers are volunteer health care professionals or trainees who lack formal training in evidence synthesis [ 46 , 53 ]. Informing them about research methodology could increase the likelihood they will apply rigorous methods [ 25 , 33 , 45 ]. We tackle this challenge, from both a theoretical and a practical perspective, by offering guidance applicable to any specialty. It is based on recent methodological research that is extensively referenced to promote self-study. However, the information presented is not intended to be a substitute for committed training in evidence synthesis methodology; instead, we hope to inspire our target audience to seek such training. We also hope to inform a broader audience of clinicians and guideline developers influenced by evidence syntheses. Notably, these communities often include the same members who serve in different capacities.

In the following sections, we highlight methodological concepts and practices that may be unfamiliar, problematic, confusing, or controversial. In Part 2, we consider various types of evidence syntheses and the types of research evidence summarized by them. In Part 3, we examine some widely used (and misused) tools for the critical appraisal of systematic reviews and reporting guidelines for evidence syntheses. In Part 4, we discuss how to meet methodological conduct standards applicable to key components of systematic reviews. In Part 5, we describe the merits and caveats of rating the overall certainty of a body of evidence. Finally, in Part 6, we summarize suggested terminology, methods, and tools for development and evaluation of evidence syntheses that reflect current best practices.

Part 2. Types of syntheses and research evidence

A good foundation for the development of evidence syntheses requires an appreciation of their various methodologies and the ability to correctly identify the types of research potentially available for inclusion in the synthesis.

Types of evidence syntheses

Systematic reviews have historically focused on the benefits and harms of interventions; over time, various types of systematic reviews have emerged to address the diverse information needs of clinicians, patients, and policy makers [ 54 ]. Systematic reviews with traditional components have become defined by the different topics they assess (Table 2.1 ). In addition, other distinctive types of evidence syntheses have evolved, including overviews or umbrella reviews, scoping reviews, rapid reviews, and living reviews. The popularity of these has been increasing in recent years [ 55 – 58 ]. A summary of the development, methods, available guidance, and indications for these unique types of evidence syntheses is available in Additional File 2 A.

Types of traditional systematic reviews

Both Cochrane [ 30 , 59 ] and JBI [ 60 ] provide methodologies for many types of evidence syntheses; they describe these with different terminology, but there is obvious overlap (Table 2.2 ). The majority of evidence syntheses published by Cochrane (96%) and JBI (62%) are categorized as intervention reviews. This reflects the earlier development and dissemination of their intervention review methodologies; these remain well-established [ 30 , 59 , 61 ] as both organizations continue to focus on topics related to treatment efficacy and harms. In contrast, intervention reviews represent only about half of the total published in the general medical literature, and several non-intervention review types contribute to a significant proportion of the other half.

Evidence syntheses published by Cochrane and JBI

a Data from https://www.cochranelibrary.com/cdsr/reviews . Accessed 17 Sep 2022

b Data obtained via personal email communication on 18 Sep 2022 with Emilie Francis, editorial assistant, JBI Evidence Synthesis

c Includes the following categories: prevalence, scoping, mixed methods, and realist reviews

d This methodology is not supported in the current version of the JBI Manual for Evidence Synthesis

Types of research evidence

There is consensus on the importance of using multiple study designs in evidence syntheses; at the same time, there is a lack of agreement on methods to identify included study designs. Authors of evidence syntheses may use various taxonomies and associated algorithms to guide selection and/or classification of study designs. These tools differentiate categories of research and apply labels to individual study designs (eg, RCT, cross-sectional). A familiar example is the Design Tree endorsed by the Centre for Evidence-Based Medicine [ 70 ]. Such tools may not be helpful to authors of evidence syntheses for multiple reasons.

Suboptimal levels of agreement and accuracy even among trained methodologists reflect challenges with the application of such tools [ 71 , 72 ]. Problematic distinctions or decision points (eg, experimental or observational, controlled or uncontrolled, prospective or retrospective) and design labels (eg, cohort, case control, uncontrolled trial) have been reported [ 71 ]. The variable application of ambiguous study design labels to non-randomized studies is common, making them especially prone to misclassification [ 73 ]. In addition, study labels do not denote the unique design features that make different types of non-randomized studies susceptible to different biases, including those related to how the data are obtained (eg, clinical trials, disease registries, wearable devices). Given this limitation, it is important to be aware that design labels preclude the accurate assignment of non-randomized studies to a “level of evidence” in traditional hierarchies [ 74 ].

These concerns suggest that available tools and nomenclature used to distinguish types of research evidence may not uniformly apply to biomedical research and non-health fields that utilize evidence syntheses (eg, education, economics) [ 75 , 76 ]. Moreover, primary research reports often do not describe study design or do so incompletely or inaccurately; thus, indexing in PubMed and other databases does not address the potential for misclassification [ 77 ]. Yet proper identification of research evidence has implications for several key components of evidence syntheses. For example, search strategies limited by index terms using design labels or study selection based on labels applied by the authors of primary studies may cause inconsistent or unjustified study inclusions and/or exclusions [ 77 ]. In addition, because risk of bias (RoB) tools consider attributes specific to certain types of studies and study design features, results of these assessments may be invalidated if an inappropriate tool is used. Appropriate classification of studies is also relevant for the selection of a suitable method of synthesis and interpretation of those results.

An alternative to these tools and nomenclature involves application of a few fundamental distinctions that encompass a wide range of research designs and contexts. While these distinctions are not novel, we integrate them into a practical scheme (see Fig. 1) designed to guide authors of evidence syntheses in the basic identification of research evidence. The initial distinction is between primary and secondary studies. Primary studies are then further distinguished by: 1) the type of data reported (qualitative or quantitative); and 2) two defining design features (group or single-case and randomized or non-randomized). The different types of studies and study designs represented in the scheme are described in detail in Additional File 2 B. It is important to conceptualize their methods as complementary as opposed to contrasting or hierarchical [ 78 ]; each offers advantages and disadvantages that determine their appropriateness for answering different kinds of research questions in an evidence synthesis.

Fig. 1 Distinguishing types of research evidence
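To make the scheme concrete, the following is a minimal Python sketch of how the distinctions applied to primary studies in Fig. 1 might be represented as a simple data structure. The class and field names are our own illustrative choices and are not part of the scheme itself; the primary/secondary distinction and the full design descriptions in Additional File 2 B are not reproduced here.

```python
from dataclasses import dataclass
from enum import Enum


class DataType(Enum):
    QUALITATIVE = "qualitative"
    QUANTITATIVE = "quantitative"


@dataclass
class PrimaryStudy:
    """Basic distinctions applied to a primary study in the scheme."""
    data_type: DataType
    group_design: bool   # True = group design, False = single-case design
    randomized: bool     # True = randomized, False = non-randomized

    def describe(self) -> str:
        return (f"{self.data_type.value}, "
                f"{'group' if self.group_design else 'single-case'}, "
                f"{'randomized' if self.randomized else 'non-randomized'}")


# Example: a randomized controlled trial reporting quantitative outcomes
rct = PrimaryStudy(DataType.QUANTITATIVE, group_design=True, randomized=True)
print(rct.describe())  # quantitative, group, randomized
```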

Application of these basic distinctions may avoid some of the potential difficulties associated with study design labels and taxonomies. Nevertheless, debatable methodological issues are raised when certain types of research identified in this scheme are included in an evidence synthesis. We briefly highlight those associated with inclusion of non-randomized studies, case reports and series, and a combination of primary and secondary studies.

Non-randomized studies

When investigating an intervention’s effectiveness, it is important for authors to recognize the uncertainty of observed effects reported by studies with high RoB. Results of statistical analyses that include such studies need to be interpreted with caution in order to avoid misleading conclusions [ 74 ]. Review authors may consider excluding randomized studies with high RoB from meta-analyses. Non-randomized studies of interventions (NRSI) are affected by a greater potential range of biases and thus vary more than RCTs in their ability to estimate a causal effect [ 79 ]. If data from NRSI are synthesized in meta-analyses, it is helpful to separately report their summary estimates [ 6 , 74 ].

Nonetheless, certain design features of NRSI (eg, which parts of the study were prospectively designed) may help to distinguish stronger from weaker ones. Cochrane recommends that authors of a review including NRSI focus on relevant study design features when determining eligibility criteria instead of relying on non-informative study design labels [ 79 , 80 ]. This process is facilitated by a study design feature checklist; guidance on using the checklist is included with the developers’ description of the tool [ 73 , 74 ]. Authors collect information about these design features during data extraction and then consider it when making final study selection decisions and when performing RoB assessments of the included NRSI.

Case reports and case series

Correctly identified case reports and case series can contribute evidence not well captured by other designs [ 81 ]; in addition, some topics may be limited to a body of evidence that consists primarily of uncontrolled clinical observations. Murad and colleagues offer a framework for how to include case reports and series in an evidence synthesis [ 82 ]. Distinguishing between cohort studies and case series in these syntheses is important, especially for those that rely on evidence from NRSI. Additional data obtained from studies misclassified as case series can potentially increase the confidence in effect estimates. Mathes and Pieper provide authors of evidence syntheses with specific guidance on distinguishing between cohort studies and case series, but emphasize the increased workload involved [ 77 ].

Primary and secondary studies

Synthesis of combined evidence from primary and secondary studies may provide a broad perspective on the entirety of available literature on a topic. This is, in fact, the recommended strategy for scoping reviews that may include a variety of sources of evidence (eg, CPGs, popular media). However, except for scoping reviews, the synthesis of data from primary and secondary studies is discouraged unless there are strong reasons to justify doing so.

Combining primary and secondary sources of evidence is challenging for authors of other types of evidence syntheses for several reasons [ 83 ]. Assessments of RoB for primary and secondary studies are derived from conceptually different tools, which complicates any overall RoB assessment of a combination of these study types. In addition, authors who include primary and secondary studies must devise non-standardized methods for synthesis. Note this contrasts with well-established methods available for updating existing evidence syntheses with additional data from new primary studies [ 84 – 86 ]. However, a new review that synthesizes data from primary and secondary studies raises questions of validity and may unintentionally support a biased conclusion because no existing methodological guidance is currently available [ 87 ].

Recommendations

We suggest that journal editors require authors to identify which type of evidence synthesis they are submitting and reference the specific methodology used for its development. This will clarify the research question and methods for peer reviewers and potentially simplify the editorial process. Editors should announce this practice and include it in the instructions to authors. To decrease bias and apply correct methods, authors must also accurately identify the types of research evidence included in their syntheses.

Part 3. Conduct and reporting

The need to develop criteria to assess the rigor of systematic reviews was recognized soon after the EBM movement began to gain international traction [ 88 , 89 ]. Systematic reviews rapidly became popular, but many were very poorly conceived, conducted, and reported. These problems remain highly prevalent [ 23 ] despite development of guidelines and tools to standardize and improve the performance and reporting of evidence syntheses [ 22 , 28 ]. Table 3.1  provides some historical perspective on the evolution of tools developed specifically for the evaluation of systematic reviews, with or without meta-analysis.

Tools specifying standards for systematic reviews with and without meta-analysis

a Currently recommended

b Validated tool for systematic reviews of interventions developed for use by authors of overviews or umbrella reviews

These tools are often interchangeably invoked when referring to the “quality” of an evidence synthesis. However, quality is a vague term that is frequently misused and misunderstood; more precisely, these tools specify different standards for evidence syntheses. Methodological standards address how well a systematic review was designed and performed [ 5 ]. RoB assessments refer to systematic flaws or limitations in the design, conduct, or analysis of research that distort the findings of the review [ 4 ]. Reporting standards help systematic review authors describe the methodology they used and the results of their synthesis in sufficient detail [ 92 ]. It is essential to distinguish between these evaluations: a systematic review may be biased, it may fail to report sufficient information on essential features, or it may exhibit both problems; a thoroughly reported evidence synthesis may still be biased and flawed, while an otherwise unbiased one may suffer from deficient documentation.

We direct attention to the currently recommended tools listed in Table 3.1  but concentrate on AMSTAR-2 (update of AMSTAR [A Measurement Tool to Assess Systematic Reviews]) and ROBIS (Risk of Bias in Systematic Reviews), which evaluate methodological quality and RoB, respectively. For comparison and completeness, we include PRISMA 2020 (update of the 2009 Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement), which offers guidance on reporting standards. The exclusive focus on these three tools is by design; it addresses concerns related to the considerable variability in tools used for the evaluation of systematic reviews [ 28 , 88 , 96 , 97 ]. We highlight the underlying constructs these tools were designed to assess, then describe their components and applications. Their known (or potential) uptake, impact, and limitations are also discussed.

Evaluation of conduct

Development.

AMSTAR [ 5 ] was in use for a decade prior to the 2017 publication of AMSTAR-2; both provide a broad evaluation of methodological quality of intervention systematic reviews, including flaws arising through poor conduct of the review [ 6 ]. ROBIS, published in 2016, was developed to specifically assess RoB introduced by the conduct of the review; it is applicable to systematic reviews of interventions and several other types of reviews [ 4 ]. Both tools reflect a shift to a domain-based approach as opposed to generic quality checklists. There are a few items unique to each tool; however, similarities between items have been demonstrated [ 98 , 99 ]. AMSTAR-2 and ROBIS are recommended for use by: 1) authors of overviews or umbrella reviews and CPGs to evaluate systematic reviews considered as evidence; 2) authors of methodological research studies to appraise included systematic reviews; and 3) peer reviewers for appraisal of submitted systematic review manuscripts. For authors, these tools may function as teaching aids and inform conduct of their review during its development.

Description

Systematic reviews that include randomized and/or non-randomized studies as evidence can be appraised with AMSTAR-2 and ROBIS. Other characteristics of AMSTAR-2 and ROBIS are summarized in Table 3.2 . Both tools define categories for an overall rating; however, neither tool is intended to generate a total score by simply calculating the number of responses satisfying criteria for individual items [ 4 , 6 ]. AMSTAR-2 focuses on the rigor of a review’s methods irrespective of the specific subject matter. ROBIS places emphasis on a review’s results section— this suggests it may be optimally applied by appraisers with some knowledge of the review’s topic as they may be better equipped to determine if certain procedures (or lack thereof) would impact the validity of a review’s findings [ 98 , 100 ]. Reliability studies show AMSTAR-2 overall confidence ratings strongly correlate with the overall RoB ratings in ROBIS [ 100 , 101 ].

Comparison of AMSTAR-2 and ROBIS

a ROBIS includes an optional first phase to assess the applicability of the review to the research question of interest. The tool may be applicable to other review types in addition to the four specified, although modification of this initial phase will be needed (Personal Communication via email, Penny Whiting, 28 Jan 2022)

b AMSTAR-2 item #9 and #11 require separate responses for RCTs and NRSI

Interrater reliability has been shown to be acceptable for AMSTAR-2 [ 6 , 11 , 102 ] and ROBIS [ 4 , 98 , 103 ] but neither tool has been shown to be superior in this regard [ 100 , 101 , 104 , 105 ]. Overall, variability in reliability for both tools has been reported across items, between pairs of raters, and between centers [ 6 , 100 , 101 , 104 ]. The effects of appraiser experience on the results of AMSTAR-2 and ROBIS require further evaluation [ 101 , 105 ]. Updates to both tools should address items shown to be prone to individual appraisers’ subjective biases and opinions [ 11 , 100 ]; this may involve modifications of the current domains and signaling questions as well as incorporation of methods to make an appraiser’s judgments more explicit. Future revisions of these tools may also consider the addition of standards for aspects of systematic review development currently lacking (eg, rating overall certainty of evidence, [ 99 ] methods for synthesis without meta-analysis [ 105 ]) and removal of items that assess aspects of reporting that are thoroughly evaluated by PRISMA 2020.

Application

A good understanding of what is required to satisfy the standards of AMSTAR-2 and ROBIS involves study of the accompanying guidance documents written by the tools’ developers; these contain detailed descriptions of each item’s standards. In addition, accurate appraisal of a systematic review with either tool requires training. Most experts recommend independent assessment by at least two appraisers with a process for resolving discrepancies as well as procedures to establish interrater reliability, such as pilot testing, a calibration phase or exercise, and development of predefined decision rules [ 35 , 99 – 101 , 103 , 104 , 106 ]. These methods may, to some extent, address the challenges associated with the diversity in methodological training, subject matter expertise, and experience using the tools that are likely to exist among appraisers.

The standards of AMSTAR, AMSTAR-2, and ROBIS have been used in many methodological studies and epidemiological investigations. However, the increased publication of overviews or umbrella reviews and CPGs has likely been a greater influence on the widening acceptance of these tools. Critical appraisal of the secondary studies considered evidence is essential to the trustworthiness of both the recommendations of CPGs and the conclusions of overviews. Currently both Cochrane [ 55 ] and JBI [ 107 ] recommend AMSTAR-2 and ROBIS in their guidance for authors of overviews or umbrella reviews. However, ROBIS and AMSTAR-2 were released in 2016 and 2017, respectively; thus, to date, limited data have been reported about the uptake of these tools or which of the two may be preferred [ 21 , 106 ]. Currently, in relation to CPGs, AMSTAR-2 appears to be overwhelmingly popular compared to ROBIS. A Google Scholar search of this topic (search terms “AMSTAR 2 AND clinical practice guidelines,” “ROBIS AND clinical practice guidelines” 13 May 2022) found 12,700 hits for AMSTAR-2 and 1,280 for ROBIS. The apparent greater appeal of AMSTAR-2 may relate to its longer track record given the original version of the tool was in use for 10 years prior to its update in 2017.

Barriers to the uptake of AMSTAR-2 and ROBIS include the real or perceived time and resources necessary to complete the items they include and appraisers’ confidence in their own ratings [ 104 ]. Reports from comparative studies available to date indicate that appraisers find AMSTAR-2 questions, responses, and guidance to be clearer and simpler compared with ROBIS [ 11 , 101 , 104 , 105 ]. This suggests that for appraisal of intervention systematic reviews, AMSTAR-2 may be a more practical tool than ROBIS, especially for novice appraisers [ 101 , 103 – 105 ]. The unique characteristics of each tool, as well as their potential advantages and disadvantages, should be taken into consideration when deciding which tool should be used for an appraisal of a systematic review. In addition, the choice of one or the other may depend on how the results of an appraisal will be used; for example, a peer reviewer’s appraisal of a single manuscript versus an appraisal of multiple systematic reviews in an overview or umbrella review, CPG, or systematic methodological study.

Authors of overviews and CPGs report results of AMSTAR-2 and ROBIS appraisals for each of the systematic reviews they include as evidence. Ideally, an independent judgment of their appraisals can be made by the end users of overviews and CPGs; however, most stakeholders, including clinicians, are unlikely to have a sophisticated understanding of these tools. Nevertheless, they should at least be aware that AMSTAR-2 and ROBIS ratings reported in overviews and CPGs may be inaccurate because the tools are not applied as intended by their developers. This can result from inadequate training of the overview or CPG authors who perform the appraisals, or from modifications of the appraisal tools that they impose. The potential variability in overall confidence and RoB ratings highlights why appraisers applying these tools need to support their judgments with explicit documentation; this allows readers to judge for themselves whether they agree with the criteria used by appraisers [ 4 , 108 ]. When these judgments are explicit, the underlying rationale used when applying these tools can be assessed [ 109 ].

Theoretically, we would expect an association of AMSTAR-2 with improved methodological rigor and an association of ROBIS with lower RoB in recent systematic reviews compared to those published before 2017. To our knowledge, this has not yet been demonstrated; however, like reports about the actual uptake of these tools, time will tell. Additional data on user experience is also needed to further elucidate the practical challenges and methodological nuances encountered with the application of these tools. This information could potentially inform the creation of unifying criteria to guide and standardize the appraisal of evidence syntheses [ 109 ].

Evaluation of reporting

Complete reporting is essential for users to establish the trustworthiness and applicability of a systematic review’s findings. Efforts to standardize and improve the reporting of systematic reviews resulted in the 2009 publication of the PRISMA statement [ 92 ] with its accompanying explanation and elaboration document [ 110 ]. This guideline was designed to help authors prepare a complete and transparent report of their systematic review. In addition, adherence to PRISMA is often used to evaluate the thoroughness of reporting of published systematic reviews [ 111 ]. The updated version, PRISMA 2020 [ 93 ], and its guidance document [ 112 ] were published in 2021. Items on the original and updated versions of PRISMA are organized by the six basic review components they address (title, abstract, introduction, methods, results, discussion). The PRISMA 2020 update is a considerably expanded version of the original; it includes standards and examples for the 27 original and 13 additional reporting items that capture methodological advances and may enhance the replicability of reviews [ 113 ].

The original PRISMA statement fostered the development of various PRISMA extensions (Table 3.3 ). These include reporting guidance for scoping reviews and reviews of diagnostic test accuracy and for intervention reviews that report on the following: harms outcomes, equity issues, the effects of acupuncture, the results of network meta-analyses and analyses of individual participant data. Detailed reporting guidance for specific systematic review components (abstracts, protocols, literature searches) is also available.

PRISMA extensions

PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses

a Note the abstract reporting checklist is now incorporated into PRISMA 2020 [ 93 ]

Uptake and impact

The 2009 PRISMA standards [ 92 ] for reporting have been widely endorsed by authors, journals, and EBM-related organizations. We anticipate the same for PRISMA 2020 [ 93 ] given its co-publication in multiple high-impact journals. However, to date, there is a lack of strong evidence for an association between improved systematic review reporting and endorsement of PRISMA 2009 standards [ 43 , 111 ]. Most journals require a PRISMA checklist accompany submissions of systematic review manuscripts. However, the accuracy of information presented on these self-reported checklists is not necessarily verified. It remains unclear which strategies (eg, authors’ self-report of checklists, peer reviewer checks) might improve adherence to the PRISMA reporting standards; in addition, the feasibility of any potentially effective strategies must be taken into consideration given the structure and limitations of current research and publication practices [ 124 ].

Pitfalls and limitations of PRISMA, AMSTAR-2, and ROBIS

Misunderstanding of the roles of these tools and their misapplication may be widespread problems. PRISMA 2020 is a reporting guideline that is most beneficial if consulted when developing a review as opposed to merely completing a checklist when submitting to a journal; at that point, the review is finished, with good or bad methodological choices. PRISMA checklists evaluate how completely an element of review conduct was reported; they do not evaluate the caliber of that conduct or the performance of the review. Thus, review authors and readers should not think that a rigorous systematic review can be produced by simply following the PRISMA 2020 guidelines. Similarly, it is important to recognize that AMSTAR-2 and ROBIS are tools to evaluate the conduct of a review but do not substitute for conceptual methodological guidance. In addition, they are not intended to be simple checklists. In fact, they have the potential for misuse or abuse if applied as such; for example, by calculating a total score to make a judgment about a review’s overall confidence or RoB. Proper selection of a response for the individual items on AMSTAR-2 and ROBIS requires training or at least reference to their accompanying guidance documents.

Not surprisingly, it has been shown that compliance with the PRISMA checklist is not necessarily associated with satisfying the standards of ROBIS [ 125 ]. AMSTAR-2 and ROBIS were not available when PRISMA 2009 was developed; however, they were considered in the development of PRISMA 2020 [ 113 ]. Therefore, future studies may show a positive relationship between fulfillment of PRISMA 2020 standards for reporting and meeting the standards of tools evaluating methodological quality and RoB.

Choice of an appropriate tool for the evaluation of a systematic review first involves identification of the underlying construct to be assessed. For systematic reviews of interventions, recommended tools include AMSTAR-2 and ROBIS for appraisal of conduct and PRISMA 2020 for completeness of reporting. All three tools were developed rigorously and provide easily accessible and detailed user guidance, which is necessary for their proper application and interpretation. When considering a manuscript for publication, training in these tools can sensitize peer reviewers and editors to major issues that may affect the review’s trustworthiness and completeness of reporting. Judgment of the overall certainty of a body of evidence and formulation of recommendations rely, in part, on AMSTAR-2 or ROBIS appraisals of systematic reviews. Therefore, training on the application of these tools is essential for authors of overviews and developers of CPGs. Peer reviewers and editors considering an overview or CPG for publication must hold their authors to a high standard of transparency regarding both the conduct and reporting of these appraisals.

Part 4. Meeting conduct standards

Many authors, peer reviewers, and editors erroneously equate fulfillment of the items on the PRISMA checklist with superior methodological rigor. For direction on methodology, we refer them to available resources that provide comprehensive conceptual guidance [ 59 , 60 ] as well as primers with basic step-by-step instructions [ 1 , 126 , 127 ]. This section is intended to complement study of such resources by facilitating use of AMSTAR-2 and ROBIS, tools specifically developed to evaluate methodological rigor of systematic reviews. These tools are widely accepted by methodologists; however, in the general medical literature, they are not uniformly selected for the critical appraisal of systematic reviews [ 88 , 96 ].

To enable their uptake, Table 4.1  links review components to the corresponding appraisal tool items. Expectations of AMSTAR-2 and ROBIS are concisely stated, and reasoning provided.

Systematic review components linked to appraisal with AMSTAR-2 and ROBIS a

CoI conflict of interest, MA meta-analysis, NA not addressed, PICO participant, intervention, comparison, outcome, PRISMA-P Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols, RoB risk of bias

a Components shown in bold are chosen for elaboration in Part 4 for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors; and/or 2) the component is evaluated by standards of an AMSTAR-2 “critical” domain

b Critical domains of AMSTAR-2 are indicated by *

Issues involved in meeting the standards for seven review components (identified in bold in Table 4.1 ) are addressed in detail. These were chosen for elaboration for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors based on consistent reports of their frequent AMSTAR-2 or ROBIS deficiencies [ 9 , 11 , 15 , 88 , 128 , 129 ]; and/or 2) the review component is judged by standards of an AMSTAR-2 “critical” domain. These have the greatest implications for how a systematic review will be appraised: if standards for any one of these critical domains are not met, the review is rated as having “critically low confidence.”

Research question

Specific and unambiguous research questions may have more value for reviews that deal with hypothesis testing. Mnemonics for the various elements of research questions are suggested by JBI and Cochrane (Table 2.1 ). These prompt authors to consider the specialized methods involved for developing different types of systematic reviews; however, while inclusion of the suggested elements makes a review compliant with a particular review’s methods, it does not necessarily make a research question appropriate. Table 4.2  lists acronyms that may aid in developing the research question. They include overlapping concepts of importance in this time of proliferating reviews of uncertain value [ 130 ]. If these issues are not prospectively contemplated, systematic review authors may establish an overly broad scope, or develop a runaway scope that allows them to stray from predefined choices relating to key comparisons and outcomes.

Research question development

a Cummings SR, Browner WS, Hulley SB. Conceiving the research question and developing the study plan. In: Hulley SB, Cummings SR, Browner WS, editors. Designing clinical research: an epidemiological approach; 4th edn. Lippincott Williams & Wilkins; 2007. p. 14–22

b Doran, GT. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manage Rev. 1981;70:35-6.

c Johnson BT, Hennessy EA. Systematic reviews and meta-analyses in the health sciences: best practice methods for research syntheses. Soc Sci Med. 2019;233:237–51

Once a research question is established, searching on registry sites and databases for existing systematic reviews addressing the same or a similar topic is necessary in order to avoid contributing to research waste [ 131 ]. Repeating an existing systematic review must be justified, for example, if previous reviews are out of date or methodologically flawed. A full discussion on replication of intervention systematic reviews, including a consensus checklist, can be found in the work of Tugwell and colleagues [ 84 ].

Protocol development

Protocol development is considered a core component of systematic reviews [ 125 , 126 , 132 ]. Review protocols may allow researchers to plan and anticipate potential issues, assess validity of methods, prevent arbitrary decision-making, and minimize bias that can be introduced by the conduct of the review. Registration of a protocol that allows public access promotes transparency of the systematic review’s methods and processes and reduces the potential for duplication [ 132 ]. Thinking early and carefully about all the steps of a systematic review is pragmatic and logical and may mitigate the influence of the authors’ prior knowledge of the evidence [ 133 ]. In addition, the protocol stage is when the scope of the review can be carefully considered by authors, reviewers, and editors; this may help to avoid production of overly ambitious reviews that include excessive numbers of comparisons and outcomes or are undisciplined in their study selection.

Systematic reviews with published prospective protocols have been reported to better meet AMSTAR standards [ 134 ]. However, completeness of reporting does not seem to be different in reviews with a protocol compared to those without one [ 135 ]. PRISMA-P [ 116 ] and its accompanying elaboration and explanation document [ 136 ] can be used to guide and assess the reporting of protocols. A final version of the review should fully describe any protocol deviations. Peer reviewers may compare the submitted manuscript with any available pre-registered protocol; this is required if AMSTAR-2 or ROBIS are used for critical appraisal.

There are multiple options for the recording of protocols (Table 4.3 ). Some journals will peer review and publish protocols. In addition, many online sites offer date-stamped and publicly accessible protocol registration. Some of these are exclusively for protocols of evidence syntheses; others are less restrictive and offer researchers the capacity for data storage, sharing, and other workflow features. These sites document protocol details to varying extents and have different requirements [ 137 ]. The most popular site for systematic reviews, the International Prospective Register of Systematic Reviews (PROSPERO), for example, only registers reviews that report on an outcome with direct relevance to human health. The PROSPERO record documents protocols for all types of reviews except literature and scoping reviews. Of note, PROSPERO requires authors register their review protocols prior to any data extraction [ 133 , 138 ]. The electronic records of most of these registry sites allow authors to update their protocols and facilitate transparent tracking of protocol changes, which are not unexpected during the progress of the review [ 139 ].

Options for protocol registration of evidence syntheses

a Authors are advised to contact their target journal regarding submission of systematic review protocols

b Registration is restricted to approved review projects

c The JBI registry lists review projects currently underway by JBI-affiliated entities. These records include a review’s title, primary author, research question, and PICO elements. JBI recommends that authors register eligible protocols with PROSPERO

d See Pieper and Rombey [ 137 ] for detailed characteristics of these five registries

e See Pieper and Rombey [ 137 ] for other systematic review data repository options

Study design inclusion

For most systematic reviews, broad inclusion of study designs is recommended [ 126 ]. This may allow comparison of results between contrasting study design types [ 126 ]. Certain study designs may be considered preferable depending on the type of review and nature of the research question. However, prevailing stereotypes about what each study design does best may not be accurate. For example, in systematic reviews of interventions, randomized designs are typically thought to answer highly specific questions while non-randomized designs often are expected to reveal greater information about harms or real-world evidence [ 126 , 140 , 141 ]. This may be a false distinction; randomized trials may be pragmatic [ 142 ], they may offer important (and more unbiased) information on harms [ 143 ], and data from non-randomized trials may not necessarily be more real-world-oriented [ 144 ].

Moreover, there may not be any available evidence reported by RCTs for certain research questions; in some cases, there may not be any RCTs or NRSI. When the available evidence is limited to case reports and case series, it is not possible to test hypotheses nor provide descriptive estimates or associations; however, a systematic review of these studies can still offer important insights [ 81 , 145 ]. When authors anticipate that limited evidence of any kind may be available to inform their research questions, a scoping review can be considered. Alternatively, decisions regarding inclusion of indirect as opposed to direct evidence can be addressed during protocol development [ 146 ]. Including indirect evidence at an early stage of intervention systematic review development allows authors to decide if such studies offer any additional and/or different understanding of treatment effects for their population or comparison of interest. Issues of indirectness of included studies are accounted for later in the process, during determination of the overall certainty of evidence (see Part 5 for details).

Evidence search

Both AMSTAR-2 and ROBIS require systematic and comprehensive searches for evidence. This is essential for any systematic review. Both tools discourage search restrictions based on language and publication source. Given increasing globalism in health care, the practice of including English-only literature should be avoided [ 126 ]. There are many examples in which language bias (different results in studies published in different languages) has been documented [ 147 , 148 ]. This does not mean that all literature, in all languages, is equally trustworthy [ 148 ]; however, the only way to formally probe for the potential of such biases is to consider all languages in the initial search. The gray literature and a search of trial registries may also reveal important details about topics that would otherwise be missed [ 149 – 151 ]. Again, inclusiveness will allow review authors to investigate whether results differ in the gray literature and trial registries [ 41 , 151 – 153 ].

Authors should make every attempt to complete their review within one year as that is the likely viable life of a search [ 1 ]. If that is not possible, the search should be updated close to the time of completion [ 154 ]. Different research topics may warrant less of a delay; for example, in rapidly changing fields (as in the case of the COVID-19 pandemic), even one month may radically change the available evidence.

Excluded studies

AMSTAR-2 requires authors to provide references for any studies excluded at the full text phase of study selection along with reasons for exclusion; this allows readers to feel confident that all relevant literature has been considered for inclusion and that exclusions are defensible.

Risk of bias assessment of included studies

The design of the studies included in a systematic review (eg, RCT, cohort, case series) should not be equated with appraisal of its RoB. To meet AMSTAR-2 and ROBIS standards, systematic review authors must examine RoB issues specific to the design of each primary study they include as evidence. It is unlikely that a single RoB appraisal tool will be suitable for all research designs. In addition to tools for randomized and non-randomized studies, specific tools are available for evaluation of RoB in case reports and case series [ 82 ] and single-case experimental designs [ 155 , 156 ]. Note the RoB tools selected must meet the standards of the appraisal tool used to judge the conduct of the review. For example, AMSTAR-2 identifies four sources of bias specific to RCTs and NRSI that must be addressed by the RoB tool(s) chosen by the review authors. The Cochrane RoB-2 tool [ 157 ] for RCTs and ROBINS-I [ 158 ] for NRSI meet the AMSTAR-2 standards for RoB assessment. Appraisers on the review team should not modify any RoB tool without complete transparency and acknowledgment that they have invalidated the interpretation of the tool as intended by its developers [ 159 ]. Conduct of RoB assessments is not addressed by AMSTAR-2; to meet ROBIS standards, two independent reviewers should complete RoB assessments of included primary studies.

Implications of the RoB assessments must be explicitly discussed and considered in the conclusions of the review. Discussion of the overall RoB of included studies may consider the weight of the studies at high RoB, the importance of the sources of bias in the studies being summarized, and whether their importance differs in relation to the outcomes reported. If a meta-analysis is performed, serious concerns for RoB of individual studies should be accounted for in these results as well. If the results of the meta-analysis for a specific outcome change when studies at high RoB are excluded, readers will have a more accurate understanding of this body of evidence. However, while investigating the potential impact of specific biases is a useful exercise, it is important to avoid over-interpretation, especially when there are sparse data.
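One practical way to explore this, as noted above, is a sensitivity analysis that re-estimates the pooled effect after removing studies judged to be at high RoB and compares the two results. Below is a minimal, illustrative Python sketch using fixed-effect inverse-variance pooling; the study data, RoB judgments, and helper names are invented for the example, and real analyses would normally use dedicated meta-analysis software and consider heterogeneity and random-effects models.

```python
import numpy as np

def pooled(effects, variances):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    w = 1.0 / np.asarray(variances, dtype=float)
    estimate = np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w)
    return estimate, np.sqrt(1.0 / np.sum(w))

# Invented log risk ratios, variances, and RoB judgments for five studies
studies = [
    {"yi": -0.35, "vi": 0.04, "rob": "low"},
    {"yi": -0.10, "vi": 0.02, "rob": "some concerns"},
    {"yi": -0.60, "vi": 0.09, "rob": "high"},
    {"yi": -0.25, "vi": 0.03, "rob": "low"},
    {"yi": -0.80, "vi": 0.12, "rob": "high"},
]

all_est, all_se = pooled([s["yi"] for s in studies], [s["vi"] for s in studies])
lower_rob = [s for s in studies if s["rob"] != "high"]
sens_est, sens_se = pooled([s["yi"] for s in lower_rob],
                           [s["vi"] for s in lower_rob])

print(f"All studies:        {all_est:.2f} (SE {all_se:.2f})")
print(f"Excluding high RoB: {sens_est:.2f} (SE {sens_se:.2f})")
```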

Synthesis methods for quantitative data

Syntheses of quantitative data reported by primary studies are broadly categorized as one of two types: meta-analysis, and synthesis without meta-analysis (Table 4.4 ). Before deciding on one of these methods, authors should seek methodological advice about whether reported data can be transformed or used in other ways to provide a consistent effect measure across studies [ 160 , 161 ].

Common methods for quantitative synthesis

CI confidence interval (or credible interval, if analysis is done in Bayesian framework)

a See text for descriptions of the types of data combined in each of these approaches

b See Additional File 4  for guidance on the structure and presentation of forest plots

c General approach is similar to aggregate data meta-analysis but there are substantial differences relating to data collection and checking and analysis [ 162 ]. This approach to syntheses is applicable to intervention, diagnostic, and prognostic systematic reviews [ 163 ]

d Examples include meta-regression, hierarchical and multivariate approaches [ 164 ]

e In-depth guidance and illustrations of these methods are provided in Chapter 12 of the Cochrane Handbook [ 160 ]

Meta-analysis

Systematic reviews that employ meta-analysis should not be referred to simply as “meta-analyses.” The term meta-analysis strictly refers to a specific statistical technique used when study effect estimates and their variances are available, yielding a quantitative summary of results. In general, methods for meta-analysis involve use of a weighted average of effect estimates from two or more studies. If considered carefully, meta-analysis increases the precision of the estimated magnitude of effect and can offer useful insights about heterogeneity and estimates of effects. We refer to standard references for a thorough introduction and formal training [ 165 – 167 ].
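For orientation, the weighted average referred to above can be written in its familiar fixed-effect (inverse-variance) form; this is standard notation rather than a method specific to any one source cited here.

$$
\hat{\theta} \;=\; \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad w_i \;=\; \frac{1}{v_i},
\qquad \mathrm{SE}\big(\hat{\theta}\big) \;=\; \sqrt{\frac{1}{\sum_{i=1}^{k} w_i}}
$$

where \(\hat{\theta}_i\) and \(v_i\) are the effect estimate and variance reported by study \(i\) of \(k\) included studies. Under a common random-effects model, the weights become \(w_i^{*} = 1/(v_i + \tau^{2})\), with \(\tau^{2}\) the estimated between-study variance.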

There are three common approaches to meta-analysis in current health care–related systematic reviews (Table 4.4 ). Aggregate data meta-analysis is the most familiar to authors of evidence syntheses and their end users. This standard meta-analysis combines data on effect estimates reported by studies that investigate similar research questions involving direct comparisons of an intervention and comparator. Results of these analyses provide a single summary intervention effect estimate. If the included studies in a systematic review measure an outcome differently, their reported results may be transformed to make them comparable [ 161 ]. Forest plots visually present essential information about the individual studies and the overall pooled analysis (see Additional File 4  for details).
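As a simple, hedged example of such a transformation (not drawn verbatim from the cited guidance), a reported odds ratio and its 95% confidence interval can be converted to the log scale, the form typically needed for inverse-variance pooling, assuming the interval is symmetric on the log scale; the numbers below are invented.

```python
import math

# Invented example: reported odds ratio (OR) with a 95% confidence interval
odds_ratio, lower_ci, upper_ci = 0.75, 0.60, 0.94

# Work on the log scale, where the sampling distribution is approximately normal
log_or = math.log(odds_ratio)
# Back-calculate the standard error from the CI width (a 95% CI spans about ±1.96 SE)
se_log_or = (math.log(upper_ci) - math.log(lower_ci)) / (2 * 1.96)

print(f"log(OR) = {log_or:.3f}, SE = {se_log_or:.3f}")
```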

Less familiar and more challenging meta-analytical approaches used in secondary research include individual participant data (IPD) and network meta-analyses (NMA); PRISMA extensions provide reporting guidelines for both [ 117 , 118 ]. In IPD, the raw data on each participant from each eligible study are re-analyzed as opposed to the study-level data analyzed in aggregate data meta-analyses [ 168 ]. This may offer advantages, including the potential for limiting concerns about bias and allowing more robust analyses [ 163 ]. As suggested by the description in Table 4.4 , NMA is a complex statistical approach. It combines aggregate data [ 169 ] or IPD [ 170 ] for effect estimates from direct and indirect comparisons reported in two or more studies of three or more interventions. This makes it a potentially powerful statistical tool; while multiple interventions are typically available to treat a condition, few have been evaluated in head-to-head trials [ 171 ]. Both IPD and NMA facilitate a broader scope, and potentially provide more reliable and/or detailed results; however, compared with standard aggregate data meta-analyses, their methods are more complicated, time-consuming, and resource-intensive, and they have their own biases, so one needs sufficient funding, technical expertise, and preparation to employ them successfully [ 41 , 172 , 173 ].

Several items in AMSTAR-2 and ROBIS address meta-analysis; thus, understanding the strengths, weaknesses, assumptions, and limitations of methods for meta-analyses is important. According to the standards of both tools, plans for a meta-analysis must be addressed in the review protocol, including reasoning, description of the type of quantitative data to be synthesized, and the methods planned for combining the data. This should not consist of stock statements describing conventional meta-analysis techniques; rather, authors are expected to anticipate issues specific to their research questions. Concern for the lack of training in meta-analysis methods among systematic review authors cannot be overstated. For those with training, the use of popular software (eg, RevMan [ 174 ], MetaXL [ 175 ], JBI SUMARI [ 176 ]) may facilitate exploration of these methods; however, such programs cannot substitute for the accurate interpretation of the results of meta-analyses, especially for more complex meta-analytical approaches.

Synthesis without meta-analysis

There are varied reasons a meta-analysis may not be appropriate or desirable [ 160 , 161 ]. Syntheses that informally use statistical methods other than meta-analysis are variably referred to as descriptive, narrative, or qualitative syntheses or summaries; these terms are also applied to syntheses that make no attempt to statistically combine data from individual studies. However, use of such imprecise terminology is discouraged; in order to fully explore the results of any type of synthesis, some narration or description is needed to supplement the data visually presented in tabular or graphic forms [ 63 , 177 ]. In addition, the term “qualitative synthesis” is easily confused with a synthesis of qualitative data in a qualitative or mixed methods review. “Synthesis without meta-analysis” is currently the preferred description of other ways to combine quantitative data from two or more studies. Use of this specific terminology when referring to these types of syntheses also implies the application of formal methods (Table 4.4 ).

Methods for syntheses without meta-analysis involve structured presentation of the data in tables and plots. In comparison to narrative descriptions of each study, these are designed to more effectively and transparently show patterns and convey detailed information about the data; they also allow informal exploration of heterogeneity [ 178 ]. In addition, acceptable quantitative statistical methods (Table 4.4 ) are formally applied; however, it is important to recognize these methods have significant limitations for the interpretation of the effectiveness of an intervention [ 160 ]. Nevertheless, when meta-analysis is not possible, the application of these methods is less prone to bias compared with an unstructured narrative description of included studies [ 178 , 179 ].

Vote counting is commonly used in systematic reviews and involves a tally of studies reporting results that meet some threshold of importance applied by review authors. Until recently, it has not typically been identified as a method for synthesis without meta-analysis. Guidance on an acceptable vote counting method based on direction of effect is currently available [ 160 ] and should be used instead of narrative descriptions of such results (eg, “more than half the studies showed improvement”; “only a few studies reported adverse effects”; “7 out of 10 studies favored the intervention”). Unacceptable methods include vote counting by statistical significance or magnitude of effect or some subjective rule applied by the authors.
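To illustrate, the sketch below tallies directions of effect across studies and applies a two-sided binomial (sign) test against the null of no preferred direction, broadly in the spirit of the acceptable vote-counting method referenced above; the data are invented and this sketch is not a substitute for the cited guidance.

```python
from scipy.stats import binomtest  # requires SciPy >= 1.7

# Invented directions of effect for 12 studies:
# +1 = favors intervention, -1 = favors comparator (ties/unclear excluded)
directions = [+1, +1, -1, +1, +1, +1, -1, +1, +1, +1, -1, +1]

favoring = sum(1 for d in directions if d > 0)
total = len(directions)

# Sign test: is the proportion of studies favoring the intervention different from 0.5?
result = binomtest(favoring, total, p=0.5)
print(f"{favoring}/{total} studies favor the intervention "
      f"(two-sided p = {result.pvalue:.3f})")
```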

AMSTAR-2 and ROBIS standards do not explicitly address conduct of syntheses without meta-analysis, although AMSTAR-2 items 13 and 14 might be considered relevant. Guidance for the complete reporting of syntheses without meta-analysis for systematic reviews of interventions is available in the Synthesis without Meta-analysis (SWiM) guideline [ 180 ] and methodological guidance is available in the Cochrane Handbook [ 160 , 181 ].

Familiarity with AMSTAR-2 and ROBIS makes sense for authors of systematic reviews as these appraisal tools will be used to judge their work; however, training is necessary for authors to truly appreciate and apply methodological rigor. Moreover, judgment of the potential contribution of a systematic review to the current knowledge base goes beyond meeting the standards of AMSTAR-2 and ROBIS. These tools do not explicitly address some crucial concepts involved in the development of a systematic review; this further emphasizes the need for author training.

We recommend that systematic review authors incorporate specific practices or exercises when formulating a research question at the protocol stage. These should be designed to raise the review team’s awareness of how to prevent research and resource waste [ 84 , 130 ] and to stimulate careful contemplation of the scope of the review [ 30 ]. Authors’ training should also focus on justifiably choosing a formal method for the synthesis of quantitative and/or qualitative data from primary research; both types of data require specific expertise. For typical reviews that involve syntheses of quantitative data, statistical expertise is necessary, initially for decisions about appropriate methods [ 160 , 161 ] and then to inform any meta-analyses [ 167 ] or other statistical methods applied [ 160 ].

Part 5. Rating overall certainty of evidence

Report of an overall certainty of evidence assessment in a systematic review is an important new reporting standard of the updated PRISMA 2020 guidelines [ 93 ]. Systematic review authors are well acquainted with assessing RoB in individual primary studies, but much less familiar with assessment of overall certainty across an entire body of evidence. Yet a reliable way to evaluate this broader concept is now recognized as a vital part of interpreting the evidence.

Historical systems for rating evidence are based on study design and usually involve hierarchical levels or classes of evidence that use numbers and/or letters to designate the level/class. These systems were endorsed by various EBM-related organizations. Professional societies and regulatory groups then widely adopted them, often with modifications for application to the available primary research base in specific clinical areas. In 2002, a report issued by the AHRQ identified 40 systems to rate quality of a body of evidence [ 182 ]. A critical appraisal of systems used by prominent health care organizations published in 2004 revealed limitations in sensibility, reproducibility, applicability to different questions, and usability to different end users [ 183 ]. Persistent use of hierarchical rating schemes to describe overall quality continues to complicate the interpretation of evidence. This is indicated by recent reports of poor interpretability of systematic review results by readers [ 184 – 186 ] and misleading interpretations of the evidence related to the “spin” systematic review authors may put on their conclusions [ 50 , 187 ].

Recognition of the shortcomings of hierarchical rating systems raised concerns that misleading clinical recommendations could result even if based on a rigorous systematic review. In addition, the number and variability of these systems were considered obstacles to quick and accurate interpretations of the evidence by clinicians, patients, and policymakers [ 183 ]. These issues contributed to the development of the GRADE approach. An international working group, which continues to actively evaluate and refine the approach, first introduced GRADE in 2004 [ 188 ]. Currently more than 110 organizations from 19 countries around the world have endorsed or are using GRADE [ 189 ].

GRADE approach to rating overall certainty

GRADE offers a consistent and sensible approach for two separate processes: rating the overall certainty of a body of evidence and the strength of recommendations. The former is the expected conclusion of a systematic review, while the latter is pertinent to the development of CPGs. As such, GRADE provides a mechanism to bridge the gap from evidence synthesis to application of the evidence for informed clinical decision-making [ 27 , 190 ]. We briefly examine the GRADE approach but only as it applies to rating overall certainty of evidence in systematic reviews.

In GRADE, use of “certainty” of a body of evidence is preferred over the term “quality.” [ 191 ] Certainty refers to the level of confidence systematic review authors have that, for each outcome, an effect estimate represents the true effect. The GRADE approach to rating confidence in estimates begins with identifying the study type (RCT or NRSI) and then systematically considers criteria to rate the certainty of evidence up or down (Table 5.1 ).

GRADE criteria for rating certainty of evidence

a Applies to randomized studies

b Applies to non-randomized studies

This process results in assignment of one of the four GRADE certainty ratings to each outcome; these are clearly conveyed with the use of basic interpretation symbols (Table 5.2 ) [ 192 ]. Notably, when multiple outcomes are reported in a systematic review, each outcome is assigned a unique certainty rating; thus different levels of certainty may exist in the body of evidence being examined.

GRADE certainty ratings and their interpretation symbols a

a From the GRADE Handbook [ 192 ]

GRADE’s developers acknowledge some subjectivity is involved in this process [ 193 ]. In addition, they emphasize that both the criteria for rating evidence up and down (Table 5.1 ) as well as the four overall certainty ratings (Table 5.2 ) reflect a continuum as opposed to discrete categories [ 194 ]. Consequently, deciding whether a study falls above or below the threshold for rating up or down may not be straightforward, and preliminary overall certainty ratings may be intermediate (eg, between low and moderate). Thus, the proper application of GRADE requires systematic review authors to take an overall view of the body of evidence and explicitly describe the rationale for their final ratings.
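To make the mechanics of the rating process concrete, the following sketch encodes the basic logic described above: evidence from RCTs starts at high certainty, evidence from NRSI starts at low certainty, and the criteria in Table 5.1 move the rating down or up by one or two levels, bounded by the four categories in Table 5.2. This is only an illustrative simplification under our own naming assumptions (the function and dictionary keys are hypothetical); GRADE itself treats these judgments as a continuum requiring explicit rationale, not a mechanical calculation.

```python
# Illustrative sketch only; not an official GRADE algorithm. It simply mirrors
# the start-high/start-low and rate-down/rate-up logic described in the text.
LEVELS = ["very low", "low", "moderate", "high"]

def provisional_certainty(study_design, downgrades, upgrades):
    """Return a provisional certainty rating for a single outcome.

    study_design: "RCT" (starts at high) or "NRSI" (starts at low)
    downgrades:   e.g. {"risk_of_bias": 1, "imprecision": 2}, 0-2 levels each
    upgrades:     e.g. {"large_effect": 1}, 0-2 levels each
    """
    start = LEVELS.index("high") if study_design == "RCT" else LEVELS.index("low")
    score = start - sum(downgrades.values()) + sum(upgrades.values())
    return LEVELS[max(0, min(score, len(LEVELS) - 1))]  # clamp to the four levels

# A body of RCT evidence rated down one level each for risk of bias and
# imprecision ends up as "low" certainty.
print(provisional_certainty("RCT", {"risk_of_bias": 1, "imprecision": 1}, {}))
```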

Advantages of GRADE

Outcomes important to the individuals who experience the problem of interest maintain a prominent role throughout the GRADE process [ 191 ]. These outcomes must inform the research questions (eg, PICO [population, intervention, comparator, outcome]) that are specified a priori in a systematic review protocol. Evidence for these outcomes is then investigated and each critical or important outcome is ultimately assigned a certainty of evidence as the end point of the review. Notably, limitations of the included studies have an impact at the outcome level. Ultimately, the certainty ratings for each outcome reported in a systematic review are considered by guideline panels. They use a different process to formulate recommendations that involves assessment of the evidence across outcomes [ 201 ]. It is beyond our scope to describe the GRADE process for formulating recommendations; however, it is critical to understand how these two outcome-centric concepts of certainty of evidence in the GRADE framework are related and distinguished. An in-depth illustration using examples from recently published evidence syntheses and CPGs is provided in Additional File 5 A (Table AF5A-1).

The GRADE approach is applicable irrespective of whether the certainty of the primary research evidence is high or very low; in some circumstances, indirect evidence of higher certainty may be considered if direct evidence is unavailable or of low certainty [ 27 ]. In fact, most interventions and outcomes in medicine have low or very low certainty of evidence based on GRADE and there seems to be no major improvement over time [ 202 , 203 ]. This is still a very important (even if sobering) realization for calibrating our understanding of medical evidence. A major appeal of the GRADE approach is that it offers a common framework that enables authors of evidence syntheses to make complex judgments about evidence certainty and to convey these with unambiguous terminology. This prevents some common mistakes made by review authors, including overstating results (or under-reporting harms) [ 187 ] and making recommendations for treatment. This is illustrated in Table AF5A-2 (Additional File 5 A), which compares the concluding statements made about overall certainty in a systematic review with and without application of the GRADE approach.

Theoretically, application of GRADE should improve consistency of judgments about certainty of evidence, both between authors and across systematic reviews. In one empirical evaluation conducted by the GRADE Working Group, interrater reliability of two individual raters assessing certainty of the evidence for a specific outcome increased from ~ 0.3 without using GRADE to ~ 0.7 by using GRADE [ 204 ]. However, others report variable agreement among those experienced in GRADE assessments of evidence certainty [ 190 ]. Like any other tool, GRADE requires training in order to be properly applied. The intricacies of the GRADE approach and the necessary subjectivity involved suggest that improving agreement may require strict rules for its application; alternatively, use of general guidance and consensus among review authors may result in less consistency but provide important information for the end user [ 190 ].
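Interrater reliability of the kind quoted above is commonly reported with a chance-corrected agreement statistic such as Cohen's kappa. As a hedged illustration of how such a statistic is obtained for two raters assigning GRADE certainty levels to a set of outcomes, the sketch below uses scikit-learn's cohen_kappa_score; the ratings are invented for the example and are not data from the cited evaluation.

```python
# Hypothetical ratings: two raters independently assign GRADE certainty levels
# to ten outcomes; cohen_kappa_score measures their chance-corrected agreement.
from sklearn.metrics import cohen_kappa_score

rater_a = ["high", "moderate", "low", "low", "very low",
           "moderate", "high", "low", "moderate", "very low"]
rater_b = ["high", "moderate", "moderate", "low", "very low",
           "low", "high", "low", "moderate", "very low"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.73 here; values above ~0.6 are usually read as substantial agreement
```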

GRADE caveats

Simply invoking “the GRADE approach” does not automatically ensure GRADE methods were employed by authors of a systematic review (or developers of a CPG). Table 5.3 lists the criteria the GRADE working group has established for this purpose. These criteria highlight the specific terminology and methods that apply to rating the certainty of evidence for outcomes reported in a systematic review [ 191 ], which is different from rating overall certainty across outcomes considered in the formulation of recommendations [ 205 ]. Modifications of standard GRADE methods and terminology are discouraged as these may detract from GRADE’s objectives to minimize conceptual confusion and maximize clear communication [ 206 ].

Criteria for using GRADE in a systematic review a

a Adapted from the GRADE working group [ 206 ]; this list does not contain the additional criteria that apply to the development of a clinical practice guideline

Nevertheless, GRADE is prone to misapplications [ 207 , 208 ], which can distort a systematic review’s conclusions about the certainty of evidence. Systematic review authors without proper GRADE training are likely to misinterpret the terms “quality” and “grade” and to misunderstand the constructs assessed by GRADE versus other appraisal tools. For example, review authors may reference the standard GRADE certainty ratings (Table 5.2 ) to describe evidence for their outcome(s) of interest. However, these ratings are invalidated if authors omit or inadequately perform RoB evaluations of each included primary study. Such deficiencies in RoB assessments are unacceptable but not uncommon, as reported in methodological studies of systematic reviews and overviews [ 104 , 186 , 209 , 210 ]. GRADE ratings are also invalidated if review authors do not formally address and report on the other criteria (Table 5.1 ) necessary for a GRADE certainty rating.

Other caveats pertain to application of a GRADE certainty of evidence rating in various types of evidence syntheses. Current adaptations of GRADE are described in Additional File 5 B and included on Table 6.3 , which is introduced in the next section.

Concise Guide to best practices for evidence syntheses, version 1.0 a

AMSTAR A MeaSurement Tool to Assess Systematic Reviews, CASP Critical Appraisal Skills Programme, CERQual Confidence in the Evidence from Reviews of Qualitative research, ConQual Establishing Confidence in the output of Qualitative research synthesis, COSMIN COnsensus-based Standards for the selection of health Measurement Instruments, DTA diagnostic test accuracy, eMERGe meta-ethnography reporting guidance, ENTREQ enhancing transparency in reporting the synthesis of qualitative research, GRADE Grading of Recommendations Assessment, Development and Evaluation, MA meta-analysis, NRSI non-randomized studies of interventions, P protocol, PRIOR Preferred Reporting Items for Overviews of Reviews, PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses, PROBAST Prediction model Risk Of Bias ASsessment Tool, QUADAS quality assessment of studies of diagnostic accuracy included in systematic reviews, QUIPS Quality In Prognosis Studies, RCT randomized controlled trial, RoB risk of bias, ROBINS-I Risk Of Bias In Non-randomised Studies of Interventions, ROBIS Risk of Bias in Systematic Reviews, ScR scoping review, SWiM systematic review without meta-analysis

a Superscript numbers represent citations provided in the main reference list. Additional File 6 lists links to available online resources for the methods and tools included in the Concise Guide

b The MECIR manual [ 30 ] provides Cochrane’s specific standards for both reporting and conduct of intervention systematic reviews and protocols

c Editorial and peer reviewers can evaluate completeness of reporting in submitted manuscripts using these tools. Authors may be required to submit a self-reported checklist for the applicable tools

d The decision flowchart described by Flemming and colleagues [ 223 ] is recommended for guidance on how to choose the best approach to reporting for qualitative reviews

e SWiM was developed for intervention studies reporting quantitative data. However, if there is not a more directly relevant reporting guideline, SWiM may prompt reviewers to consider the important details to report. (Personal Communication via email, Mhairi Campbell, 14 Dec 2022)

f JBI recommends their own tools for the critical appraisal of various quantitative primary study designs included in systematic reviews of intervention effectiveness, prevalence and incidence, and etiology and risk as well as for the critical appraisal of systematic reviews included in umbrella reviews. However, except for the JBI Checklists for studies reporting prevalence data and qualitative research, the development, validity, and reliability of these tools are not well documented

g Studies that are not RCTs or NRSI require tools developed specifically to evaluate their design features. Examples include single case experimental design [ 155 , 156 ] and case reports and series [ 82 ]

h The evaluation of methodological quality of studies included in a synthesis of qualitative research is debatable [ 224 ]. Authors may select a tool appropriate for the type of qualitative synthesis methodology employed. The CASP Qualitative Checklist [ 218 ] is an example of a published, commonly used tool that focuses on assessment of the methodological strengths and limitations of qualitative studies. The JBI Critical Appraisal Checklist for Qualitative Research [ 219 ] is recommended for reviews using a meta-aggregative approach

i Consider including risk of bias assessment of included studies if this information is relevant to the research question; however, scoping reviews do not include an assessment of the overall certainty of a body of evidence

j Guidance available from the GRADE working group [ 225 , 226 ]; also recommend consultation with the Cochrane diagnostic methods group

k Guidance available from the GRADE working group [ 227 ]; also recommend consultation with Cochrane prognostic methods group

l Used for syntheses in reviews with a meta-aggregative approach [ 224 ]

m Chapter 5 in the JBI Manual offers guidance on how to adapt GRADE to prevalence and incidence reviews [ 69 ]

n Janiaud and colleagues suggest criteria for evaluating evidence certainty for meta-analyses of non-randomized studies evaluating risk factors [ 228 ]

o The COSMIN user manual provides details on how to apply GRADE in systematic reviews of measurement properties [ 229 ]

The expected culmination of a systematic review should be a rating of overall certainty of a body of evidence for each outcome reported. The GRADE approach is recommended for making these judgments for outcomes reported in systematic reviews of interventions and can be adapted for other types of reviews. This represents the initial step in the process of making recommendations based on evidence syntheses. Peer reviewers should ensure authors meet the minimal criteria for supporting the GRADE approach when reviewing any evidence synthesis that reports certainty ratings derived using GRADE. Authors and peer reviewers of evidence syntheses unfamiliar with GRADE are encouraged to seek formal training and take advantage of the resources available on the GRADE website [ 211 , 212 ].

Part 6. Concise Guide to best practices

Accumulating data in recent years suggest that many evidence syntheses (with or without meta-analysis) are not reliable. This relates in part to the fact that their authors, who are often clinicians, can be overwhelmed by the plethora of ways to evaluate evidence. They tend to resort to familiar but often inadequate, inappropriate, or obsolete methods and tools and, as a result, produce unreliable reviews. These manuscripts may not be recognized as such by peer reviewers and journal editors who may disregard current standards. When such a systematic review is published or included in a CPG, clinicians and stakeholders tend to believe that it is trustworthy. A vicious cycle in which inadequate methodology is rewarded and potentially misleading conclusions are accepted is thus supported. There is no quick or easy way to break this cycle; however, increasing awareness of best practices among all these stakeholder groups, who often have minimal (if any) training in methodology, may begin to mitigate it. This is the rationale for inclusion of Parts 2 through 5 in this guidance document. These sections present core concepts and important methodological developments that inform current standards and recommendations. We conclude by taking a direct and practical approach.

Inconsistent and imprecise terminology used in the context of development and evaluation of evidence syntheses is problematic for authors, peer reviewers and editors, and may lead to the application of inappropriate methods and tools. In response, we endorse use of the basic terms (Table 6.1 ) defined in the PRISMA 2020 statement [ 93 ]. In addition, we have identified several problematic expressions and nomenclature. In Table 6.2 , we compile suggestions for preferred terms less likely to be misinterpreted.

Terms relevant to the reporting of health care–related evidence syntheses a

a Reproduced from Page and colleagues [ 93 ]

Terminology suggestions for health care–related evidence syntheses

a For example, meta-aggregation, meta-ethnography, critical interpretative synthesis, realist synthesis

b This term may best apply to the synthesis in a mixed methods systematic review in which data from different types of evidence (eg, qualitative, quantitative, economic) are summarized [ 64 ]

We also propose a Concise Guide (Table 6.3 ) that summarizes the methods and tools recommended for the development and evaluation of nine types of evidence syntheses. Suggestions for specific tools are based on the rigor of their development as well as the availability of detailed guidance from their developers to ensure their proper application. The formatting of the Concise Guide addresses a well-known source of confusion by clearly distinguishing the underlying methodological constructs that these tools were designed to assess. Important clarifications and explanations follow in the guide’s footnotes; associated websites, if available, are listed in Additional File 6 .

To encourage uptake of best practices, journal editors may consider adopting or adapting the Concise Guide in their instructions to authors and peer reviewers of evidence syntheses. Given the evolving nature of evidence synthesis methodology, the suggested methods and tools are likely to require regular updates. Authors of evidence syntheses should monitor the literature to ensure they are employing current methods and tools. Some types of evidence syntheses (eg, rapid, economic, methodological) are not included in the Concise Guide; for these, authors are advised to obtain recommendations for acceptable methods by consulting with their target journal.

We encourage the appropriate and informed use of the methods and tools discussed throughout this commentary and summarized in the Concise Guide (Table 6.3 ). However, we caution against their application in a perfunctory or superficial fashion. This is a common pitfall among authors of evidence syntheses, especially as the standards of such tools become associated with acceptance of a manuscript by a journal. Consequently, published evidence syntheses may show improved adherence to the requirements of these tools without necessarily making genuine improvements in their performance.

In line with our main objective, the suggested tools in the Concise Guide address the reliability of evidence syntheses; however, we recognize that the utility of systematic reviews is an equally important concern. An unbiased and thoroughly reported evidence synthesis may still not be highly informative if the evidence itself that is summarized is sparse, weak and/or biased [ 24 ]. Many intervention systematic reviews, including those developed by Cochrane [ 203 ] and those applying GRADE [ 202 ], ultimately find no evidence, or find the evidence to be inconclusive (eg, “weak,” “mixed,” or of “low certainty”). This often reflects the primary research base; however, it is important to know what is known (or not known) about a topic when considering an intervention for patients and discussing treatment options with them.

Alternatively, the frequency of “empty” and inconclusive reviews published in the medical literature may relate to limitations of conventional methods that focus on hypothesis testing; these have emphasized the importance of statistical significance in primary research and effect sizes from aggregate meta-analyses [ 183 ]. It is becoming increasingly apparent that this approach may not be appropriate for all topics [ 130 ]. Development of the GRADE approach has facilitated a better understanding of significant factors (beyond effect size) that contribute to the overall certainty of evidence. Other notable responses include the development of integrative synthesis methods for the evaluation of complex interventions [ 230 , 231 ], the incorporation of crowdsourcing and machine learning into systematic review workflows (eg, the Cochrane Evidence Pipeline) [ 2 ], the paradigm shift to living systematic review and NMA platforms [ 232 , 233 ], and the proposal of a new evidence ecosystem that fosters bidirectional collaborations and interactions among a global network of evidence synthesis stakeholders [ 234 ]. These evolutions in data sources and methods may ultimately make evidence syntheses more streamlined, less duplicative, and, more importantly, more useful for timely policy and clinical decision-making; however, that will only be the case if they are rigorously conducted and reported.

We look forward to others’ ideas and proposals for the advancement of methods for evidence syntheses. For now, we encourage dissemination and uptake of the currently accepted best tools and practices for their development and evaluation; at the same time, we stress that uptake of appraisal tools, checklists, and software programs cannot substitute for proper education in the methodology of evidence syntheses and meta-analysis. Authors, peer reviewers, and editors must strive to make accurate and reliable contributions to the present evidence knowledge base; online alerts, upcoming technology, and accessible education may make this more feasible than ever before. Our intention is to improve the trustworthiness of evidence syntheses across disciplines, topics, and types of evidence syntheses. All of us must continue to study, teach, and act cooperatively for that to happen.

Acknowledgements

Michelle Oakman Hayes for her assistance with the graphics, Mike Clarke for his willingness to answer our seemingly arbitrary questions, and Bernard Dan for his encouragement of this project.

Authors’ contributions

All authors participated in the development of the ideas, writing, and review of this manuscript. The author(s) read and approved the final manuscript.

The work of John Ioannidis has been supported by an unrestricted gift from Sue and Bob O’Donnell to Stanford University.

Declarations

The authors declare no competing interests.

This article has been published simultaneously in BMC Systematic Reviews, Acta Anaesthesiologica Scandinavica, BMC Infectious Diseases, British Journal of Pharmacology, JBI Evidence Synthesis, the Journal of Bone and Joint Surgery Reviews, and the Journal of Pediatric Rehabilitation Medicine.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Systematic Literature Review or Literature Review?


As a researcher, you may be required to conduct a literature review. But what kind of review do you need to complete? Is it a systematic literature review or a standard literature review? In this article, we’ll outline the purpose of a systematic literature review, the difference between literature review and systematic review, and other important aspects of systematic literature reviews.

What is a Systematic Literature Review?

The purpose of a systematic literature review is simple: to provide a high-level synthesis of the published evidence on a particular research question. That question is highly focused and defines the scope of the literature to be reviewed, for example, a question about a specific medical or clinical outcome.

The components of a systematic literature review are quite different from those of the standard literature review most of us are used to (more on this below). And because of the specificity of the research question, a systematic literature review typically involves more than one primary author. There is more work involved in a systematic literature review, so it makes sense to divide it among two or three (or even more) researchers.

Your systematic literature review will follow very clear and defined protocols that are decided on prior to any review. This involves extensive planning, and a deliberately designed search strategy that is in tune with the specific research question. Every aspect of a systematic literature review, including the research protocols, which databases are used, and dates of each search, must be transparent so that other researchers can be assured that the systematic literature review is comprehensive and focused.

Most systematic literature reviews originated in medical science; now they are also applied to evidence-based research questions in other fields. In addition to the focus and transparency of these types of reviews, additional aspects of a quality systematic literature review include:

  • Clear and concise review and summary
  • Comprehensive coverage of the topic
  • Accessibility and equality of the research reviewed

Systematic Review vs Literature Review

The difference between literature review and systematic review comes back to the initial research question. Whereas the systematic review is very specific and focused, the standard literature review is much more general. The components of a literature review, for example, are similar to any other research paper. That is, it includes an introduction, description of the methods used, a discussion and conclusion, as well as a reference list or bibliography.

A systematic review, however, includes entirely different components that reflect the specificity of its research question, and the requirement for transparency and inclusion. For instance, the systematic review will include:

  • Eligibility criteria for included research
  • A description of the systematic research search strategy
  • An assessment of the validity of reviewed research
  • Interpretations of the results of research included in the review

As you can see, contrary to the general overview or summary of a topic, the systematic literature review requires much more detail and work to compile than a standard literature review. Indeed, it can take years to conduct and write a systematic literature review. But the information that practitioners and other researchers can glean from a systematic literature review is, by its very nature, exceptionally valuable.

This is not to diminish the value of the standard literature review. The importance of literature reviews in research writing is discussed in this article. It's just that the two types of research reviews answer different questions, and, therefore, have different purposes and roles in the world of research and evidence-based writing.

Systematic Literature Review vs Meta Analysis

It would be understandable to think that a systematic literature review is similar to a meta-analysis. But, whereas a systematic review gathers and appraises several research studies to answer a specific question, a meta-analysis goes a step further and statistically combines the results of those studies into a single pooled estimate, which also helps reveal any inconsistencies or discrepancies among them. For more about this topic, check out our Systematic Review vs Meta-Analysis article.
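To make the distinction concrete, the core of a meta-analysis is the statistical pooling step. The sketch below shows a minimal inverse-variance, fixed-effect pooling of three hypothetical effect sizes; the numbers are invented for illustration, and real meta-analyses typically use dedicated packages and also consider random-effects models and heterogeneity.

```python
# Minimal inverse-variance (fixed-effect) pooling of hypothetical effect sizes.
# Each study is weighted by 1 / variance of its effect estimate.
import math

studies = [  # (effect size, standard error); invented numbers for illustration
    (0.40, 0.10),
    (0.25, 0.15),
    (0.35, 0.08),
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")
```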

Language Editing Plus

With Elsevier’s Language Editing Plus services, you can relax with our complete language review of your systematic literature review or literature review, or any other type of manuscript or scientific presentation. Our editors are PhDs or PhD candidates who are native English speakers. Language Editing Plus includes checking the logic and flow of your manuscript, reference checks, formatting in accordance with your chosen journal, and even a custom cover letter. Our most comprehensive editing package, Language Editing Plus also includes any English-editing needs for up to 180 days.



Systematic Review Consultants LTD

Systematic Review Services

We are a team of consultants working as Information Specialists, Systematic Reviewers, Health Economists, Research Analysts, Biostatisticians, Epidemiologists, and Medical Writers. We provide a wide range of services, from writing the proposal to writing the final manuscript.

Whether you are a student*, a senior researcher, a clinician, an academician, or a policy maker, we support you with all types of literature reviews and relevant services:

Evidence Synthesis

  • Scoping Reviews (See published example)
  • Systematic Reviews (See published examples)
  • Cochrane Reviews (See published example)
  • Rapid Reviews
  • Realist Reviews
  • Evidence Gap and Map (See published example)
  • Overviews / Umbrella Reviews (Systematic Review of Systematic Reviews)
  • Evidence Synthesis for Clinical Practice Guidelines
  • Health Economics and Outcome Research (HEOR)
  • Cost-Benefit Analysis, Cost-Effectiveness Analysis, Cost-Utility Analysis, and Cost-Consequences Analysis
  • Cohort Models (Decision Trees and Markov Model)
  • Disease Models (Micro-Simulation and Markov Model)
  • Early Modelling / Early Model
  • Partitioned Survival Model
  • Stochastic Modelling

Regulatory Affairs

  • Health Technology Assessment (HTA)
  • Clinical Evaluation Report (CER) for UKCA Marking
  • Clinical Evaluation Report (CER) for EU CE Marking, UKCA Marking
  • Assessment of Artificial Intelligence and Machine Learning-Based Devices and Technologies
  • Horizon Scanning (+SDI/CAS)

Other Services

  • Medical Translations (All Languages) (See published example)
  • Data Management Plans for Grant Application
  • Scientific Writing for Publication

We provide Systematic Review Consulting Services for all stages of systematic reviewing:

  • Providing information, support, and some advice (contact us to book a time slot)
  • Training (watch the webinars on our YouTube channel and subscribe for more)
  • Grant application development (including Data Management Plans)
  • Protocol development and registration
  • Systematic search and removing duplicates (See free training here)
  • Search Support for Cochrane Reviews
  • Finding Answers to Clinical Questions (See free training here)
  • Additional searching, including grey literature (See free training here) and snowballing
  • Choosing the right computer program (See free training here)
  • Following reporting guidelines such as PRISMA 2020, PRISMA-ScR, PRISMA-S, TIDieR, SWiM and PRISMA flow diagram
  • Screening the search results (See free training here)
  • Creating study
  • Extracting data from studies
  • Assessment of Risk of Bias or Quality Appraisal
  • Translating from all languages into English (See published example)
  • Summarising, indexing and abstracting services for individuals and medical journals
  • Language editing for publication
  • Obtaining the full reports
  • Meta-analysis, network meta-analysis, meta-synthesis, and narrative synthesis
  • Report writing for medical journals
  • Choosing the right journal (Watch series)
  • Formatting for publication based on instructions for authors (by the way, ChatGPT can do this for you for free :D)
  • Supporting submissions, replying to peer-review comments, and publications
  • Post-publication customer services until the end of the research project
  • Daily, weekly, monthly, quarterly, or annual updates on any topic
  • Assessment for Indexing Journals in MEDLINE, Embase, Scopus, Web of Science (ISI), PubMed, PubMed CENTRAL, CINAHL, and PsycINFO
  • Altmetrics, Bibliometrics, Cybermetrics, Scientometrics, and Webometrics (See published example)

* We do not provide Ghost Authorship ( Ghostwriter ) services. If you are a student or researcher conducting a review as part of your coursework or PhD dissertation, you should contact us using your academic or organisational email address and CC your supervisor/course director/line manager’s academic email address so they can confirm that you are allowed to use our paid services based on your institute’s academic honesty policies. The company’s name and services should be acknowledged in the final report. If the authorship criteria are met, the company members’ names should be listed as co-authors.

Email us or see Testimonials from our previous and current clients before you decide.


Systematic Reviews & Literature Reviews

Evidence synthesis: part 1.

This blog post is the first in a series exploring Evidence Synthesis. We’re going to start by looking at two types of evidence synthesis: literature reviews and systematic reviews. To help me with this topic I looked at a number of research guides from other institutions, e.g., Cornell University Libraries.

The Key Differences Between a Literature Review and a Systematic Review

Overall, while both literature reviews and systematic reviews involve reviewing existing research literature, systematic reviews adhere to more rigorous and transparent methods to minimize bias and provide robust evidence to inform decision-making in education and other fields. If you are interested in learning about other types of evidence synthesis, this decision tree created by Cornell Libraries (Robinson, n.d.) is a nice visual introduction.

Along with exploring evidence synthesis, I am also interested in generative A.I. I want to be transparent about how I used A.I. to create the table above. I fed this prompt into ChatGPT:

“List the differences between a literature review and a systemic review for a graduate student of education”

I wanted to see what it would produce. I reformatted the list into a table so that it would be easier to compare and contrast these two reviews, much like the one created by Cornell University Libraries (Kibbee, 2024). I think ChatGPT did a pretty good job. I did have to do quite a bit of editing and make sure that what was created matched what I already knew. There are things ChatGPT left out, for example time frames and how many people are needed for a systematic review, but we can revisit that in a later post.

Kibbee, M. (2024, April 10). LibGuides: A guide to evidence synthesis: Cornell University Library Evidence Synthesis Service. Cornell University Library. https://guides.library.cornell.edu/evidence-synthesis/intro


A Global Systematic Literature Review of Ecosystem Services in Reef Environments

  • Published: 25 November 2023
  • Volume 73, pages 634–645 (2024)


  • Vinicius J. Giglio 1 ,
  • Anaide W. Aued 2 ,
  • Cesar A. M. M. Cordeiro 3 ,
  • Linda Eggertsen 4 , 5 ,
  • Débora S. Ferrari 6 ,
  • Leandra R. Gonçalves 7 ,
  • Natalia Hanazaki 2 ,
  • Osmar J. Luiz 8 ,
  • André L. Luza 4 ,
  • Thiago C. Mendes 9 ,
  • Hudson T. Pinheiro 10 ,
  • Bárbara Segal 2 ,
  • Luiza S. Waechter 4 &
  • Mariana G. Bender 4  


Ecosystem services (ES) embrace contributions of nature to human livelihood and well-being. Reef environments provide a range of ES with direct and indirect contributions to people. However, the health of reef environments is declining globally due to local and large-scale threats, affecting ES delivery in different ways. Mapping scientific knowledge and identifying research gaps on reefs’ ES is critical to guide their management and conservation. We conducted a systematic assessment of peer-reviewed articles published between 2007 and 2022 to build an overview of ES research on reef environments. We analyzed the geographical distribution, reef types, approaches used to assess ES, and the potential drivers of change in ES delivery reported across these studies. Based on 115 articles, our results revealed that coral and oyster reefs are the most studied reef ecosystems. Cultural ES (e.g., subcategories recreation and tourism) was the most studied ES in high-income countries, while regulating and maintenance ES (e.g., subcategory life cycle maintenance) prevailed in low and middle-income countries. Research efforts on reef ES are biased toward the Global North, mainly North America and Oceania. Studies predominantly used observational approaches to assess ES, with a marked increase in the number of studies using statistical modeling during 2021 and 2022. The scale of studies was mostly local and regional, and the studies addressed mainly one or two subcategories of reefs’ ES. Overexploitation, reef degradation, and pollution were the most commonly cited drivers affecting the delivery of provisioning, regulating and maintenance, and cultural ES. With increasing threats to reef environments, the growing demand for assessing the contributions to humans provided by reefs will benefit the projections on how these ES will be impacted by anthropogenic pressures. The incorporation of multiple and synergistic ecosystem mechanisms is paramount to providing a comprehensive ES assessment, and improving the understanding of functions, services, and benefits.


Data availability.

The dataset is available as supplementary material.


Acknowledgements

This study is part of the Reef Synthesis Working Group (ReefSYN) founded by the Synthesis Center on Biodiversity and Ecosystem Services (SinBiose, CNPq, grant 442417/2019-5). NH thanks to CNPq for a research scholarship (306789/2022-1). ALL acknowledges postdoctoral fellowships from CNPq (#153024/2022-4, #164240/2021-7, #151228/2021-3, #152410/2020-1) and LE thanks CNPq for a postdoctoral grant (#150095/2022-8). TCM thanks CNPq for a postdoctoral fellowship (#102450/2022-6). HTP thanks FAPESP for funding and fellowship (2019/24215-2; 2021/07039-6). CAMMC thanks FAPERJ agency for the fellowship (E-26/200.215/2023).

Author information

Authors and affiliations.

Universidade Federal do Oeste do Pará, Campus Oriximiná, PA, Brazil

Vinicius J. Giglio

Departamento de Ecologia e Zoologia, Universidade Federal de Santa Catarina, Florianópolis, SC, Brazil

Anaide W. Aued, Natalia Hanazaki & Bárbara Segal

Laboratório de Ciências Ambientais, Universidade Estadual do Norte Fluminense, Campos dos Goytacazes, RJ, Brazil

Cesar A. M. M. Cordeiro

Departamento de Ecologia e Evolução, Universidade Federal de Santa Maria, Santa Maria, RS, Brazil

Linda Eggertsen, André L. Luza, Luiza S. Waechter & Mariana G. Bender

Hawai’i Institute of Marine Biology, University of Hawai’i at Manoa, Kaneohe, HI, 96744, USA

Linda Eggertsen

Programa de Pós Graduação em Ecologia, Universidade Federal de Santa Catarina, Florianópolis, SC, Brazil

Débora S. Ferrari

Instituto do Mar, Universidade Federal de São Paulo, Santos, SP, Brazil

Leandra R. Gonçalves

Research Institute for the Environment and Livelihoods, Charles Darwin University, Darwin, NT, Australia

Osmar J. Luiz

Departamento de Biologia Marinha, Universidade Federal Fluminense, Niterói, RJ, Brazil

Thiago C. Mendes

Centro de Biologia Marinha, Universidade de São Paulo, São Sebastião, SP, Brazil

Hudson T. Pinheiro


Contributions

All authors conceived the project and collected the data. VJG analyzed data and wrote the first version of the paper. All authors reviewed the manuscript and contributed to the final version of the paper.

Corresponding author

Correspondence to Vinicius J. Giglio.

Ethics declarations

Conflict of interest.

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Giglio, V.J., Aued, A.W., Cordeiro, C.A.M.M. et al. A Global Systematic Literature Review of Ecosystem Services in Reef Environments. Environmental Management 73, 634–645 (2024). https://doi.org/10.1007/s00267-023-01912-y


Received: 13 December 2022

Accepted: 05 November 2023

Published: 25 November 2023

Issue Date: March 2024

DOI: https://doi.org/10.1007/s00267-023-01912-y

Keywords: Ecosystem benefits; Marine ecosystem services; Reef systems; Coastal livelihoods; Human well-being; Food security

Supervised injection services: what has been demonstrated? A systematic literature review

Affiliations

  • 1 Department of Addiction Medicine, CHRU de Lille, Univ Lille Nord de France, F-59037 Lille, France; University of Lille 2, Faculty of Medicine, F-59045 Lille, France. Electronic address: [email protected].
  • 2 CHU Nancy, Maison des Addictions, Nancy F-54000, France; CHU Nancy, Centre d'Investigation Clinique CIC-INSERM 9501, Nancy F-54000, France.
  • 3 Institute of Social and Preventive Medicine, University Hospital Center and University of Lausanne, Chemin de la Corniche 10, 1010 Lausanne, Switzerland.
  • 4 Department of Addiction Medicine, CHRU de Lille, Univ Lille Nord de France, F-59037 Lille, France; University of Lille 2, Faculty of Medicine, F-59045 Lille, France.
  • PMID: 25456324
  • DOI: 10.1016/j.drugalcdep.2014.10.012

Background: Supervised injection services (SISs) have been developed to promote safer drug injection practices, enhance health-related behaviors among people who inject drugs (PWID), and connect PWID with external health and social services. Nevertheless, SISs have also been accused of fostering drug use and drug trafficking.

Aims: To systematically collect and synthesize the currently available evidence regarding SIS-induced benefits and harm.

Methods: A systematic review was performed via the PubMed, Web of Science, and ScienceDirect databases using the keyword algorithm [("supervised" or "safer") and ("injection" or "injecting" or "shooting" or "consumption") and ("facility" or "facilities" or "room" or "gallery" or "centre" or "site")].
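As an illustration only (not part of the original review), the bracketed keyword algorithm above can be assembled programmatically from its three synonym groups before being adapted to each database's interface; the minimal Python sketch below simply builds that boolean string.

```python
# Minimal sketch (not the authors' code): assemble the review's keyword
# algorithm from its three synonym groups so it can be pasted into
# PubMed, Web of Science, or ScienceDirect search boxes.
supervised_terms = ["supervised", "safer"]
injection_terms = ["injection", "injecting", "shooting", "consumption"]
facility_terms = ["facility", "facilities", "room", "gallery", "centre", "site"]

def or_block(terms):
    # Join synonyms with OR and wrap the block in parentheses.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(
    or_block(group) for group in (supervised_terms, injection_terms, facility_terms)
)
print(query)
# ("supervised" OR "safer") AND ("injection" OR "injecting" OR "shooting" OR
# "consumption") AND ("facility" OR "facilities" OR "room" OR "gallery" OR
# "centre" OR "site")
```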

Results: Seventy-five relevant articles were found. All studies converged to find that SISs were efficacious in attracting the most marginalized PWID, promoting safer injection conditions, enhancing access to primary health care, and reducing the overdose frequency. SISs were not found to increase drug injecting, drug trafficking or crime in the surrounding environments. SISs were found to be associated with reduced levels of public drug injections and dropped syringes. Of the articles, 85% originated from Vancouver or Sydney.

Conclusion: SISs have largely fulfilled their initial objectives without enhancing drug use or drug trafficking. Almost all of the studies found in this review were performed in Canada or Australia, whereas the majority of SISs are located in Europe. The implementation of new SISs in places with high rates of injection drug use and associated harms appears to be supported by evidence.

Keywords: Drug consumption facility; Drug consumption room; Injection drug user; Safer injection facility; Supervised injecting center; Supervised injection service.

Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

Publication types

  • Systematic Review

MeSH terms

  • Australia / epidemiology
  • Canada / epidemiology
  • Cohort Studies
  • Drug Overdose / diagnosis
  • Drug Overdose / epidemiology
  • Drug Overdose / prevention & control
  • Europe / epidemiology
  • Needle-Exchange Programs / methods*
  • Needle-Exchange Programs / trends
  • Substance Abuse Treatment Centers / methods*
  • Substance Abuse Treatment Centers / trends
  • Substance Abuse, Intravenous / diagnosis
  • Substance Abuse, Intravenous / epidemiology*
  • Substance Abuse, Intravenous / therapy*

REVIEW article

A systematic review of midwives’ training needs in perinatal mental health and related interventions.

Marine Dubreucq,*

  • 1 Centre referent de rehabilitation psychosociale, GCSMS REHACOOR 42, Saint-Étienne, France
  • 2 University Claude Bernard Lyon1, Research on Healthcare Performance (RESHAPE) INSERM U1290, Lyon, France
  • 3 AURORE Perinatal Network, Hospices civiles de Lyon, Croix Rousse Hospital, Lyon, France
  • 4 Departments of Psychiatry and Child & Adolescent Psychiatry, Erasmus MC, University Medical Center Rotterdam, Rotterdam, Netherlands
  • 5 Medical Library, Erasmus MC, University Medical Center Rotterdam, Rotterdam, Netherlands
  • 6 University Hospital of Saint-Étienne & EA 7423 (Troubles du Comportement Alimentaire, Addictions et Poids Extrêmes (TAPE), Université Jean Monnet - Saint-Etienne), Saint-Etienne, France
  • 7 University Hospital of Saint-Étienne, Department of Child and Adolescent Psychiatry, France & Marc Jeannerod Institute of Cognitive Sciences UMR 5229, CNRS & Claude Bernard University, Lyon, France

Background: Midwives may be key stakeholders to improve perinatal mental healthcare (PMHC). Three systematic reviews considered midwives’ educational needs in perinatal mental health (PMH) or related interventions with a focus on depression or anxiety. This systematic review aims to review: 1) midwives’ educational/training needs in PMH; 2) the training programs in PMH and their effectiveness in improving PMHC.

Methods: We searched six electronic databases using a search strategy designed by a biomedical information specialist. Inclusion criteria were: (1) focus on midwives; (2) reporting on training needs in PMH, perinatal mental health problems or related conditions or training programs; (3) using quantitative, qualitative or mixed-methods design. We used the Mixed Methods Appraisal Tool for study quality.

Results: Of 4969 articles screened, 66 papers met the eligibility criteria (47 on knowledge, skills or attitudes and 19 on training programs). Study quality was low to moderate in most studies. We found that midwives’ understanding of their role in PMHC (e.g. finding meaning in opening discussions about PMH; the perception that screening, referral and support are part of their routine clinical duties) is a key determinant. Training programs had positive effects on proximal outcomes (e.g. knowledge) and mixed effects on distal outcomes (e.g. number of referrals).

Conclusions: This review generated novel insights to inform initial and continuous education curricula on PMH (e.g. a focus on midwives’ understanding of their role in PMHC, or content on person-centered care).

Registration details: The protocol is registered on PROSPERO (CRD42021285926)

1 Introduction

Perinatal Mental Health Problems (PMHPs) affect parents during pregnancy and the first year after childbirth and commonly include anxiety, non-psychotic depressive episodes, psychotic episodes, post-traumatic stress disorder and adjustment disorder. Although often associated with poor parental and child outcomes ( 1 ), PMHPs remain predominantly unrecognized, undiagnosed and untreated ( 2 ).

Given that their role in perinatal care provides multiple occasions to discuss perinatal mental health ( 3 ), midwives may be key stakeholders in improving the detection, referral and management of PMHPs. Parents usually welcome midwives’ interest in their mental health and report preferring to discuss mental health issues with obstetric providers rather than with mental health providers ( 4 , 5 ). Assessing perinatal mental health (PMH) and detecting symptoms of postpartum depression, anxiety and psychosis are part of the essential competencies for midwifery practice according to the International Confederation of Midwives (2019) ( 6 ). However, despite being generally interested in assessing PMH and wellbeing ( 7 ), midwives report feeling less comfortable putting competencies related to PMH into practice compared with those related to physical health ( 8 , 9 ).

To our knowledge, three literature reviews have been conducted on midwives’ educational needs in perinatal mental health ( 7 , 10 , 11 ). These reviews reported a lack of knowledge, skills and confidence that was influential at different levels of the care pathway, e.g. detection, decision-making about referral, and support. However, there remain some limitations to the current body of evidence. First, all reviews found low-to-moderate quality studies coming predominantly from high-income countries. Second, two of the three reviews ( 10 , 11 ) - conducted in 2017 (n=17 articles) and 2022 (n=43 articles) - focused on perinatal depression or perinatal anxiety and did not cover the full range of PMHPs or related conditions (e.g. substance use disorder, serious mental illness (SMI), or autism). The third review ( 7 ), conducted in 2017 (n=22 articles), covered a wider range of PMHPs using an integrative review design, the other two ( 10 , 11 ) being systematic reviews. Third, previous reviews ( 7 , 10 , 11 ) focused on midwives’ knowledge, skills and attitudes and on context-related factors. However, it remains unclear whether improvements in these areas translate into changes in routine clinical practice (e.g. improved detection of PMHPs or facilitated decision-making about referral to mental health providers). Fourth, case identification - using formal or informal screening methods - has mixed effects on referral rates ( 7 ) and patient outcomes [e.g. limited effects of screening on depressive symptoms ( 12 , 13 )]. Fifth, two systematic reviews reported on training programs in perinatal depression [n=7 studies ( 10 ), n=12 studies ( 14 )]. However, these reviews included mixed samples [e.g. 37% midwives in Wang et al., 2022 ( 14 ) and 54% midwives in Legere et al., 2017 ( 10 )] and did not target the same set of skills [e.g. improving knowledge and detection ( 10 ); providing evidence-based interventions ( 14 )]. Reviews either investigated midwives’ training needs ( 7 , 11 ) or training interventions ( 10 , 14 ). The literature on training programs in PMH for student midwives and midwives remains scarce [n=4 studies ( 10 )]. A synthesis of the evidence before this study is presented in Table 1 .

Table 1 Evidence before this study.

The present review primarily aims to identify and review: 1) midwives’ educational/training needs in PMH (i.e. beyond perinatal depression or anxiety to include PMHPs, SMI, substance use disorder, and autism); 2) the existing interventions and their effectiveness in improving detection and management of PMHPs.

2 Methods

2.1 Search strategy

The protocol for this systematic review was reported according to PRISMA guidelines ( 15 ). The search strategy was designed by a biomedical information specialist (WMB) from the Medical Library of Erasmus MC, University Medical Center Rotterdam ( 16 ). We searched Embase, MEDLINE, Web of Science, the Cochrane Central Register of Controlled Trials, CINAHL, and PsycINFO for published, peer-reviewed original articles. The search combined terms for (1) perinatal mental health problems, serious mental illness (i.e. schizophrenia, mood disorders, personality disorders, anxiety), eating disorders, substance use disorders or autism, and (2) midwives’ knowledge, attitudes, skills or training needs, as well as existing training programs for midwives on PMH. We included only articles published in English or French. No time restriction was set. The search was updated prior to publication on 21 June 2023. We hand-searched the reference lists of three systematic literature reviews ( 7 , 10 , 11 ) for additional relevant articles. The full search strategy, search terms and syntax are presented in online Supplementary Table 1 .

2.2 Inclusion/exclusion criteria

To be included, articles had to meet all of the following criteria: 1) focus on midwives (including midwives, nurse-midwives, registered midwives, registered midwife tutors, registered midwife prescribers and registered advanced midwife practitioners - referred to as “midwives” in this review); 2) reporting on midwives’ training needs in PMH, PMHPs or related conditions, or on existing training programs focusing on the use of screening tools to detect PMHPs, on PMH in general, or on specific aspects of PMH; 3) using a quantitative, qualitative or mixed-methods design. For training programs, we included uncontrolled and controlled studies (placebo, TAU or active comparators).

Our exclusion criteria were: 1) no full text available, or studies published in languages other than English or French; 2) grey literature, because the aim of this systematic review was to guide the development of future interventions; 3) training programs on psychological interventions (e.g. cognitive behavior therapy), because this review focused on interventions aimed at improving midwives’ training on essential competencies related to PMH (e.g. PMH assessment, detection, referral and support of parents with PMHPs).

2.3 Selection and coding

The screening process was conducted in two separate stages: 1) two authors (M.D. and J.D.) independently screened the titles and abstracts of all non-duplicated papers, excluding those that were not relevant; discrepancies were resolved by consensus; 2) two authors (M.D. and J.D.) independently applied the eligibility criteria and screened the full-text papers to select the included studies. Disputed items were resolved through discussion and further reading of the paper to reach a final decision. Supplementary Tables 2 and 3 present the lists of included and excluded studies. Inter-rater reliability was calculated (kappa=0.90).
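The reported inter-rater reliability is Cohen’s kappa, which adjusts the raw agreement between the two screeners for the agreement expected by chance. The sketch below illustrates the calculation on made-up include/exclude decisions; it is not the authors’ code or data.

```python
# Illustrative only: Cohen's kappa for two independent screeners.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal proportions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude decisions on 10 titles (toy data, kappa ~0.74,
# not the 0.90 reported in the review).
a = ["include", "exclude", "exclude", "include", "exclude",
     "exclude", "include", "exclude", "exclude", "exclude"]
b = ["include", "exclude", "exclude", "include", "exclude",
     "exclude", "exclude", "exclude", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))
```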

2.4 Data extraction

Two authors (MD and JD) independently performed the data extraction. For each study, we extracted the following information: general information (author, year of publication, country, design, type of study, population considered, period), assessment tools or methods, cultural aspects, the main findings, and variables relating to quality assessment. For studies reporting on training programs, we also extracted information about the intervention (nature, type, length, targeted skills or outcomes, format), outcome measures, and effectiveness on midwives’ knowledge, attitudes, skills or routine use of screening tools to detect PMHPs, or on parents’ outcomes (e.g. depressive symptoms). Tables 2–6 present the factors associated with knowledge, skills, confidence and decisions about screening, referral or support; a hypothetical extraction record is sketched after the table list below. Supplementary Tables 4 and 5 present the detailed characteristics of the included studies.

Table 2 Factors influencing the level of knowledge and skills.

Table 3 Factors influencing confidence and the perception of being well-equipped.

Table 4 Factors influencing decisions about screening.

Table 5 Factors influencing decisions about referral.

Table 6 Factors influencing decisions about support.
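As a purely hypothetical illustration of the extraction items listed above, one extraction record might be represented as a simple structured object; every field name and value below is invented for the example and is not taken from the review’s actual extraction form.

```python
# Hypothetical data-extraction record; fields mirror the items listed in the
# Methods (general information, assessment tools, cultural aspects, main
# findings, quality variables, and training-program details). Values invented.
record = {
    "general": {
        "author": "Example et al.", "year": 2020, "country": "Ireland",
        "design": "quantitative", "population": "qualified midwives",
        "period": "antenatal and postnatal",
    },
    "assessment_tools": ["self-report questionnaire"],
    "cultural_aspects": None,
    "main_findings": "Midwives reported low confidence in suicide risk assessment.",
    "quality_assessment": {"tool": "MMAT", "criteria_met": 3},
    # Completed only for studies reporting on a training program.
    "training_program": {
        "type": "continuous education", "format": "face-to-face workshop",
        "length": "2 days", "targeted_skills": ["screening", "referral"],
        "outcomes": {"knowledge": "improved", "referrals": "no significant change"},
    },
}
print(record["general"]["design"])
```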

2.5 Quality assessment

Quality assessment was performed using the Mixed Methods Appraisal Tool (MMAT) ( 61 ). The MMAT is a validated instrument for assessing the methodological quality of qualitative studies, randomized controlled trials, non-randomized studies, quantitative descriptive studies, and mixed-methods studies. It comprises five 5-item subscales assessing different aspects of quality (e.g. appropriateness of the selected design/methods/measurements, integration of the quantitative and qualitative parts for mixed-methods studies). Two researchers (MD and JD) independently assessed methodological quality using the MMAT and extracted MMAT scores for each article. Discrepancies were resolved through consensus. The MMAT overall quality scores and detailed scores are provided in Supplementary Tables 4 and 5 . The study protocol was registered on PROSPERO on November 1, 2021 (CRD42021285926).
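To make the scoring step concrete, the sketch below shows one assumed way of tallying five yes/no/can’t-tell criterion ratings into a coarse low/moderate/high band; the criterion names and the cut-offs are placeholders for illustration, not the MMAT’s actual wording or the authors’ scoring rules.

```python
# Illustrative only: tally five criterion ratings into a quality band.
# Criterion names and cut-offs are assumptions, not the official MMAT items.
def quality_band(ratings):
    """ratings: dict of criterion -> 'yes' | 'no' | "can't tell" (5 items)."""
    met = sum(1 for value in ratings.values() if value == "yes")
    if met <= 2:
        return "low"
    if met <= 3:
        return "moderate"
    return "high"

study = {
    "clear_research_question": "yes",
    "appropriate_design": "yes",
    "appropriate_measurements": "can't tell",
    "low_risk_of_nonresponse_bias": "no",
    "appropriate_analysis": "yes",
}
print(quality_band(study))  # -> "moderate"
```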

3 Results

Of the 9650 articles found during searches from inception to 26 June 2023, 4969 references remained after removing all duplicates. Based on titles and abstracts, 4772 papers were excluded for lack of relevance, leaving 197 full-text articles. After full-text analysis of all these papers, 66 relevant papers remained (47 on knowledge, skills or attitudes and 19 on training programs; PRISMA diagram in Figure 1 ; the flow counts are re-derived in the sketch after Figure 1).

Figure 1 PRISMA diagram.
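The screening flow reported above can be checked arithmetically. The short sketch below (an illustration only) re-derives each count from the numbers in the text; the numbers of duplicates removed and of full-text exclusions are implied by subtraction rather than stated.

```python
# Re-deriving the PRISMA flow counts reported in the text.
identified = 9650                 # records found across all databases
after_dedup = 4969                # references remaining after duplicate removal
duplicates_removed = identified - after_dedup            # 4681 (implied)
excluded_title_abstract = 4772
full_text_assessed = after_dedup - excluded_title_abstract   # 197
included = 66                     # 47 knowledge/skills/attitudes + 19 training
excluded_full_text = full_text_assessed - included           # 131 (implied)
assert full_text_assessed == 197 and included == 47 + 19
print(duplicates_removed, full_text_assessed, excluded_full_text)  # 4681 197 131
```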

3.1 Study characteristics

The characteristics of the 66 included studies are presented in Tables 7 and 8 . Most studies were conducted in high-income countries (89.4%) and published after 2015 (50%). Study designs were quantitative (n=33; 50%), qualitative (n=22; 33.3%) or mixed-methods (n=11; 16.7%). Samples included qualified midwives (n=37; 56.0%), qualified midwives and other perinatal health providers (n=17; 25.8%) and student midwives (n=11; 16.7%). Qualified midwives had a variable level of training in PMH, ranging from none to 90% (specified in 24 studies; most covered topics: general information about PMH and PMHPs; least covered topics: interviewing/counseling skills, psychopharmacology and suicide risk assessment). Eight studies (12.1%) reported on midwives’ mental health nursing experience (ranging from 0.8% to 30%) or placement experience in a mental health setting or a mother-baby unit during their studies (ranging from 9% to 23.2%). Four studies (6%) mentioned family or personal experience of mental health problems, ranging from 25% to 66.3%. Most studies covered the entire perinatal period (n=44; 66.7%) and reported on PMHPs (n=32; 48.5%). The definition of PMHPs was highly variable across the studies (e.g. inclusion of conditions usually not considered as PMHPs, such as schizophrenia, bipolar disorder, personality disorders, self-harm, suicide, eating disorders or SUD in 16 studies; definition restricted to anxiety, depression, postpartum psychosis and/or posttraumatic stress disorder in 9 studies; unspecified in 7 studies). One third of the included studies used validated instruments to assess outcomes (n=16; 36.4%). Five studies (7.6%) investigated the influence of cultural aspects on the detection and management of PMHPs.

Table 7 Research characteristics of the 66 studies included in the review.

Table 8 Research characteristics of the training programs included in the review.

Of the 15 studies reporting on a training program using a quantitative or a mixed-methods design, three used a waiting-list control group (20%; one randomized controlled trial (RCT)) and 13 (86.7%) were uncontrolled. Sample size was small in most studies (<50 participants; n=9 studies). Nine studies (47.3%) reported contact with persons with lived experience when designing their training program. The training programs were heterogeneous in nature (initial training, n=6, 31.6%; continuous education, n=13, 68.4%), type, format and duration (ranging from 2 minutes to a fifteen-week module). All studies assessed training outcomes either immediately after (n=15; 79%) or up to 3 months after the intervention was delivered (n=4; 21%).

3.2 Quality assessment

The overall assessment scores ranged from low (n=30, 45.4%; n=13, 68.4% for training programs) to high (n=11, 16.7%; n=2, 10.5%). For quantitative or mixed-methods studies, the main reasons for lower ratings were convenience sampling (n=61 studies, 92.4%), sample size, low response rate (n=18 studies > 60%), limited use of validated outcome measures (36.4%), use of self-reported measures, absence or short duration of a follow-up period, limited integration of the results in mixed-methods studies, and a lack of controlled/RCT studies to evaluate the effectiveness of training programs. For qualitative studies, the reasons were interpretation bias (e.g. no investigator triangulation, the data being analyzed by only one researcher), absence of data saturation, and lack of reflexivity.

3.3 Narrative review

Many studies found that midwives felt ill equipped to care for parents with PMHPs [e.g. ranging from 69.2% of 815 midwives in Jones et al., 2011 ( 17 ) to 82.2% of 157 midwives in Noonan et al., 2018 ( 28 )]. The reasons included insufficient initial training/continuous education on PMH (n=2 studies), the perception that PMH assessment is not part of their role (n=2 studies), and a lack of knowledge about the detection, referral and management of PMHPs (n=12 studies). Compared with other perinatal health providers (GPs, health visitors, maternal child health nurses; n=11 studies), midwives had lower knowledge of PMH (n=2), felt less confident in the detection, referral or management of PMHPs (n=3) and had more negative attitudes toward their role in perinatal mental healthcare (PMHC) ( 57 ) or suicide prevention ( 40 ). Self-reported barriers to discussing PMH issues and self-reported interviewing skills did not differ between nurses and midwives ( 25 ). Student midwives’ knowledge, skills and attitudes in PMH did not clearly differ from those of qualified midwives (n=5 studies). On-the-job experience, learning from peers and attending workshops/conferences were midwives’ main sources of knowledge (n=3 studies).

The factors positively associated with knowledge about PMHPs included the perception of being well equipped to provide PMHC (66.7% significance), previous training in PMH (50% significance), younger age ( 17 ), shorter work experience in general and as a midwife (20% significance), frequent contact with parents with PMHPs (50% significance) and type of practice (33.3% significance). Mental health nursing experience was positively associated with the perception of being well equipped to provide PMHC, but not with higher knowledge about PMH ( 8 ). No significant association was found between confidence in providing PMHC and other factors [e.g. age, personal experience of mental health problems, frequent contact with parents with PMHPs ( 29 )], except for PMH education and case identification ( 8 ). Compared with suicide risk assessment and other conditions (e.g. postpartum psychosis, SMI, eating disorders or posttraumatic stress disorder; n=4 studies), midwives reported higher knowledge, better skills and more confidence in detecting and managing perinatal depression and anxiety. Midwives generally felt ill equipped to care for postpartum psychosis, eating disorders, posttraumatic stress and SMI (n=10 studies) and reported ambivalent or negative attitudes toward parents with these conditions (n=7 studies). Knowledge about PMHPs varied according to the assessment method [i.e. higher self-reported knowledge than researcher-rated knowledge ( 19 , 43 )] and the timing within the perinatal period (i.e. higher in the postpartum period than during pregnancy, n=5 studies).

3.3.1 Detection/screening

The practices and policies around screening for PMHPs varied across studies. There was considerable overlap between the factors influencing the decisions to screen, refer and support parents with PMHPs. Midwives’ attitudes toward their role in PMHC (e.g. personal interest in PMHPs and the perception that it is part of their role) played a central role in decision-making about opening discussions about PMH (n=12 studies), referral ( 42 , 57 ) and supporting parents with PMHPs (n=6). Cultural aspects and stigma toward parents with an ethnic minority background (e.g. underestimation of depression and suicide risks) affected midwives’ ability to detect and manage PMHPs and parents’ maternity care experiences (n=4 studies). Other common factors included lack of knowledge about PMHPs (n=20 studies), referral pathways (n=8) and treatment options (n=10), lack of time/clear referral pathways (n=22) and stigma related to preexisting mental health problems/SMI (n=8).

Midwives considered routine universal screening useful in two studies ( 5 , 56 ). Facilitators included self-efficacy in screening (n=10 studies), person-centered care (n=3), the presence of a specialist team (n=2 studies) and mandatory routine screening (n=2). Barriers to screening included longer work experience ( 42 ), lack of knowledge about screening tools (n=11 studies), local/national guidelines on screening (ranging from 12.8% to 53%, n=4 studies), and negative attitudes toward the use of formal screening tools (n=12 studies). The influence of personal/family experience of PMHPs was either positive [e.g. reducing stigma and allowing midwives to relate to parents ( 29 )] or negative ( 45 ). For student midwives, the presence of specialist midwives was both a facilitator [e.g. providing referral options and placement opportunities ( 50 )] and a barrier to screening [e.g. the perception that it is not part of their role ( 43 )]. Of note, specialist midwives reported lacking confidence in opening discussions about PMH and lacking knowledge about SMI ( 21 ).

The reasons underlying negative attitudes toward the use of formal screening tools included perceiving the questions as intrusive (n=3 studies), not clearly understanding the purpose of screening (n=3 studies), inexperience in conducting the assessment and feeling compelled to undertake it as a standardized survey ( 23 ), the fear of “not doing it right” (n=2) and discomfort when disclosure occurs (n=7 studies). Some studies reported a flexible use of screening tools (e.g. modified wording or timing of the questions; n=4 studies) and one study outlined the importance of person-centered care in conducting the assessment ( 23 ). Conversely, midwives who lacked clarity about their role in PMHC reported feelings of inadequacy, resulting in a non-flexible use of screening tools and a distant and superficial manner of asking questions ( 23 ). Midwives reported feeling more comfortable opening discussions about PMH during follow-up visits compared with the booking appointment (n=5 studies). Alternatives to formal screening included assessing previous psychiatric history/current symptoms ( 28 ), using general open-ended questions (n=5 studies), behavioral observation (n=4 studies) and labor debriefing ( 46 ). Training needs covered knowledge about PMHPs (n=9 studies), screening tools (n=4 studies), and cultural issues and interviewing/distress management skills (n=10 studies).

3.3.2 Referral/support

Midwives reported feeling confident in their ability to refer parents with PMHPs to other health providers, including specialist mental health services (n=7 studies). The opposite was found for parents with postpartum psychosis, eating disorders or SMI. High self-reported confidence in referring parents to other providers did not in practice lead to a higher number of referrals ( 37 ). The proportion of midwives reporting confidence in supporting parents with PMHPs in self-report questionnaires ranged from 34% to 53% (n=5 studies). Accurate case identification ( 9 ), an established diagnosis of a PMHP ( 53 ) and parents’ preferences ( 53 ) influenced decision-making about referral. Other factors included the intention to collaborate with other providers (n=2) or, conversely, a lack of trust/a reluctance to disclose sensitive information to other providers (n=3 studies).

3.3.3 Training outcomes

All training programs reported improved self-rated knowledge, skills, attitudes and confidence in screening, referring and supporting parents with PMHPs (n=19). Few significant positive training effects were reported, owing to small samples and a lack of controlled/RCT studies. Results included positive effects on empathic communication skills ( 62 , 63 ), case identification ( 64 , 65 ) and the detection of PMHPs in maternity wards ( 66 – 68 ). Contrasting results were found for the number of referrals [n=2 studies; 50% significance; positive effect on self-reported referrals in Pearson et al. (2019) ( 69 ) and no significant effect in Wickberg et al. (2005) ( 70 )]. No significant effects were found on depressive symptoms ( 70 ) or on attitudes toward providing psychological support to parents with PMHPs ( 63 ). Participants’ satisfaction rates were high; for student midwives, the insight provided by parents with lived experience of PMHPs was a determining factor (n=4 studies). Barriers included an excessive workload ( 71 ) and, for student midwives, elective participation and late delivery within midwifery studies ( 72 ). No difference related to the format of the intervention was reported.

4 Discussion

To our knowledge, this systematic review of 66 studies is one of the first to explore both the training needs in PMH identified by student midwives and midwives and the training programs designed for this population. Overall, a main finding of this systematic review is that although detection, referral and support of parents with PMHPs are part of the essential competencies for midwifery practice according to the ICM (2019) ( 6 ), their effective translation into routine clinical practice may depend on midwives’ understanding of their role in PMHC, i.e. finding meaning in opening discussions about PMH with all parents and the perception that this is part of their routine clinical duties. This suggests that this factor should be targeted by training interventions aiming to improve the detection and management of PMHPs, above and beyond knowledge, confidence, and skills.

Extending the findings of previous reviews ( 7 , 10 , 11 ), we found that although most midwives consider that they have a role in PMHC (aligning with the ICM essential competencies for midwifery practice, 2019 ( 6 )), their understanding of that role often remains unclear. Several potential explanatory factors were identified. First, while this topic may be central to a meaningful engagement in providing PMHC, only a few training programs explored the role of midwives in PMHC ( 71 , 73 ). Second, there is a view - particularly among student midwives - that addressing PMH needs is less of a priority than addressing physical health needs and that other providers should assume this responsibility ( 31 , 35 , 39 , 43 , 50 , 52 ). The interaction between this view, mental illness stigma and racism toward parents with an ethnic minority background contributed to poorer maternity experiences and under-detection of PMHPs ( 19 , 35 , 73 ).

Third, some midwives see their role as limited to assessing PMH and wellbeing and, where appropriate, referring to other health providers ( 9 , 18 , 55 , 57 , 58 ), whereas others have a broader perception of their role that includes providing support, psychoeducation and, with adequate training, counseling interventions ( 21 , 24 , 25 , 42 ). Recent meta-analyses showed positive effects of midwife-led counseling on anxiety and depressive symptoms after at least 3 days of training ( 14 , 74 ). This concurs with recent calls for a better integration of mental health and perinatal health care and an extension of the scope of midwifery practice to include strengths-based case management and psychological interventions for parents with PMHPs ( 50 , 75 – 77 ). Given that there is some degree of difference between midwives’ perception of their role in PMHC and what is required as essential competencies for midwifery practice (ICM, 2019) ( 6 ), an explicit focus on midwives’ role in PMHC should be made in initial and continuous midwifery education ( 72 , 73 , 76 , 78 ). Fourth, most student midwives, midwives and specialist midwives reported negative attitudes toward parents with suicidal ideation, postpartum psychosis and SMI ( 21 , 35 , 40 , 43 , 57 , 79 ). In line with this, Hawthorne et al. (2020) ( 79 ) found that student midwives had more negative attitudes toward persons with mental illness than mental health nursing students. However, other studies reported that midwives consider caring for parents with these conditions as part of their role but felt ill equipped to do so and expressed the need for additional training ( 8 , 28 , 29 , 34 , 39 , 49 ).

4.1 Implications for training interventions

While the need to improve midwives’ initial and continuous education in PMH is now well established ( 7 , 10 ), student midwives, midwives and even specialist midwives continue to report feeling ill prepared to care for parents with PMHPs, particularly in cases of co-occurring SMI ( 9 , 21 , 24 , 33 , 34 , 36 ). Moreover, the proportion of midwives who received education in PMH - in particular on topics such as mental health/suicide risk assessment - remains consistently low. Given that suicide is the leading cause of maternal mortality in the first year postpartum in high-income countries, this is concerning ( 1 , 80 ).

In line with previous research ( 7 , 10 , 11 ), this systematic review found that education/training programs had positive effects on proximal outcomes (e.g. midwives’ knowledge, skills, attitudes and confidence in providing PMHC) and mixed effects on distal outcomes (e.g. screening in maternity wards, the number of referrals or depressive symptoms). This could be related to methodological biases (e.g. lack of RCTs or quasi-experimental studies). There is a need for high-quality studies on interventions designed following the Medical Research Council framework for complex interventions ( 81 ), which proposes, among other core elements, to: 1) take into account the context of delivery; 2) use a clear theoretical basis (e.g. how the intervention is expected to produce positive effects and under which conditions); and 3) promote a meaningful engagement of persons with lived experience among other relevant stakeholders.

According to the framework of Wadephul et al. (2018) ( 82 ) for assessing midwifery practice in PMH, knowledge, confidence, attitudes and organizational factors influence midwives’ ability to detect and manage PMHPs. However, higher knowledge about PMH does not necessarily translate into higher confidence in providing PMHC, and vice versa ( 8 ). As reported in one of the articles included in this review ( 42 ) and in line with the theory of planned behavior ( 82 ), additional factors such as individual values (e.g. personal interest in PMH) and behavioral intent (e.g. the intention to open discussions about PMH) could influence detection and decision-making about referral and support in PMHPs, and could thus be relevant for midwifery education.

To improve midwives’ engagement in PMHC, training programs should put PMH in context (e.g. the positive outcomes that could be achieved with appropriate support) before covering topics related to specific knowledge or skills ( 5 , 38 , 49 , 50 , 53 , 54 ). Instead of focusing only on biomedical aspects (e.g. the signs, risk factors, consequences and treatments of PMHPs), programs should adopt a continuum approach to PMH that covers the positive aspects of the person’s life, including wellbeing and personal recovery ( 83 – 86 ).

Extending the findings of previous reviews ( 7 , 10 , 11 ), training programs should target student midwives, midwives and specialist midwives and cover interviewing and distress management skills, with a focus on specific aspects (e.g. opening discussions without feeling intrusive, using screening tools flexibly and reacting appropriately to a positive answer) ( 5 , 21 , 38 , 45 , 49 , 50 , 53 , 54 ). In addition, training programs should include clinical supervision by mental health providers during and after intervention delivery ( 14 ). Future studies should include a longer follow-up period, as the embedding of practice change requires a minimum of nine months after the intervention is delivered ( 87 ).

Finally, while contact with persons with lived experience is one of the most effective strategies to reduce mental illness stigma in the general public and among frontline health providers ( 88 , 89 ), this review found that only a very low proportion of training programs engaged persons with lived experience in the conception and delivery of the intervention. Initial and continuous midwifery education curricula on PMH should involve persons with lived experience - through co-design and co-intervention - and include content about personal recovery/person-centered care ( 72 , 73 , 81 , 84 , 90 – 92 ).

4.2 Limitations

This review has limitations. First, despite a growing number of published studies on midwives’ training needs in PMH and training interventions designed for this population (n=66 studies in this review vs. n=22 ( 7 ), n=17 ( 10 ) and n=43 ( 11 )), the quality of the included studies remains low to moderate, a concerning finding given the clinical relevance of this topic and a considerable limitation of the evidence base. Among other methodological biases, the absence of a clear theoretical basis for designing interventions ( 81 ), small or unjustified sample sizes, the lack of RCT/quasi-experimental studies, the absence of control groups (or of active comparators in controlled studies) and the absence or short duration of follow-up make it unclear whether interventions have positive effects on proximal or distal outcomes. Future high-quality studies on this topic are therefore needed. Despite these limitations, the inclusion of quantitative, qualitative and mixed-methods studies provided a complete synthesis of the available evidence, and consistent messages emerged across studies. Second, relevant studies may have been missed, since we excluded studies published in languages other than English or French and did not include the grey literature in our searches.

5 Conclusion

This review generated novel insights to inform initial and continuous midwifery education curricula on PMH (e.g. co-design with persons with lived experience, a focus on midwives’ understanding of their role in PMHC, and the inclusion of content on person-centered care).

Author contributions

MD: Conceptualization, Formal analysis, Writing – original draft. CD: Writing – review & editing. ML: Conceptualization, Writing – review & editing. WB: Conceptualization, Data curation, Methodology, Writing – review & editing. CM: Writing – review & editing. JD: Conceptualization, Formal analysis, Project administration, Supervision, Validation, Writing – original draft.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Acknowledgments

The authors wish to thank Dr. M.F.M. Engel and Mrs. C.D. Niehot, medical information specialists from the Erasmus MC Medical Library, for updating the search strategies. The authors are grateful to the reviewers of a previous version of the manuscript for their helpful comments.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyt.2024.1345738/full#supplementary-material

Supplementary Table 1 | Search strategy, search terms and syntax.

Supplementary Table 2 | List of excluded studies.

Supplementary Table 3 | List of included studies.

Supplementary Table 4 | Characteristics of included studies about midwives' knowledge, skills and attitude.

Supplementary Table 5 | Characteristics of included studies about midwives' needs for peripartum mental health training program.

Supplementary Table 6 | List of abbreviations.

1. Howard LM, Khalifeh H. Perinatal mental health: a review of progress and challenges. World Psychiatry . (2020) 19:313–27. doi: 10.1002/wps.20769

2. Cox EQ, Sowa NA, Meltzer-Brody SE, Gaynes BN. The perinatal depression treatment cascade: baby steps toward improving outcomes. J Clin Psychiatry . (2016) 77:1189–200. doi: 10.4088/JCP.15r10174

3. Moss KM, Reilly N, Dobson AJ, Loxton D, Tooth L, Mishra GD. How rates of perinatal mental health screening in Australia have changed over time and which women are missing out. Aust N Z J Public Health . (2020) 44:301–6. doi: 10.1111/1753-6405.12999

4. Kingston D, McDonald S, Tough S, Austin MP, Hegadoren K, Lasiuk G. Public views of acceptability of perinatal mental health screening and treatment preference: a population based survey. BMC Preg. Childbirth . (2014) 14:67. doi: 10.1186/1471-2393-14-67

5. Williams CJ, Turner KM, Burns A, Evans J, Bennert K. Midwives and women’s views on using UK recommended depression case finding questions in antenatal care. Midwifery . (2016) 35:39–46. doi: 10.1016/j.midw.2016.01.015

6. International Confederation of Midwives (ICM). Essential competences for midwifery practice (2019). Available online at: file:///C:/Users/marin/Downloads/icm-competencies-en-print-october-2019_final_18-oct-5db05248843e8.pdf .

7. Noonan M, Doody O, Jomeen J, Galvin R. Midwives’ perceptions and experiences of caring for women who experience perinatal mental health problems: An integrative review. Midwifery . (2017) 45:56–71. doi: 10.1016/j.midw.2016.12.010

8. Hauck YL, Kelly G, Dragovic M, Butt J, Whittaker P, Badcock JC. Australian midwives knowledge, attitude and perceived learning needs around perinatal mental health. Midwifery . (2015) 31:247–55. doi: 10.1016/j.midw.2014.09.002

9. Magdalena CD, Tamara WK. Antenatal and postnatal depression - Are Polish midwives really ready for them? Midwifery . (2020) 83:102646. doi: 10.1016/j.midw.2020.102646

10. Legere LE, Wallace K, Bowen A, McQueen K, Montgomery P, Evans M. Approaches to health-care provider education and professional development in perinatal depression: a systematic review. BMC Preg. Childbirth . (2017) 17:239. doi: 10.1186/s12884-017-1431-4

11. Branquinho M, Shakeel N, Horsch A, Fonseca A. Frontline health professionals’ perinatal depression literacy: A systematic review. Midwifery . (2022) 111:103365. doi: 10.1016/j.midw.2022.103365

12. Waqas A, Koukab A, Meraj H, Dua T, Chowdhary N, Fatima B, et al. Screening programs for common maternal mental health disorders among perinatal women: report of the systematic review of evidence. BMC Psychiatry . (2022) 22:54. doi: 10.1186/s12888-022-03694-9

13. Beck A, Hamel C, Thuku M, Esmaeilisaraji L, Bennett A, Shaver N, et al. Screening for depression among the general adult population and in women during pregnancy or the first-year postpartum: two systematic reviews to inform a guideline of the Canadian Task Force on Preventive Health Care. Syst Rev . (2022) 11:176. doi: 10.1186/s13643-022-02022-2

14. Wang TH, Tzeng YL, Teng YK, Pai LW, Yeh TP. Evaluation of psychological training for nurses and midwives to optimise care for women with perinatal depression: a systematic review and meta-analysis. Midwifery . (2022) 104:103160. doi: 10.1016/j.midw.2021.103160

15. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev . (2015) 4:1. doi: 10.1186/2046-4053-4-1

16. Bramer WM, Milic J, Mast F. Reviewing retrieved references for inclusion in systematic reviews using EndNote. J Med Libr Assoc . (2017) 105:84–7. doi: 10.5195/jmla.2017.111

17. Jones CJ, Creedy DK, Gamble JA. Australian midwives’ knowledge of antenatal and postpartum depression: a national survey. J Midwifery Womens Health . (2011) 56:353–61. doi: 10.1111/j.1542-2011.2011.00039.x

18. Buist A, Bilszta J, Milgrom J, Barnett B, Hayes B, Austin MP. Health professional’s knowledge and awareness of perinatal depression: results of a national survey. Women Birth . (2006) 19:11–6. doi: 10.1016/j.wombi.2005.12.001

19. Işık SN, Bilgili N. Postnatal depression: Midwives’ and nurses’ knowledge and practices. Erciyes Med J . (2010) 32:265–74.

20. Salomonsson B, Alehagen S, Wijma K. Swedish midwives’ views on severe fear of childbirth. Sex Reprod Healthc . (2011) 2:153–9. doi: 10.1016/j.srhc.2011.07.002

21. Savory NA, Sanders J, Hannigan B. Midwives’ experiences of supporting women’s mental health: A mixed-method study. Midwifery . (2022) 111:103368. doi: 10.1016/j.midw.2022.103368

22. de Vries NE, Stramrood CAI, Sligter LM, Sluijs AM, van Pampus MG. Midwives’ practices and knowledge about fear of childbirth and postpartum posttraumatic stress disorder. Women Birth . (2020) 33:e95–e104. doi: 10.1016/j.wombi.2018.11.014

23. Andersen CG, Thomsen LLH, Gram P, Overgaard C. ‘It’s about developing a trustful relationship’: A Realist Evaluation of midwives’ relational competencies and confidence in a Danish antenatal psychosocial assessment programme. Midwifery . (2023) 122:103675. doi: 10.1016/j.midw.2023.103675

24. Carroll M, Downes C, Gill A, Monahan M, Nagle U, Madden D, et al. Knowledge, confidence, skills and practices among midwives in the republic of Ireland in relation to perinatal mental health care: The mind mothers study. Midwifery . (2018) 64:29–37. doi: 10.1016/j.midw.2018.05.006

25. Higgins A, Downes C, Carroll M, Gill A, Monahan M. There is more to perinatal mental health care than depression: Public health nurses reported engagement and competence in perinatal mental health care. J Clin Nurs . (2017) 27:e476–87. doi: 10.1111/jocn.13986

26. Higgins A, Downes C, Monahan M, Gill A, Lamb SA, Carroll M. Barriers to midwives and nurses addressing mental health issues with women during the perinatal period: The Mind Mothers study. J Clin Nurs . (2018) 27:872–1883. doi: 10.1111/jocn.14252

27. Keng SL. Malaysian midwives’ views on postnatal depression. Br J Midwifery . (2005) 13:78–86. doi: 10.12968/bjom.2005.13.2.17465

28. Noonan M, Jomeen J, Galvin R, Doody O. Survey of midwives’ perinatal mental health knowledge, confidence, attitudes and learning needs. Women Birth . (2018) 31:e358–66. doi: 10.1016/j.wombi.2018.02.002

29. Noonan M, Galvin R, Jomeen J, Doody O. Public health nurses’ perinatal mental health training needs: A cross sectional survey. J Adv Nurs . (2019) 75:2535–47. doi: 10.1111/jan.14013

30. Stewart C, Henshaw C. Midwives and perinatal mental health. Br J Midwifery . (2002) 10:117–21. doi: 10.12968/bjom.2002.10.2.10186

31. Edge D. Falling through the net - black and minority ethnic women and perinatal mental healthcare: health professionals’ views. Gen Hosp Psychiatry . (2010) 32:17–25. doi: 10.1016/j.genhosppsych.2009.07.007

32. Whitehead R, O’Callaghan F, Gamble J, Reid N. Contextual influences experienced by Queensland midwives: a qualitative study focusing on alcohol and other substance use during pregnancy. Int J Childbirth . (2019) 9:80–91. doi: 10.1891/2156-5287.9.2.80

33. Cunningham C, Galloway S. Let’s end the postcode lottery. Community Practitioner . (2019) 92:26–9.

34. Dubreucq M, Jourdan S, Poizat A, Dubreucq J. Ressenti des sages-femmes dans la prise en charge en suites de couche des patientes avec troubles psychiques sévères: une analyse qualitative (Midwives’ feelings about the post-partum care of women with severe mental illness: A qualitative analysis). Encephale . (2020) 46:226–30. doi: 10.1016/j.encep.2019.07.009

35. Phillips L. Assessing the knowledge of perinatal mental illness among student midwives. Nurse Educ Pract . (2015) 15:463–9. doi: 10.1016/j.nepr.2014.09.003

36. Bye A, Shawe J, Bick D, Easter A, Kash-Macdonald M, Micali N. Barriers to identifying eating disorders in pregnancy and in the postnatal period: a qualitative approach. BMC Preg. Childbirth . (2018) 18:114. doi: 10.1186/s12884-018-1745-x

37. Jones CJ, Creedy DK, Gamble JA. Australian midwives’ awareness and management of antenatal and postpartum depression. Women Birth . (2012) 25:23–8. doi: 10.1016/j.wombi.2011.03.001

38. Oni HT, Buultjens M, Blandthorn J, Davis D, Abdel-Latif M, Islam MM. Barriers and facilitators in antenatal settings to screening and referral of pregnant women who use alcohol or other drugs: A qualitative study of midwives’ experience. Midwifery . (2020) 81:102595. doi: 10.1016/j.midw.2019.102595

39. McCauley K, Elsom S, Muir-Cochrane E, Lyneham J. Midwives and assessment of perinatal mental health. J Psychiatr Ment Health Nurs . (2011) 18:786–95. doi: 10.1111/jpm.2011.18.issue-9

40. Lau R, McCauley K, Barnfield J, Moss C, Cross W. Attitudes of midwives and maternal child health nurses towards suicide: A cross-sectional study. Int J Ment Health Nurs . (2015) 24:561–8. doi: 10.1111/inm.12162

41. Sanders LB. Attitudes, perceived ability, and knowledge about depression screening: a survey of certified nurse-midwives/certified midwives. J Midwifery Womens Health . (2006) 51:340–6. doi: 10.1016/j.jmwh.2006.02.011

42. Fontein-Kuipers YJ, Budé L, Ausems M, de Vries R, Nieuwenhuijze MJ. Dutch midwives’ behavioural intentions of antenatal management of maternal distress and factors influencing these intentions: an exploratory survey. Midwifery . (2014) 30:234–41. doi: 10.1016/j.midw.2013.06.010

43. Jarrett P. Student midwives’ knowledge of perinatal mental health. Br J Midwifery . (2015) 23:32–9. doi: 10.12968/bjom.2015.23.1.32

44. Shahid Ali S, Letourneau N, Rajan A, Jaffer S, Adnan F, Asif N, et al. Midwives’ perspectives on perinatal mental health: A qualitative exploratory study in a maternity setting in Karachi, Pakistan. Asian J Psychiatr . (2023) 80:103356. doi: 10.1016/j.ajp.2022.103356

45. Fletcher A, Murphy M, Leahy-Warren P. Midwives’ experiences of caring for women’s emotional and mental well-being during pregnancy. J Clin Nurs . (2021) 30:1403–16. doi: 10.1111/jocn.15690

46. Gibb S, Hundley V. What psychosocial well-being in the postnatal period means to midwives. Midwifery . (2007) 23:413–24. doi: 10.1016/j.midw.2006.07.005

47. Asare SF, Rodriguez-Muñoz MF. Understanding healthcare professionals’ Knowledge on perinatal depression among women in a tertiary hospital in Ghana: A qualitative study. Int J Environ Res Public Health . (2022) 19:15960. doi: 10.3390/ijerph192315960

48. Jomeen J, Glover LF, Davies SA. Midwives’ illness perceptions of antenatal depression. Br J Midwifery . (2009) 17:296–303. doi: 10.12968/bjom.2009.17.5.42221

49. McGlone C, Hollins Martin CJ, Furber C. Midwives’ experiences of asking the Whooley questions to assess current mental health: a qualitative interpretive study. J Reprod infant Psychol . (2016) 34:383–93. doi: 10.1080/02646838.2016.1188278

50. McGookin A, Furber C, Smith DM. Student midwives’ awareness, knowledge, and experiences of antenatal anxiety within clinical practice. J Reprod Infant Psychol . (2017) 35:380–93. doi: 10.1080/02646838.2017.1337270

51. Ross-Davie M, Elliott S, Sarkar A, Green L. A public health role in perinatal mental health: are midwives ready? Br J Midwifery . (2006) 14:330–4. doi: 10.12968/bjom.2006.14.6.21181

52. Schouten BC, Westerneng M, Smit AM. Midwives’ perceived barriers in communicating about depression with ethnic minority clients. Patient Educ Couns . (2021) 104:2393–9. doi: 10.1016/j.pec.2021.07.032

53. Madden D, Sliney A, O’Friel A, McMackin B, O’Callaghan B, Casey K, et al. Using action research to develop midwives’ skills to support women with perinatal mental health needs. J Clin Nurs . (2018) 27:561–71. doi: 10.1111/jocn.13908

54. Jarrett P. Attitudes of student midwives caring for women with perinatal mental health problems. Br J Midwifery . (2014) 22:718–24. doi: 10.12968/bjom.2014.22.10.718

55. Jones CJ, Creedy DK, Gamble JA. Australian midwives’ attitudes towards care for women with emotional distress. Midwifery . (2012) 28:216–21. doi: 10.1016/j.midw.2010.12.008

56. Willey SM, Gibson-Helm ME, Finch TL, East CE, Khan NN, Boyd LM, et al. Implementing innovative evidence-based perinatal mental health screening for women of refugee background. Women Birth . (2020) 33:e245–55. doi: 10.1016/j.wombi.2019.05.007

57. Rothera I, Oates M. Managing perinatal mental health: A survey of practitioners’ views. Br J Midwifery . (2011) 19:304–13. doi: 10.12968/bjom.2011.19.5.304

58. McCann TV, Clark E. Australian Bachelor of Midwifery students’ mental health literacy: an exploratory study. Nurs Health Sci . (2010) 12:14–20. doi: 10.1111/j.1442-2018.2009.00477.x

59. Salomonsson B, Wijma K, Alehagen S. Swedish midwives’ perceptions of fear of childbirth. Midwifery . (2010) 26:327–37. doi: 10.1016/j.midw.2008.07.003

60. Nyberg K, Lindberg I, Öhrling K. Midwives’ experience of encountering women with posttraumatic stress symptoms after childbirth. Sex Reprod Healthc . (2010) 1:55–60. doi: 10.1016/j.srhc.2010.01.003

61. Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, et al. Mixed methods appraisal tool (MMAT), version 2018. Registration of copyright (#1148552) . (2018) Canadian Intellectual Property Office, Industry Canada.

62. Fox D, Solanki K, Brown G, Catling C, Scarf V, Sheehy A, et al. Perinatal mental healthcare: Developing skills in midwifery students. Women Birth . (2023) 36:167–70. doi: 10.1016/j.wombi.2022.11.005

63. Shinohara E, Ohashi Y, Hada A, Usui Y. Effects of 1-day e-learning education on perinatal psychological support skills among midwives and perinatal healthcare workers in Japan: a randomised controlled study. BMC Psychol . (2022) 10:133. doi: 10.1186/s40359-022-00832-6

64. Badiya PK, Siddabattuni S, Dey D, Hiremath AC, Nalam RL, Srinivasan V, et al. Task-sharing to screen perinatal depression in resource limited setting in India: Comparison of outcomes based on screening by non-expert and expert rater. Asian J Psychiatr. (2021) 62:102738. doi: 10.1016/j.ajp.2021.102738

65. Yamashita H, Ariyoshi A, Uchida H, Tanishima H, Kitamura T, Nakano H. Japanese midwives as psychiatric diagnosticians: application of criteria of DSM-IV mood and anxiety disorders to case vignettes. Psychiatry Clin Neurosci. (2007) 61:226–33. doi: 10.1111/j.1440-1819.2007.01659.x

66. Elliott S, Ross-Davie M, Sarkar A, Green L. Detection and initial assessment of mental disorder: the midwife’s role. Br J Midwifery. (2007) 15:759–64. doi: 10.12968/bjom.2007.15.12.27791

67. Jardri R, Maron M, Pelta J, Thomas P, Codaccioni X, Goudemand M, et al. Impact of midwives’ training on postnatal depression screening in the first week post delivery: a quality improvement report. Midwifery. (2010) 26:622–9. doi: 10.1016/j.midw.2008.12.006

68. Toler S, Stapleton S, Kertsburg K, Callahan TJ, Hastings-Tolsma M. Screening for postpartum anxiety: A quality improvement project to promote the screening of women suffering in silence. Midwifery. (2018) 62:161–70. doi: 10.1016/j.midw.2018.03.016

69. Pearson P, Klima C, Snyder M. Reducing barriers that hinder obstetric providers from addressing perinatal depression: A provider education module. J Dr Nurs Pract. (2019) 12:212–24. doi: 10.1891/2380-9418.12.2.212

70. Wickberg B, Tjus T, Hwang P. Using the EPDS in routine antenatal care in Sweden: a naturalistic study. J Reprod Infant Psychol. (2005) 23:33–41. doi: 10.1080/02646830512331330956

71. Forrest E, Poat A. Perinatal mental health education for midwives in Scotland. Br J Midwifery. (2010) 18:280–4. doi: 10.12968/bjom.2010.18.5.47853

72. Higgins A, Carroll M, Sharek D. It opened my mind: student midwives’ views of a motherhood and mental health module. MIDIRS Midwifery Digest. (2012) 22:287–92.

73. Larkin V, Flaherty A, Keys C, Yaseen J. Exploring maternal perinatal mental health using a blended learning package. Br J Midwifery. (2014) 22:210–7. doi: 10.12968/bjom.2014.22.3.210

74. Wang TH, Pai LW, Tzeng YL, Yeh TP, Teng YK. Effectiveness of nurses and midwives-led psychological interventions on reducing depression symptoms in the perinatal period: A systematic review and meta-analysis. Nurs Open. (2021) 8:2117–30. doi: 10.1002/nop2.764

75. Laios L, Rio I, Judd F. Improving maternal perinatal mental health: integrated care for all women versus screening for depression. Australas Psychiatry. (2013) 21:171–5. doi: 10.1177/1039856212466432

76. Coates D, Foureur M. The role and competence of midwives in supporting women with mental health concerns during the perinatal period: A scoping review. Health Soc Care Community. (2019) 27:e389–405. doi: 10.1111/hsc.12740

77. WHO. Guide for integration of perinatal mental health in maternal and child health services. Geneva: World Health Organization. Licence: CC BY-NC-SA 3.0 IGO (2022).

78. Higgins A, Carroll M, Sharek D. Impact of perinatal mental health education on student midwives’ knowledge, skills and attitudes: A pre/post evaluation of a module of study. Nurse Educ Today. (2016) 36:364–9. doi: 10.1016/j.nedt.2015.09.007

79. Hawthorne A, Fagan R, Leaver E, Baxter J, Logan P, Snowden A. Undergraduate nursing and midwifery student’s attitudes to mental illness. Nurs Open. (2020) 7:1118–28. doi: 10.1002/nop2.494

80. ENCMM. 6e rapport de l’Enquête Nationale Confidentielle sur les Morts Maternelles, 2013-2015. In: Les morts maternelles en France: mieux comprendre pour mieux prévenir. Santé publique France, Saint-Maurice. Available at: www.santepubliqueFrance.fr. 237 p.

81. Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ. (2021) 374:n2061. doi: 10.1136/bmj.n2061

82. Wadephul F, Jarrett PM, Jomeen J, Martin CR. A mixed methods review to develop and confirm a framework for assessing midwifery practice in perinatal mental health. J Adv Nurs. (2018) 74:2258–72. doi: 10.1111/jan.13786

83. Williams P. Mothers’ descriptions of recovery from postpartum depression. MCN Am J Matern Child Nurs. (2013) 38:276–81. doi: 10.1097/NMC.0b013e3182993fbf

84. Slade M, Bird V, Clarke E, Le Boutillier C, McCrone P, Macpherson R, et al. Supporting recovery in patients with psychosis through care by community-based adult mental health teams (REFOCUS): a multisite, cluster, randomised, controlled trial. Lancet Psychiatry. (2015) 2:503–14. doi: 10.1016/S2215-0366(15)00086-3

85. Law S, Ormel I, Babinski S, Plett D, Dionne E, Schwartz H, et al. Dread and solace: Talking about perinatal mental health. Int J Ment Health Nurs. (2021) 30 Suppl 1:1376–85. doi: 10.1111/inm.12884

86. Powell C, Bedi S, Nath S, Potts L, Trevillion K, Howard L. Mothers’ experiences of acute perinatal mental health services in England and Wales: a qualitative analysis. J Reprod Infant Psychol. (2022) 40:155–67. doi: 10.1080/02646838.2020.1814225

87. Kirkpatrick DL, Kirkpatrick JD. Evaluating training programs: the four levels. 3rd ed. San Francisco, CA: Berrett-Koehler Publishers, Inc (2006).

88. Corrigan PW, Watson AC. Understanding the impact of stigma on people with mental illness. World Psychiatry. (2002) 1:16–20.

89. Kohrt BA, Jordans MJD, Turner EL, Rai S, Gurung D, Dhakal M, et al. Collaboration with people with lived experience of mental illness to reduce stigma and improve primary care services: A pilot cluster randomized clinical trial. JAMA Netw Open. (2021) 4:e2131475. doi: 10.1001/jamanetworkopen.2021.31475

90. Davies L, Page N, Glover H, Sudbury H. Developing a perinatal mental health module: An integrated care approach. Br J Midwifery. (2016) 24:118–21. doi: 10.12968/bjom.2016.24.2.118

91. Verbiest S, Tully K, Simpson M, Stuebe A. Elevating mothers’ voices: recommendations for improved patient-centered postpartum. J Behav Med. (2018) 41:577–90. doi: 10.1007/s10865-018-9961-4

92. Hooks C. Attitudes toward substance misusing pregnant women following a specialist education programme: An exploratory case study. Midwifery. (2019) 76:45–53. doi: 10.1016/j.midw.2019.05.011

Keywords: midwifery, perinatal care, mental health services, education, attitude of health personnel, literature review

Citation: Dubreucq M, Dupont C, Lambregtse-Van den Berg MP, Bramer WM, Massoubre C and Dubreucq J (2024) A systematic review of midwives’ training needs in perinatal mental health and related interventions. Front. Psychiatry 15:1345738. doi: 10.3389/fpsyt.2024.1345738

Received: 28 November 2023; Accepted: 02 April 2024; Published: 22 April 2024.

Copyright © 2024 Dubreucq, Dupont, Lambregtse-Van den Berg, Bramer, Massoubre and Dubreucq. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Marine Dubreucq, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

COMMENTS

  1. Systematic Review Service

    Establish the scope of the review and the desired timetable for completing the systematic review. Assist in developing the systematic review protocol and updating it with the literature-searching steps. Determine whether there are any relevant published research articles or reviews. Advise on the use of a citation management tool for a systematic review.

  2. Systematic Review Service: Classes, Consultations, Software, and

    Meta-Analysis: Quantifying a Systematic Review June 6, 1:00-3:00 PM; Consultations NIH Librarians are available to help you select the most appropriate type of review for your research project, identify and complete the steps of your review, conduct the literature search, and edit the final manuscript. Schedule a consultation to get started ...

  3. Systematic Literature Reviews: an Introduction

    Systematic literature reviews (SRs) are a way of synthesising scientific evidence to answer a particular research question in a way that is transparent and reproducible, while seeking to include all published evidence on the topic and appraising the quality of this evidence. SRs have become a major methodology

  4. Systematic Review Service For Scientific Research Paper

    We provide a wide variety of services, such as identifying a well-defined, focused, clinically relevant question; developing a detailed review protocol with strict inclusion and exclusion criteria; systematic literature search; meticulous study identification; systematic data abstraction; risk of bias assessment; and thoughtful quantitative ...

  5. Guidance on Conducting a Systematic Literature Review

    Literature reviews establish the foundation of academic inquiries. However, in the planning field, we lack rigorous systematic reviews. In this article, through a systematic search on the methodology of literature review, we categorize a typology of literature reviews, discuss steps in conducting a systematic literature review, and provide suggestions on how to enhance rigor in literature ...

  6. Systematic Review

    Systematic review vs. literature review. A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a ...

  7. How-to conduct a systematic literature review: A quick guide for

    A Systematic Literature Review (SLR) is a research methodology to collect, identify, and critically analyze the available research studies (e.g., articles, conference proceedings, books, dissertations) through a systematic procedure [12]. An SLR updates the reader with current literature about a subject [6]. The goal is to review critical points of current knowledge on a ...

  8. HSL: Systematic Review Services and Resources: Home

    This multi-page guide does an excellent job detailing the many steps involved in a systematic review. Systematic Reviews - Duke University Medical Library. Excellent overview; includes a helpful grid of Types of Reviews. Evidence Synthesis & Literature Reviews - Harvey Cushing/John Hay Whitney Medical Library. Includes tutorials ...

  9. How to Do a Systematic Review: A Best Practice Guide for Conducting and

    The best reviews synthesize studies to draw broad theoretical conclusions about what a literature means, linking theory to evidence and evidence to theory. This guide describes how to plan, conduct, organize, and present a systematic review of quantitative (meta-analysis) or qualitative (narrative review, meta-synthesis) information.

  10. Home

    A systematic review is a literature review that gathers all of the available evidence matching pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods, documented in a protocol, to minimize bias, provide reliable findings, and inform decision-making. (A short illustrative screening sketch appears after this list.)

  11. (PDF) Systematic Literature Reviews: An Introduction

    Systematic literature reviews (SRs) are a way of synthesising scientific evidence to answer a particular research question in a way that is transparent and reproducible, while seeking to include ...

  12. Evidence Synthesis, Systematic Review Services : Home

    This research guide addresses systematic reviews and other evidence synthesis projects. Evidence synthesis (including systematic reviews and other review types within the systematic review family) is a form of literature review. Like all literature reviews, evidence synthesis projects involve collecting previously published information and reading, evaluating, and aggregating the information.

  13. Systematic, Scoping, and Other Literature Reviews: Overview

    A systematic review, however, is a comprehensive literature review conducted to answer a specific research question. Authors of a systematic review aim to find, code, appraise, and synthesize all of the previous research on their question in an unbiased and well-documented manner.

  14. Evidence Synthesis, Systematic Review Services : Literature Review

    While a full systematic review may not necessarily satisfy criteria for dissertation research in a discipline (as independent scholarship), the methods described in this guide--from developing a protocol to searching and synthesizing the literature--can help to ensure that your review of the literature is comprehensive, transparent, and reproducible.

  15. An overview of methodological approaches in systematic reviews

    Evidence synthesis is a prerequisite for knowledge translation.¹ A well-conducted systematic review (SR), often in conjunction with meta-analyses (MA) when appropriate, is considered the "gold standard" of methods for synthesizing evidence related to a topic of interest.² The central strength of an SR is the transparency of the methods used to systematically search ... (A worked pooling example appears after this list.)

  16. Guidance to best tools and practices for systematic reviews

    Systematic reviews have historically focused on the benefits and harms of interventions; over time, various types of systematic reviews have emerged to address the diverse information needs of clinicians, patients, and policy makers. Systematic reviews with traditional components have become defined by the different topics they assess (Table 2.1 ...

  17. Systematic Literature Review or Literature Review

    The difference between literature review and systematic review comes back to the initial research question. Whereas the systematic review is very specific and focused, the standard literature review is much more general. The components of a literature review, for example, are similar to any other research paper.

  18. Services

    We are a team of consultants working as Information Specialists, Systematic Reviewers, Health Economists, Research Analysts, Biostatisticians, Epidemiologists, and Medical Writers. We provide a wide range of services, from writing the proposal to writing the final manuscript. Whether you are a student*, a senior researcher, a clinician ...

  19. A systematic literature review and bibliometric analysis based on

    This study is the first literature review that focuses on member subscription services in retailing. It provides (1) a four-question framework to study the development of member subscription services, answering each question in the form of a literature review, and (2) a classification method of member subscription services.

  20. Systematic Reviews & Literature Reviews

    Overall, while both literature reviews and systematic reviews involve reviewing existing research literature, systematic reviews adhere to more rigorous and transparent methods to minimize bias and provide robust evidence to inform decision-making in education and other fields. If you are interested in learning about other evidence synthesis ...

  21. A Global Systematic Literature Review of Ecosystem Services in Reef

    Data Collection—Literature Review. We performed a systematic literature review following the PRISMA protocol, using explicit eligibility criteria (Moher et al. 2009). The scientific literature was examined by searching the Web of Science database (Clarivate Analytics) for peer-reviewed research articles using Boolean search terms, exclusively in English. (A sketch of building such a Boolean string appears after this list.)

  22. Review Ecosystem services research in mountainous regions: A systematic

    The present systematic literature review (SLR), therefore, aims to enhance our understanding of the existing scientific knowledge and research on mountain ecosystem services (MES), of how mountain ecosystems sustainably support continuing human well-being, and of the main limitations and gaps that hinder the assessment of mountain-related ES, and the way forward ...

  23. Adoption Factors of Digital Services—A Systematic Literature Review

    Hence, by using the methodology of a systematic literature review, this paper identifies key factors that drive a consumer to adopt digital services. As a result of a subsequent classification, the present work distinguishes among three main categories. First are consumer-specific factors relating to individual predispositions, demographics ...

  24. Supervised injection services: what has been demonstrated? A systematic

    A systematic literature review. Drug Alcohol Depend. 2014 Dec 1;145:48-68. doi: 10.1016/j.drugalcdep.2014.10.012. Epub 2014 Oct 23. Supervised injection services (SISs) have been developed to promote safer drug injection practices, enhance health-related behaviors among people who inject drugs (PWID), and connect PWID with external ...

  25. Full article: A systematic literature review of school counselling

    This systematic literature review investigated school counselling needs in East and Southeast Asia based on 109 studies from 14 countries published since 2011. School counselling needs were categorised using an international taxonomy (Morshed & Carey, 2020, Development of a taxonomy of policy levers to promote high quality school-based counseling.

  26. Integration of Shared Micromobility into Public Transit: A Systematic

    Shared micromobility services have become increasingly prevalent and indispensable as a means of transportation across diverse geographical regions. Integrating shared micromobility with public transit offers opportunities to complement fixed-route transit networks and address first- and last-mile issues. To explore this topic, a systematic literature review was conducted to consolidate ...

  27. Frontiers

    Keywords: midwifery, perinatal care, mental health services, education, attitude of health personnel, literature review. Citation: Dubreucq M, Dupont C, Lambregtse-Van den Berg MP, Bramer WM, Massoubre C and Dubreucq J (2024) A systematic review of midwives' training needs in perinatal mental health and related interventions. Front.

  28. Crisis and acute mental health care for people who have been given a

    Background: People who have been given a diagnosis of a 'personality disorder' need access to good quality mental healthcare when in crisis, but the evidence underpinning crisis services for this group is limited. We synthesised quantitative studies reporting outcomes for people with a 'personality disorder' diagnosis using crisis and acute mental health services. Methods: We searched ...
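
Item 10 above notes that a systematic review gathers evidence matching pre-specified eligibility criteria documented in a protocol. As a minimal sketch of what "explicit, systematic methods" can look like in practice, the Python snippet below applies hypothetical inclusion criteria to invented records; none of the field names, thresholds, or records come from any source listed on this page, and real screening is normally done in duplicate by human reviewers.

```python
# Hypothetical illustration only: pre-specified eligibility criteria applied
# to candidate records. Field names, criteria, and records are invented.

records = [
    {"title": "Midwife training RCT", "year": 2019, "design": "RCT", "language": "en"},
    {"title": "Editorial on screening", "year": 2021, "design": "editorial", "language": "en"},
    {"title": "Cohort study of perinatal care", "year": 1998, "design": "cohort", "language": "en"},
]

def eligible(record):
    """Return True if the record meets the (hypothetical) inclusion criteria."""
    return (
        record["year"] >= 2000                                        # date limit
        and record["design"] in {"RCT", "cohort", "cross-sectional"}  # designs of interest
        and record["language"] == "en"                                # language restriction
    )

included = [r for r in records if eligible(r)]
excluded = [r for r in records if not eligible(r)]
print(f"Included: {len(included)}; excluded: {len(excluded)}")
```

Writing the criteria down as an explicit filter is only an illustration of reproducibility; the protocol, not the code, remains the authoritative statement of the criteria.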
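
Item 15 mentions that a systematic review is often paired with a meta-analysis. As a rough, illustrative sketch of what "quantifying" a review involves, the snippet below pools effect sizes with fixed-effect, inverse-variance weighting. The effect sizes and standard errors are invented numbers, not data from any study cited on this page, and a real meta-analysis would also assess heterogeneity and consider random-effects models.

```python
# Hypothetical illustration of fixed-effect, inverse-variance pooling.
# The (effect size, standard error) pairs are invented for demonstration.
import math

studies = [(0.30, 0.12), (0.45, 0.20), (0.10, 0.15)]

weights = [1 / se ** 2 for _, se in studies]                 # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"Pooled effect {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```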
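
Item 21 describes searching a bibliographic database with Boolean search terms. The sketch below shows one common way such a string is assembled: synonyms are OR-ed within a concept block and the blocks are AND-ed together. The concepts and synonyms are placeholders chosen for illustration, not the search strategy of any review cited here, and the resulting string would still need to be adapted to each database's own syntax and field tags.

```python
# Hypothetical illustration: building a Boolean search string from concept blocks.
# Concepts and synonyms are placeholders, not a validated search strategy.

concepts = {
    "population": ["midwives", "midwifery"],
    "topic": ['"perinatal mental health"', '"postnatal depression"'],
    "focus": ["training", "education", "curriculum"],
}

# OR synonyms within each concept, then AND the concept blocks together.
blocks = ["(" + " OR ".join(terms) + ")" for terms in concepts.values()]
query = " AND ".join(blocks)
print(query)
# (midwives OR midwifery) AND ("perinatal mental health" OR "postnatal depression") AND (training OR education OR curriculum)
```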