
Manuals versus Reporting Guidelines

There are two types of guidance documents necessary for conducting systematic reviews and other evidence syntheses. They serve different purposes, and you need both to successfully navigate the process from planning to publication:

  • Handbooks or manuals
  • Reporting guidelines

Handbooks and manuals provide practical methodological guidance for undertaking a systematic review. They contain detailed steps on how to plan, conduct, organize, and present your review. A handbook is the best place to go if you have questions about best practices for any step in the process.

Reporting guidelines help you report, transparently and accurately in your manuscript for publication, the steps you performed when conducting your review.

From the EQUATOR Network: What is a reporting guideline?

A reporting guideline is a simple, structured tool for health researchers to use while writing manuscripts. A reporting guideline provides a minimum list of information needed to ensure a manuscript can be:

  • Understood by a reader,
  • Replicated by a researcher,
  • Used by a doctor to make a clinical decision, and
  • Included in a systematic review.
  • EQUATOR Network: Enhancing the QUAlity and Transparency Of health Research

Handbooks and Manuals

  • Cochrane Handbook for Systematic Reviews of Interventions The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.
  • JBI Manual for Evidence Synthesis: Scoping Reviews Comprehensive chapter from the Joanna Briggs Institute on how to conduct a scoping review.
  • Scoping studies: Towards a Methodological Framework Arksey H, O'Malley L. Scoping studies: Towards a Methodological Framework. Int J Soc Res Methodol. 2005;8:19–32. doi: 10.1080/1364557032000119616
  • AAFP Clinical Practice Guideline Manual This manual summarizes the processes used by the AAFP to produce high-quality, evidence-based guidelines.
  • Finding What Works in Health Care: Standards for Systematic Reviews These Institute of Medicine (now National Academy of Medicine) standards, published by the National Academies Press, address the entire systematic review process, from locating, screening, and selecting studies for the review, to synthesizing the findings and assessing the overall quality of the body of evidence, to producing the final review report.
  • Methods Guide for Effectiveness and Comparative Effectiveness Reviews (AHRQ) This guide was developed to improve the transparency, consistency, and scientific rigor of Comparative Effectiveness Reviews.
  • Conduct Standards: Methodological Expectations of Campbell Collaboration Intervention Reviews (MECCIR) Guidelines for conducting a Campbell Systematic Review. The Campbell Collaboration is an international research network that produces systematic reviews of the effects of social interventions.
  • Guidance for producing a Campbell evidence and gap map This guidance is intended for commissioners and producers of Campbell evidence and gaps maps (EGMs), and will be of use to others also producing evidence maps. The guidance provides an overview of the steps involved in producing a map.
  • Cochrane Handbook - Chapter 16: Equity and specific populations

Systematic Review Reporting Guidelines

  • Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 (PRISMA Statement) The aim of the 2020 PRISMA Statement is to help authors improve the reporting of systematic reviews and meta-analyses. PRISMA focuses on reviews of randomized trials, but it can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions. We highly encourage authors to review the PRISMA 2020 Explanation and Elaboration document.
  • PRISMA for Diagnostic Test Accuracy The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting systematic reviews of diagnostic test accuracy studies. The guideline can facilitate transparent reporting of reviews, may assist in the evaluation of validity and applicability, enhance replicability, and make the results of such reviews more useful.
  • PRISMA for reviews including harms outcomes The PRISMA harms checklist contains four extension items that must be used in any systematic review addressing harms, irrespective of whether harms are analysed alone or in association with benefits.
  • PRISMA for Scoping Reviews The PRISMA extension for scoping reviews was published in 2018. The checklist contains 20 essential reporting items and 2 optional items to include when completing a scoping review. Scoping reviews serve to synthesize evidence and assess the scope of literature on a topic. Among other objectives, scoping reviews help determine whether a systematic review of the literature is warranted.
  • Reporting Standards: Campbell evidence and gap map This document provides detailed methodological expectations for the reporting of Campbell Collaboration evidence and gap maps (EGMs).
  • Reporting Standards: Methodological Expectations of Campbell Collaboration Intervention Reviews (MECCIR) Guidelines for reporting a Campbell Systematic Review. The Campbell Collaboration is an international research network that produces systematic reviews of the effects of social interventions.
  • The Equity Checklist for Systematic Review Authors This tool developed by the Campbell and Cochrane Equity Methods Group may help authors when considering equity in their review. It may also be helpful to use the PRISMA-Equity checklist for reporting.
  • PRISMA-Equity This extension provides guidance for reporting equity-focused systematic reviews, helping reviewers identify, extract, and synthesize evidence on equity. Health inequity is defined as unfair and avoidable differences in health.
  • MOOSE (Meta-analyses Of Observational Studies in Epidemiology) Checklist Guidelines for meta-analyses of observational studies in epidemiology.
  • Cochrane Handbook - Chapter 4: Searching for and selecting studies This chapter aims to provide review authors with background information on all aspects of searching for and selecting studies so that they can better understand the search and selection processes. All authors of systematic reviews should, however, identify an experienced medical/healthcare librarian or information specialist to collaborate with on the search process.
  • PRISMA-S: an extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews The checklist includes 16 reporting items, each detailed with exemplar reporting and rationale.
  • Searching for studies: a guide to information retrieval for Campbell systematic reviews This guide (a) identifies the key issues faced by reviewers when gathering information for a review, (b) proposes different approaches in order to guide the work of the reviewer during the information retrieval phase, and (c) provides examples that demonstrate these approaches.

Reporting Guidelines for Other Reviews

  • EQUATOR Network: Systematic Reviews, Meta-Analysis, Reviews, Overview, HTA The EQUATOR network includes reporting guidelines for many review types.

Easy guide to conducting a systematic review

Affiliations.

  • 1 Discipline of Child and Adolescent Health, University of Sydney, Sydney, New South Wales, Australia.
  • 2 Department of Nephrology, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • 3 Education Department, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • PMID: 32364273
  • DOI: 10.1111/jpc.14853

A systematic review is a type of study that synthesises research that has been conducted on a particular topic. Systematic reviews are considered to provide the highest level of evidence on the hierarchy of evidence pyramid. Systematic reviews are conducted following rigorous research methodology. To minimise bias, systematic reviews utilise a predefined search strategy to identify and appraise all available published literature on a specific topic. The meticulous nature of the systematic review research methodology differentiates a systematic review from a narrative review (literature review or authoritative review). This paper provides a brief step-by-step summary of how to conduct a systematic review, which may be of interest to clinicians and researchers.

Keywords: research; research design; systematic review.

© 2020 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).


Systematic reviews


Review protocols

Finding existing systematic reviews and registering your protocol.


The plan for a systematic review is called a protocol and defines the steps that will be undertaken in the review.

The Cochrane Collaboration defines a protocol as the plan or set of steps to be followed in a study.

A protocol for a systematic review should describe the rationale for the review, the objectives, the methods that will be used to locate, select, and critically appraise studies, and to collect and analyse data from the included studies.

Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions. Cochrane; 2022.

A review protocol will include the following (a minimal structured sketch follows this list):

  • Background to the study and the importance of the topic
  • Objectives and scope
  • Selection criteria for the studies
  • Planned search strategy
  • Planned data extraction
  • Proposed method for synthesising the findings.
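To make these components concrete, here is a minimal sketch of a protocol skeleton expressed as structured data. The field names simply mirror the list above, and every value is an invented placeholder, not drawn from any real protocol.

```python
# Minimal protocol skeleton mirroring the components listed above.
# All values are invented placeholders for illustration only.
protocol = {
    "background": "Why the topic matters and what gap the review addresses",
    "objectives_and_scope": "The primary question the review will answer",
    "selection_criteria": {
        "include": ["randomized trials", "adults aged 18+"],
        "exclude": ["conference abstracts", "case reports"],
    },
    "search_strategy": {
        "databases": ["PubMed", "Embase", "Cochrane CENTRAL"],
        "concepts": ["population terms", "intervention terms", "outcome terms"],
    },
    "data_extraction": ["study design", "sample size", "outcomes", "follow-up"],
    "synthesis_method": "meta-analysis if studies are sufficiently homogeneous, "
                        "otherwise narrative synthesis",
}

# The protocol then acts as a fixed point of reference for each stage of the review.
for section, plan in protocol.items():
    print(f"{section}: {plan}")
```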

The protocol will help guide the process of the review and serve as a point of reference for each part of the review.

PRISMA is considered the gold standard for reporting how you conducted your systematic review, and it's worth considering these elements now as you prepare your protocol. There is an extension to PRISMA specifically aimed at guiding the format of protocols, PRISMA-P. There is more information about PRISMA in the 'Writing the review' section of this guide.

In the early stages of your planned project, you'll need to check for any existing systematic reviews on the same topic. The section 'Before you build a search' in this guide demonstrates how to do this as part of the scoping search process. You can also search the sources listed below for existing reviews. Note that you'll need to keep an eye out for existing systematic reviews as you progress your project and develop more sophisticated searches for your topic.

PubMed: A good place to check for health-related reviews. Use the 'systematic reviews' limit on the left-hand side to help you find existing reviews. PubMed will also allow you to find reviews from the Cochrane Library and the Joanna Briggs Institute, two well-known producers of health-related reviews (you could also search those two sites separately if you wish).

PROSPERO: International Prospective Register of Systematic Reviews (health-related). Search PROSPERO to find registered systematic review protocols, i.e. systematic review projects that are currently underway.

The Campbell Collaboration: Campbell systematic reviews follow structured guidelines and standards for summarizing the international research evidence on the effects of interventions in crime and justice, education, international development, and social welfare.

Environmental Evidence Library: Contains a collection of systematic reviews (SRs) of evidence on the effectiveness of human interventions in environmental management and the environmental impacts of human activities.

EPPI-Centre: Evidence for Policy and Practice Information and Co-ordinating Centre (education and social policy, health promotion and public health, international health).

You should also consult subject-specific or specialist databases that may be relevant to the topic area. It's worth searching the main databases that you're planning to use for your project for existing systematic reviews. Check the database options to see if you can limit to systematic reviews, or include 'systematic review' in your searches.

Registering the protocol for your planned project "promotes transparency, helps reduce potential for bias and serves to avoid unintended duplication of reviews". It's best to wait until your protocol is well developed and unlikely to change before registering it.

Stewart L, Moher D, Shekelle P. Why prospective registration of systematic reviews makes sense . Syst Rev. 2012;1(1):7.

PROSPERO is the main place for registering systematic reviews, rapid reviews, and umbrella reviews that have a clear outcome related to human health. The PROSPERO webpage 'Accessing and completing the registration form' has information about how to register your protocol, including information on which review types are eligible. Note that PROSPERO does not accept scoping reviews; we would recommend registering these with the Open Science Framework.

Open Science Framework is another place where many review protocols are registered. It is a large, free platform dedicated to supporting many aspects of open science and is a good option for scoping reviews or reviews without a health outcome. The OSF Registries area is where protocols can be registered and searched; information on how to register your protocol is available on the Welcome to Registrations page, and examples of registered protocols may be viewed by searching OSF Registries.

Another way to make your protocol available is to publish it in a journal that accepts protocols. Examples of such journals include Systematic Reviews, BMJ Open, the Journal of Human Nutrition & Dietetics, and more. To find potential journals, search on the topic of your review using a major database in your discipline and include the words 'systematic review protocol' in your search; e.g. you might search for 'blended learning education systematic review protocol'. The UQ Librarians can also assist you in identifying potential journals.


Systematic Reviews


Assembling Your Team

Steps of a systematic review, writing and publishing your protocol.


Team

It is essential that Cochrane reviews be undertaken by more than one person. This ensures that tasks such as selection of studies for eligibility and data extraction can be performed by at least two people independently, increasing the likelihood that errors are detected.

- Cochrane Handbook version 5.1, 2011, section 2.3.4.1

The objective of organizing the review team is to pull together a group of researchers as well as key users and stakeholders who have the necessary skills and clinical content knowledge to produce a high-quality SR.
Standard 2.1: Establish a team with appropriate expertise and experience to conduct the systematic review. Required elements:

  • Include expertise in the pertinent clinical content areas
  • Include expertise in systematic review methods
  • Include expertise in searching for relevant evidence
  • Include expertise in quantitative methods
  • Include other expertise as appropriate

- National Academies of Sciences, Engineering, and Medicine, Finding What Works in Health Care: Standards for Systematic Reviews , chapter 2, 2011.

See the further resources page for links to more in-depth resources on these steps.

Steps of a systematic review

  • Usually this means deciding on an answerable question. The PICO framework can help you formulate a question that can be answered in the literature. PICO stands for: Patient or population, Intervention, Comparison or control, and Outcome. For example: in adults with type 2 diabetes (P), does a structured exercise program (I), compared with usual care (C), improve glycemic control (O)?
  • It is important to include team members who have clinical expertise related to the research topic. You also want at least one team member with expertise in systematic review methodology, one with expertise in evidence searching (e.g., a medical librarian), and a biostatistician if you intend to perform a meta-analysis on your findings.
  • A protocol is critical for your process. It spells out your search plan and your inclusion and exclusion criteria for the evidence you will discover. Sticking to your previously published protocol increases transparency and reduces bias in the process of gathering evidence.
  • You may need to do some scoping searches as you develop your protocol, in order to help refine your research question.
  • Once your protocol is finalized, you can work with a medical librarian on search strategies for multiple literature databases.
  • Evidence may exist beyond the published literature. Gray literature searching is necessary to correct for publication bias.
  • At least two independent screeners review titles and abstracts first, then full text.
  • Various quality checklists (especially for RCTs) exist. You may also want to read about Cochrane's methods and  risk of bias tool.
  • Data must be extracted in a structured, documented way for included studies.
  • Meta-analyses statistically combine results from multiple studies to gain more power through a larger sample size than any individual study, potentially detecting effects the individual studies could not (see the pooling sketch after this list). A biostatistician should be part of the research team if a meta-analysis is conducted.
  • It may not be possible to perform a meta-analysis on the existing evidence. In this case, evidence can be synthesized narratively.
  • PRISMA is a popular reporting standard required by many journals.
  • Check to see if a specialized reporting standard exists for your subfield.
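As flagged in the meta-analysis item above, the following is a minimal sketch of how results are statistically combined, using fixed-effect inverse-variance pooling as one common approach. The effect estimates and standard errors are invented placeholders; a real analysis belongs with the team's biostatistician.

```python
import math

# Fixed-effect (inverse-variance) pooling sketch.
# Study effect estimates and standard errors below are invented placeholders.
effects = [0.30, 0.45, 0.10]   # per-study effect estimates (e.g., mean differences)
ses = [0.12, 0.20, 0.15]       # per-study standard errors

weights = [1 / se ** 2 for se in ses]  # weight each study by inverse variance
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")
```

Note how the studies with smaller standard errors (larger samples) dominate the pooled estimate, which is exactly the source of the extra statistical power described above.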

What is a protocol?

  • A protocol lays out your plan for the systematic review. It specifies the systematic review authors, the rationale and objectives for the review, the inclusion and exclusion criteria for study eligibility, the databases to be searched along with the search strategy, and the process for managing, screening, analyzing, and synthesizing the results.

Why write a protocol?

  • As with any other study, a systematic review needs a plan. The protocol provides the team with a road map for completion.

Why publish a protocol?

  • A published protocol makes your plan public. This accountability mitigates bias that can result from changing the research topic or study eligibility criteria based on results discovered during the study.
  • It also informs other researchers of your ongoing work, preventing possible duplication of efforts.
  • For more, read this article: Why prospective registration of systematic reviews makes sense

Guidance on writing a protocol

  • PRISMA-P is an extension of the PRISMA reporting standard for protocols
  • The Cochrane Handbook part 1, chapter 4 has information on writing Cochrane protocols

Sharing/publishing protocols

  • Systematic Reviews journal
  • PROSPERO, a database of protocols (it's free to add yours)


Systematic Review


Introduction to Systematic Review

  • Introduction
  • Types of literature reviews
  • Other Libguides
  • Systematic review as part of a dissertation
  • Tutorials & Guidelines & Examples from non-Medical Disciplines

Depending on your learning style, please explore the resources in various formats on the tabs above.

For additional tutorials, visit the SR Workshop Videos  from UNC at Chapel Hill outlining each stage of the systematic review process.

Know the difference! Systematic review vs. literature review


Types of literature reviews along with associated methodologies

JBI Manual for Evidence Synthesis. Find definitions and methodological guidance.

- Systematic Reviews - Chapters 1-7

- Mixed Methods Systematic Reviews -  Chapter 8

- Diagnostic Test Accuracy Systematic Reviews -  Chapter 9

- Umbrella Reviews -  Chapter 10

- Scoping Reviews -  Chapter 11

- Systematic Reviews of Measurement Properties -  Chapter 12

Systematic reviews vs. scoping reviews:

Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal , 26 (2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

Gough, D., Thomas, J., & Oliver, S. (2012). Clarifying differences between review designs and methods. Systematic Reviews, 1(28). https://doi.org/10.1186/2046-4053-1-28

Munn, Z., Peters, M., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18(1), 143. https://doi.org/10.1186/s12874-018-0611-x. Also, check out the LibGuide from Weill Cornell Medicine for the differences between a systematic review and a scoping review and when to embark on either one of them.

Sutton, A., Clowes, M., Preston, L., & Booth, A. (2019). Meeting the review family: Exploring review types and associated information retrieval requirements . Health Information & Libraries Journal , 36 (3), 202–222. https://doi.org/10.1111/hir.12276

Temple University. Review Types . - This guide provides useful descriptions of some of the types of reviews listed in the above article.

UMD Health Sciences and Human Services Library.  Review Types . - Guide describing Literature Reviews, Scoping Reviews, and Rapid Reviews.

Whittemore, R., Chao, A., Jang, M., Minges, K. E., & Park, C. (2014). Methods for knowledge synthesis: An overview. Heart & Lung: The Journal of Acute and Critical Care, 43 (5), 453–461. https://doi.org/10.1016/j.hrtlng.2014.05.014

Differences between a systematic review and other types of reviews

Armstrong, R., Hall, B. J., Doyle, J., & Waters, E. (2011). 'Scoping the scope' of a Cochrane review. Journal of Public Health, 33(1), 147–150. https://doi.org/10.1093/pubmed/fdr015

Kowalczyk, N., & Truluck, C. (2013). Literature reviews and systematic reviews: What is the difference? Radiologic Technology , 85 (2), 219–222.

White, H., Albers, B., Gaarder, M., Kornør, H., Littell, J., Marshall, Z., Matthew, C., Pigott, T., Snilstveit, B., Waddington, H., & Welch, V. (2020). Guidance for producing a Campbell evidence and gap map. Campbell Systematic Reviews, 16(4), e1125. https://doi.org/10.1002/cl2.1125. Check also this comparison between evidence and gap maps and systematic reviews.

Rapid Reviews Tutorials

Rapid Review Guidebook by the National Collaborating Centre for Methods and Tools (NCCMT)

Hamel, C., Michaud, A., Thuku, M., Skidmore, B., Stevens, A., Nussbaumer-Streit, B., & Garritty, C. (2021). Defining Rapid Reviews: a systematic scoping review and thematic analysis of definitions and defining characteristics of rapid reviews.  Journal of clinical epidemiology ,  129 , 74–85. https://doi.org/10.1016/j.jclinepi.2020.09.041

Systematic Review

  • Müller, C., Lautenschläger, S., Meyer, G., & Stephan, A. (2017). Interventions to support people with dementia and their caregivers during the transition from home care to nursing home care: A systematic review. International Journal of Nursing Studies, 71, 139–152. https://doi.org/10.1016/j.ijnurstu.2017.03.013
  • Bhui, K. S., Aslam, R. W., Palinski, A., McCabe, R., Johnson, M. R. D., Weich, S., … Szczepura, A. (2015). Interventions to improve therapeutic communications between Black and minority ethnic patients and professionals in psychiatric services: Systematic review . The British Journal of Psychiatry, 207 (2), 95–103. https://doi.org/10.1192/bjp.bp.114.158899
  • Rosen, L. J., Noach, M. B., Winickoff, J. P., & Hovell, M. F. (2012). Parental smoking cessation to protect young children: A systematic review and meta-analysis . Pediatrics, 129 (1), 141–152. https://doi.org/10.1542/peds.2010-3209

Scoping Review

  • Hyshka, E., Karekezi, K., Tan, B., Slater, L. G., Jahrig, J., & Wild, T. C. (2017). The role of consumer perspectives in estimating population need for substance use services: A scoping review. BMC Health Services Research, 17, 1–14. https://doi.org/10.1186/s12913-017-2153-z
  • Olson, K., Hewit, J., Slater, L.G., Chambers, T., Hicks, D., Farmer, A., & ... Kolb, B. (2016). Assessing cognitive function in adults during or following chemotherapy: A scoping review . Supportive Care In Cancer, 24 (7), 3223-3234. https://doi.org/10.1007/s00520-016-3215-1
  • Pham, M. T., Rajić, A., Greig, J. D., Sargeant, J. M., Papadopoulos, A., & McEwen, S. A. (2014). A scoping review of scoping reviews: Advancing the approach and enhancing the consistency . Research Synthesis Methods, 5 (4), 371–385. https://doi.org/10.1002/jrsm.1123
  • Scoping Review Tutorial from UNC at Chapel Hill

Qualitative Systematic Review/Meta-Synthesis

  • Lee, H., Tamminen, K. A., Clark, A. M., Slater, L., Spence, J. C., & Holt, N. L. (2015). A meta-study of qualitative research examining determinants of children's independent active free play. International Journal of Behavioral Nutrition & Physical Activity, 12(5), 1–12. https://doi.org/10.1186/s12966-015-0165-9

Videos on systematic reviews

Systematic Reviews: What are they? Are they right for my research? - 47 min. video recording with a closed caption option.



Other LibGuides

  • University of Toronto Libraries  - very detailed with good tips on the sensitivity and specificity of searches.
  • Monash University  - includes an interactive case study tutorial. 
  • Dalhousie University Libraries - a comprehensive How-To Guide on conducting a systematic review.

Guidelines for a systematic review as part of the dissertation

  • Guidelines for Systematic Reviews in the Context of Doctoral Education Background  by University of Victoria (PDF)
  • Can I conduct a Systematic Review as my Master’s dissertation or PhD thesis? Yes, It Depends!  by Farhad (blog)
  • What is a Systematic Review Dissertation Like? by the University of Edinburgh (50 min video) 

Further readings on experiences of PhD students and doctoral programs with systematic reviews

Puljak, L., & Sapunar, D. (2017). Acceptance of a systematic review as a thesis: Survey of biomedical doctoral programs in Europe . Systematic Reviews , 6 (1), 253. https://doi.org/10.1186/s13643-017-0653-x

Perry, A., & Hammond, N. (2002). Systematic reviews: The experiences of a PhD Student . Psychology Learning & Teaching , 2 (1), 32–35. https://doi.org/10.2304/plat.2002.2.1.32

Daigneault, P.-M., Jacob, S., & Ouimet, M. (2014). Using systematic review methods within a Ph.D. dissertation in political science: Challenges and lessons learned from practice . International Journal of Social Research Methodology , 17 (3), 267–283. https://doi.org/10.1080/13645579.2012.730704

UMD Doctor of Philosophy Degree Policies

Before you embark on a systematic review research project, check the UMD PhD Policies to make sure you are on the right path. Systematic reviews require a team of at least two reviewers and an information specialist or a librarian. Discuss with your advisor the authorship roles of the involved team members. Keep in mind that the  UMD Doctor of Philosophy Degree Policies (scroll down to the section, Inclusion of one's own previously published materials in a dissertation ) outline such cases, specifically the following: 

" It is recognized that a graduate student may co-author work with faculty members and colleagues that should be included in a dissertation . In such an event, a letter should be sent to the Dean of the Graduate School certifying that the student's examining committee has determined that the student made a substantial contribution to that work. This letter should also note that the inclusion of the work has the approval of the dissertation advisor and the program chair or Graduate Director. The letter should be included with the dissertation at the time of submission.  The format of such inclusions must conform to the standard dissertation format. A foreword to the dissertation, as approved by the Dissertation Committee, must state that the student made substantial contributions to the relevant aspects of the jointly authored work included in the dissertation."

  • Cochrane Handbook for Systematic Reviews of Interventions - See Part 2: General methods for Cochrane reviews
  • Systematic Searches - Yale library video tutorial series 
  • Using PubMed's Clinical Queries to Find Systematic Reviews  - From the U.S. National Library of Medicine
  • Systematic reviews and meta-analyses: A step-by-step guide - From the University of Edinburgh, Centre for Cognitive Ageing and Cognitive Epidemiology

Bioinformatics

  • Mariano, D. C., Leite, C., Santos, L. H., Rocha, R. E., & de Melo-Minardi, R. C. (2017). A guide to performing systematic literature reviews in bioinformatics .  arXiv preprint arXiv:1707.05813.

Environmental Sciences

Collaboration for Environmental Evidence. 2018.  Guidelines and Standards for Evidence synthesis in Environmental Management. Version 5.0 (AS Pullin, GK Frampton, B Livoreil & G Petrokofsky, Eds) www.environmentalevidence.org/information-for-authors .

Pullin, A. S., & Stewart, G. B. (2006). Guidelines for systematic review in conservation and environmental management. Conservation Biology, 20 (6), 1647–1656. https://doi.org/10.1111/j.1523-1739.2006.00485.x

Engineering Education

  • Borrego, M., Foster, M. J., & Froyd, J. E. (2014). Systematic literature reviews in engineering education and other developing interdisciplinary fields. Journal of Engineering Education, 103 (1), 45–76. https://doi.org/10.1002/jee.20038

Public Health

  • Hannes, K., & Claes, L. (2007). Learn to read and write systematic reviews: The Belgian Campbell Group . Research on Social Work Practice, 17 (6), 748–753. https://doi.org/10.1177/1049731507303106
  • McLeroy, K. R., Northridge, M. E., Balcazar, H., Greenberg, M. R., & Landers, S. J. (2012). Reporting guidelines and the American Journal of Public Health’s adoption of preferred reporting items for systematic reviews and meta-analyses . American Journal of Public Health, 102 (5), 780–784. https://doi.org/10.2105/AJPH.2011.300630
  • Pollock, A., & Berge, E. (2018). How to do a systematic review.   International Journal of Stroke, 13 (2), 138–156. https://doi.org/10.1177/1747493017743796
  • Institute of Medicine. (2011). Finding what works in health care: Standards for systematic reviews . https://doi.org/10.17226/13059
  • Wanden-Berghe, C., & Sanz-Valero, J. (2012). Systematic reviews in nutrition: Standardized methodology . The British Journal of Nutrition, 107 Suppl 2, S3-7. https://doi.org/10.1017/S0007114512001432

Social Sciences

  • Bronson, D., & Davis, T. (2012).  Finding and evaluating evidence: Systematic reviews and evidence-based practice (Pocket guides to social work research methods). Oxford: Oxford University Press.
  • Petticrew, M., & Roberts, H. (2006).  Systematic reviews in the social sciences: A practical guide . Malden, MA: Blackwell Pub.
Software Engineering

  • Cornell University Library Guide - Systematic literature reviews in engineering: Example: Software Engineering
  • Biolchini, J., Mian, P. G., Natali, A. C. C., & Travassos, G. H. (2005). Systematic review in software engineering .  System Engineering and Computer Science Department COPPE/UFRJ, Technical Report ES, 679 (05), 45.
  • Biolchini, J. C., Mian, P. G., Natali, A. C. C., Conte, T. U., & Travassos, G. H. (2007). Scientific research ontology to support systematic review in software engineering . Advanced Engineering Informatics, 21 (2), 133–151.
  • Kitchenham, B. (2007). Guidelines for performing systematic literature reviews in software engineering . [Technical Report]. Keele, UK, Keele University, 33(2004), 1-26.
  • Weidt, F., & Silva, R. (2016). Systematic literature review in computer science: A practical guide .  Relatórios Técnicos do DCC/UFJF ,  1 .
Writing Tools

  • Academic Phrasebank - Get some inspiration and find some terms and phrases for writing your research paper
  • Oxford English Dictionary  - Use to locate word variants and proper spelling

Conducting a Systematic Review: A Practical Guide

  • Living reference work entry
  • First Online: 13 January 2018


Freya MacMillan, Kate A. McBride, Emma S. George & Genevieve Z. Steiner


It can be challenging to conduct a systematic review with limited experience and skills in undertaking such a task. This chapter provides a practical guide to undertaking a systematic review, providing step-by-step instructions to guide the individual through the process from start to finish. The chapter begins with defining what a systematic review is, reviewing its various components, turning a research question into a search strategy, developing a systematic review protocol, followed by searching for relevant literature and managing citations. Next, the chapter focuses on documenting the characteristics of included studies and summarizing findings, extracting data, methods for assessing risk of bias and considering heterogeneity, and undertaking meta-analyses. Last, the chapter explores creating a narrative and interpreting findings. Practical tips and examples from existing literature are utilized throughout the chapter to assist readers in their learning. By the end of this chapter, the reader will have the knowledge to conduct their own systematic review.



Author information

Authors and Affiliations

School of Science and Health and Translational Health Research Institute, Western Sydney University, Sydney, NSW, Australia

Freya MacMillan

Translational Health Research Institute, Western Sydney University, Sydney, NSW, Australia

Kate A. McBride

School of Science and Health, Western Sydney University, Sydney, NSW, Australia

Emma S. George

NICM, Western Sydney University, Sydney, NSW, Australia

Genevieve Z. Steiner


Corresponding author

Correspondence to Freya MacMillan.

Editor information

Editors and Affiliations

School of Science & Health, Western Sydney University, Locked Bag 1797, Penrith, New South Wales, Australia

Pranee Liamputtong


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this entry

Cite this entry.

MacMillan, F., McBride, K.A., George, E.S., Steiner, G.Z. (2018). Conducting a Systematic Review: A Practical Guide. In: Liamputtong, P. (eds) Handbook of Research Methods in Health Social Sciences . Springer, Singapore. https://doi.org/10.1007/978-981-10-2779-6_113-1


DOI: https://doi.org/10.1007/978-981-10-2779-6_113-1

Received: 18 December 2017

Accepted: 02 January 2018

Published: 13 January 2018

Publisher Name: Springer, Singapore

Print ISBN: 978-981-10-2779-6

Online ISBN: 978-981-10-2779-6



Chapter history

DOI: https://doi.org/10.1007/978-981-10-2779-6_113-2

DOI: https://doi.org/10.1007/978-981-10-2779-6_113-1

  • Systematic Review
  • Open access
  • Published: 24 April 2024

The immediate impacts of TV programs on preschoolers' executive functions and attention: a systematic review

  • Sara Arian Namazi & Saeid Sadeghi

BMC Psychology, volume 12, Article number: 226 (2024)


Previous research has presented varying perspectives on the potential effect of screen media use among preschoolers. In this study, we systematically reviewed experimental studies that investigated how pacing and fantasy features of TV programs affect children's attention and executive functions (EFs).

A systematic search was conducted across eight online databases to identify pertinent studies published until August 2023. We followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines.

Fifteen papers involving 1855 participants aged 2–7 years fulfilled all the inclusion criteria for this review and were entered into the narrative synthesis. Although reaching general conclusions is challenging and the outcomes conflict, a nuanced analysis reveals distinct patterns within various subgroups. The impact of pacing on attention is discernible, particularly in bottom-up attention processes, although the direction of this effect remains contradictory across studies. Conversely, consistent findings emerge regarding top-down attention, suggesting no immediate impact. Moreover, a subgroup analysis of different EF components yields valuable insights, highlighting the negative effect of fantasy on inhibitory control within the EF framework.

The complexity of these outcomes highlights the need for further research, considering factors such as content, child-specific characteristics, environmental factors, and methodological approaches. These findings collectively emphasize the necessity of conducting more comprehensive and detailed research, especially in terms of the underlying mechanisms and their impact on brain function.


Introduction

In the last few decades, the advancement of technology has made digital devices a significant part of children's lives [ 1 ]. Children are now using digital devices at a younger age as devices are more readily available at home, school, and in society as a whole [ 2 , 3 , 4 ]. Studies have shown that excessive screen time is associated with obesity and sleep problems, as well as lowered social and motor development scores in young children [ 5 , 6 ]. In recent years, researchers have been studying the interaction between digital devices and children's cognitive development [ 7 ].

The term “digital devices” refers to devices that can create, generate, share, communicate, receive, store, display, or process information, including, but not limited to, laptops, tablets, desktops, televisions (TVs), mobile phones, and smartphones [ 8 ]. TV is one of the digital devices well-studied for its effects on children and refers to shows (e.g. live-action, puppets, …) and cartoons that children watch on TVs and other touchscreen devices [ 9 ]. The effects of TV content are determined by many factors, including fantastical content and the program's pacing [ 10 ]. Pacing refers to how fast audio and visual elements change [ 11 ]. Video pace can be assessed through varying filming techniques, like changing the camera's perspective [ 12 ] or transitioning between scenes [ 13 ]. The concept of fantasy is about phenomena that defy the laws of reality, such as Superman [ 14 ].

Recent studies have examined whether TV (the pace and fantasy events in the programs) affects children's cognitive development, particularly regarding attention and executive functions (EFs). Attention is a multifaceted cognitive mechanism characterized by the allocation of resources towards distinct stimuli or tasks, thereby facilitating heightened processing and perception of relevant information [ 15 , 16 ]. There is a difference between attention and higher cognitive functions (e.g., executive functions): the attention process operates between perception, memory, and higher cognitive functions, so that information can flow from perception to memory and higher cognitive functions and vice versa [ 17 , 18 ]. Many models have been developed to explain attention ability, and some of these models include components that are related to EF. EFs encompass a spectrum of cognitive processes essential for solving goal-oriented problems. The term comprises diverse higher-order cognitive functions including reasoning, working memory, problem-solving, planning, inhibitory control, attention, multitasking, and flexibility [ 19 , 20 , 21 ]. These functions are often referred to as "cool" EF, as the underlying cognitive mechanisms operate with limited emotional arousal [ 22 ]. In contrast, "hot" EF involves emotion or motivation, such as tracking rewards or punishments [ 22 , 23 ]. Within this classification, two subsets emerge: basic EFs like working memory, inhibition, attention control, and cognitive flexibility, and higher-order (higher-level) EFs such as reasoning, problem-solving, and planning, which stem from the basic ones [ 20 ].

Due to the complexity of the topic, studies investigating the relationship between TV programs and attention or EF have adopted diverse assessment methods. In some studies, children's involvement in tasks during free play or direct testing has been used to measure attention [ 24 ]. Another substantial portion of these studies adopted the model of EF proposed by Miyake et al. [ 25 ], which divided EF into three components: inhibitory control (the ability of a person to inhibit dominant or automatic responses in favor of less prominent data), working memory (the capacity to hold and manipulate various sets of information) and flexibility (shifting attention) [ 10 , 26 , 27 ]. Alternatively, some studies have measured EF through two dimensions: "hot" and "cool" [ 13 , 14 ]. Another subset of related research has focused on higher-order EF tests, encompassing domains such as planning and problem-solving. Additionally, a few studies have measured EF in a very general way, with tasks that address different parts of EF (assessed through tasks involving color separation or completing puzzles as quickly as possible) [ 28 ].

As an illustration, Cooper et al. [ 12 ] investigated the influence of pacing on attention using a direct task and demonstrated a positive effect on performance in EF tasks. In another study, Lillard and Peterson [ 13 ] investigated the impact of pacing on cool EF, revealing reduced performance in EF tasks after exposure to fast-paced programs. Regarding higher-order EFs, a 2022 study [ 29 ] concluded that exposure to a fast-paced TV program did not immediately affect children's problem-solving abilities. Moreover, Jiang et al. [ 26 ] evaluated EFs based on Miyake's model, indicating that fantastical events negatively affected inhibitory control and flexibility, whereas working memory remained unaffected.

A limited capacity model and the attention system are essential for explaining the underlying mechanisms behind how TV pacing impacts children's cognitive performance. It has been proposed that fast-paced programs, which are characterized by rapid changes in the scene, capture attention in a bottom-up manner through orienting responses to scene changes, primarily engaging sensory cortices rather than the prefrontal cortex [ 30 , 31 ]. In this way, fast-paced programs could overwhelm cognitive resources, aligning with the "overstimulation hypothesis" [ 32 , 33 , 34 ]. This hypothesis posits that exposure to such programs may lead the mind to anticipate high levels of stimulation, which can reduce children's attention spans and influence their performance [ 31 , 32 ]. Furthermore, a study by Carey [ 35 ] revealed that young children form anticipations about the occurrence of events. Likewise, Kahneman [ 36 ] proposed the concept of a single pool of attentional resources and suggested that processing fantastical events overloads limited cognitive resources. Watching TV programs engages the bottom-up cognitive processing system; consequently, the top-down cognitive processing system may be delayed in re-engaging in subsequent cognitive tasks after program viewing [ 14 ]. This suggests that exposure to fast-paced and fantastical TV programs has temporary effects on children's attention and executive functioning.

Research examining the immediate impact of these two features on children's attention and EF has yielded conflicting outcomes. Several studies indicate that fast-paced television programs have a negative effect on children's attention and EFs [ 13 , 28 , 37 , 38 ]. In contrast, some studies have shown positive results [ 12 , 39 ], while other studies found no significant impact [ 14 , 27 , 29 , 40 ]. Similar findings are observed for the fantasy feature. Some studies have shown that higher levels of fantastical content led to lower performance on cognitive tests [ 10 , 14 , 26 , 27 , 41 , 42 ], while contrary findings are also reported [ 39 , 43 ].

Therefore, it remains unclear how television content affects children's attention and EFs. It is therefore necessary to identify gaps in the prior research, which can lead to effective strategies for investigating the effects of TV programs. Previous reviews: (1) summarized the relationship between screen time and EF [ 44 ]; (2) adopted a comprehensive approach by combining diverse research methodologies, yet omitted some recent studies [ 24 ]; and (3) summarized the influence of media on self-regulation, although they emphasized several studies while overlooking a subset of investigations concerning the immediate impact of TV programs [ 45 ]. None of these reviews have specifically focused on the outcomes of experimental research. Experimental studies seem to be a more accurate method for investigating the effects of programs, because experiments allow the control of certain variables and the manipulation of an independent variable (such as the pace of the program or fantasy). This review aims to explore the immediate impact of TV pacing and fantasy features on children's attention and EF, as well as the potential factors contributing to the variations in outcomes.

Search strategy

This systematic review follows the guidelines set by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol [ 46 ]. We searched eight online databases on 2 August 2023: APA PsycARTICLES, Cochrane Library, EBSCO (APA PsycINFO), Google Scholar (limited to the first three pages), Ovid, ProQuest, PubMed (MEDLINE), and Web of Science. The search was run on article abstracts, with no date or language restrictions: child* OR preschool* AND television OR TV OR cartoon AND executive function OR attention OR inhibit* OR flexibility OR working memory AND immediate* OR short-term OR pace OR fantasy. This strategy was tailored to suit the requirements of each database. Additionally, to account for any potentially overlooked studies, citation searching was conducted for the Lillard et al. [ 14 ] article on Google Scholar on 7 August 2023; only studies with relevant titles and abstracts were included in the review screening.
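Written out linearly, the Boolean string above is ambiguous, since databases differ in how they apply AND/OR precedence. Our reading of the strategy is four concept blocks, with terms OR'd within each block and the blocks AND'd together; the sketch below (not the authors' code) makes that grouping explicit. Field tags and database-specific syntax would still need tailoring per database, as the authors note.

```python
# Our reading of the reported strategy as four concept blocks (not the
# authors' code): terms are OR'd within a block, blocks are AND'd together.
concept_blocks = [
    ["child*", "preschool*"],                          # population
    ["television", "TV", "cartoon"],                   # exposure
    ["executive function", "attention", "inhibit*",
     "flexibility", "working memory"],                 # outcomes
    ["immediate*", "short-term", "pace", "fantasy"],   # immediacy / program features
]

def build_query(blocks):
    """OR the terms inside each block, quote multi-word phrases, AND the blocks."""
    grouped = []
    for block in blocks:
        terms = [f'"{t}"' if " " in t else t for t in block]
        grouped.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(grouped)

print(build_query(concept_blocks))
```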

Study selection

Studies had to meet these criteria for inclusion in the review: (1) participants were children younger than seven years (preschool age); (2) the study assessed the impact of TV programs on children's attention or EFs; (3) the independent variable was exposure to a TV program (including cartoons and non-animated programs, but excluding advertisements), with its impact on attention or EF measured immediately; (4) the study measured the effect of the pacing or fantasy features of the programs; (5) the study had an experimental design; and (6) the research was published as a journal article in English. Studies whose participants had been diagnosed with a disorder were excluded. The initial search yielded 328 potentially relevant studies, from which 67 duplicates were removed using EndNote 20's automated tool [47]. Manual review removed 42 further duplicates, and six non-English studies were excluded. The remaining 203 studies were screened for title and abstract relevance. Two screeners then reviewed the full texts and included 15 eligible studies; any disagreement about eligibility was resolved through discussion. The PRISMA chart summarizing this process is shown in Fig. 1.
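For illustration only (EndNote's matching rules are richer and are not documented here), automated duplicate removal of the kind described above is often keyed on a normalized title plus publication year; the record dictionaries below are hypothetical:

```python
import re

def dedupe(records):
    """Drop records whose normalized (title, year) key was already seen.

    Illustrative only: real reference managers also match on authors,
    DOI, and fuzzy title similarity.
    """
    seen, unique = set(), []
    for rec in records:
        # Lowercase and collapse punctuation so trivial formatting
        # differences do not hide a duplicate.
        title = re.sub(r"[^a-z0-9]+", " ", rec["title"].lower()).strip()
        key = (title, rec.get("year"))
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "The immediate impact of television...", "year": 2011},
    {"title": "The Immediate Impact of Television…", "year": 2011},  # duplicate
]
print(len(dedupe(records)))  # 1
```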

Fig. 1 PRISMA flow diagram [48] showing the number of studies that were removed at each stage of the literature search

Data extraction and synthesis

Two reviewers extracted the relevant data from the selected studies onto a form, resolving any conflicts through discussion. The extraction form captured the characteristics of each study: authors' names, manuscript title, publication date, sample size, mean and standard deviation of participant age, proportion of females in the sample, TV program name, features, and length, the cognitive functions measured (EFs or attention) together with their assessment methods, and the variables used to control for or check differences between groups. Eligible outcomes were the effect of fast- versus slow-paced TV programs, the effect of fantastical versus realistic TV programs, and interactions among variables. Data were synthesized narratively, a choice driven by the conflicting results across studies. Although a single reviewer composed the narratives, all decisions were reached through discussion between the two reviewers.
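As a sketch, the fields of such an extraction form map naturally onto a structured record. The field names below paraphrase the characteristics listed above and are not the authors' actual form; the example values come from the Lillard and Peterson [13] study described later (60 four-year-olds, a 9-min cartoon):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """One row of a hypothetical data extraction form for this review."""
    authors: str
    title: str
    year: int
    sample_size: int
    mean_age_months: Optional[float]
    sd_age_months: Optional[float]
    percent_female: Optional[float]
    program_name: str
    program_features: List[str] = field(default_factory=list)  # e.g. ["fast-paced"]
    exposure_minutes: Optional[float] = None
    outcome_domain: str = "EF"  # "EF" or "attention"
    assessment_methods: List[str] = field(default_factory=list)
    control_variables: List[str] = field(default_factory=list)

record = ExtractionRecord(
    authors="Lillard & Peterson", title="The immediate impact of ...",
    year=2011, sample_size=60, mean_age_months=None, sd_age_months=None,
    percent_female=None, program_name="(fast-paced cartoon)",
    program_features=["fast-paced"], exposure_minutes=9,
    assessment_methods=["cool EF", "hot EF"],
)
```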

Quality assessment

Study quality was evaluated with the 27-item Downs and Black [49] checklist. Because not all items apply to every study design, and following the approach of Uzundağ et al. [45], a subset of 21 items relevant to experimental studies was used. The results of the quality assessment are shown in Table 1.
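Applying such a checklist reduces to marking each retained item as met or not and summing. The sketch below simplifies all items to yes/no (in the full Downs and Black instrument one reporting item is scored 0-2) and uses dummy answers rather than the authors' actual ratings:

```python
# Illustrative scoring of a reduced Downs and Black checklist; the 21-item
# subset here is a stand-in for the one used in the review.
def quality_score(item_answers):
    """item_answers: dict mapping item number -> True/False (criterion met)."""
    return sum(1 for met in item_answers.values() if met)

answers = {i: (i % 3 != 0) for i in range(1, 22)}  # dummy answers for 21 items
print(f"{quality_score(answers)}/21")              # prints 14/21
```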

A total of 1855 children aged between two and seven years (49.43% female) participated in the 15 studies. Seven studies exclusively investigated the impact of pacing (four on attention, three on EF); three examined pacing and fantasy together (only one focusing on attention); and five concentrated specifically on the effect of fantasy on EF. Sample sizes ranged from 20 to 279 participants, and the duration of video exposure ranged from 3.5 to 40 min. The mean age of participants, as reported in 13 studies, was 59.56 months (SD = 9.94). Notably, only seven studies included a pre-test, eight controlled for overall media exposure, and four considered socioeconomic status (SES).

Five of the studies measured attention. For EF, a diverse range of components was explored: inhibitory control was measured in five studies, cognitive flexibility in four, working memory in three, composite cool EF in three, and hot EF in two, with one study each measuring planning, problem-solving, and general (motor) EF. Attention was operationalized either by observing children's behavior during free play or by direct task measurement; EF was directly assessed through various tasks in all studies.

Experimental investigations into the impact of TV program pacing on preschoolers' attention have yielded inconsistent outcomes. In the first two studies, fast-paced TV programs negatively affected children's attention. Geist and Gibson [37] examined the effects of rapid TV program pacing on 62 children aged 4 to 6. Children exposed to a fast-paced program switched activities more frequently and allocated less time to tasks during the post-viewing period than the control group, a pattern interpreted as indicating a shortened attention span. However, it cannot be determined whether the observed negative impact was attributable to content, pacing, or an interplay of both, and no pre-viewing attention test was included, which complicates interpretation. To address the pacing/content confound, Kostyrka-Allchorne et al. [38] adopted the methodology of Cooper et al. [12], creating experimental videos with identical content that varied only in the number of edits (pace). In this study, 70 children aged 2 to 4.5 years watched one of two 4-min edited videos featuring a narrator reading a children's story. The fast-paced group shifted attention between toys more frequently than the slow-paced group, despite no initial behavioral differences between the groups before viewing. By resolving the pacing/content confound and including younger participants, this study provides useful insights, albeit with video durations much shorter than typical children's program episodes.

In contrast, the next two studies suggest that fast-paced TV programs may not significantly impact children's attention, or may even benefit it. Anderson et al. [40] exposed 4-year-old children to a 40-min fast-paced or slow-paced version of Sesame Street, while a control group listened to a parent reading a story. The findings did not substantially support immediate effects of TV program pacing on preschoolers' behavior and attention. Subsequently, Cooper et al. [12] presented a 3.5-min video of a narrator reading a story to children aged 4 to 7, using edited versions of the video to create fast- and slow-paced versions with identical content. An attention networks task administered after viewing assessed alerting, orienting, and executive control. The outcomes revealed that even a very brief exposure can affect children's orienting networks and error rates. Moreover, a noteworthy interaction emerged between age and pacing: 4-year-olds displayed lower orienting scores in the fast-paced group than in the slow-paced group, while the reverse held for 6-year-olds. In summary, these two studies held video content constant while manipulating pacing, isolating the pacing effect. However, Anderson et al. [40] used TV programs slower-paced than contemporary ones, and Cooper et al. [12] exposed children to programs for only 3.5 min, considerably shorter than the time children typically spend watching TV [14]. See Table 1 for a concise overview of the attention studies.

Research examining the influence of pacing on EF has also produced inconsistent outcomes. Lillard and Peterson [13] explored the immediate impact of fast-paced TV content on the EF of 60 four-year-olds. Participants watched a 9-min cartoon episode (fast- or slow-paced) or engaged in drawing (the control condition). Children who viewed the fast-paced cartoon performed notably worse on a post-viewing cool and hot EF assessment than the other groups, underscoring the influence of pacing on children's EF. Additionally, Sanketh et al. [28] investigated the impact of a TV program's pacing on children's motor EF in a sample of 279 four- to six-year-olds, beginning with a pre-viewing test to ensure developmental equivalence among participants. Children exposed to the fast-paced cartoon performed more slowly on motor EF tasks than their counterparts in the other two groups, suggesting that ten minutes of viewing a fast-paced cartoon had an immediate negative impact on the motor EF of 4- to 6-year-old children. However, neither of these studies could differentiate the effects of pacing from those of content.

In contrast, Rose et al. [29] more recently examined the effects of TV program pacing on problem-solving in an ecologically valid design: each child was exposed to both the fast and the slow program in two separate sessions, ensuring comparability and control over other variables. No significant differences in problem-solving performance emerged between the two pacing conditions, although after exposure to the fast-paced program both age groups showed a non-significant increase in EF scores (p = 0.71). Note that Rose et al. [29] aimed to ensure content parity between the fast and slow programs, resulting in a smaller pacing difference than in some other studies. See Table 2 for a concise overview of the EF studies.

Continuing the exploration of the distinct impacts of TV program content, Lillard et al. [14] introduced a new dimension to the discussion: "fantastical" versus "non-fantastical" (also termed "unrealistic" versus "realistic") content emerged as a notable category within TV programming. This idea prompted three separate studies, all aiming to disentangle the effects of pacing from those of fantasy on children's EF. All three employed a common approach, using four TV programs that varied along two dimensions: fast and fantastical, fast and non-fantastical, slow and fantastical, or slow and non-fantastical. Of these three, only one study focused on attention.

Kostyrka-Allchorne et al. [39] conducted a study in 2019 with 187 children aged 3.5 to 5 years, exposing them to 5-min self-produced videos. Their findings indicated a significant interaction between pacing and fantasy, while neither factor showed an individual effect. Notably, exposure to the fast-paced video led to quicker responses, but only when the story was non-fantastical. Given the brief length of the videos, however, it is uncertain whether the stimuli adequately taxed cognitive resources (see Table 1).

For EF, the literature is more extensive: all three of these studies cleanly separated the effect of pace from that of fantasy. Their outcomes indicated no influence of pacing, while the impact of fantasy and the interplay between pacing and fantasy yielded conflicting results. Lillard et al. [14] conducted three distinct studies to test their hypotheses, building on their prior findings. Study 1 involved diverse videos of extended duration (11 min) compared to the 2011 study [13], focusing on 4- and 6-year-olds. Children's cool EF scores were notably lower in the two fast and fantastical conditions than in the control group, whereas children in the slow and non-fantastical condition performed better on the hot EF task. Study 2 aimed to discern whether only fast and fantastical entertainment programs, as opposed to educational ones, influenced children's EF. Even when designed with educational intent, watching a fast and fantastical TV program led to lower EF scores than reading a book, and EF performance after the educational program was similar to that after the entertainment program. In the final study, Lillard et al. [14] aimed to separate the contributions of fantasy from those of pacing (fast or slow). The analysis revealed that fantastical content affects EF, whereas pacing showed no similar effect. However, this study examined a single age group, without considering potential age-related nuances in EF development.

Moreover, Kostyrka-Allchorne et al.'s [39] findings indicated that children in the two fantastical conditions had higher inhibitory control scores than those in the alternative conditions, yet no discernible pacing effect was observed. Within the same investigative framework as Lillard et al.'s [14] Study 3, Fan et al. [27] explored how age moderates the impact of TV program features on the EF of children aged 4 to 7 years. Using four 11-min cartoons, the study found that children's performance on subsequent EF tasks declined after viewing fantastical programs; pacing, however, exerted no comparable effect. The most significant interaction emerged between fantasy and age, indicating a heightened impact of fantasy on inhibitory control among younger children. Unlike the earlier studies, this one emphasized EF development and encompassed a broader age range. In sum, these three studies reveal inconsistent results. To address the novelty inherent in EF tests, Fan et al. [27] used parent questionnaires to account for pre-viewing EF levels, whereas the other two studies included at least one task during the pre-viewing session to assess EF.

Expanding on the findings of Lillard et al. [14], subsequent studies focused exclusively on the impact of fantasy, omitting the pacing feature. Of these five studies, four collectively suggest that fantastical TV programs tend to exert a negative impact on children's EF.

Li et al. [42] compared the effects of viewing versus interacting with fantastical or non-fantastical events on inhibitory control. Across two experiments, participants either played a video game or watched a video clip showing identical events from the game. Watching fantastical programs reduced inhibitory control, while interacting with them did not; moreover, children in the game condition perceived the fantastical events as less fantastical. Notably, inhibitory control improved after both watching and interacting with non-fantastical content. Although this study used direct tasks to establish pre-viewing EF levels, the number of fantastical events was not standardized and varied across the program and game conditions. To refine understanding of the fantasy effect, Jiang et al. [26] introduced three levels of fantasy. Working memory scores did not differ significantly across conditions, but a nonlinear pattern emerged for the effects of fantasy on inhibitory control and cognitive flexibility, with children in the mid-fantasy group performing comparatively worse. The potential moderating influence of gender on the relationship between fantastical events and EF lacked conclusive evidence. Building on Lillard et al. [14], Rhodes et al. [10] investigated the impact of fantasy on 80 children aged 5 to 6 years. Using two complete episodes of cartoons from Lillard et al. [14], they found that children in the fantastical condition performed worse on inhibition, working memory, and cognitive flexibility tasks during the post-viewing session; the difference on planning tasks was not statistically significant. Notably, despite coming from an earlier study, the cartoons were not matched on pace and language, which might have influenced their effect on EF.

In a study aligned with those above, Li et al. [41] examined whether watching TV programs featuring fantastical events diminished the post-viewing EF of 4- to 6-year-olds. They exposed 90 children to Mickey Mouse Clubhouse (non-fantastical), Tom and Jerry (fantastical), or typical classroom activities (control). Children in the fantastical condition scored significantly lower on behavioral EF tasks than the other groups. Li et al. [41] also conducted supplementary experiments: eye-tracking data revealed more frequent but briefer eye fixations, and fNIRS data indicated elevated concentrations of oxygenated hemoglobin (Coxy-Hb) in the prefrontal cortex (PFC) of the fantastical group, in line with limited-cognitive-resource models. As in the preceding study, there were notable differences between the two cartoons: Mickey Mouse Clubhouse constituted one episode with a single narrative, whereas Tom and Jerry comprised three distinct episodes with separate stories (episodic narratives). Moreover, the distinction between fantastical events and comedic violence within Tom and Jerry remains unclear.

Conversely, a recent investigation by Wang and Moriguchi [43], adopting the methodology of Li et al. [42], presented divergent outcomes. After exposure to fantastical content, 3- to 6.5-year-old children's cognitive flexibility and prefrontal activation were assessed, and no alterations in performance or neural activity were observed. In summary, the four studies focused exclusively on fantasy consistently suggest a negative effect, whereas this most recent study and that of Kostyrka-Allchorne et al. [39] produced contrasting outcomes, one indicating a positive impact and the other no discernible effect. Note that Wang and Moriguchi's [43] study covers a wide age range (3 to 6.5 years) without considering potential age effects, and the brief exposure to fantasy content raises the concern that there may not have been sufficient time for an effect to emerge. In addition, although inspired by the methodology of Li et al. [42], the number of fantasy events in this study was not standardized.

As a result, the impact of exposure to fantastical TV programs on children's EF remains unclear, while an influence of pacing can be dismissed with more confidence (see Table 2). For attention, the available results do not permit conclusions about either feature.

We conducted the current systematic review to gain a better understanding of how TV programs' pace and fantasy may impact children's attention and EF by synthesizing results from multiple experimental studies. The synthesis of the reviewed studies and their outcomes has highlighted variations in how pacing and fantasy influence attention and different aspects of EF. The discussion will now delve into the potential explanations for these observed effects.

Numerous studies have investigated the influence of pacing on children's attention. Anderson et al. [40] and Kostyrka-Allchorne et al. [39] found no significant effects, Geist and Gibson [37] and Kostyrka-Allchorne et al. [38] reported a negative impact, and Cooper et al. [12] observed positive results. To explain these findings, it is crucial to examine the methodologies used to measure attention. Anderson et al. [40], Geist and Gibson [37], and Kostyrka-Allchorne et al. [38] observed children during free play; in addition, Anderson et al. [40] used the Matching Familiar Figures Task, Cooper et al. [12] the Attention Networks Task, and Kostyrka-Allchorne et al. [39] the Continuous Performance Task (CPT).

Observational studies during free play suggest that exposure to fast-paced programs leads to more frequent toy switching in children, rapid switching that corresponds to accelerated bottom-up attention [39]. However, Anderson et al.'s [40] free-play measurements did not reveal this phenomenon. Exposure to fast-paced programs may also diminish children's capacity for reflective processing [50], yet this effect did not manifest in the Matching Familiar Figures Task: Anderson et al. [40] showed that neither reflection nor impulsivity (linked to the top-down system) was affected by fast-paced programs.

In the CPT, a salient stimulus triggers an automatic orienting response, engaging the bottom-up attention system [31, 51]. This processing resembles the processing of fast-paced program stimuli and leads to quicker responses. Conversely, tasks requiring attention to be allocated according to instructions involve goal-based (top-down) processing, which demands more effort and results in slower responses [39]. In the Attention Networks Task (ANT), the orienting network involves shifting attention in response to relevant stimuli; however, the task cannot evaluate the bottom-up and top-down attention systems separately [52]. Its findings indicate that 4-year-olds who watched a slow-paced program showed better and quicker performance in the orienting network, while results for 6-year-olds were the opposite. This aligns with the reduced error rates in children exposed to a fast-paced program. Furthermore, no discernible differences emerged in the executive control network, which indexes top-down attentional processes.

While it is assumed that the mechanisms of the attention system and the allocation of resources can explain the observed results, not all findings can be accounted for within this framework. It was hypothesized that engagement of the bottom-up attentional system following exposure to a fast-paced program would tax executive resources [13] and affect tasks requiring top-down processing. However, Buschman and Miller's [30] research contradicts this notion, indicating that rapidly presented stimuli stimulate sensory processing exclusively, rather than the prefrontal cortex. On this view, exposure to a fast-paced program does not recruit prefrontal resources and is therefore unlikely to impact subsequent tasks reliant on the prefrontal cortex (top-down processing). In light of these considerations, the proposed mechanisms underlying the impact of program pacing on attention require further exploration.

Fantasy and pacing interaction

Kostyrka-Allchorne et al. [39] uncovered a positive impact resulting from the interaction between fantasy and pacing. This result implies that watching a fast-paced TV program may improve bottom-up attention, but only if the program contains no features that trigger executive processing (fantasy stimuli). This discovery underscores the significance of examining the interaction between these factors rather than analyzing them in isolation.

The exploration of fantasy's impact on attention has been limited to a single study, by Kostyrka-Allchorne et al. [39]. The assumption is that watching a fantastical program heightens orienting responses and triggers bottom-up processing that persists into subsequent tasks [14]; consequently, as with fast-paced programs, quicker responses on bottom-up attention tasks can be expected. Alternatively, comprehending fantasy features might require extensive engagement of executive processes, whose limited capacity could become overwhelmed [14], diminishing performance on tasks related to top-down attention. However, the outcomes of the Continuous Performance Task (CPT) reveal no difference between children in the high- and low-fantasy groups, underscoring the necessity for further research in this domain.

Inhibitory control

Exploration of pacing's potential influence has been limited to two studies, by Fan et al. [27] and Kostyrka-Allchorne et al. [39], neither of which identified significant effects of pacing on inhibitory control. These results contradict the assumptions made about the underlying mechanisms, yet they align with Buschman and Miller's [30] study. It can therefore be inferred that the pacing feature, possibly because it does not engage the prefrontal cortex, does not impact subsequent tasks reliant on the top-down system, such as inhibitory control.

A more extensive body of research has examined the impact of fantasy. Collectively, the studies by Fan et al. [27], Jiang et al. [26], Li et al. [42], and Rhodes et al. [10] reveal a consistent trend: exposure to fantastical TV programs reduces inhibitory control. Kostyrka-Allchorne et al. [39] was the only study that diverged from this trend. It is worth highlighting that Jiang et al. [26] indicated the potential for varying impacts of mild fantasy, suggesting a non-linear relationship between the level of fantasy and EF components such as inhibitory control.

These studies employed a variety of tasks to evaluate inhibitory control. Li et al. [42] used the go/no-go task to measure response inhibition; Jiang et al. [26] employed the flanker task; and Rhodes et al. [10], Fan et al. [27], and Kostyrka-Allchorne et al. [39] used the Day-Night task, based on the Stroop paradigm, to measure interference control. Although both response inhibition and interference control are considered aspects of inhibitory control, their measurement approaches differ [53]. Notably, the variation in tasks does not account for the differences in results: Kostyrka-Allchorne et al. [39], despite using the Day-Night task like the other two studies, reported results contrary to the overall trend.

Additionally, the processing of fantastical events depicted in cartoons appears to engage distinct neural circuits, particularly the anterior cingulate cortex (ACC), which is associated with inhibitory control [54, 55]. According to information-processing theories, fantastical animations demand increased cognitive resources in the ACC, temporarily depleting the resources available for subsequent tasks [14, 34]. However, Kostyrka-Allchorne et al. [39] suggested that this engagement leads to enhanced performance.

Working memory and cognitive flexibility

The investigation into the impact of pacing on these components remains limited to a single study, by Fan et al. [27], which established that pace does not exert a significant effect on working memory or cognitive flexibility. Consistent with previous research, this result indicates that pacing does not affect tasks related to the top-down system.

Jiang et al. [26] did not identify any significant impact of fantasy on working memory, whereas both Fan et al. [27] and Rhodes et al. [10] observed a decline in working memory after exposure to fantastical TV programs. Looking at the tasks used to measure working memory, Jiang et al. [26] used the List Sorting Working Memory task, while Rhodes et al. [10] and Fan et al. [27] used backward digit span. Regarding cognitive flexibility, Wang and Moriguchi [43] observed no fantasy effect, whereas Fan et al. [27], Jiang et al. [26], and Rhodes et al. [10] identified a negative impact. Wang and Moriguchi [43] measured flexibility with the same task as Jiang et al. [26] and Rhodes et al. [10], the standard Dimensional Change Card Sort task; only Fan et al. [27] used a different task, the Flexible Item Selection Task. These two tasks are nearly identical, and there is no discernible difference in their impact on the results. Notably, the fantasy cartoon in Wang and Moriguchi's [43] study featured only seven fantasy events, far fewer than the programs used in other studies and closer to the counts of programs considered realistic.

Higher-order EFs

Higher-order EFs have received limited attention in the context of TV content effects. Only Rose et al. [29] measured the influence of pacing on problem-solving, revealing no significant differences, in line with findings from other studies.

Research on the impact of fantasy is also scarce. Rhodes et al. [10] explored how fantasy affects planning and found no discernible effect. Our review reveals a gap: no additional studies have examined the influence of fantasy on other higher-order EFs, highlighting the need for further investigation into the broader effects of fantasy on various aspects of EF.

Broader dimensions of EF

In addition to studies focusing on specific components of EF, some have examined EF more broadly. For cool and hot EF, Lillard and Peterson [13] reported a negative impact of pacing, while Lillard et al. [14] did not observe any. In the realm of general EF, only Sanketh et al. [28] examined the effect of pacing (on motor EF), revealing a negative influence.

Examining the impact of fantasy on cool EF, two studies, Lillard et al. [14] and Li et al. [41], found a negative influence. However, in the context of hot EF, Lillard et al. [14] did not identify any discernible impact.

Taken together, drawing firm conclusions about the effects of pacing on attention, and of fantasy on attention and components of EF, is difficult owing to conflicting results or a limited number of studies. Across the studies with contradictory results, various influential factors come into play: the content of the programs, individual child characteristics, environmental influences, and the methodologies employed. Despite some attempts to control for specific factors, these variables can contribute to discrepancies between study outcomes. There is consequently a pressing need for more comprehensive investigations that carefully account for these variables, which would yield a more nuanced understanding of the relationship between TV program features and children's attention and EF. Future research should address these gaps and consider a broader range of factors to arrive at more conclusive insights.

Influential factors

TV program content

In studies of the immediate effects of TV programs, content emerges as a determinant of impact. It is therefore crucial to ensure that content-related aspects other than the independent variable are identical across experimental groups. When existing TV programs are used, however, controlling this factor becomes exceedingly challenging. Distinct programs possess varying characteristics: some are designed for educational purposes, while others primarily serve entertainment. Studies have identified this educational-versus-entertainment dichotomy as influential for EF (for details, see Fan et al. [56]). Another salient feature is the type of language a program employs. Language is intricately linked to EF, and processing unfamiliar vocabulary could impose greater cognitive demands on children, which is especially evident in fantastical TV programs [57].

Only a limited number of studies successfully matched the inherent content features of programs by producing their own videos. For instance, Cooper et al. [12], Kostyrka-Allchorne et al. [38], and Kostyrka-Allchorne et al. [39] created live-action adaptations of a storybook. However, these videos differed from the programs children typically encounter, and the method of measuring pacing differs between live-action videos and animations (for example, changes in camera angle). These discrepancies between live action and animation can contribute to disparate outcomes, and children appear to attend more to animated content than to live-action programs [58]. Additionally, the number of fantasy events in programs identified as fantastical is a noteworthy factor. In Wang and Moriguchi's [43] study, the fantasy program featured only seven events, placing it closer to realistic programs (around four events) than to high-fantasy ones, which typically feature more than 16. Moreover, Jiang et al. [26] indicated the potential for varying impacts of mild fantasy, suggesting a non-linear relationship between the level of fantasy and EF components. In that study, the TV program categorized as mid-fantasy contained 17 fantasy events, close to counts considered high fantasy elsewhere, while the cartoon characterized as high fantasy featured 31 events, a level rarely included in other research. These variations highlight the importance of considering the quantity and level of fantasy events when examining their impact on children's attention and EF.
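Using the event counts above as rough anchors, a toy classifier makes the comparability problem concrete; the thresholds are our illustration, not a published coding scheme:

```python
def fantasy_level(event_count: int) -> str:
    """Toy categorization of a program by counted fantastical events.

    Thresholds are illustrative, loosely anchored to counts reported in
    the reviewed studies (~4 realistic, >16 high fantasy); they are not
    a validated coding scheme.
    """
    if event_count <= 4:
        return "realistic"
    if event_count <= 16:
        return "low/mid fantasy"
    return "high fantasy"

for program, n in [("Wang & Moriguchi fantasy clip", 7),
                   ("Jiang et al. mid-fantasy", 17),
                   ("Jiang et al. high-fantasy", 31)]:
    print(program, "->", fantasy_level(n))
# Note: Jiang et al.'s "mid-fantasy" program (17 events) lands in the
# high-fantasy band here, which is exactly the comparability problem.
```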

Individual child characteristics

Recent reviews have prompted inquiries into children's differential susceptibility to the influence of TV programs. An essential consideration is the child's age, as previous research indicates a developmental trajectory of cognitive functions with age [59]. An exploration of age's role in the interaction between TV programs and attention or EF is therefore imperative. Although some studies, such as Fan et al.'s [27], have addressed the influence of age, younger age groups have yet to be incorporated into this line of investigation. Another dimension pertains to personality traits that can modulate a child's responsiveness to the environment, including sensory processing sensitivity (SPS) [60, 61].

Environmental characteristics

The surrounding environment and its attributes can also influence the impact of programs on attention and EF. One noteworthy environmental factor is SES, an indicator of the family's standing. In correlational studies, SES has emerged as a moderator of the relationship between TV program exposure and EF [62, 63]. An increased emphasis on assessing the role of SES within experimental designs is therefore warranted.

Study methodologies

Beyond TV content and child characteristics, the methodological approaches adopted in studies exert a noteworthy influence. Some studies omitted pre-test assessments to preserve the novelty of the EF measurement tools, which makes the analysis of post-exposure changes more intricate. Moreover, attention and executive functions cover a wide range of aspects and can be measured with multiple instruments; the tools in the existing literature serve distinct purposes and measure specific facets of these cognitive functions. This heterogeneity in tool selection can contribute to the contradictions observed across studies. Future researchers should therefore exercise greater caution in selecting assessment instruments, as a more consistent approach to measuring attention or the components of EF would enable more efficient research.

Furthermore, studies examining the impact of pace employ various methods to measure the pacing of TV programs. Some research uses the Sense Detector app [13, 14], which assesses frames rather than scenes; consequently, the numerical representation of a program's pace may differ from that obtained when a coder counts scenes [27] or when tools are used to edit and accelerate the program [29]. This variability introduces the possibility that a program deemed fast-paced in one study would be categorized as average-paced under a different measurement approach, underscoring the importance of standardizing methods for assessing program pace to enhance consistency across studies and ensure accurate interpretation of findings.
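To make the measurement issue concrete, the simplest pace metric is edits per minute computed from cut timestamps, as sketched below; whether one counts frames, scenes, or camera-angle changes alters the resulting number, which is precisely the comparability problem described above:

```python
def cuts_per_minute(cut_times_sec, duration_sec):
    """Pace as edits per minute, given timestamps (s) of detected cuts.

    Illustrative only: studies variously counted frames, scenes, or
    camera-angle changes, so the same clip can yield different "pace"
    values under different definitions.
    """
    return len(cut_times_sec) / (duration_sec / 60)

# A hypothetical 9-minute cartoon with 120 detected cuts:
print(round(cuts_per_minute(list(range(120)), 9 * 60), 1))  # 13.3 cuts/min
```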

These multifaceted factors collectively shape the intricate relationship between TV program features and children's EF. Gaps in the existing body of research underscore the necessity for more comprehensive investigations that meticulously account for these variables.

Limitations

Several limitations are noteworthy within the scope of this review. First, it does not encompass unpublished studies or student theses that may have explored the question; this decision aligns with the inclusion criteria established to uphold a standard of study quality. Additionally, no search of gray literature platforms was conducted during study identification. Another limitation arises from some studies' failure to report the pace and fantasy scores assigned to their TV programs. These scores are crucial for categorizing programs as fast- or slow-paced and for determining the level of fantasy, and their absence has made it difficult to quantify and compare pacing and fantasy across the reviewed literature. Moreover, this review considers only findings from experimental studies of the short-term impact of TV programs on children, a study design with its own limitations: experimental findings are difficult to generalize to real-world situations, and observed short-term effects may not translate into long-term ones [11, 27]. Nevertheless, these short-term changes can be significant in their own right [13, 14]. For instance, recent studies indicate increasing use of media by kindergarten and preschool teachers in the classroom [64, 65], and such content, including TV programs, can have downstream effects on classroom learning conditions [66].

Conclusions

In summary, this systematic review advances our understanding of the intricate relationship between TV pace, fantasy, and children's attention and executive functions (EFs); for a visual representation of these relationships, refer to Fig. 2. Concerning attention, too few studies were available to draw conclusions about the impact of fantasy. Within bottom-up attention, an influence of pace is discernible, although its mechanism remains elusive and varies across studies; by contrast, there is no clear evidence of a pacing effect on the top-down system. Combining insights from experimental studies reveals the intricate ways TV programs influence specific aspects of EF: inhibitory control, for instance, appears to be negatively impacted by the presence of fantastical events. Moreover, the complex interplay among content, child characteristics, environment, and methodology underscores the critical need for further comprehensive and nuanced investigation of this domain and its underlying mechanisms. As understanding deepens, future research will play a pivotal role in guiding the development of informed guidelines for media consumption and its potential effects on children's cognitive development.

Fig. 2 Conceptual map of the relationship between TV programs' pace, fantasy, and children's attention and EFs

Availability of data and materials

All relevant data are within the paper.

Abbreviations

  • EF: Executive function
  • SES: Socioeconomic status
  • CPT: Continuous Performance Task
  • ACC: Anterior cingulate cortex

Rideout V, Saphir M, Pai S, Rudd A. Zero to eight: Children's media use in America 2013. Common Sense Media. 2013. https://www.commonsensemedia.org/sites/default/files/research/zero-to-eight-2013.pdf .

Jordan AB, Woodard EH. Electronic childhood: The availability and use of household media by 2-to 3-year-olds. Zero to Three. 2001;22(2):4–9.


Ofcom, U. Children and parents: Media use and attitudes report 2018. Ofcom Website: London, UK; 2019. https://www.ofcom.org.uk/research-and-data/media-literacy-research/childrens/children-and-parents-media-use-and-attitudes-report-2018

Rideout V, Saphir M, Tsang V, Bozdech B. Zero to eight: Children's media use in America. Common Sense Media. 2011. https://www.commonsensemedia.org/sites/default/files/research/zerotoeightfinal2011.pdf .

Muppalla SK, Vuppalapati S, Pulliahgaru AR, Sreenivasulu H. Effects of excessive screen time on child development: an updated review and strategies for management. Cureus. 2023;15(6). https://doi.org/10.7759/cureus.40608 .

Shalani B, Azadfallah P, Farahani H. Correlates of screen time in children and adolescents: a systematic review study. J Modern Rehabil. 2021;15(4):187–208. https://doi.org/10.18502/jmr.v15i4.7740 .


Takeuchi H, Taki Y, Hashizume H, Asano K, Asano M, Sassa Y, Yokota S, Kotozaki Y, Nouchi R, Kawashima R. The impact of television viewing on brain structures: cross-sectional and longitudinal analyses. Cereb Cortex. 2015;25(5):1188–97. https://doi.org/10.1093/cercor/bht315 .


UNICEF. Children and digital devices: Protecting children’s online safety on digital devices. UNICEF Albania; 2020. https://www.unicef.org/albania/media/2881/file/Childrenandthedigitaldevices.pdf .

Kostyrka-Allchorne K, Cooper NR, Simpson A. Touchscreen generation: children’s current media use, parental supervision methods and attitudes towards contemporary media. Acta Paediatr. 2017;106(4):654–62. https://doi.org/10.1111/apa.13707 .

Rhodes SM, Stewart TM, Kanevski M. Immediate impact of fantastical television content on children’s executive functions. Br J Dev Psychol. 2020;38(2):268–88. https://doi.org/10.1111/bjdp.12318 .

Hinten AE. The short-and long-term effects of television pace and fantasy rates on children's executive functioning. Doctoral dissertation, University of Otago; 2021. http://hdl.handle.net/10523/12361 .

Cooper NR, Uller C, Pettifer J, Stolc FC. Conditioning attentional skills: Examining the effects of the pace of television editing on children’s attention. Acta Paediatr. 2009;98(10):1651–5. https://doi.org/10.1111/j.1651-2227.2009.01377.x .

Lillard AS, Peterson J. The immediate impact of different types of television on young children’s executive function. Pediatrics. 2011;128(4):644–9. https://doi.org/10.1542/peds.2010-1919 .


Lillard AS, Drell MB, Richey EM, Boguszewski K, Smith ED. Further examination of the immediate impact of television on children’s executive function. Dev Psychol. 2015;51(6):792. https://doi.org/10.1037/a0039097 .

Hatfield G. Attention in early scientific psychology. Vis Attention. 1998;1:3–25 https://repository.upenn.edu/handle/20.500.14332/37558 .

Posner MI, Snyder CR, Davidson BJ. Attention and the detection of signals. J Exp Psychol Gen. 1980;109(2):160. https://doi.org/10.1037/0096-3445.109.2.160 .

Katsuki F, Constantinidis C. Bottom-up and top-down attention: different processes and overlapping neural systems. Neuroscientist. 2014;20(5):509–21. https://doi.org/10.1177/1073858413514136 .

Nejati V. Principles of cognitive rehabilitation. Elsevier Science and Technology; 2022.

Chan RC, Shum D, Toulopoulou T, Chen EY. Assessment of executive functions: Review of instruments and identification of critical issues. Arch Clin Neuropsychol. 2008;23(2):201–16. https://doi.org/10.1016/j.acn.2007.08.010 .

Diamond A. Executive functions. Annu Rev Psychol. 2013;64:135–68. https://doi.org/10.1146/annurev-psych-113011-143750 .

Miller EK, Cohen JD. An integrative theory of prefrontal cortex function. Annu Rev Neurosci. 2001;24(1):167–202. https://doi.org/10.1146/annurev.neuro.24.1.167 .

Grafman J, Litvan I. Importance of deficits in executive functions. Lancet. 1999;354(9194):1921–3. https://doi.org/10.1016/S0140-6736(99)90438-5 .

Zelazo PD, Müller U. Executive function in typical and atypical development. Blackwell handbook of childhood cognitive development. 2002:445–69. doi: https://doi.org/10.1002/9780470996652 .

Kostyrka-Allchorne K, Cooper NR, Simpson A. The relationship between television exposure and children’s cognition and behaviour: A systematic review. Dev Rev. 2017;44:19–58. https://doi.org/10.1016/j.dr.2016.12.002 .

Miyake A, Friedman NP, Emerson MJ, Witzki AH, Howerter A, Wager TD. The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cogn Psychol. 2000;41(1):49–100. https://doi.org/10.1006/cogp.1999.0734 .

Jiang Y, Fu R, Xing S. The effects of fantastical television content on Chinese preschoolers’ executive function. PsyCh J. 2019;8(4):480–90. https://doi.org/10.1002/pchj.277 .

Fan L, Zhan M, Qing W, Gao T, Wang M. The short-term impact of animation on the executive function of children aged 4 to 7. Int J Environ Res Public Health. 2021;18(16):8616. https://doi.org/10.3390/ijerph18168616 .

Sanketh PP, Solomon S, Lalitha Krishnan SS, Ravichandran K. The effect of cartoon on the immediate motor executive function of 4–6 year old children. Int J Contemp Pediatr. 2017;4(5):1648. https://doi.org/10.18203/2349-3291.ijcp20173648 .

Rose SE, Lamont AM, Reyland N. Watching television in a home environment: effects on children’s attention, problem solving and comprehension. Media Psychol. 2022;25(2):208–33. https://doi.org/10.1080/15213269.2021.1901744 .

Buschman TJ, Miller EK. Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices. Science. 2007;315(5820):1860–2. https://doi.org/10.1126/science.1138071 .

Singer JL. The power and limitations of television: A cognitive-affective analysis. Hillsdale, N.J: Lawrence Erlbaum; 1980.

Christakis DA, Zimmerman FJ, DiGiuseppe DL, McCarty CA. Early television exposure and subsequent attentional problems in children. Pediatrics. 2004;113(4):708–13. https://doi.org/10.1542/peds.113.4.708 .

Lang A, Bolls P, Potter RF, Kawahara K. The effects of production pacing and arousing content on the information processing of television messages. J Broadcast Electron Media. 1999;43(4):451–75. https://doi.org/10.1080/08838159909364504 .

Lang A. The limited capacity model of mediated message processing. J Commun. 2000;50(1):46–70. https://doi.org/10.1111/j.1460-2466.2000.tb02833.x .

Carey S. The origin of concepts. J Cogn Dev. 2000;1(1):37–41. https://doi.org/10.1207/S15327647JCD0101N_3 .

Kahneman D. Attention and effort. Englewood Cliffs, NJ: Prentice-Hall. 1973;1063:218–26.

Geist EA, Gibson M. The effect of network and public television programs on four and five year olds ability to attend to educational tasks. J Instr Psychol. 2000;27(4):250.

Kostyrka-Allchorne K, Cooper NR, Gossmann AM, Barber KJ, Simpson A. Differential effects of film on preschool children’s behaviour dependent on editing pace. Acta Paediatr. 2017;106(5):831–6. https://doi.org/10.1111/apa.13770 .

Kostyrka-Allchorne K, Cooper NR, Simpson A. Disentangling the effects of video pace and story realism on children’s attention and response inhibition. Cogn Dev. 2019;49:94–104. https://doi.org/10.1016/j.cogdev.2018.12.003 .

Anderson DR, Levin SR, Lorch EP. The effects of TV program pacing on the behavior of preschool children. AV Commun Rev. 1977;25(2):159–66. https://doi.org/10.1007/BF02769779 .

Li H, Hsueh Y, Yu H, Kitzmann KM. Viewing fantastical events in animated television shows: immediate effects on Chinese preschoolers’ executive function. Front Psychol. 2020;11: 583174. https://doi.org/10.3389/fpsyg.2020.583174 .

Li H, Subrahmanyam K, Bai X, Xie X, Liu T. Viewing fantastical events versus touching fantastical events: Short-term effects on children’s inhibitory control. Child Dev. 2018;89(1):48–57. https://doi.org/10.1111/cdev.12820 .

Wang J, Moriguchi Y. Viewing and playing fantastical events does not affect children's cognitive flexibility and prefrontal activation. Heliyon. 2023;9(6). https://doi.org/10.1016/j.heliyon.2023.e16892 .

Bustamante JC, Fernández-Castilla B, Alcaraz-Iborra M. Relation between executive functions and screen time exposure in under 6 year-olds: A meta-analysis. Computers in Human Behavior. 2023:107739. https://doi.org/10.1016/j.chb.2023.107739 .

Uzundağ BA, Altundal MN, Keşşafoğlu D. Screen Media Exposure in Early Childhood and Its Relation to Children’s Self-Regulation. Human Behavior and Emerging Technologies. 2022;2022. doi: https://doi.org/10.1155/2022/4490166 .

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int J Surg. 2021;88: 105906. https://doi.org/10.1136/bmj.n7 .

The EndNote Team. EndNote. EndNote X9 ed. Philadelphia, PA: Clarivate; 2013. http://www.endnote.com .

Haddaway NR, Page MJ, Pritchard CC, McGuinness LA. PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Syst Rev. 2022;18(2): e1230. https://doi.org/10.1002/cl2.1230 .

Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377–84. https://doi.org/10.1136/jech.52.6.377 .

Wright JC, Huston AC, Ross RP, Calvert SL, Rolandelli D, Weeks LA, Raeissi P, Potts R. Pace and continuity of television programs: Effects on children’s attention and comprehension. Dev Psychol. 1984;20(4):653. https://doi.org/10.1037/0012-1649.20.4.653 .

Posner MI, Snyder CR, Solso R. Attention and cognitive control. In: Balota DA, Marsh EJ, editors. Cognitive psychology: Key readings. Psychology Press; 2004. p. 205:55–85.

Casagrande M, Marotta A, Martella D, Volpari E, Agostini F, Favieri F, Forte G, Rea M, Ferri R, Giordano V, Doricchi F. Assessing the three attentional networks in children from three to six years: A child-friendly version of the Attentional Network Test for Interaction. Behav Res Methods. 2022;54(3):1403–15. https://doi.org/10.3758/s13428-021-01668-5 .

van Velzen LS, Vriend C, de Wit SJ, van den Heuvel OA. Response inhibition and interference control in obsessive–compulsive spectrum disorders. Front Hum Neurosci. 2014;8:419. https://doi.org/10.3389/fnhum.2014.00419 .

Sarter M, Gehring WJ, Kozak R. More attention must be paid: the neurobiology of attentional effort. Brain Res Rev. 2006;51(2):145–60. https://doi.org/10.1016/j.brainresrev.2005.11.002 .

Shenhav A, Botvinick MM, Cohen JD. The expected value of control: an integrative theory of anterior cingulate cortex function. Neuron. 2013;79(2):217–40. https://doi.org/10.1016/j.neuron.2013.07.007 .

Fan L, Lu M, Qi X, Xin J. Do animations impair executive function in young children? Effects of animation types on the executive function of children aged four to seven years. Int J Environ Res Public Health. 2022;19(15):8962. https://doi.org/10.3390/ijerph19158962 .

Gooch D, Thompson P, Nash HM, Snowling MJ, Hulme C. The development of executive function and language skills in the early school years. J Child Psychol Psychiatry. 2016;57(2):180–7. https://doi.org/10.1111/jcpp.12458 .

Wright JC, Huston AC. A matter of form: Potentials of television for young viewers. Am Psychol. 1983;38(7):835. https://doi.org/10.1037/0003-066X.38.7.835 .

Garon N, Bryson SE, Smith IM. Executive function in preschoolers: a review using an integrative framework. Psychol Bull. 2008;134(1):31. https://doi.org/10.1037/0033-2909.134.1.31 .

Acevedo BP. The basics of sensory processing sensitivity. In: Acevedo BP, editor. The highly sensitive brain: Research, assessment, and treatment of sensory processing sensitivity. Academic Press; 2020. p. 1–15.

Hopkins EJ, Weisberg DS. The youngest readers’ dilemma: A review of children’s learning from fictional sources. Dev Rev. 2017;43:48–70. https://doi.org/10.1016/j.dr.2016.11.001 .

Blankson AN, O’Brien M, Leerkes EM, Calkins SD, Marcovitch S. Do hours spent viewing television at ages 3 and 4 predict vocabulary and executive functioning at age 5? Merrill-Palmer Q. 2015;61(2):264–89. https://doi.org/10.13110/merrpalmquar1982.61.2.0264 .

Linebarger DL, Barr R, Lapierre MA, Piotrowski JT. Associations between parenting, media use, cumulative risk, and children’s executive functioning. J Dev Behav Pediatr. 2014;35(6):367–77. https://doi.org/10.1097/DBP.0000000000000069 .

Gerritsen S, Morton SM, Wall CR. Physical activity and screen use policy and practices in childcare: results from a survey of early childhood education services in New Zealand. Aust N Z J Public Health. 2016;40(4):319–25. https://doi.org/10.1111/1753-6405.12529 .

Pila S, Blackwell CK, Lauricella AR, Wartella E. Technology in the lives of educators and early childhood programs: 2018 survey. Center on Media and Human Development, Northwestern University.

Dore RA, Dynia JM. Technology and media use in preschool classrooms: Prevalence, purposes, and contexts. Front Educ. 2020;5: 600305. https://doi.org/10.3389/feduc.2020.600305 .


Acknowledgements

This research received no specific grant.

Author information

Authors and affiliations

Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran

Sara Arian Namazi & Saeid Sadeghi


Contributions

SA and SS contributed equally to all parts of the manuscript and both have read and approved the final version.

Corresponding author

Correspondence to Saeid Sadeghi.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Namazi, S.A., Sadeghi, S. The immediate impacts of TV programs on preschoolers' executive functions and attention: a systematic review. BMC Psychol 12, 226 (2024). https://doi.org/10.1186/s40359-024-01738-1


Received: 07 February 2024

Accepted: 17 April 2024

Published: 24 April 2024

DOI: https://doi.org/10.1186/s40359-024-01738-1


Keywords

  • Fast-paced TV program
  • Slow-paced TV program
  • Systematic review


Cureus. 2019 Feb; 11(2).

Planning and Conducting Clinical Research: The Whole Process

Boon-How Chew

Family Medicine, Universiti Putra Malaysia, Serdang, Malaysia

The goal of this review was to present the essential steps in the entire process of clinical research. Research should begin with an educated idea arising from a clinical practice issue. A research topic rooted in a clinical problem provides the motivation for the completion of the research and relevancy for affecting medical practice changes and improvements. The research idea is further informed through a systematic literature review, clarified into a conceptual framework, and defined into an answerable research question. Engagement with clinical experts, experienced researchers, relevant stakeholders of the research topic, and even patients can enhance the research question's relevance, feasibility, and efficiency. Clinical research can be completed in two major steps: study design and study reporting. Three study designs should be planned in sequence and iterated until properly refined: theoretical design, data collection design, and statistical analysis design. The design of data collection can be further categorized into three facets: experimental or non-experimental, sampling or census, and the time features of the variables to be studied. The ultimate aims of research reporting are to present findings succinctly and in a timely manner. Concise, explicit, and complete reporting is the guiding principle in clinical study reporting.

Introduction and background

Medical and clinical research can be classified in many different ways. Most people are probably familiar with basic (laboratory) research, clinical research, healthcare (services) research, health systems (policy) research, and educational research. Clinical research in this review refers to scientific research related to clinical practices. There are many ways a clinical study's findings can become invalid or less impactful, including ignorance of previous similar studies, a paucity of similar studies, poor study design and implementation, low test agent efficacy, no predetermined statistical analysis, insufficient reporting, bias, and conflicts of interest [1-4]. Scientific, ethical, and moral decadence among researchers can result from misaligned criteria for academic promotion and remuneration, and from studies forced on amateurs and students for the sake of research output without adequate training or guidance [2, 5-6]. This article reviews the proper methods to conduct medical research from the planning stage to submission for publication (Table 1).

Table 1 note: feasibility and efficiency are considered during the refinement of the research question and adhered to during data collection.

Epidemiologic studies in clinical and medical fields focus on the effect of a determinant on an outcome [7]. Measurement errors that happen systematically give rise to biases leading to invalid study results, whereas random measurement errors cause imprecise reporting of effects. Precision can usually be increased with an increased sample size, provided biases are avoided or trivialized; otherwise, the increased precision will aggravate the biases. Because epidemiologic clinical research focuses on measurement, measurement errors are addressed throughout the research process. Obtaining the most accurate estimate of a treatment effect constitutes the whole business of epidemiologic research in clinical practice. This is greatly facilitated by clinical expertise and current scientific knowledge of the research topic. Current scientific knowledge is acquired through literature reviews or in collaboration with an expert clinician. Collaboration and consultation with an expert clinician should also include input from the target population to confirm the relevance of the research question. The novelty of a research topic is less important than the clinical applicability of the topic. Researchers need to acquire appropriate writing and reporting skills from the beginning of their careers, and these skills should improve with persistent use and regular review of published journal articles. A published clinical research study stands on solid scientific ground to inform clinical practice, given that the article has passed through proper peer review, revision, and content improvement.
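
To make the bias-versus-precision distinction concrete, here is a minimal simulation sketch (not from the article; all numbers are illustrative assumptions): random measurement error shrinks as the sample grows, but a systematic error does not.

import numpy as np

rng = np.random.default_rng(42)
true_mean = 120.0   # hypothetical true mean systolic BP in the population
bias = 5.0          # systematic measurement error (e.g., a miscalibrated cuff)
sd = 10.0           # random measurement error

for n in (25, 100, 400, 1600):
    sample = true_mean + bias + rng.normal(0.0, sd, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)  # standard error falls as n grows
    print(f"n={n:5d}  estimate={sample.mean():7.2f}  SE={se:5.2f}")
# The SE (imprecision) shrinks toward zero as n grows, but every estimate
# stays about 5 units above the true mean: increasing n does not remove bias.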

Systematic literature reviews

Systematic literature reviews of published papers will inform authors of the existing clinical evidence on a research topic. This is an important step to reduce wasted effort and to evaluate the planned study [8]. Conducting a systematic literature review is a well-known important step before embarking on a new study [9]. A rigorously performed and cautiously interpreted systematic review that includes in-process trials can inform researchers of several factors [10]. Reviewing the literature will inform the choice of recruitment methods, outcome measures, questionnaires, intervention details, and statistical strategies – useful information to increase the study's relevance, value, and power. A good review of previous studies will also provide evidence of the effects of an intervention that may or may not be worthwhile; this would suggest either that no further studies are warranted or that further study of the intervention is needed. A review can also inform whether a larger and better study is preferable to an additional small study. Reviews of previously published work may yield few studies or low-quality evidence from small or poorly designed studies on a certain intervention or observation; this may encourage or discourage further research or prompt consideration of a first clinical trial.

Conceptual framework

The result of a literature review should include identifying a working conceptual framework to clarify the nature of the research problem, questions, and designs, and even to guide the later discussion of the findings and development of possible solutions. Conceptual frameworks represent ways of thinking about a problem or how complex things work the way they do [11]. Different frameworks will emphasize different variables and outcomes and their inter-relatedness. Each framework highlights or emphasizes different aspects of a problem or research question. Often, any single conceptual framework presents only a partial view of reality [11]. Furthermore, each framework magnifies certain elements of the problem. Therefore, a thorough literature search is warranted for authors to avoid repeating the same research endeavors or mistakes. It may also help them find relevant conceptual frameworks, including those outside one's specialty or system.

Conceptual frameworks can come from theories with well-organized principles and propositions that have been confirmed by observations or experiments. Conceptual frameworks can also come from models derived from theories, observations, or sets of concepts, or even from evidence-based best practices derived from past studies [11].

Researchers convey their assumptions about the associations of the variables explicitly in the conceptual framework to connect the research to the literature. After selecting a single conceptual framework or a combination of a few frameworks, a clinical study can be completed in two fundamental steps: study design and study report. Three study designs should be planned in sequence and iterated until satisfactory: the theoretical design, data collection design, and statistical analysis design [7].

Study designs

Theoretical Design

Theoretical design is the next important step in the research process after a literature review and conceptual framework identification. While the theoretical design is a crucial step in research planning, it is often dealt with lightly because of the more alluring second step (data collection design). In the theoretical design phase, a research question is designed to address a clinical problem, which involves an informed understanding based on the literature review and effective collaboration with the right experts and clinicians. A well-developed research question will have an initial hypothesis of the possible relationship between the explanatory variable/exposure and the outcome. This will inform the nature of the study design, be it qualitative or quantitative, primary or secondary, and non-causal or causal (Figure 1).

[Figure 1]

A study is qualitative if the research question aims to explore, understand, describe, discover, or generate reasons underlying certain phenomena. Qualitative studies usually focus on a process to determine how and why things happen [12]. Quantitative studies use deductive reasoning and numerical, statistical quantification of associations between groups, often on data gathered during experiments [13]. A primary clinical study is an original study gathering a new set of patient-level data. Secondary research draws on existing available data, pooling them into a larger database to generate a wider perspective or a more powerful conclusion. Non-causal or descriptive research aims to identify the determinants or factors associated with the outcome or health condition, without regard for causal relationships. Causal research explores the determinants of an outcome while mitigating confounding variables. Table 2 shows examples of non-causal (e.g., diagnostic and prognostic) and causal (e.g., intervention and etiologic) clinical studies. Concordance between the research question, its aim, and the choice of theoretical design provides a strong foundation and the right direction for the research process and path.

A problem in clinical epidemiology can be phrased in the mathematical relationships below, where the outcome is a function of the determinants (D) conditional on the extraneous determinants (ED), more commonly known as confounding factors [7]:

For non-causal research: Outcome = f(D1, D2, …, Dn)

For causal research: Outcome = f(D | ED)

A well-formed research question is composed of at least three components: 1) an outcome or a health condition, 2) determinant(s) or factors associated with the outcome, and 3) the domain. The outcome and the determinants have to be clearly conceptualized and operationalized as measurable variables (Table 3; PICOT [14] and FINER [15]). For example (a hypothetical illustration): among adults with type 2 diabetes (domain), is sedentary time (determinant) associated with poorer glycemic control (outcome)? The study domain is the theoretical source population from which the study population will be sampled, similar to the wording on a drug package insert that reads, "use this medication (study results) in people with this disease" [7].

The interpretation of study results as they apply to wider populations is known as generalization, and generalization can either be statistical or made using scientific inferences [16]. Generalization supported by statistical inferences is seen in studies on disease prevalence where the sample population is representative of the source population. By contrast, generalizations made using scientific inferences are not bound by the representativeness of the sample in the study; rather, the generalization should be plausible from the underlying scientific mechanisms as long as the study design is valid and unbiased. Scientific inferences and generalizations are usually the aims of causal studies.

Confounding: Confounding is a situation where true effects are obscured or confused [7, 16]. Confounding variables, or confounders, affect the validity of a study's outcomes and should be prevented or mitigated in the planning stages and further managed in the analytical stages. Confounders are also known as extraneous determinants in epidemiology due to their inherent and simultaneous relationships to both the determinant and the outcome (Figure 2), which are usually one determinant to one outcome in causal clinical studies. Known confounders are also called observed confounders; these can be minimized using randomization, restriction, or a matching strategy. Residual confounding occurs in a causal relationship when identified confounders are not measured accurately. Unobserved confounding occurs when the confounding effect is present as a variable or factor not observed, or not yet defined, and thus not measured in the study. Age and gender are almost universal confounders, followed by ethnicity and socio-economic status.

[Figure 2]

Confounders have three main characteristics: they are a potential risk factor for the disease, they are associated with the determinant of interest, and they should not be an intermediate variable between the determinant and the outcome or a precursor to the determinant. For example, a sedentary lifestyle is a cause of acute coronary syndrome (ACS), and smoking could be a confounder, but cardiorespiratory unfitness could not (it is an intermediate factor between a sedentary lifestyle and ACS). For patients with ACS, not owning a pair of sports shoes is not a confounder – it is a correlate of the sedentary lifestyle. Similarly, depression would be a precursor, not a confounder.

Sample size consideration: Sample size calculation provides the required number of participants to recruit in a new study to detect true differences in the target population if they exist. Sample size calculation is based on three facets: the estimated difference between groups (the effect size), the chosen probabilities of α (Type I) and β (Type II) errors appropriate to the nature of the treatment or intervention, and the estimated variability (interval data) or proportion of the outcome (nominal data) [17-18]. Clinically important effect sizes are determined based on expert consensus or patients' perception of benefit. Value and economic considerations are increasingly included in sample size estimations. Sample size and the degree to which the sample represents the target population affect the accuracy and generalization of a study's reported effects.
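
As a worked illustration of these three facets, the sketch below applies the standard normal-approximation formula for comparing two means, n per group = 2((z(1−α/2) + z(1−β))σ/Δ)²; the effect size Δ and standard deviation σ are invented for the example, not taken from the article.

from math import ceil
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per arm for comparing two means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided Type I error
    z_beta = norm.ppf(power)            # power = 1 - Type II error
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# e.g., to detect a 5-unit difference with SD 12 at alpha=0.05, power=0.80:
print(n_per_group(delta=5, sigma=12))   # ~91 participants per group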

Pilot study: Pilot studies assess the feasibility of the proposed research procedures on a small sample. Pilot studies test the efficiency of participant recruitment with minimal practice or service interruptions. Pilot studies should not be conducted to obtain a projected effect size for a larger study population because, in a typical pilot study, the sample size is small, leading to a large standard error of that effect size. This leads to bias when projected onto a large population. In the case of underestimation, this could lead to inappropriately terminating the full-scale study. Because a small pilot study is equally prone to overestimation of the effect size, it could also lead to an underpowered and failed full-scale study [19].

The Design of Data Collection

The “perfect” study design in the theoretical phase now faces the practical and realistic challenges of feasibility. This is the step where different methods for data collection are considered, with one selected as the most appropriate based on the theoretical design along with feasibility and efficiency. The goal of this stage is to achieve the highest possible validity with the lowest risk of biases given available resources and existing constraints. 

In causal research, data on the outcome and determinants are collected with utmost accuracy via a strict protocol to maximize validity and precision. The validity of an instrument is the degree to which the instrument measures what it is intended to measure, that is, how well the results of the measurement correlate with the true state of the occurrence. Another widely used word for validity is accuracy. Internal validity refers to the degree of accuracy of a study's results within its own study sample; it is influenced by the study design. External validity refers to the applicability of a study's results to other populations; it is also known as generalizability and expresses the validity of assuming similarity and comparability between the study population and other populations. The reliability of an instrument denotes the extent of agreement between repeated measurements of an occurrence by that instrument at different times, by different investigators, or in different settings. Other terms used for reliability include reproducibility and precision. Preventing confounding by identifying confounders and including them in data collection will allow statistical adjustment in the later analyses. In descriptive research, outcomes must be confirmed with a referent standard, and the determinants should be as valid as those found in real clinical practice.

Common designs for data collection include cross-sectional, case-control, cohort, and randomized controlled trials (RCTs). Many other modern epidemiologic study designs are based on these classical designs, such as nested case-control, case-crossover, case-control without control, and stepped-wedge cluster RCTs. A cross-sectional study is typically a snapshot of the study population, and an RCT is almost always a prospective study. Case-control and cohort studies can be retrospective or prospective in data collection. The nested case-control design differs from the traditional case-control design in that it is "nested" in a well-defined cohort from which information on the cohort can be obtained. This design also satisfies the assumption that cases and controls represent random samples of the same study base. Table 4 provides examples of these data collection designs.

Additional aspects in data collection: No single design of data collection for a research question as stated in the theoretical design will be perfect in actual conduct. This is because of the myriad issues facing the investigators, such as dynamic clinical practices, constraints of time and budget, the urgency for an answer to the research question, and the ethical integrity of the proposed experiment. Feasibility and efficiency, without sacrificing validity and precision, are therefore important considerations in data collection design. Specifically, data collection design requires additional consideration of the following three aspects: experimental/non-experimental, sampling, and timing [7]:

Experimental or non-experimental: Non-experimental (i.e., "observational") research, in contrast to experimental research, involves data collection on the study participants in their natural or real-world environments. Non-experimental research usually comprises diagnostic and prognostic studies with cross-sectional data collection. The pinnacle of non-experimental research is the comparative effectiveness study, which is grouped with other non-experimental study designs such as cross-sectional, case-control, and cohort studies [20]. It is also known as a benchmarking-controlled trial because of the element of peer comparison (using comparable groups) in interpreting the outcome effects [20]. Experimental study designs are characterized by an intervention on a selected group of the study population in a controlled environment, often in the presence of a similar group from the study population who receive no intervention and act as a comparison (i.e., the control group). Thus, the widely known RCT is classified as an experimental design in data collection. An experimental study design without randomization is referred to as a quasi-experimental study. Experimental studies try to determine the efficacy of a new intervention on a specified population. Table 5 presents the advantages and disadvantages of experimental and non-experimental studies [21].

Table 5 note: this may be an issue in cross-sectional studies that require long recall of the past, such as dietary patterns, antenatal events, and life experiences during childhood.

Once an intervention yields a proven effect in an experimental study, non-experimental and quasi-experimental studies can be used to determine the intervention's effect in a wider population and within real-world settings and clinical practices. Pragmatic or comparative effectiveness designs are usually used for data collection in these situations [22].

Sampling/census: Census is a data collection on the whole source population (i.e., the study population is the source population). This is possible when the defined population is restricted to a given geographical area. A cohort study uses the census method in data collection. An ecologic study is a cohort study that collects summary measures of the study population instead of individual patient data. However, many studies sample from the source population and infer the results of the study to the source population for feasibility and efficiency because adequate sampling provides similar results to the census of the whole population. Important aspects of sampling in research planning are sample size and representation of the population. Sample size calculation accounts for the number of participants needed to be in the study to discover the actual association between the determinant and outcome. Sample size calculation relies on the primary objective or outcome of interest and is informed by the estimated possible differences or effect size from previous similar studies. Therefore, the sample size is a scientific estimation for the design of the planned study.

A sampling of participants or cases in a study can represent the study population and the larger population of patients in that disease space, but only in prevalence, diagnostic, and prognostic studies. Etiologic and interventional studies do not share this same level of representation. A cross-sectional study design is common for determining disease prevalence in the population. Cross-sectional studies can also determine the referent ranges of variables in the population and measure change over time (e.g., repeated cross-sectional studies). Besides being cost- and time-efficient, cross-sectional studies have no loss to follow-up; recall bias; learning effect on the participant; or variability over time in equipment, measurement, and technician. A cross-sectional design for an etiologic study is possible when the determinants do not change with time (e.g., gender, ethnicity, genetic traits, and blood groups). 

In etiologic research, comparability between the exposed and the non-exposed groups is more important than sample representation. Comparability between these two groups provides an accurate estimate of the effect of the exposure (risk factor) on the outcome (disease) and enables valid inference of the causal relation to the domain (the theoretical population). In a case-control study, the control group should be sampled from the same study population (study base) and have profiles similar to the cases (matching) but not have the outcome seen in the cases. Matching on important factors minimizes their confounding and increases statistical efficiency by ensuring similar numbers of cases and controls within confounder strata [23-24]. Nonetheless, perfect matching is neither necessary nor achievable in a case-control study, because a partial match can achieve most of the benefits of a perfect match regarding a more precise estimate of the odds ratio than statistical control of confounding in unmatched designs [25-26]. Moreover, perfect or full matching can lead to an underestimation of the point estimates [27-28].
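
Because a case-control analysis ultimately estimates an odds ratio, a minimal sketch may help (the 2x2 counts are invented, and scipy.stats.contingency.odds_ratio requires SciPy 1.10 or later):

from scipy.stats.contingency import odds_ratio

# Rows: exposed / unexposed; columns: cases / controls (invented counts)
table = [[30, 20],
         [15, 35]]
res = odds_ratio(table)
print(res.statistic)                                   # conditional MLE odds ratio
print(res.confidence_interval(confidence_level=0.95))  # 95% confidence interval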

Time feature: The timing of data collection for the determinant and outcome characterizes the type of study. A cross-sectional study has the axis of time zero (T = 0) for both the determinant and the outcome, which separates it from all other types of research, where the time for the outcome is T > 0. Retrospective or prospective refers to the direction of data collection: in retrospective studies, information on the determinant and outcome has already been collected or recorded; in prospective studies, this information will be collected in the future. These terms should not be used to describe the relationship between the determinant and the outcome in etiologic studies. The time of exposure to the determinant, the time of induction, and the time at risk for the outcome are important to understand. Time at risk is the period of time exposed to the determinant risk factors. Time of induction is the time from sufficient exposure to the risk or causal factors to the occurrence of the disease. The latent period is the interval during which a disease is present without manifesting itself, as in "silent" diseases (for example, cancers, hypertension, and type 2 diabetes mellitus) that are detected through screening practices. Figure 3 illustrates the time features of a variable. Variable timing is important for accurate data capture.

[Figure 3]

The Design of Statistical Analysis

Statistical analysis of epidemiologic data provides the estimate of effects after correcting for biases (e.g., confounding factors) and measures the variability in the data from random errors or chance [7, 16, 29]. An effect estimate gives the size of an association between the studied variables or the level of effectiveness of an intervention. This quantitative result allows for comparison and assessment of the usefulness and significance of the association or the intervention between studies. This significance must be interpreted with a statistical model and an appropriate study design. Random errors could arise in a study from unexplained personal choices by the participants. Random error, therefore, is when values or units of measurement between variables change in a non-concerted or non-directional manner. Conversely, when these values or units of measurement between variables change in a concerted or directional manner, we note a significant relationship, as shown by statistical significance.

Variability: Researchers almost always collect the needed data through a sampling of subjects/participants from a population instead of a census. The process of sampling, or multiple rounds of sampling in different geographical regions or over different periods, contributes to varied information due to the random inclusion of different participants and chance occurrence. This sampling variation becomes the focus of statistics when communicating the degree and intensity of variation in the sampled data and the level of inference in the population. Sampling variation can be influenced profoundly by the total number of participants and the spread of the measured variable (the standard deviation). Hence, the characteristics of the participants, the measurements, and the sample size are all important factors in planning a study.

Statistical strategy: The statistical strategy is usually determined based on the theoretical and data collection designs. Use of a prespecified statistical strategy (including the decision to dichotomize any continuous data at certain cut-points, subgroup analyses, or sensitivity analyses) is recommended in the study proposal (i.e., the protocol) to prevent data dredging and data-driven reports that predispose to bias. The nature of the study hypothesis also dictates whether directional (one-tailed) or non-directional (two-tailed) significance tests are conducted. In most studies, two-sided tests are used, except in specific instances when unidirectional hypotheses may be appropriate (e.g., in superiority or non-inferiority trials). While data exploration is discouraged, epidemiologic research is, by the nature of its objectives, statistical research. Hence, it is acceptable to report the presence of persistent associations between any variables with plausible underlying mechanisms during the exploration of the data. The statistical methods used to produce the results should be explicitly explained. Many different statistical tests are used to handle various kinds of data appropriately (e.g., interval vs. discrete) and various distributions of the data (e.g., normally distributed or skewed). For additional details on statistical explanations and the underlying concepts of statistical tests, readers are referred to the cited references [30-31].
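
To make the one-tailed/two-tailed distinction concrete, here is a small sketch on invented data (not from the article), using scipy's two-sample t-test; the alternative= keyword requires a reasonably recent SciPy.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
control = rng.normal(50, 10, 40)   # invented control-group measurements
treated = rng.normal(55, 10, 40)   # invented treated-group measurements

# Non-directional (two-tailed): is there a difference in either direction?
print(ttest_ind(treated, control, alternative="two-sided").pvalue)
# Directional (one-tailed): is the treated mean greater? Defensible only
# when the unidirectional hypothesis was prespecified in the protocol.
print(ttest_ind(treated, control, alternative="greater").pvalue)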

Steps in statistical analyses: Statistical analysis begins with checking for data entry errors. Duplicates are eliminated, and proper units should be confirmed. Extremely low, high, or suspicious values are verified against the source data; if this is not possible, the value is better classified as missing. However, if an unverified suspicious value is not obviously wrong, it should be further examined as an outlier in the analysis. Data checking and cleaning enable the analyst to establish a connection with the raw data and to anticipate possible results from further analyses. This initial step involves descriptive statistics that analyze the central tendency (i.e., mode, median, and mean) and dispersion (i.e., minimum, maximum, range, quartiles, absolute deviation, variance, and standard deviation) of the data. Graphical plots such as a scatter plot, a box-whiskers plot, a histogram, or a normal Q-Q plot are helpful at this stage to verify the normality of the data distribution. See Figure 4 for the statistical tests available for analyses of different types of data.

[Figure 4]

Once the data characteristics are ascertained, further statistical tests are selected. The analytical strategy sometimes involves transformation of the data distribution for the selected tests (e.g., log, natural log, exponential, quadratic) or for checking the robustness of the association between the determinants and their outcomes. This step is also referred to as inferential statistics, whereby the results concern hypothesis testing and generalization to the wider population that the study's sampled participants represent. The last statistical step is checking whether the statistical analyses fulfill the assumptions of the particular statistical test and model to avoid violations and misleading results. These assumptions include the normality, variance homogeneity, and residuals of the final statistical model. Other statistical values such as the Akaike information criterion, variance inflation factor/tolerance, and R2 are also considered when choosing the best-fitted models. The raw data can be transformed, or a higher level of statistical analysis can be used (e.g., generalized linear models and mixed-effect modeling). Successful statistical analysis allows the conclusions of the study to fit the data.
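
As a sketch of these assumption checks on simulated data (all numbers invented), using statsmodels and scipy: residual normality, variance inflation factors, and the AIC and R-squared used for model comparison.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy.stats import shapiro

rng = np.random.default_rng(0)
X = pd.DataFrame({"age": rng.normal(50, 10, 200),
                  "bmi": rng.normal(27, 4, 200)})   # simulated predictors
y = 0.5 * X["age"] + 0.8 * X["bmi"] + rng.normal(0, 5, 200)

Xc = sm.add_constant(X)
model = sm.OLS(y, Xc).fit()

print(shapiro(model.resid).pvalue)               # residual normality check
print([variance_inflation_factor(Xc.values, i)   # multicollinearity (VIF)
       for i in range(1, Xc.shape[1])])
print(model.aic, model.rsquared)                 # fit indices for model choice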

Bayesian and frequentist statistical frameworks: Most current clinical research reporting is based on the frequentist approach, with hypothesis testing, p values, and confidence intervals. The frequentist approach assumes the acquired data are random, attained by random sampling, through randomized experiments or influences, and with random errors. The distribution of the data (its point estimate and confidence interval) infers a true parameter in the real population. The major conceptual difference between Bayesian and frequentist statistics is that in Bayesian statistics, the parameter (i.e., the studied variable in the population) is random, while the acquired data are real (true or fixed). Therefore, the Bayesian approach provides a probability interval for the parameter. The studied parameter is random because it can vary and be affected by prior beliefs, experience, or evidence of plausibility. In the Bayesian approach, this prior belief or available knowledge is quantified into a probability distribution and incorporated with the acquired data to obtain the result (i.e., the posterior distribution). This uses the mathematics of Bayes' theorem to "invert" conditional probabilities.
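
A toy beta-binomial sketch of that prior-plus-data logic (the prior and counts are invented for illustration): with a Beta prior on a response rate and binomial data, the posterior is available in closed form.

from scipy.stats import beta

a_prior, b_prior = 2, 8   # invented prior belief: response rate around 20%
responders, n = 14, 40    # invented observed data

# Conjugacy: posterior = Beta(a + responders, b + non-responders)
posterior = beta(a_prior + responders, b_prior + n - responders)
lo, hi = posterior.interval(0.95)
print(f"posterior mean = {posterior.mean():.3f}, "
      f"95% credible interval = ({lo:.3f}, {hi:.3f})")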

The goal of research reporting is to present findings succinctly and in a timely manner, via conference proceedings or journal publication. Concise and explicit language, with all the necessary details to enable replication and judgment of the study's applicability, is the guiding principle in clinical study reporting.

Writing for Reporting

Medical writing is very much a technical chore that accommodates little artistic expression. Research reporting in medicine and the health sciences emphasizes clear and standardized reporting, eschewing the adjectives and adverbs used extensively in popular literature. Regularly reviewing published journal articles can familiarize authors with proper reporting styles and help enhance their writing skills. Authors should familiarize themselves with standard, concise, and appropriate rhetoric for the intended audience, which includes consideration for journal reviewers, editors, and referees. However, proper language can be somewhat subjective. While each publication may have varying requirements for submission, the technical requirements for formatting an article are usually available in the author or submission guidelines provided by the target journal.

Research reports for publication often contain a title, abstract, introduction, methods, results, discussion, and conclusions section, and authors may want to write each section in sequence. However, best practices indicate the abstract and title should be written last. Authors may find that when writing one section of the report, ideas come to mind that pertain to other sections, so careful note-taking is encouraged. One effective approach is to organize and write the results section first, followed by the discussion and conclusions sections. Once these are drafted, write the introduction, abstract, and the title of the report. Regardless of the sequence of writing, the author should begin with a clear and relevant research question to guide the statistical analyses, result interpretation, and discussion. The study findings can be a motivator to propel the author through the writing process, and the conclusions can help the author draft a focused introduction.

Writing for Publication

Specific recommendations on effective medical writing and table generation are available [32]. One such resource is Effective Medical Writing: The Write Way to Get Published, an updated collection of medical writing articles previously published in the Singapore Medical Journal [33]. The British Medical Journal's Statistics Notes series also elucidates common and important statistical concepts and usages in clinical studies. Writing guides are also available from individual professional societies, journals, or publishers, such as Chest (American College of Physicians) medical writing tips, the PLoS Reporting guidelines collection, Springer's Journal Author Academy, and SAGE's Research Methods [34-37]. Standardized research reporting guidelines often come in the form of checklists and flow diagrams. Table 6 presents a list of reporting guidelines. A full compilation of these guidelines is available at the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network website [38], which aims to improve the reliability and value of the medical literature by promoting transparent and accurate reporting of research studies. Publication of the trial protocol in a publicly available database is almost compulsory for publication of the full report in many journals.

Graphics and Tables

Graphics and tables should emphasize salient features of the underlying data and should coherently summarize large quantities of information. Although graphics provide a break from dense prose, authors must not forget that these illustrations should be scientifically informative, not decorative. The titles for graphics and tables should be clear and informative and provide the sample size; font weight and formatting should be minimal, used only to distinguish headings and data entries or to highlight certain results. Provide a consistent number of decimal places for numerical results, with no more than four for P values. Most journals prefer cell-delineated tables created using the table function in word processing or spreadsheet programs. Some journals require specific table formatting, such as the absence or presence of intermediate horizontal lines between cells.
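
A small sketch (with invented numbers) of the consistent-decimals convention described above, applied to a results table in pandas:

import pandas as pd

results = pd.DataFrame({
    "Variable": ["Age", "BMI"],
    "OR": [1.042318, 0.887125],    # invented odds ratios
    "P": [0.0312456, 0.00007214],  # invented P values
})
results["OR"] = results["OR"].map(lambda v: f"{v:.2f}")  # fixed decimals
results["P"] = results["P"].map(
    lambda p: "<0.0001" if p < 0.0001 else f"{p:.4f}")   # at most 4 places
print(results.to_string(index=False))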

Decisions of authorship are both sensitive and important and should be made at an early stage by the study’s stakeholders. Guidelines and journals’ instructions to authors abound with authorship qualifications. The guideline on authorship by the International Committee of Medical Journal Editors is widely known and provides a standard used by many medical and clinical journals [ 39 ]. Generally, authors are those who have made major contributions to the design, conduct, and analysis of the study, and who provided critical readings of the manuscript (if not involved directly in manuscript writing). 

Picking a target journal for submission

Once a report has been written and revised, the authors should select a relevant target journal for submission. Authors should avoid predatory journals – publications that do not aim to advance science and disseminate quality research but instead focus on commercial gain in medical and clinical publishing. Two useful resources during journal selection are Think-Check-Submit and the defunct Beall's List of Predatory Publishers and Journals (now archived and maintained by an anonymous third party) [40, 41]. Alternatively, reputable journal indexes such as Thomson Reuters Journal Citation Reports, SCOPUS, MedLine, PubMed, EMBASE, and EBSCO Publishing's Electronic Databases are good places to start the search for an appropriate target journal. Authors should review the journals' names, aims/scope, and recently published articles to determine the kind of research each journal accepts for publication. Open-access journals almost always charge article publication fees, while subscription-based journals tend to publish without author fees, relying instead on subscription or access fees to the full text of published articles.

Conclusions

Conducting valid clinical research requires consideration of the theoretical study design, the data collection design, and the statistical analysis design. Proper study design implementation and quality control during data collection ensure high-quality data analysis and can mitigate bias and confounders during statistical analysis and data interpretation. Clear, effective study reporting facilitates dissemination, appreciation, and adoption, and allows researchers to effect real-world change in clinical practices and care models. Neutral findings or an absence of findings in a clinical study are as important as positive or negative findings. Valid studies, even when they report an absence of expected results, still inform scientific communities about the nature of a certain treatment or intervention, and this contributes to future research, systematic reviews, and meta-analyses. Reporting a study adequately and comprehensively is important for the accuracy, transparency, and reproducibility of the scientific work, as well as for informing readers.

Acknowledgments

The author would like to thank Universiti Putra Malaysia and the Ministry of Higher Education, Malaysia for their support in sponsoring the Ph.D. study and living allowances for Boon-How Chew.


The materials presented in this paper are being organized by the author into a book.
