How-to conduct a systematic literature review: A quick guide for computer science research

Angela Carrera-Rivera

a Faculty of Engineering, Mondragon University

William Ochoa

Felix Larrinaga

b Design Innovation Center (DBZ), Mondragon University

Associated Data

  • No data was used for the research described in the article.

Performing a literature review is a critical first step in research to understand the state-of-the-art and identify gaps and challenges in the field. A systematic literature review is a method that sets out a series of steps to methodically organize the review. In this paper, we present a guide designed for researchers, and in particular early-stage researchers, in the computer-science field. The contributions of the article are the following:

  • Clearly defined strategies to follow for a systematic literature review in computer science research, and
  • An algorithmic method to tackle a systematic literature review.

Graphical abstract


Specifications table

Method details

A Systematic Literature Review (SLR) is a research methodology to collect, identify, and critically analyze the available research studies (e.g., articles, conference proceedings, books, dissertations) through a systematic procedure [12]. An SLR updates the reader with the current literature on a subject [6]. The goal is to review the critical points of current knowledge on a topic, framed by research questions, in order to suggest areas for further examination [5]. Defining an “Initial Idea” or interest in a subject to be studied is the first step before starting the SLR. An early search of the relevant literature can help determine whether the topic is too broad to cover adequately in the time frame and whether it is necessary to narrow the focus. Reading some articles can assist in setting the direction for a formal review, and formulating a potential research question (e.g., how is semantics involved in Industry 4.0?) can further facilitate this process. Once the focus has been established, an SLR can be undertaken to find more specific studies related to the variables in this question. Although there are multiple approaches for performing an SLR ([5], [26], [27]), this work aims to provide a step-by-step and practical guide while citing useful examples for computer-science research. The methodology presented in this paper comprises two main phases: “Planning”, described in Section 2, and “Conducting”, described in Section 3, following the depiction of the graphical abstract.

Defining the protocol is the first step of an SLR since it describes the procedures involved in the review and acts as a log of the activities to be performed. Obtaining opinions from peers while developing the protocol is encouraged to ensure the review's consistency and validity, and helps identify when modifications are necessary [20]. One final goal of the protocol is to ensure the replicability of the review.

Define PICOC and synonyms

The PICOC (Population, Intervention, Comparison, Outcome, and Context) criteria break down the SLR's objectives into searchable keywords and help formulate research questions [27]. PICOC is widely used in the medical and social sciences to encourage researchers to consider the components of the research questions [14]. Kitchenham & Charters [6] compiled the list of PICOC elements and their corresponding terms in computer science, as presented in Table 1, which also lists keywords derived from each element. From that point on, it is essential to think of synonyms or “alike” terms that can later be used to build queries in the selected digital libraries. For instance, the keyword “context awareness” can also be linked to “context-aware”.

Planning Step 1 “Defining PICOC keywords and synonyms”.
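As a small illustration of this planning step, the PICOC keywords and their synonyms can be kept in a simple machine-readable structure so that they can be reused later when building search strings. The following Python sketch is only an example; the elements and terms shown are illustrative placeholders, not a prescribed list.

# Planning Step 1 sketch: PICOC elements mapped to keywords and synonyms.
# The terms below are illustrative placeholders.
picoc_synonyms = {
    "Population": ["smart manufacturing", "digital manufacturing", "smart factory"],
    "Intervention": ["context awareness", "context-aware"],
    "Comparison": [],  # often left empty in computer-science SLRs
    "Outcome": ["framework", "tool"],
    "Context": ["industry 4.0"],
}

# Each group of synonyms will later be OR-ed together inside one query block.
for element, terms in picoc_synonyms.items():
    print(f"{element}: {', '.join(terms) if terms else '(not used)'}")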

Formulate research questions

Clearly defined research question(s) are the key elements which set the focus for study identification and data extraction [21]. These questions are formulated based on the PICOC criteria, as presented in the example in Table 2 (PICOC keywords are underlined).

Research questions examples.

Select digital library sources

The validity of a study will depend on the proper selection of a database since it must adequately cover the area under investigation [19]. The Web of Science (WoS) is an international and multidisciplinary tool for accessing literature in science, technology, biomedicine, and other disciplines. Scopus is a database that today indexes 40,562 peer-reviewed journals, compared to 24,831 for WoS. Thus, Scopus is currently the largest existing multidisciplinary database. However, it may also be necessary to include sources relevant to computer science, such as EI Compendex, IEEE Xplore, and ACM. Table 3 compares the area of expertise of a selection of databases.

Planning Step 3 “Select digital libraries”. Description of digital libraries in computer science and software engineering.

Define inclusion and exclusion criteria

Authors should define the inclusion and exclusion criteria before conducting the review to prevent bias, although these can be adjusted later if necessary. The selection of primary studies will depend on these criteria. Articles are included or excluded in this first selection based on the abstract and primary bibliographic data. When unsure, the article is skimmed to further assess its relevance to the review. Table 4 sets out some criteria types with descriptions and examples.

Planning Step 4 “Define inclusion and exclusion criteria”. Examples of criteria type.

Define the Quality Assessment (QA) checklist

Assessing the quality of an article requires an artifact that describes how to perform a detailed assessment. A typical quality assessment is a checklist that contains multiple factors to evaluate. A numerical scale is used to assess the criteria and quantify the QA [22]. Zhou et al. [25] presented a detailed description of assessment criteria in software engineering, classified into four main aspects of study quality: Reporting, Rigor, Credibility, and Relevance. Each of these criteria can be evaluated using, for instance, a Likert-type scale [17], as shown in Table 5. It is essential to use the same scale for all criteria established in the quality assessment.

Planning Step 5 “Define QA assessment checklist”. Examples of QA scales and questions.

Define the “Data Extraction” form

The data extraction form represents the information necessary to answer the research questions established for the review. Synthesizing the articles is a crucial step when conducting research. Ramesh et al. [15] presented a classification scheme for computer science research, based on topics, research methods, and levels of analysis that can be used to categorize the articles selected. Classification methods and fields to consider when conducting a review are presented in Table 6.

Planning Step 6 “Define data extraction form”. Examples of fields.

The data extraction must be relevant to the research questions, and the relationship to each of the questions should be included in the form. Kitchenham & Charters [6] present further pertinent data that can be captured, such as conclusions, recommendations, strengths, and weaknesses. Although the data extraction form can be updated if more information is needed, this should be treated with caution since it can be time-consuming. It can therefore be helpful to first gain a general background in the research topic in order to define better data extraction criteria.
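To make the planned fields concrete, a data extraction record can be drafted as a simple structure before committing it to a spreadsheet or an SLR tool. The field names in this Python sketch are assumptions inspired by the classification fields in Table 6 and should be adapted to the review's research questions.

# Sketch of one data extraction record; field names are illustrative only.
extraction_record = {
    "study_id": "S01",
    "title": "",
    "year": None,
    "venue": "",
    "research_method": "",    # e.g., case study, experiment, survey
    "topic": "",              # topic classification (cf. Table 6)
    "level_of_analysis": "",  # e.g., organizational, system, user
    "related_rqs": [],        # research questions the study helps answer
    "notes": "",
}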

After defining the protocol, conducting the review requires following each of the steps previously described. Using tools can help simplify this task. Standard tools such as Excel or Google Sheets allow multiple researchers to work collaboratively. Another online tool specifically designed for performing SLRs is Parsif.al 1 . This tool allows researchers, especially in the context of software engineering, to define goals and objectives, import articles using BibTeX files, eliminate duplicates, define selection criteria, and generate reports.

Build digital library search strings

Search strings are built from the PICOC elements and their synonyms in order to execute the search in each database. In a search string, the synonyms are separated with the Boolean operator OR, while the PICOC elements are grouped in parentheses and joined with the Boolean operator AND. An example is presented next:

(“Smart Manufacturing” OR “Digital Manufacturing” OR “Smart Factory”) AND (“Business Process Management” OR “BPEL” OR “BPM” OR “BPMN”) AND (“Semantic Web” OR “Ontology” OR “Semantic” OR “Semantic Web Service”) AND (“Framework” OR “Extension” OR “Plugin” OR “Tool”)
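To avoid typing errors when composing such strings by hand, the query can also be assembled programmatically from the synonym groups defined during planning. The following sketch assumes a list of synonym groups (one per PICOC element in use); the resulting string must still be adapted to each digital library's own query syntax.

# Build a Boolean search string: OR within a synonym group, AND between groups.
def build_search_string(synonym_groups):
    blocks = []
    for terms in synonym_groups:
        if terms:  # skip empty PICOC elements (e.g., Comparison)
            blocks.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(blocks)

groups = [
    ["Smart Manufacturing", "Digital Manufacturing", "Smart Factory"],
    ["Business Process Management", "BPEL", "BPM", "BPMN"],
    ["Semantic Web", "Ontology", "Semantic", "Semantic Web Service"],
    ["Framework", "Extension", "Plugin", "Tool"],
]
print(build_search_string(groups))  # reproduces the example string above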

Gather studies

Databases that feature advanced search enable researchers to run queries on titles, abstracts, and keywords, as well as to filter by year or research area. Fig. 1 presents an example of an advanced search in Scopus using titles, abstracts, and keywords (TITLE-ABS-KEY). Most databases allow the use of logical operators (i.e., AND, OR). In the example, the search is for “BIG DATA” and “USER EXPERIENCE” or its synonym “UX”.

Fig 1

Example of Advanced search on Scopus.

In general, the bibliometric data of articles can be exported from the databases as comma-separated values (CSV) or BibTeX files, which is helpful for data extraction and for quantitative and qualitative analysis. In addition, researchers should take advantage of reference-management software such as Zotero, Mendeley, EndNote, or JabRef, which can easily import bibliographic information in these formats.
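As an illustration, exported records can be loaded for further processing with standard Python tooling. The sketch below assumes a CSV export containing "Title" and "Year" columns; actual column names vary between databases, so they should be checked against the exported file.

import pandas as pd  # third-party library for tabular data

# Load a hypothetical CSV export (e.g., from Scopus); adjust the path as needed.
records = pd.read_csv("scopus_export.csv")

# Quick sanity checks on the exported bibliographic data.
print(records.shape)                      # number of records and columns
print(records[["Title", "Year"]].head())  # preview of the first few records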

Study Selection and Refinement

The first step in this stage is to identify any duplicates that appear across the searches in the selected databases. Automated procedures, such as Excel formulas or scripts written in a programming language (e.g., Python), can be convenient here.
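A minimal deduplication sketch, assuming the database exports have already been merged into a pandas DataFrame with a "Title" column (file names and column names are assumptions that depend on the actual exports):

import pandas as pd

# Merge the exports from the different databases (paths are illustrative).
merged = pd.concat(
    [pd.read_csv("scopus_export.csv"), pd.read_csv("wos_export.csv")],
    ignore_index=True,
)

# Deduplicate on a normalized title; a DOI column, when available, is an
# even more reliable key for this step.
merged["title_key"] = merged["Title"].str.lower().str.strip()
deduplicated = merged.drop_duplicates(subset="title_key", keep="first")

print(f"{len(merged) - len(deduplicated)} duplicate records removed")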

In the second step, articles are included or excluded according to the selection criteria, mainly by reading titles and abstracts. Finally, the quality is assessed using the predefined scale. Fig. 2 shows an example of an article QA evaluation in Parsif.al using a simple scale. In this scenario, the scoring procedure is the following: YES = 1, PARTIALLY = 0.5, and NO or UNKNOWN = 0. A cut-off score should be defined to filter out those articles that do not pass the QA. The QA will require a light review of the full text of the article.

Fig 2

Performing quality assessment (QA) in Parsif.al.
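Beyond Parsif.al, the same scoring and cut-off logic can be applied in a few lines of code. In the sketch below, the scale mirrors the example above (YES = 1, PARTIALLY = 0.5, NO or UNKNOWN = 0), while the five question labels and the 60% cut-off are illustrative assumptions.

SCALE = {"YES": 1.0, "PARTIALLY": 0.5, "NO": 0.0, "UNKNOWN": 0.0}

# Hypothetical answers to five QA checklist questions for one article.
answers = {"QA1": "YES", "QA2": "PARTIALLY", "QA3": "YES", "QA4": "NO", "QA5": "YES"}

score = sum(SCALE[a] for a in answers.values())
max_score = float(len(answers))
cutoff = 0.6 * max_score  # assumed cut-off: 60% of the maximum score

status = "included" if score >= cutoff else "excluded"
print(f"QA score: {score}/{max_score} -> {status}")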

Data extraction

Those articles that pass the study selection are then thoroughly and critically read. Next, the researcher completes the information required using the “data extraction” form, as illustrated in Fig. 3, in this case using the Parsif.al tool.

Fig 3

Example of data extraction form using Parsif.al.

The information required (study characteristics and findings) from each included study must be acquired and documented through careful reading. Data extraction is valuable, especially if the data require manipulation, assumptions, or inferences. Thus, information can be synthesized from the extracted data for qualitative or quantitative analysis [16]. This documentation supports clarity, precise reporting, and the ability to scrutinize and replicate the examination.

Analysis and Report

The analysis phase examines the synthesized data and extracts meaningful information from the selected articles [10]. There are two main goals in this phase.

The first goal is to analyze the literature in terms of leading authors, journals, countries, and organizations. Furthermore, it helps identify correlations among topics. Even though it is not mandatory, this activity can be constructive for researchers to position their work, identify trends, and find collaboration opportunities. Next, data from the selected articles can be analyzed using bibliometric analysis (BA). BA summarizes large amounts of bibliometric data to present the state of the intellectual structure and emerging trends in a topic or field of research [4]. Table 7 sets out some of the most common bibliometric analysis representations.

Techniques for bibliometric analysis and examples.

Several tools can perform this type of analysis, such as Excel and Google Sheets for statistical graphs, or programming languages such as Python, which offers multiple data visualization libraries (e.g., Matplotlib, Seaborn). Cluster maps based on bibliographic data (e.g., keywords, authors) can be developed in VOSviewer, which makes it easy to identify clusters of related items [18]. In Fig. 4, node size represents the number of papers related to a keyword, and lines represent the links among keyword terms.

Fig 4

Keyword co-occurrence analysis using clustering in VOSviewer [1].
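Cluster maps such as the one in Fig. 4 are built from keyword co-occurrence counts. The sketch below shows how such counts could be derived from the extracted author keywords before importing them into a tool like VOSviewer; the keyword lists are illustrative.

from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists, one list per selected article.
papers_keywords = [
    ["big data", "user experience", "machine learning"],
    ["user experience", "ux", "usability"],
    ["big data", "machine learning"],
]

pair_counts = Counter()
for keywords in papers_keywords:
    # Count each unordered keyword pair once per paper.
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common(5):
    print(f"{a} <-> {b}: {n}")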

The second and most important goal is to answer the formulated research questions, which should include quantitative and qualitative analysis. The quantitative analysis can make use of data categorized, labelled, or coded in the extraction form (see Section 1.6). These data can be transformed into numerical values to perform statistical analysis. One of the most widely employed methods is frequency analysis, which shows the recurrence of an event and can also represent the percentage distribution of the population (e.g., percentage by technology type, frequency of use of different frameworks, etc.). Qualitative analysis includes the narration of the results, the discussion indicating the way forward in future research work, and inferring a conclusion.
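As a small example of such a frequency analysis, a categorized field from the data extraction form can be counted and expressed as percentages; the category values below are illustrative.

from collections import Counter

# Hypothetical "technology type" labels coded in the data extraction form.
technology_type = ["ontology", "machine learning", "ontology", "BPM", "ontology", "BPM"]

counts = Counter(technology_type)
total = sum(counts.values())
for tech, n in counts.most_common():
    print(f"{tech}: {n} articles ({100 * n / total:.1f}%)")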

Finally, the literature review report should state the protocol to ensure that other researchers can replicate the process and understand how the analysis was performed. In the protocol, it is essential to present the inclusion and exclusion criteria, the quality assessment, and the rationale behind these aspects.

The presentation and reporting of results will depend on the structure of the review chosen by the researchers conducting the SLR; there is no single correct answer. This structure should tie the studies together into key themes, characteristics, or subgroups [28].

An SLR can be an extensive and demanding task; however, the results are beneficial in providing a comprehensive overview of the available evidence on a given topic. For this reason, researchers should keep in mind that the entire SLR process is tailored to answering the research question(s). This article has detailed a practical guide with the essential steps for conducting an SLR in the context of computer science and software engineering, while citing multiple helpful examples and tools. It is envisaged that this method will assist researchers, and particularly early-stage researchers, in following an algorithmic approach to fulfill this task. Finally, a quick checklist is presented in Appendix A as a companion to this article.

CRediT author statement

Angela Carrera-Rivera: Conceptualization, Methodology, Writing-Original. William Ochoa-Agurto: Methodology, Writing-Original. Felix Larrinaga: Reviewing and Supervision. Ganix Lasa: Reviewing and Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

Funding: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant No. 814078.

Carrera-Rivera, A., Larrinaga, F., & Lasa, G. (2022). Context-awareness for the design of Smart-product service systems: Literature review. Computers in Industry, 142, 103730.

1 https://parsif.al/



The PRISMA 2020 statement: an updated guideline for reporting systematic reviews

  • Matthew J Page , senior research fellow 1 ,
  • Joanne E McKenzie , associate professor 1 ,
  • Patrick M Bossuyt , professor 2 ,
  • Isabelle Boutron , professor 3 ,
  • Tammy C Hoffmann , professor 4 ,
  • Cynthia D Mulrow , professor 5 ,
  • Larissa Shamseer , doctoral student 6 ,
  • Jennifer M Tetzlaff , research product specialist 7 ,
  • Elie A Akl , professor 8 ,
  • Sue E Brennan , senior research fellow 1 ,
  • Roger Chou , professor 9 ,
  • Julie Glanville , associate director 10 ,
  • Jeremy M Grimshaw , professor 11 ,
  • Asbjørn Hróbjartsson , professor 12 ,
  • Manoj M Lalu , associate scientist and assistant professor 13 ,
  • Tianjing Li , associate professor 14 ,
  • Elizabeth W Loder , professor 15 ,
  • Evan Mayo-Wilson , associate professor 16 ,
  • Steve McDonald , senior research fellow 1 ,
  • Luke A McGuinness , research associate 17 ,
  • Lesley A Stewart , professor and director 18 ,
  • James Thomas , professor 19 ,
  • Andrea C Tricco , scientist and associate professor 20 ,
  • Vivian A Welch , associate professor 21 ,
  • Penny Whiting , associate professor 17 ,
  • David Moher , director and professor 22
  • 1 School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
  • 2 Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, Netherlands
  • 3 Université de Paris, Centre of Epidemiology and Statistics (CRESS), Inserm, F 75004 Paris, France
  • 4 Institute for Evidence-Based Healthcare, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Australia
  • 5 University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA; Annals of Internal Medicine
  • 6 Knowledge Translation Program, Li Ka Shing Knowledge Institute, Toronto, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 7 Evidence Partners, Ottawa, Canada
  • 8 Clinical Research Institute, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
  • 9 Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA
  • 10 York Health Economics Consortium (YHEC Ltd), University of York, York, UK
  • 11 Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; Department of Medicine, University of Ottawa, Ottawa, Canada
  • 12 Centre for Evidence-Based Medicine Odense (CEBMO) and Cochrane Denmark, Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Open Patient data Exploratory Network (OPEN), Odense University Hospital, Odense, Denmark
  • 13 Department of Anesthesiology and Pain Medicine, The Ottawa Hospital, Ottawa, Canada; Clinical Epidemiology Program, Blueprint Translational Research Group, Ottawa Hospital Research Institute, Ottawa, Canada; Regenerative Medicine Program, Ottawa Hospital Research Institute, Ottawa, Canada
  • 14 Department of Ophthalmology, School of Medicine, University of Colorado Denver, Denver, Colorado, United States; Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
  • 15 Division of Headache, Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA; Head of Research, The BMJ , London, UK
  • 16 Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, Bloomington, Indiana, USA
  • 17 Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
  • 18 Centre for Reviews and Dissemination, University of York, York, UK
  • 19 EPPI-Centre, UCL Social Research Institute, University College London, London, UK
  • 20 Li Ka Shing Knowledge Institute of St. Michael's Hospital, Unity Health Toronto, Toronto, Canada; Epidemiology Division of the Dalla Lana School of Public Health and the Institute of Health Management, Policy, and Evaluation, University of Toronto, Toronto, Canada; Queen's Collaboration for Health Care Quality Joanna Briggs Institute Centre of Excellence, Queen's University, Kingston, Canada
  • 21 Methods Centre, Bruyère Research Institute, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • 22 Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
  • Correspondence to: M J Page matthew.page{at}monash.edu
  • Accepted 4 January 2021

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

Systematic reviews serve many critical roles. They can provide syntheses of the state of knowledge in a field, from which future research priorities can be identified; they can address questions that otherwise could not be answered by individual studies; they can identify problems in primary research that should be rectified in future studies; and they can generate or evaluate theories about how or why phenomena occur. Systematic reviews therefore generate various types of knowledge for different users of reviews (such as patients, healthcare providers, researchers, and policy makers). 1 2 To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). Up-to-date reporting guidance facilitates authors achieving this. 3

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement published in 2009 (hereafter referred to as PRISMA 2009) 4 5 6 7 8 9 10 is a reporting guideline designed to address poor reporting of systematic reviews. 11 The PRISMA 2009 statement comprised a checklist of 27 items recommended for reporting in systematic reviews and an “explanation and elaboration” paper 12 13 14 15 16 providing additional reporting guidance for each item, along with exemplars of reporting. The recommendations have been widely endorsed and adopted, as evidenced by its co-publication in multiple journals, citation in over 60 000 reports (Scopus, August 2020), endorsement from almost 200 journals and systematic review organisations, and adoption in various disciplines. Evidence from observational studies suggests that use of the PRISMA 2009 statement is associated with more complete reporting of systematic reviews, 17 18 19 20 although more could be done to improve adherence to the guideline. 21

Many innovations in the conduct of systematic reviews have occurred since publication of the PRISMA 2009 statement. For example, technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence, 22 23 24 methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate, 25 26 27 and new methods have been developed to assess the risk of bias in results of included studies. 28 29 Evidence on sources of bias in systematic reviews has accrued, culminating in the development of new tools to appraise the conduct of systematic reviews. 30 31 Terminology used to describe particular review processes has also evolved, as in the shift from assessing “quality” to assessing “certainty” in the body of evidence. 32 In addition, the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols, 33 34 disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. To capture these advances in the reporting of systematic reviews necessitated an update to the PRISMA 2009 statement.

Summary points

To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did, and what they found

The PRISMA 2020 statement provides updated reporting guidance for systematic reviews that reflects advances in methods to identify, select, appraise, and synthesise studies

The PRISMA 2020 statement consists of a 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and revised flow diagrams for original and updated reviews

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders

Development of PRISMA 2020

A complete description of the methods used to develop PRISMA 2020 is available elsewhere. 35 We identified PRISMA 2009 items that were often reported incompletely by examining the results of studies investigating the transparency of reporting of published reviews. 17 21 36 37 We identified possible modifications to the PRISMA 2009 statement by reviewing 60 documents providing reporting guidance for systematic reviews (including reporting guidelines, handbooks, tools, and meta-research studies). 38 These reviews of the literature were used to inform the content of a survey with suggested possible modifications to the 27 items in PRISMA 2009 and possible additional items. Respondents were asked whether they believed we should keep each PRISMA 2009 item as is, modify it, or remove it, and whether we should add each additional item. Systematic review methodologists and journal editors were invited to complete the online survey (110 of 220 invited responded). We discussed proposed content and wording of the PRISMA 2020 statement, as informed by the review and survey results, at a 21-member, two-day, in-person meeting in September 2018 in Edinburgh, Scotland. Throughout 2019 and 2020, we circulated an initial draft and five revisions of the checklist and explanation and elaboration paper to co-authors for feedback. In April 2020, we invited 22 systematic reviewers who had expressed interest in providing feedback on the PRISMA 2020 checklist to share their views (via an online survey) on the layout and terminology used in a preliminary version of the checklist. Feedback was received from 15 individuals and considered by the first author, and any revisions deemed necessary were incorporated before the final version was approved and endorsed by all co-authors.

The PRISMA 2020 statement

Scope of the guideline.

The PRISMA 2020 statement has been designed primarily for systematic reviews of studies that evaluate the effects of health interventions, irrespective of the design of the included studies. However, the checklist items are applicable to reports of systematic reviews evaluating other interventions (such as social or educational interventions), and many items are applicable to systematic reviews with objectives other than evaluating interventions (such as evaluating aetiology, prevalence, or prognosis). PRISMA 2020 is intended for use in systematic reviews that include synthesis (such as pairwise meta-analysis or other statistical synthesis methods) or do not include synthesis (for example, because only one eligible study is identified). The PRISMA 2020 items are relevant for mixed-methods systematic reviews (which include quantitative and qualitative studies), but reporting guidelines addressing the presentation and synthesis of qualitative data should also be consulted. 39 40 PRISMA 2020 can be used for original systematic reviews, updated systematic reviews, or continually updated (“living”) systematic reviews. However, for updated and living systematic reviews, there may be some additional considerations that need to be addressed. Where there is relevant content from other reporting guidelines, we reference these guidelines within the items in the explanation and elaboration paper 41 (such as PRISMA-Search 42 in items 6 and 7, Synthesis without meta-analysis (SWiM) reporting guideline 27 in item 13d). Box 1 includes a glossary of terms used throughout the PRISMA 2020 statement.

Glossary of terms

Systematic review —A review that uses explicit, systematic methods to collate and synthesise findings of studies that address a clearly formulated question 43

Statistical synthesis —The combination of quantitative results of two or more studies. This encompasses meta-analysis of effect estimates (described below) and other methods, such as combining P values, calculating the range and distribution of observed effects, and vote counting based on the direction of effect (see McKenzie and Brennan 25 for a description of each method)

Meta-analysis of effect estimates —A statistical technique used to synthesise results when study effect estimates and their variances are available, yielding a quantitative summary of results 25

Outcome —An event or measurement collected for participants in a study (such as quality of life, mortality)

Result —The combination of a point estimate (such as a mean difference, risk ratio, or proportion) and a measure of its precision (such as a confidence/credible interval) for a particular outcome

Report —A document (paper or electronic) supplying information about a particular study. It could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report, or any other document providing relevant information

Record —The title or abstract (or both) of a report indexed in a database or website (such as a title or abstract for an article indexed in Medline). Records that refer to the same report (such as the same journal article) are “duplicates”; however, records that refer to reports that are merely similar (such as a similar abstract submitted to two different conferences) should be considered unique.

Study —An investigation, such as a clinical trial, that includes a defined group of participants and one or more interventions and outcomes. A “study” might have multiple reports. For example, reports could include the protocol, statistical analysis plan, baseline characteristics, results for the primary outcome, results for harms, results for secondary outcomes, and results for additional mediator and moderator analyses

PRISMA 2020 is not intended to guide systematic review conduct, for which comprehensive resources are available. 43 44 45 46 However, familiarity with PRISMA 2020 is useful when planning and conducting systematic reviews to ensure that all recommended information is captured. PRISMA 2020 should not be used to assess the conduct or methodological quality of systematic reviews; other tools exist for this purpose. 30 31 Furthermore, PRISMA 2020 is not intended to inform the reporting of systematic review protocols, for which a separate statement is available (PRISMA for Protocols (PRISMA-P) 2015 statement 47 48 ). Finally, extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses, 49 meta-analyses of individual participant data, 50 systematic reviews of harms, 51 systematic reviews of diagnostic test accuracy studies, 52 and scoping reviews 53 ; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.

How to use PRISMA 2020

The PRISMA 2020 statement (including the checklists, explanation and elaboration, and flow diagram) replaces the PRISMA 2009 statement, which should no longer be used. Box 2 summarises noteworthy changes from the PRISMA 2009 statement. The PRISMA 2020 checklist includes seven sections with 27 items, some of which include sub-items (table 1). A checklist for journal and conference abstracts for systematic reviews is included in PRISMA 2020. This abstract checklist is an update of the 2013 PRISMA for Abstracts statement, 54 reflecting new and modified content in PRISMA 2020 (table 2). A template PRISMA flow diagram is provided, which can be modified depending on whether the systematic review is original or updated (fig 1).

Noteworthy changes to the PRISMA 2009 statement

Inclusion of the abstract reporting checklist within PRISMA 2020 (see item #2 and table 2 ).

Movement of the ‘Protocol and registration’ item from the start of the Methods section of the checklist to a new Other section, with addition of a sub-item recommending authors describe amendments to information provided at registration or in the protocol (see item #24a-24c).

Modification of the ‘Search’ item to recommend authors present full search strategies for all databases, registers and websites searched, not just at least one database (see item #7).

Modification of the ‘Study selection’ item in the Methods section to emphasise the reporting of how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process (see item #8).

Addition of a sub-item to the ‘Data items’ item recommending authors report how outcomes were defined, which results were sought, and methods for selecting a subset of results from included studies (see item #10a).

Splitting of the ‘Synthesis of results’ item in the Methods section into six sub-items recommending authors describe: the processes used to decide which studies were eligible for each synthesis; any methods required to prepare the data for synthesis; any methods used to tabulate or visually display results of individual studies and syntheses; any methods used to synthesise results; any methods used to explore possible causes of heterogeneity among study results (such as subgroup analysis, meta-regression); and any sensitivity analyses used to assess robustness of the synthesised results (see item #13a-13f).

Addition of a sub-item to the ‘Study selection’ item in the Results section recommending authors cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded (see item #16b).

Splitting of the ‘Synthesis of results’ item in the Results section into four sub-items recommending authors: briefly summarise the characteristics and risk of bias among studies contributing to the synthesis; present results of all statistical syntheses conducted; present results of any investigations of possible causes of heterogeneity among study results; and present results of any sensitivity analyses (see item #20a-20d).

Addition of new items recommending authors report methods for and results of an assessment of certainty (or confidence) in the body of evidence for an outcome (see items #15 and #22).

Addition of a new item recommending authors declare any competing interests (see item #26).

Addition of a new item recommending authors indicate whether data, analytic code and other materials used in the review are publicly available and if so, where they can be found (see item #27).

PRISMA 2020 item checklist


PRISMA 2020 for Abstracts checklist*

Fig 1

PRISMA 2020 flow diagram template for systematic reviews. The new design is adapted from flow diagrams proposed by Boers, 55 Mayo-Wilson et al. 56 and Stovold et al. 57 The boxes in grey should only be completed if applicable; otherwise they should be removed from the flow diagram. Note that a “report” could be a journal article, preprint, conference abstract, study register entry, clinical study report, dissertation, unpublished manuscript, government report or any other document providing relevant information.


We recommend authors refer to PRISMA 2020 early in the writing process, because prospective consideration of the items may help to ensure that all the items are addressed. To help keep track of which items have been reported, the PRISMA statement website ( http://www.prisma-statement.org/ ) includes fillable templates of the checklists to download and complete (also available in the data supplement on bmj.com). We have also created a web application that allows users to complete the checklist via a user-friendly interface 58 (available at https://prisma.shinyapps.io/checklist/ and adapted from the Transparency Checklist app 59 ). The completed checklist can be exported to Word or PDF. Editable templates of the flow diagram can also be downloaded from the PRISMA statement website.

We have prepared an updated explanation and elaboration paper, in which we explain why reporting of each item is recommended and present bullet points that detail the reporting recommendations (which we refer to as elements). 41 The bullet-point structure is new to PRISMA 2020 and has been adopted to facilitate implementation of the guidance. 60 61 An expanded checklist, which comprises an abridged version of the elements presented in the explanation and elaboration paper, with references and some examples removed, is available in the data supplement on bmj.com. Consulting the explanation and elaboration paper is recommended if further clarity or information is required.

Journals and publishers might impose word and section limits, and limits on the number of tables and figures allowed in the main report. In such cases, if the relevant information for some items already appears in a publicly accessible review protocol, referring to the protocol may suffice. Alternatively, placing detailed descriptions of the methods used or additional results (such as for less critical outcomes) in supplementary files is recommended. Ideally, supplementary files should be deposited to a general-purpose or institutional open-access repository that provides free and permanent access to the material (such as Open Science Framework, Dryad, figshare). A reference or link to the additional information should be included in the main report. Finally, although PRISMA 2020 provides a template for where information might be located, the suggested location should not be seen as prescriptive; the guiding principle is to ensure the information is reported.

Use of PRISMA 2020 has the potential to benefit many stakeholders. Complete reporting allows readers to assess the appropriateness of the methods, and therefore the trustworthiness of the findings. Presenting and summarising characteristics of studies contributing to a synthesis allows healthcare providers and policy makers to evaluate the applicability of the findings to their setting. Describing the certainty in the body of evidence for an outcome and the implications of findings should help policy makers, managers, and other decision makers formulate appropriate recommendations for practice or policy. Complete reporting of all PRISMA 2020 items also facilitates replication and review updates, as well as inclusion of systematic reviews in overviews (of systematic reviews) and guidelines, so teams can leverage work that is already done and decrease research waste. 36 62 63

We updated the PRISMA 2009 statement by adapting the EQUATOR Network’s guidance for developing health research reporting guidelines. 64 We evaluated the reporting completeness of published systematic reviews, 17 21 36 37 reviewed the items included in other documents providing guidance for systematic reviews, 38 surveyed systematic review methodologists and journal editors for their views on how to revise the original PRISMA statement, 35 discussed the findings at an in-person meeting, and prepared this document through an iterative process. Our recommendations are informed by the reviews and survey conducted before the in-person meeting, theoretical considerations about which items facilitate replication and help users assess the risk of bias and applicability of systematic reviews, and co-authors’ experience with authoring and using systematic reviews.

Various strategies to increase the use of reporting guidelines and improve reporting have been proposed. They include educators introducing reporting guidelines into graduate curricula to promote good reporting habits of early career scientists 65 ; journal editors and regulators endorsing use of reporting guidelines 18 ; peer reviewers evaluating adherence to reporting guidelines 61 66 ; journals requiring authors to indicate where in their manuscript they have adhered to each reporting item 67 ; and authors using online writing tools that prompt complete reporting at the writing stage. 60 Multi-pronged interventions, where more than one of these strategies are combined, may be more effective (such as completion of checklists coupled with editorial checks). 68 However, of 31 interventions proposed to increase adherence to reporting guidelines, the effects of only 11 have been evaluated, mostly in observational studies at high risk of bias due to confounding. 69 It is therefore unclear which strategies should be used. Future research might explore barriers and facilitators to the use of PRISMA 2020 by authors, editors, and peer reviewers, designing interventions that address the identified barriers, and evaluating those interventions using randomised trials. To inform possible revisions to the guideline, it would also be valuable to conduct think-aloud studies 70 to understand how systematic reviewers interpret the items, and reliability studies to identify items where there is varied interpretation of the items.

We encourage readers to submit evidence that informs any of the recommendations in PRISMA 2020 (via the PRISMA statement website: http://www.prisma-statement.org/ ). To enhance accessibility of PRISMA 2020, several translations of the guideline are under way (see available translations at the PRISMA statement website). We encourage journal editors and publishers to raise awareness of PRISMA 2020 (for example, by referring to it in journal “Instructions to authors”), endorsing its use, advising editors and peer reviewers to evaluate submitted systematic reviews against the PRISMA 2020 checklists, and making changes to journal policies to accommodate the new reporting recommendations. We recommend existing PRISMA extensions 47 49 50 51 52 53 71 72 be updated to reflect PRISMA 2020 and advise developers of new PRISMA extensions to use PRISMA 2020 as the foundation document.

We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders. Ultimately, we hope that uptake of the guideline will lead to more transparent, complete, and accurate reporting of systematic reviews, thus facilitating evidence based decision making.

Acknowledgments

We dedicate this paper to the late Douglas G Altman and Alessandro Liberati, whose contributions were fundamental to the development and implementation of the original PRISMA statement.

We thank the following contributors who completed the survey to inform discussions at the development meeting: Xavier Armoiry, Edoardo Aromataris, Ana Patricia Ayala, Ethan M Balk, Virginia Barbour, Elaine Beller, Jesse A Berlin, Lisa Bero, Zhao-Xiang Bian, Jean Joel Bigna, Ferrán Catalá-López, Anna Chaimani, Mike Clarke, Tammy Clifford, Ioana A Cristea, Miranda Cumpston, Sofia Dias, Corinna Dressler, Ivan D Florez, Joel J Gagnier, Chantelle Garritty, Long Ge, Davina Ghersi, Sean Grant, Gordon Guyatt, Neal R Haddaway, Julian PT Higgins, Sally Hopewell, Brian Hutton, Jamie J Kirkham, Jos Kleijnen, Julia Koricheva, Joey SW Kwong, Toby J Lasserson, Julia H Littell, Yoon K Loke, Malcolm R Macleod, Chris G Maher, Ana Marušic, Dimitris Mavridis, Jessie McGowan, Matthew DF McInnes, Philippa Middleton, Karel G Moons, Zachary Munn, Jane Noyes, Barbara Nußbaumer-Streit, Donald L Patrick, Tatiana Pereira-Cenci, Ba’ Pham, Bob Phillips, Dawid Pieper, Michelle Pollock, Daniel S Quintana, Drummond Rennie, Melissa L Rethlefsen, Hannah R Rothstein, Maroeska M Rovers, Rebecca Ryan, Georgia Salanti, Ian J Saldanha, Margaret Sampson, Nancy Santesso, Rafael Sarkis-Onofre, Jelena Savović, Christopher H Schmid, Kenneth F Schulz, Guido Schwarzer, Beverley J Shea, Paul G Shekelle, Farhad Shokraneh, Mark Simmonds, Nicole Skoetz, Sharon E Straus, Anneliese Synnot, Emily E Tanner-Smith, Brett D Thombs, Hilary Thomson, Alexander Tsertsvadze, Peter Tugwell, Tari Turner, Lesley Uttley, Jeffrey C Valentine, Matt Vassar, Areti Angeliki Veroniki, Meera Viswanathan, Cole Wayant, Paul Whaley, and Kehu Yang. We thank the following contributors who provided feedback on a preliminary version of the PRISMA 2020 checklist: Jo Abbott, Fionn Büttner, Patricia Correia-Santos, Victoria Freeman, Emily A Hennessy, Rakibul Islam, Amalia (Emily) Karahalios, Kasper Krommes, Andreas Lundh, Dafne Port Nascimento, Davina Robson, Catherine Schenck-Yglesias, Mary M Scott, Sarah Tanveer and Pavel Zhelnov. We thank Abigail H Goben, Melissa L Rethlefsen, Tanja Rombey, Anna Scott, and Farhad Shokraneh for their helpful comments on the preprints of the PRISMA 2020 papers. We thank Edoardo Aromataris, Stephanie Chang, Toby Lasserson and David Schriger for their helpful peer review comments on the PRISMA 2020 papers.

Contributors: JEM and DM are joint senior authors. MJP, JEM, PMB, IB, TCH, CDM, LS, and DM conceived this paper and designed the literature review and survey conducted to inform the guideline content. MJP conducted the literature review, administered the survey and analysed the data for both. MJP prepared all materials for the development meeting. MJP and JEM presented proposals at the development meeting. All authors except for TCH, JMT, EAA, SEB, and LAM attended the development meeting. MJP and JEM took and consolidated notes from the development meeting. MJP and JEM led the drafting and editing of the article. JEM, PMB, IB, TCH, LS, JMT, EAA, SEB, RC, JG, AH, TL, EMW, SM, LAM, LAS, JT, ACT, PW, and DM drafted particular sections of the article. All authors were involved in revising the article critically for important intellectual content. All authors approved the final version of the article. MJP is the guarantor of this work. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: There was no direct funding for this research. MJP is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200101618) and was previously supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535) during the conduct of this research. JEM is supported by an Australian NHMRC Career Development Fellowship (1143429). TCH is supported by an Australian NHMRC Senior Research Fellowship (1154607). JMT is supported by Evidence Partners Inc. JMG is supported by a Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake. MML is supported by The Ottawa Hospital Anaesthesia Alternate Funds Association and a Faculty of Medicine Junior Research Chair. TL is supported by funding from the National Eye Institute (UG1EY020522), National Institutes of Health, United States. LAM is supported by a National Institute for Health Research Doctoral Research Fellowship (DRF-2018-11-ST2-048). ACT is supported by a Tier 2 Canada Research Chair in Knowledge Synthesis. DM is supported in part by a University Research Chair, University of Ottawa. The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

Competing interests: All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/conflicts-of-interest/ and declare: EL is head of research for the BMJ ; MJP is an editorial board member for PLOS Medicine ; ACT is an associate editor and MJP, TL, EMW, and DM are editorial board members for the Journal of Clinical Epidemiology ; DM and LAS were editors in chief, LS, JMT, and ACT are associate editors, and JG is an editorial board member for Systematic Reviews . None of these authors were involved in the peer review process or decision to publish. TCH has received personal fees from Elsevier outside the submitted work. EMW has received personal fees from the American Journal for Public Health , for which he is the editor for systematic reviews. VW is editor in chief of the Campbell Collaboration, which produces systematic reviews, and co-convenor of the Campbell and Cochrane equity methods group. DM is chair of the EQUATOR Network, IB is adjunct director of the French EQUATOR Centre and TCH is co-director of the Australasian EQUATOR Centre, which advocates for the use of reporting guidelines to improve the quality of reporting in research articles. JMT received salary from Evidence Partners, creator of DistillerSR software for systematic reviews; Evidence Partners was not involved in the design or outcomes of the statement, and the views expressed solely represent those of the author.

Provenance and peer review: Not commissioned; externally peer reviewed.

Patient and public involvement: Patients and the public were not involved in this methodological research. We plan to disseminate the research widely, including to community participants in evidence synthesis organisations.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/ .

  • Higgins JPT, Thomas J, Chandler J, et al, eds. Cochrane Handbook for Systematic Reviews of Interventions: Version 6.0. Cochrane, 2019. Available from https://training.cochrane.org/handbook.
  • Cooper H, Hedges LV, Valentine JV, eds. The Handbook of Research Synthesis and Meta-Analysis. Russell Sage Foundation, 2019.


Easy guide to conducting a systematic review

Affiliations

  • 1 Discipline of Child and Adolescent Health, University of Sydney, Sydney, New South Wales, Australia.
  • 2 Department of Nephrology, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • 3 Education Department, The Children's Hospital at Westmead, Sydney, New South Wales, Australia.
  • PMID: 32364273
  • DOI: 10.1111/jpc.14853

A systematic review is a type of study that synthesises research that has been conducted on a particular topic. Systematic reviews are considered to provide the highest level of evidence on the hierarchy of evidence pyramid. Systematic reviews are conducted following rigorous research methodology. To minimise bias, systematic reviews utilise a predefined search strategy to identify and appraise all available published literature on a specific topic. The meticulous nature of the systematic review research methodology differentiates a systematic review from a narrative review (literature review or authoritative review). This paper provides a brief step by step summary of how to conduct a systematic review, which may be of interest for clinicians and researchers.

Keywords: research; research design; systematic review.

© 2020 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

Publication types

  • Systematic Review
  • Research Design*


Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

One example of a systematic review answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias. The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative (qualitative), quantitative, or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis. A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size.

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention, such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question, usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software. For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

A systematic review has many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent, so they can be scrutinized by others.
  • They’re thorough: they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons.

  • They’re time-consuming.
  • They’re narrow in scope: they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO:

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT.

  • Type of study design(s)

In the eczema example, the PICOT components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Boyle and colleagues’ research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
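Because the PICO(T) components map directly onto the question template above, they can be kept as structured data and the question generated from them. The following is a minimal sketch in Python; the `PICOT` dataclass and `build_question` helper are illustrative names, not part of any standard tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PICOT:
    population: str
    intervention: str
    comparison: str
    outcome: str
    study_design: Optional[str] = None  # optional fifth component (PICOT)

def build_question(p: PICOT) -> str:
    """Fill the template: what is the effectiveness of I versus C for O in P?"""
    question = (f"What is the effectiveness of {p.intervention} versus "
                f"{p.comparison} for {p.outcome} in {p.population}?")
    if p.study_design:
        question += f" (Study design: {p.study_design}.)"
    return question

# Illustrative values based on the eczema example in this guide
eczema = PICOT(
    population="patients with eczema",
    intervention="probiotics",
    comparison="no treatment, a placebo, or a non-probiotic treatment",
    outcome="reducing eczema symptoms and improving quality of life",
    study_design="randomized controlled trials",
)
print(build_question(eczema))
```

Keeping the components explicit like this also makes it easy to reuse them later when building search strings and documenting the protocol.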

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information: Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee. This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov.
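Even before registration, it can help to keep the protocol components listed above in a simple machine-readable structure, so that nothing is omitted and later changes are easy to track. The sketch below is a minimal illustration in Python; the keys mirror the components above, and all of the content is placeholder text for the eczema example, not a prescribed format.

```python
# Minimal protocol skeleton; every value here is illustrative placeholder text.
protocol = {
    "background": "Eczema is a common skin condition; probiotics are a proposed intervention.",
    "objective": "Assess the effectiveness of probiotics for reducing eczema symptoms "
                 "and improving quality of life.",
    "selection_criteria": {
        "include": ["randomized controlled trials", "participants with eczema"],
        "exclude": ["no control group", "outcomes unrelated to eczema"],
    },
    "search_strategy": {
        "databases": ["EMBASE", "PsycINFO", "AMED", "LILACS", "ISI Web of Science"],
        "other_sources": ["handsearching", "gray literature", "contacting experts"],
    },
    "analysis": "Narrative synthesis plus meta-analysis where the data allow.",
}

# Quick completeness check against the components listed above.
required = {"background", "objective", "selection_criteria", "search_strategy", "analysis"}
assert required.issubset(protocol), "protocol is missing one or more required components"
```

A structure like this can be saved alongside the review (for example as JSON or YAML) and updated whenever the protocol is amended.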

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus. Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant (a small sketch for building such a query appears at the end of this step).
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD). In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator.

In the eczema example, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics
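As mentioned above, one way to keep the database searches reproducible is to store the synonyms for each concept and generate the Boolean string from them, combining synonyms with OR and concepts with AND. Below is a minimal sketch in Python; the concept lists are illustrative, and the exact query syntax should be adapted to each database.

```python
def build_boolean_query(concept_groups):
    """Join synonyms of each concept with OR, then join the concepts with AND."""
    groups = []
    for synonyms in concept_groups:
        # Quote multi-word terms so databases treat them as phrases.
        terms = [f'"{t}"' if " " in t else t for t in synonyms]
        groups.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(groups)

# Illustrative concept groups loosely based on the eczema example
concept_groups = [
    ["probiotic", "probiotics", "lactobacillus"],
    ["eczema", "atopic dermatitis"],
]
print(build_boolean_query(concept_groups))
# (probiotic OR probiotics OR lactobacillus) AND (eczema OR "atopic dermatitis")
```

Storing the concept groups in one place also makes it straightforward to report the exact search strings in the methods section.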

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol. The third person’s job is to break any ties.

To increase inter-rater reliability, ensure that everyone thoroughly understands the selection criteria before you begin.

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts: Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram.

In the eczema example, after screening titles and abstracts, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.
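If the screening record is kept in a structured form, the counts needed for a PRISMA flow diagram can be produced directly from it. The following is a minimal sketch in Python; the field names and records are illustrative, not a required format.

```python
from collections import Counter

# Each entry logs one screening decision for one article at one phase.
screening_log = [
    {"id": "rec001", "phase": "title_abstract", "decision": "exclude", "reason": "not an intervention study"},
    {"id": "rec002", "phase": "title_abstract", "decision": "include", "reason": ""},
    {"id": "rec002", "phase": "full_text", "decision": "exclude", "reason": "no control group"},
    {"id": "rec003", "phase": "title_abstract", "decision": "include", "reason": ""},
    {"id": "rec003", "phase": "full_text", "decision": "include", "reason": ""},
]

# Tally decisions per phase; these numbers feed the PRISMA flow diagram.
counts = Counter((entry["phase"], entry["decision"]) for entry in screening_log)
for (phase, decision), n in sorted(counts.items()):
    print(f"{phase:15s} {decision:8s} {n}")
```

The same log also documents the exclusion reasons, which reviewers are expected to report.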

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results. The exact information will depend on your research question, but it might include the year, study design, sample size, context, research findings, and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias.

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group.

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the eczema example, Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.
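Defining the extraction form once and filling it in per study keeps both extractors working with exactly the same fields. Below is a minimal sketch of such a form as a Python dataclass; the fields and the example values are purely illustrative and should mirror whichever published form you adopt.

```python
from dataclasses import dataclass, asdict

@dataclass
class ExtractionForm:
    study_id: str
    year: int
    study_design: str
    sample_size: int
    context: str
    findings: str
    risk_of_bias: str  # e.g. "low", "some concerns", "high"

# Illustrative, made-up record for one included study
record = ExtractionForm(
    study_id="rec003",
    year=2006,
    study_design="randomized controlled trial",
    sample_size=120,
    context="infants with eczema, single centre",
    findings="no significant difference in symptom scores",
    risk_of_bias="some concerns",
)
print(asdict(record))  # easy to export to a spreadsheet and compare between extractors
```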

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative (qualitative): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative: Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis, which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.
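To make the quantitative option concrete, the simplest form of meta-analysis is a fixed-effect, inverse-variance weighted average of the study effect sizes: each study is weighted by 1/SE², where SE is its standard error. The sketch below implements just that formula with made-up numbers; real reviews normally use dedicated packages and also assess heterogeneity and random-effects models.

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Fixed-effect pooled estimate using inverse-variance weights (w_i = 1 / se_i**2)."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci95 = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci95

# Illustrative effect sizes (e.g. mean differences) and standard errors from three studies
effects = [-0.30, -0.10, 0.05]
std_errors = [0.15, 0.20, 0.25]
pooled, se, (lo, hi) = fixed_effect_meta(effects, std_errors)
print(f"pooled effect = {pooled:.3f} (SE {se:.3f}, 95% CI {lo:.3f} to {hi:.3f})")
```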

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema. Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.


Frequently asked questions about systematic reviews

A literature review is a survey of scholarly or otherwise credible sources (such as books, journal articles, and theses) related to a specific topic or research question. It is often written as part of a thesis, dissertation, or research paper in order to situate your work in relation to existing knowledge. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research, and are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.


Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved April 9, 2024, from https://www.scribbr.com/methodology/systematic-review/


Systematic literature review paper: the regional innovation system-university-science park nexus

  • Open access
  • Published: 02 January 2021
  • Volume 46, pages 2017–2050 (2021)


  • T. Theeranattapong 1 ,
  • D. Pickernell   ORCID: orcid.org/0000-0003-0912-095X 1 &
  • C. Simms   ORCID: orcid.org/0000-0001-5172-4453 1  


Recent work on Regional Innovation Systems (RIS) has emphasised the importance of universities. Until recently, however, related insights into the dynamics of this relationship in respect of the specific role of the science park have been limited. This paper presents a systematic review identifying the key roles of each actor in relation to innovation. We link the dynamic roles performed by the university between science parks and the RIS. Our results enable us to identify how the key activities performed by the university change during its interrelations within the RIS and with the science park. Our analysis of the literature distinguishes between three sets of relationships through which the university plays differing roles: RIS-university, RIS-university-science park, and university-science park. Respectively, the University’s relationships between these different RIS actors focus on: resource sharing, brokerage, and commercialisation-exploitation. Secondly, we find that within each of these relationship types the university can perform three types of roles: knowledge co-creation, acting as conduit, and inter-organisational relationship building. Distinguishing between these differing relationships and roles enables us to identify a total of nine dynamic roles performed by the University, which include: provision of information, channels of communication, infrastructure, regional networking, building research collaboration, acting as knowledge intermediaries, economic development, technological change and commercialisation processes, and start-up creation and commercialisation. The review identifies several gaps in the literature in need of further research, and suggests that university relationships with RIS, interlinked with those between the university and science park itself, are important factors affecting science park innovation performance.


1 Introduction

Universities’ traditional roles, of teaching and research, are increasingly being supplemented by government policies aimed at increasing the “entrepreneurial” activities as a way to help develop the economy, for example through student start-ups (Wright et al. 2017 ). Whilst it is not new, the “entrepreneurial university” concept adopted by a growing number of universities has supplemented the two traditional roles of universities with the need to help develop regional economies (e.g. Etzkowitz and Leydesdorff 1999 ; Gunasekara 2006 ; Malairaja and Zawdie 2008 ). Consequently universities, through the concept of the triple helix (Etzkowitz and Leydesdorff 2000 ), are increasingly participating in entrepreneurial activities (see for example, Etzkowitz et al. 2000 ).

Policy makers and governments are increasingly looking to Universities to contribute to the regional innovation system (RIS) and/or entrepreneurial ecosystem (Feldman et al. 2019), as part of building the knowledge-based economy and fostering regional competitiveness. This role of the university in regional economic and social development has heavily influenced policy over the past 20 years (Acs et al. 2009; Etzkowitz and Leydesdorff 1996, 1999), further altering the role of universities. Science parks (SPs) act as an important tool in regional development policy, and can be considered as property-based policy interventions to support commercialization of research results from universities (Appold 2004; Vedovello 2002). This paper therefore attempts to link the literatures concerning RIS and SPs, via the role of the entrepreneurial university, to provide understanding of how SPs are conceptualised and how these literatures link findings on universities and SPs to the RIS.

In both RIS and science park literatures universities play a critical role. They form a key and integral component in the RIS and have important linkages with science parks. There has, however, been no systematic integrated investigation into how the roles performed by the university change depending on the nature of the interaction and the actor involved. Moreover, only a limited number of studies focused on science parks incorporate the RIS, with even fewer focused on the university as a key stakeholder within this. This gap therefore requires us first to integrate findings from these two literatures and identify what they have found and focused on thus far. Then we focus specifically on two basic research questions: First, what are the key roles and foci of the university in its relationships with the science park, within the RIS? Second, how do these key roles and foci change through interrelations between university, science park and the surrounding RIS environment?

In order to answer these questions this paper follows a systematic literature review approach constructed from literatures on “science parks” and “RIS incorporating science parks” with the intention of linking both literatures together. This approach provides a framework of protocols through which the relevant literature is identified, findings reported, and contribution of the study and research gaps identified (Macpherson and Holt 2007 ; Tranfield et al. 2003 ).

The findings of our review provide three key contributions to the literature. First, we clarify the three types of relationships between the university and the key stakeholders: knowledge co-creation, acting as conduit, and inter-organisational relationship building. Second we identify three specific roles performed by the University: resource sharing, brokerage, and commercialisation-exploitation. Third, we identify how the University’s roles change during its interactions between RIS-university, RIS-university-science park, and university-science park, identifying nine specific sets of activities the University performs, which depend on the focus-role interdependency.

The next section begins with a brief discussion of the ex ante literature, focusing on the definition of RIS actors and dynamics of the local innovation ecosystems. This is followed by a description of the research methodology. A review protocol is developed, publications are selected and grouped and classified, prior to reporting the results of the subsequent analysis. Finally, we identify gaps for future research.

2 Ex ante literature review

The RIS approach incorporates the development of the “entrepreneurial university” with knowledge spillovers. The interrelationships between the triple helix actors to encourage learning processes in the region also form key aspects of the RIS, which has resulted in universities expanding and updating their research agendas to better meet industrial needs and enhance links with industry (Vedovello 2002 ). For example, academic researchers are able to commercialise their research results and exchange knowledge with firms located on the science park. In so doing, science parks offer a crucial resource network for new technology-based firms (NTBFs) (Westhead 1997 ). This then both fosters and supplements the role of the science park as an interactive mechanism for systemic university-industry cooperation (Asheim and Coenen 2006 ; Vedovello 2002 ).

The RIS represents an interconnected context and resource, defined in terms of both actors and dynamics within the local innovation ecosystem. In terms of actors, the importance of innovative local agencies (Asheim and Isaksen 2002 ), regional and local government governance actors and institutions (including universities), also science parks (Zhang 2015 ), other key infrastructure providers (Gerstlberger 2004 ; Takeda et al. 2008 ), and the international connections (Lew et al. 2018 ) have been identified.

In terms of dynamics, Zhang (2015) then highlights national and local policies in human resources and land development, whilst Asheim and Coenen (2005) identify that “regional culture” is relevant to knowledge production and uptake (Rip 2002; Cooke and Morgan 1998). To design a sustainable RIS, researchers have indicated that resourcing the development of relevant infrastructure is one of the criteria necessary for success, the infrastructure itself forming an essential determinant of firm location choice (Gerstlberger 2004; Takeda et al. 2008). Cooke and Morgan (1998) also identify that robust RIS have levels of institutional thickness, with different actors playing different roles at different levels.

In the case of universities, Fuller et al. (2019), Pickernell et al. (2019) and Ishizaka et al. (2020) identify that previous dichotomous definitions of universities as being either research or teaching focused can now be seen to be too simplistic. Universities instead exist along a spectrum between these two extremes, offering different combinations of supporting relevant research and training activities within RIS. Whilst, as will be seen, there is no consensus on what constitutes a science park, broadly they can be seen as characterised by links with academic institutions (usually research focused), supporting start-up/incubation of technology-based firms, fostering transfer of technology and business knowledge, being property-based, and being sustainable (Durão et al. 2005). Science parks also help (usually more research-focused) universities build and improve their reputation (Helmers 2019; Link and Scott 2017) within the RIS.

3 Methodology

In order to be systematic, transparent and replicable, our review involved two processes. This follows the approach of Macpherson and Holt ( 2007 ), who themselves followed refined protocols outlined by Tranfield et al. ( 2003 ) and Pittaway et al. ( 2004 ). First, we define the review protocols and map the literature by: (1) accessing, (2) retrieving and (3) judging the quality and relevance of the literature in relation to the research topic, according to explicit inclusion and exclusion criteria. As part of this we classify the quality of papers, following Turner et al. ( 2013 ) approach of selecting papers categorised by journal rating (based on the Chartered Association of Business School’s (CABS) Academic Journal Guide 2018). This produced the following review protocols and processes, summarized in the table below, and then discussed in more detail (Table  1 ).

3.1 Review protocols

The papers included in this study were identified from the electronic databases Business Source Complete, Web of Science, and Scopus restricted to English language academic papers in the categories of “technological innovations, research parks, technology, and business incubators” (Business Source Complete), “business and management” (Web of Science), and “business, management and accounting” (Scopus).

Three inclusion criteria were used within our systematic review process: (1) Papers had to be either primary quantitative or qualitative empirical studies, or reviews of secondary data whose purpose was to identify future research or policy agendas, because such reviews offer the working assumptions used in this study. (2) Articles had to be published after 1990. This time period was selected because the concept of RIS most consistently appeared and was developed during the 1990s, the literature on science parks was also most strongly observed during this period, and there was a need to focus on policy developments in the context of these more recent developments. (3) Following Savino et al. (2017), only academic journal articles were included. It must be recognised that the approach taken deliberately excluded non-journal outputs from the review, which will potentially exclude relevant and important contributions from books, such as Link and Scott (2015) and Wright et al. (2019). This follows the approach of Keupp et al. (2012), in which book reviews, book chapters, conference proceedings and working papers were excluded.

An initial list of keywords based on ex ante analysis of the literature yielded three keywords. We then conducted Boolean searches on combinations of the identified keywords (and their variants, to acknowledge the different terminologies used around the world). For example, these searches included ‘Science park’, ‘Research park’, ‘Technopole’, ‘High-Tech park’, ‘Technology park’, and ‘Regional Innovation System’ and ‘Science park’. The total number of potentially relevant articles retrieved using search strings alone was 1735.

Once duplicate articles were excluded, 1089 papers remained. To then identify the papers directly related to the topic and classify these papers, the papers were evaluated systematically, beginning with the journal quality, then examining the content of abstract and introduction, literature review, and conclusion in order to exclude irrelevant articles (using the exclusion criteria in Table  2 ).
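Deduplication across the three databases is one part of this process that is easy to automate before manual screening begins. Below is a minimal sketch in Python, assuming each exported record carries at least a title and, where available, a DOI; the matching is deliberately naive (exact DOI or normalised title), and in practice reviewers often also check author and year before discarding a record.

```python
import re

def normalise_title(title):
    """Lowercase and strip punctuation/extra spaces so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Keep the first occurrence of each record, matched by DOI or normalised title."""
    seen_dois, seen_titles, unique = set(), set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        title_key = normalise_title(rec["title"])
        if (doi and doi in seen_dois) or title_key in seen_titles:
            continue  # duplicate of something already kept
        if doi:
            seen_dois.add(doi)
        seen_titles.add(title_key)
        unique.append(rec)
    return unique

# Illustrative records as they might be exported from different databases
records = [
    {"title": "Science Parks and Regional Innovation Systems", "doi": "10.1000/xyz123"},
    {"title": "Science parks and regional innovation systems.", "doi": None},
    {"title": "University roles in technology transfer", "doi": None},
]
print(len(deduplicate(records)))  # 2 unique records remain
```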

3.2 Mapping the field

Utilising the described process, 151 papers were identified as directly related to the topic. Table 3 categorises these articles by journal, using ratings from the CABS Guide 2018, according to the number of selected papers published in each journal. Ninety papers were published in journals rated as CABS4 or CABS3 (59.60%), while 61 papers were rated as CABS2 (40.40%). Technovation and the Journal of Technology Transfer, given their focus, demonstrated the strongest discourse around the relevant issues in terms of papers. The average number of papers from 1990 to 2019 was approximately five papers per year, with concentrations for science parks around the years 2003, 2005, 2006 and 2008, and for RIS-science parks around 2002 and 2005.

Focusing on the overlaps in broad topic area covered by the papers, 119 articles originate from the science park literature, with seventy-one of these also indicating the roles and interactions of the university and science park. The remaining thirty-two articles had a RIS literature emphasis, whilst incorporating science parks within their analysis (i.e. RIS with a science park emphasis). These articles can be divided into: (1) eighteen papers which referred to the science park and asserted the roles and interactions of university and RIS, (2) eight papers referring to the science park but focused mainly on the RIS and without the university, (3) two papers conducting research on the science park and relevant to RIS concepts without mentioning the university, and (4) four papers conducting research focused on the science park and including both RIS concepts and the role of the university.

With respect to the study locations, the literature on science parks without a RIS-science park emphasis is shown in Table 4. The results identify a concentration on single-country studies, particularly in Taiwan, the UK, Sweden, China and Spain. Conversely, the literature with a RIS-science park emphasis (Table 5) is relatively more focused on multi-country studies, with a strong focus on more developed economies.

Finally, in terms of the analytical focus of the papers, (shown in Table  6 ), secondary review papers and mixed method papers each have 10.60% of the total. Overall, there is a relative concentration on qualitative over quantitative studies, particularly for RIS with a science park emphasis, whilst most of the mixed methods papers are focused on Science park.

3.3 Reporting the findings

Following Savino et al. ( 2017 ) the analysis of the literature is divided into two sections, informed by the Research Questions. We follow processes similar to Macpherson and Holt ( 2007 ), by first providing a broad descriptive review of the literature (Tranfield et al. 2003 ), according to the broad RIS-University-Science Park framework on which the data had been initially collected, to identify context. Following this, continuous inductive and iterative coding and sensemaking processes (Williams 2002 ), compared the literature to generate summarizing themes through which the different roles of the university in RIS actors-university-science park nexus could be identified.

4 The broad RIS-university-science park context

4.1 Defining the science park

The concept of science parks can be traced back to the 1950s when the Stanford Science Park was founded by Stanford University in California. Science parks boomed throughout Europe during the 1980s and 1990s (Bakouros et al. 2002; Storey and Tether 1998), and in Asian countries in the mid-1980s (Phan et al. 2005). Simultaneously, a number of other types of property-based developments with similar roles to science parks exist, particularly technology parks, technopoles, innovation parks, and research parks (Sofouli and Vonortas 2007). According to Link and Scott (2003a, b) each can be distinguished as follows: (1) research parks are characterized by tenants that are mostly engaged in basic and applied research, (2) science parks (including technology parks) are characterized by tenants that are more heavily engaged in applied research and development, and (3) technology or innovation parks in particular often house new start-up companies and incubation facilities. Commercial or industrial parks can also be distinguished from science parks on the basis of their tenants, who apply value-adding activities to existing R&D-based products or production-orientated activities, as opposed to conducting R&D activities (Cheng et al. 2014; Huang et al. 2012; Link and Scott 2003b). Also, whilst technopoles and initiatives like the multimedia super corridor (MSC) often share similar goals to science parks (Boucke et al. 1994; Chordà 1996; Ramasamy et al. 2004), they differ in often being created by government and having a much larger physical scale (Chordà 1996; Ramasamy et al. 2004).

Given the above discussion, however, there is no uniformly accepted definition for the science park (Cheng et al. 2014 ; Fukugawa 2006 ; Hansson et al. 2005 ; Lindelöf and Löfsten 2006 ; A. Link and Link 2003 ; A. N. Link and Scott 2003b ; Löfsten and Lindelöf 2001 , 2002 , 2003 ). Phan et al. ( 2005 ) also demonstrates that no general theory for the science park exists due to the origins and consequences of the parks being varied depending on their economic geography, political and social context, as well as, economic systems. In brief, however, a science park is typically characterised by: (1) having links with academic institutions (2) supporting the start-up and incubation of technology-based firms (3) fostering the transfer of technology and business knowledge, (4) property-based initiatives, and (5) their sustainable nature (Durão et al. 2005 ). Universities then obtain income from technology transfer as well as receiving opportunities for their personnel and students from interacting at applied levels with technology-based organisations, science parks also helping universities build and improve reputation (Helmers 2019 ; Link and Scott 2017 ).

Whilst universities are often instrumental in founding science parks, this activity is more concentrated in some countries and universities than others. In the UK, the Cambridge, Heriot-Watt, and Surrey Science Parks were each set up by universities (Westhead and Batstone 1998), whilst in Sweden, universities have worked alongside local authorities and development agencies to encourage the formation of heterogeneous groups of parks (Lindelöf and Löfsten 2006). By contrast, the Kista science park evolved from a cluster centred on Ericsson into the Kista Science City and did not need a university as a precursor to its establishment (Cabral 1998). In Japan, meanwhile, the “centre facility” approach involves a public–private organization taking on the role of the university to offer facilities and services to entrepreneurs (Bass 1998).

Ng et al. (2019) also indicate that science park ownership has diversified to include combinations of public and/or private sector actors, which can also affect the focus of their activities. The privately owned Kilometro Rosso Science Park in Italy, for example, specifically aims to promote networking amongst relevant partnerships as well as enhancing interactions between on-park and off-park firms (Corsaro and Cantù 2015), and Layson et al. (2008) also identified that privately owned science parks often seek to limit the number of firms in the park, even where there is nominally free entry. Conversely, university-owned science parks more specifically focus on knowledge spillovers (Alshumaimri et al. 2017), offering the entrepreneur access to the intellectual resources of academic staff and advice to establish a new venture (Wright et al. 2008), but also provide less access to commercially oriented expertise and contacts than non-university-affiliated parks. According to Albahari et al. (2017), higher university involvement also positively affects tenant firm patent applications but negatively affects their innovation-related sales. Therefore, a university-owned science park is likely to be a strategic choice for a firm, with both benefits and costs.

4.2 The performance of science parks

In terms of performance, science parks clearly aim to generate the growth of new technology-based firms (NTBFs), on-park firms are expected to “perform better” or benefit from greater “added value” than equivalent off-park firms (Löfsten and Lindelöf 2002 ; Radosevic and Myrzakhmet 2009 ). However, Markman et al. ( 2008 ) indicate that there is a problem in terms of defining science park effectiveness, particularly with respect to measures of on-park firm survival, wealth creation, and employment growth. To explore effectiveness, researchers have therefore compared on-park with off-park firms in terms of: innovative performance (Chan et al. 2010 ; Lindelöf and Löfsten 2003 ; Löfsten and Lindelöf 2001 ; Radosevic and Myrzakhmet 2009 ), facilities management (FM) (Dettwiler et al. 2006 ), R&D productivity of firms (Siegel et al. 2003a ; C. H. Yang et al. 2009 ), the performance of firms (Löfsten and Lindelöf 2003 ), product development (Lindelöf and Löfsten 2004 ), perceived benefits of being in a science park (Westhead and Batstone 1998 ), survival and growth rates (Ferguson and Olofsson 2004 ), improvement in economic performance and innovative capacity (Liberati et al. 2016 ), contribution to NTBFs (Fukugawa 2006 ), links with local HEIs (Storey and Tether 1998 ), R&D “inputs” and “outputs” (Westhead 1997 ), innovative output (Squicciarini 2008 ), university–industry collaboration (Malairaja and Zawdie 2008 ), performance of NTBFs (Siegel et al. 2003b ), absorptive capacity (Ubeda et al. 2019 ), local knowledge exchange and innovation promotion (Díez-Vial and Fernández-Olmos 2015 ), economic recession performance effects (Díez-Vial and Fernández-Olmos 2017 ), growth and innovativeness (Lamperti et al. 2017 ), establishment and growth of new technology-based firms (NTBFs) (Colombo and Delmastro 2002 ), cooperation for innovation (Vásquez-Urriago et al. 2016 ) and innovation performance of NTBFs (Ramírez-Alesón and Fernández-Olmos 2018 ).

Given this plethora of potential performance measures there are many identified determinants of science park performance. For example, a strong management team is recognized as a characteristic of successful science parks (Cabral 1998 ). Albahari et al. ( 2013 ), introduced a framework to analyse science park systems (SPSs). Applying it to the Italian and Spanish contexts, they found that science parks played a more essential role in Spain than in Italy because of the more coherent and specific policies supporting the parks, sounder business models, and government intervention in the medium-long term.

Guadix et al. ( 2016 ) define successful science parks as those that have overcome initial hurdles to have high land occupation rates, housing firms that generate high revenue and numbers of employees. The availability of R&D centres and academic institutions encouraging the development of specialised knowledge and knowledge transfer amongst the various stakeholder organisations are crucial in this, as demonstrated in the case of Sophia Antipolis (Barbera and Fassero 2013 ). Eto ( 2005 ) also indicated, however, that technoparks in Japan are often located in rural areas, often distant from train stations, highlighting obstacles to promoting high/new technology park performance. Hu ( 2007 ) found, therefore, that most of China’s technology parks are located in large core urban areas where technological, educational and industrial resources are also concentrated.

Science park performance can therefore be seen to be at least partly the result of public–private partnerships, with multiple organisations involved in influencing their mission and operational procedures (Phan et al. 2005 ). Government support is therefore an important factor in determining the likelihood of success. For example, studies in Japan (Westhead 1997 ; Park 2004 ) have demonstrated the importance of central and local governments in supporting the development of SP through active involvement, national and research institutes, and strategies to promote industrial R&D. Likewise, the success of BIORIO in Brazil was attributed to dynamic government funding, alongside research-orientated institutions and a research orientated private sector (Cabral 1998 ). Vaidyanathan ( 2008 ) also identified the key role of the Indian government’s business model, which fostered links between public, private, and foreign sectors. Etzkowitz and Zhou’s ( 2018 ) conclusion that the success of science parks is at least partly reliant on their being part of broader regional University-Industry-Government interactions also reinforces the importance of the wider RIS to science park success.

Beginning with the broad RIS-University-Science park framework in which innovation takes place, therefore, two basic categories of analysis were identified: (1) the main roles and (2) focus of each of the key stakeholders and the relationships between them.

4.3 The key role of the RIS in resourcing

According to Buesa et al. (2006), the RIS acts as a set of public and private organisations forming a network and interacting to create and spread knowledge and innovation within a specific territory. Articles that fall within our study parameters emphasise the importance of this RIS context. Specifically, this context acts as a trigger to defining what can or cannot be achieved. Hence, alongside government support, the university’s role and the science park’s functions and performance crucially depend on the RIS. The implication is that otherwise these functions will not operate optimally.

The concept of the Regional innovation system (RIS) highlights the importance of a range of institutions, national and local policies in human resource development, local government, and designation of land development, which can include high-tech parks, science and industrial parks (Zhang 2015). This supports interactive learning and helps explain differences in regional innovation performance and economic growth (Asheim and Coenen 2006; Cooke 2002a, 2003). Asheim and Coenen (2005) also identify the importance of fostering “regional culture” in the development of a RIS, dynamics eventuating not only from general economic processes but also sociological circumstances relevant to knowledge production and the uptake of new knowledge (Rip 2002). A dense inter-organisational network within a region is therefore key to encouraging knowledge diffusion, regional learning, and effective resource transfer in RIS (Takeda et al. 2008), specifically when surrounded by supporting innovative agencies (Asheim and Isaksen 2002). Lew et al. (2018) also highlighted the importance of the international connections of regional innovation actors, strong government innovation policy initiatives, and regional R&D collaboration.

The RIS therefore represents both a context and a resource, where the region is a network of connected actors, built up by regional resources within the network, allowing knowledge to be transferred across agents (Cantner et al. 2010), with the impact of regional policies in the creation and development of science parks a specific area of analysis suggested by Mora-Valentín et al. (2018). This is supported by strong regional governance, defined as the capacity to develop the policies and organisations required (Cooke and Morgan 1998). To design a sustainable RIS, researchers have indicated that resourcing the development of relevant infrastructure is one of the criteria necessary for success, the infrastructure itself being an essential determinant of firm location choice (Gerstlberger 2004; Takeda et al. 2008).

4.4 The key role of the university in brokering knowledge between the RIS and science park

Universities have been identified as a major component of the RIS, and they play a crucial role in brokering knowledge (Chung 2002 ; Gunasekara 2006 ; Kramer et al. 2011 ; Lew et al. 2018 ), which differs to other parts of the RIS. Whilst universities are often crucial actors in their regions in terms of employment and economic activity (Löfsten and Lindelöf 2005 ), they play an important role as both direct and indirect sources of knowledge production, which they are able to feed or diffuse into the RIS (Cooke 2002a , b ; Lew et al. 2018 ).

Universities are therefore specifically important in both the knowledge generation and diffusion subsystem of RIS, as well as subsequent knowledge application activities and connections with firms that aim to exploit the knowledge for commercial returns (Cooke 2002a ). It is in this exploitation role, however that science parks can be seen to have a specific role in conjunction with universities.

4.5 The key role of the science park in exploiting innovation

The science park can therefore be seen to play the role of a catalytic incubator environment for transformation of pure research into production. Authors such as Feldman (2007) highlight the role of science parks in innovation exploitation (Huang et al. 2012), potentially generating smaller (Staudt et al. 1994) or larger (Storey and Tether 1998) benefits in terms of employment growth as well as via better sales and sales growth performance (Gwebu et al. 2019). More specifically, for SMEs they have been identified as regional growth engines (Cheng et al. 2014) creating wealth and high-value job opportunities through technology-based research and development (Chang et al. 2010).

Science parks also offer a social environment where proximity between firms supports key information transfer for the development of innovation (Fernández-Alles et al. 2014). Within science parks, firm proximity can enhance the interaction between personnel and extend the networking to support the development of innovation, as seen in the case of Hsinchu Science-based Industrial Park (HSIP) and Tainan Science-based Industrial Park (TSIP) (T.-S. Hu 2008). In addition, science parks can be used by the government to promote innovation in specialised sectors in specific localities, either in a single sector (e.g. biotechnologies in the agro-food industry in Lombardy; Bosco 2007) or in multiple related high-tech sectors (e.g. Hsinchu Science Park; Chen et al. 2006). Connections between science parks can also form, allowing greater exchange of knowledge in specialised sectors (Yang et al. 2009).

4.6 The Interrelationships between RIS, University and Science Park

Science parks also, however, utilize the physical and network infrastructure created through the RIS, alongside their relationships with the universities that support them, to facilitate flows of knowledge with the potential of commercialisation into new firms created on the science park itself, to produce innovation exploitation outcomes. Thus, science parks can also be defined as intermediate structures which are established around universities, e.g. IDEON (Angelakis and Galanakis 2017), or brokerage institutions that attract firms and other organisations for cooperation (Almeida et al. 2011), or innovation support infrastructure (Diaz-Puente et al. 2009; Doloreux and Dionne 2008), or facilitators of inter-organisational relationships (Pilar Latorre et al. 2017).

However, whilst Lenger (2008) found that for technoparks (or science parks) and university-industry joint research centres, universities are key actors, making a significant contribution to the RIS, the roles and interactions of science parks, as well as the number of parks, differ according to the specific RIS. For instance, Huang and Fernández-Maldonado (2016) found that in the Eindhoven city-region, each science park focused on a single field of R&D, facilitating the clustering of relevant industries and acting as a hub for the regional economy. Conversely, where there is only one science park in a region, this must more broadly support regional technological strengths. For example, where this situation exists in the Beauce region of Canada, this highlights the ‘institutional thinness’ characteristic of peripheral regions (Doloreux 2004). Gebauer et al. (2005) also demonstrate that the multiple small innovation centres in more rural/economically peripheral areas of Western Germany often lack critical mass because of the localities in which they are located. Lecluyse et al. (2019) identified a need for more research into the relationship between economic geography and the science park when analysing the contribution of the science park to the economy, with Gkypali et al. (2016) suggesting that the science park needs to orientate itself within the RIS in which it finds itself.

The preceding discussion also highlights, however, that whilst the RIS, universities and science parks have different roles in the innovation process, there are also clear, strongly overlapping relationships through which these roles are displayed. We utilise the linear approach (e.g. Massey and Wield 2006 ; Quintas et al. 1992 ; Westhead 1997 ), as an initial simplifying framework to structure the sections that follow (see Fig.  1 ), whilst also identifying the overlapping two-way relationships between activities.

Fig. 1: The sets of RIS-university-science park relationships

Respectively, these sections discuss: (1) the university-RIS relationship, which demonstrates an emphasis of activities on the dissemination of basic research, (2) the RIS-university-science park relationship, emphasising product development activities, and (3) the science park-university relationship, in which the emphasis is on applied research.

4.6.1 The university-RIS relationship and its focus on basic research for dissemination

Rip (2002) emphasized how universities have evolved to more closely support both regional innovation systems and strategic science, which can also be seen as constituting basic research. Rip’s (2002) case study analysis of the University of Twente in the Netherlands found: “The University of Twente has a strong regional orientation, but that its spin-offs strengthen the economy, not necessarily the regional innovation system. It is also prominent (in selected areas) at the international research frontier. Promising options are a key feature of strategic science, but their ‘promise’ most often is not defined in regional terms, but in relation to a global scientific and technological frontier.” (p. 129).

The interaction between the university and other RIS actors can be seen as ‘knowledge co-creation’ (Gunasekara 2006). In this scenario, universities have the main role of coordinating the production of knowledge and disseminating it to other actors in the RIS, but universities also cooperate with regional firms to undertake collaborative projects to conduct the basic research and create the new knowledge. The relationships between the university, the wider RIS and science park actors can then be viewed as ‘conduits’ for knowledge flows within entrepreneurial RIS (Yoon et al. 2015). The science park in this type of relationship fosters linkages between the university and the other RIS actors, enhancing product development and commercialising products. The last relationship occurs specifically between the university and science park actors. Defined as ‘inter-organisational relations’ (Gunasekara 2006), many subtypes of links may exist. More linkages are created the more organisations are involved, including the government, researchers, firms, policymakers, business ventures, and so on.

Whilst the focus for basic research for dissemination is through relationships between the university and key stakeholders, and theoretically includes the processes of knowledge co-creation, acting as conduit, and interorganizational relationship building, several studies also suggest weaknesses in Universities’ abilities to enhance the RIS through these mechanisms. Gunasekara (2006) undertook an analysis of three Australian universities, utilising a conceptual framework based on the triple helix model, literature on university engagement, and innovation systems. This research found the universities to be weak in their willingness and capability to act like industry, generating poor commercial benefits. In Daedeok Innopolis, universities were also found to have strong links with public research institutions, but weaker links were demonstrated between firms and universities (Yoon et al. 2015). Hence universities are often perceived to be relatively weak in this aspect as a result of a greater focus on education over those activities of most relevance within many RIS, specifically R&D activities which are closer to market (as opposed to basic research). It was these weaknesses that led Chung (2002) to suggest the need for policies supportive to innovation, such as the recruitment of experienced professors and collaboration between academics and researchers in research centres. It is also in this context that science parks can be seen as helping to facilitate a better flow of university-generated knowledge into innovation.

4.6.2 The RIS-university-science park relationship and its focus on product development

Many governments globally have used science parks to stimulate the regional economy by fostering the growth of NTBFs and science-based industry. For example, the government of Taiwan established science parks, officially defined as offshore economic zones, with complementary business services and financial incentives provided to high-technology manufacturers (Tsai et al. 2007). To date, however, there have been only a limited number (6) of studies focusing on science parks whilst also incorporating the RIS; four of these six papers also discuss the role of the university within this context, each taking a different focus.

For Zhang ( 2015 ) the concept of the RIS itself identified the importance of institutions, human resource and land development, which included science parks. Zhu and Tann ( 2005 ), viewed the science park as an RIS in its own right, as well as playing roles and interacting with the university to help develop the wider RIS. Hommen et al. ( 2006 ) found a specifically important role in the Swedish context for university education, training and intellectual property management to the development of the RIS. For Jonsson ( 2002 ), the role of the university was specifically important in developing and supporting communication in RIS networks, whilst for Yoon et al. ( 2015 ) the building of formal and informal relationships in the RIS by universities was of particular relevance. Given these different foci Gkypali et al. ( 2016 ) identify the need to place the science park in its specific RIS context. The limited number of papers identified, however, both generally, and within the university category, as well as the disparate focus of these papers, again highlights the lack of studies in this specific area.

Taking a broader perspective, for universities to become more effective in their RIS the knowledge they supply must fit with the needs of their region’s firms and raise future interest in their services, through product development (e.g. Tödtling and Kaufmann 2002 ). Consequently many universities have set up science parks and incubation centres to help firms overcome obstacles in the innovation process and strengthen university-industry interactions (Asheim and Coenen 2006 ; Gunasekara 2006 ; Malairaja and Zawdie 2008 ; Vedovello 2002 ). These are supported by Technology Transfer Offices (TTOs), which require close proximity and systemic links between university and industry.

Science parks are also viewed as policy instruments for encouraging regional development, innovation, and the setting up of new firms through networks between higher educational institutions (HEIs) and industry (Hansson et al. 2005; T.-S. Hu et al. 2005). In particular, policy makers see science parks as "meta-organisations", important in the task of getting small and medium-sized enterprises to participate more closely in knowledge creation with universities and research institutions (Giaretta 2013). This underlines the importance of the science park in terms of promoting links with the university, with the aim of making contributions to the regional economy. Indeed, Zhu and Tann (2005), in their analysis of Zhongguancun Science Park (ZSP), investigated the linkages and knowledge flows between several of the park's actors, viewing the park effectively as a RIS in itself, acting as: "a social system consisting of different sets of clusters, which interact with different linkages and flows, in a systematic way, to enhance the localized learning and competitive capabilities of a region". In this context, science parks form an important component of the broader government-supported RIS. They are seen as a tool of regional development policy, transferring university-generated public knowledge to NTBFs through product development within regional contexts (Fukugawa 2006; Vedovello 2002).

4.6.3 The science park-university relationship and its focus on applied research

There is much research focused on the role that the science park plays in bridging the gap between university and industry (Bakouros et al. 2002; Malairaja and Zawdie 2008; Phillimore 1999; Quintas et al. 1992; Vedovello 2002), though the literature focuses far less on the developing economy context than on more developed economy examples. As outlined previously, science parks are conceived as a mechanism to link research results from universities more closely to the market and to stimulate technological spillovers (Löfsten and Lindelöf 2005; Siegel et al. 2003a). Consequently, for universities the main aim of establishing science parks is to exploit their R&D results and research ideas, and to secure funding for future research (Hansson et al. 2005).

Proximity between knowledge creators in the university and firms on the science park can also be seen, in a range of geographical contexts, to be important to the attractiveness and growth of science parks (Guy 2002; Ma 1998; Siegel et al. 2003b; Pálmai 2004; Fikirkoca and Saritas 2012; Link and Scott 2003a; Ratinho and Henriques 2010). These links can be divided into two forms: formal (e.g. licensing and co-operative alliances) and informal (e.g. personal relations, business partners, family ties, and the mobilisation of personnel) (Bakouros et al. 2002; Dettwiler et al. 2006; Lindelöf and Löfsten 2004; Westhead and Batstone 1998). The advantages of close linkages identified within the literature include: access to experts providing improved performance (Dierdonck et al. 1991; Lindelöf and Löfsten 2004; Vedovello 2002), providing the latest knowledge (Markman et al. 2005; McAdam and McAdam 2008), encouraging R&D activities amongst firms (Siegel et al. 2003a), and maintaining and supporting industrial innovation (Hu 2008).

In addition to the receipt of academic knowledge, a number of other factors have been found to influence firm decisions to locate in science parks. For example, Westhead and Batstone (1998) found that many NTBFs decided to establish or relocate onto science parks because of the "prestige and overall image of the site" and the "prestige of being linked to a HEI/centre of research". A case study of the Tsinghua University Science Park also revealed the significance to firms of the reputational benefits of being located on the park (Motohashi 2013). The links between academia and industry within science parks are therefore complex, with Hobbs et al. (2017) arguing for further development of the literature. For universities, however, proximity to a science park can also fundamentally shift their mission from basic to applied research (Link and Scott 2003b).

5 The changing roles of the university in the RIS-university-science park nexus

The analysed literature identifies the university as sitting at the centre of a RIS-university-Science park nexus. The University plays an important specific role in its own right as a knowledge broker. It also further contributes through its relationships with the RIS and science park, as these relate to a university’s potential basic research, dissemination and applied research activities. The university’s focus therefore changes, depending on these relationships. Specifically, in addition to directly brokering knowledge, it plays supporting roles with regards to resourcing and innovation commercialization. Our subsequent analysis therefore focuses on the second question, namely: how do the roles of the university change through its interrelationships within the RIS and with the science park? This is summarised in Fig.  2 below.

Fig. 2 Roles of the university in the RIS-university-science park nexus

Details of the empirical evidence from the systematic literature review are summarized in Tables  7 , 8 and 9 below, exploring more fully: (1) the parties involved and the activities associated with specific inter-relationships, and (2) the specific importance of the university in terms of resource sharing, brokerage, and exploitation/commercialisation.

Developing upon the preceding discussions, and following an initially linear approach to framing (though recognising the potentially overlapping, multi-directional nature of the concepts), the literature in the tables below is initially conceptualized within an innovation "pipeline" reflecting the three different roles performed by the University, both singly and through its relationships with the RIS and science park. First, "Resource sharing" includes the offering, facilitating, and supporting of research results, data, and information that the university produces for other actors within the RIS. Secondly, the central University "Brokerage role" encompasses the University acting as a "seedbed", creating conditions to promote innovation as an incubator: facilitating the transfer of knowledge, encouraging spin-offs, and stimulating the production of innovation (Felsenstein 1994). The final role, "exploitation and commercialization", involves activities that make use of, and benefit from, these resources and brokering activities to assist economic development through innovative products. These are exploited through commercialization processes within the science park to produce commercial returns, which further strengthen the businesses utilizing them and the regional innovation systems in which they sit.

5.1 Resource sharing roles

At its interface with the RIS actors the University shares data and knowledge, and thus performs a role in the 'provision of information' (see Table  7 ). In this function the University itself produces knowledge, has connections with firms to create and generate new knowledge by conducting research, and shares knowledge or data with firms through university programmes or specific courses (e.g. Hommen et al. 2006; Looy et al. 2003).

Secondly, in its relationship between the science park and RIS, the University provides channels of communication. Here the University can play a key role in creating networks, as well as helping to ensure and optimize communication and cooperation between the key actors (e.g. Jonsson 2002; Watkins-Mathys and Foster 2006). This can play a key role in the transfer of tacit knowledge through a varied network of actors (e.g. Looy et al. 2003; Zou and Zhao 2013), occurring through conferences, meetings, exhibitions, and social networks, as well as firms' interactions with students, staff, and researchers who have the specialized skills consistent with industrial needs. The final resource sharing role highlights the sharing of infrastructure between the University and science park. Here universities have been identified as providing a range of general facilities, alongside specific tools and specialist laboratory equipment (e.g. Bass 1998; Sofouli and Vonortas 2007).

5.2 Brokerage roles

The first brokerage function performed by the University focuses on building regional networking with, and between, the other actors within the RIS, for example through labour mobility, contacts, and supportive strategy and policy (Table  8 ). In the University's second brokering role it supports interactions to create and promote innovation between the university, science park, and other actors in the RIS. Research collaborations between these actors are considered crucial in this, and across a number of industries R&D collaboration between them is highly valued (Kramer et al. 2011). For firms and the RIS, the University's investments in R&D provide benefits that can contribute to their innovation processes (Barra and Zotti 2018), whilst the University benefits from R&D collaboration through additional income and experience of firms' real-life problems (Harper and Georghiou 2005). Finally, within the science park itself the University can act as a "knowledge intermediary". In this role the university can search for and absorb local and non-local knowledge and then transmit this knowledge to the science park, to improve the innovative capability of firms (e.g. Díez-Vial and Montoro-Sánchez 2016). This role is increasingly promoted by government to encourage technology transfer and regional development, because it supports the geographical clustering of firms (Tan 2006).

5.3 Exploitation and commercialisation roles

In terms of RIS-University relationships, the role of the university also includes directly increasing local economic development (e.g. see Hu et al. 2005). This is achieved through vehicles such as technology companies and innovation campuses (see Table  9 ). The second exploitation and commercialization role of the University focuses on its simultaneous relationships with the science park and RIS actors (e.g. see Looy et al. 2003). Given that the primary purpose of firms in science parks is to launch new products and develop markets (Löfsten and Lindelöf 2003), the university's role in commercialization is essential and often supported by specific government policies (Mian et al. 2016). This effort is focused through the development of vehicles for commercialisation (e.g. licensing, patents), as well as more broadly promoting technological change.

In the final exploitation and commercialization role, universities increasingly participate directly in the commercialization of knowledge via licensing activities and spin-off firms (Looy et al. 2003). Sofouli and Vonortas (2007) support this notion in their case study of S&T parks and business incubators in Greece, where, especially in the first policy wave of the 1990s, government provided funding support for parks established by universities and other public research institutes in order to exploit R&D results. Spin-off firms are also seen as crucial in the development of university-industry relationships and as a tool for the valorisation of research results (Salvador 2011). Indeed, Hansson et al. (2005) further claim that universities expect science parks to help them commercialise their research ideas and secure funding for further research.

6 Conclusions, research gaps and implications

The overall aim of this review was to analyse the roles of the university in the RIS actors-university-science park nexus. Whilst it is clear that Universities contribute to both science parks and the RIS, the particular roles they perform for the RIS actors or science park have not been systematically examined. Furthermore, how these basic roles differ between each actor remains ambiguous.

Within this review we have attempted to address these shortcomings. Specifically, bringing the literature on science parks and RIS together, we have contributed to this field by identifying how the university's key roles change as it moves between RIS actors-university, RIS actors-university-science park, and university-science park interrelationships. Further, moving beyond basic views of the roles performed by the university (Massey and Wield 2006; Quintas et al. 1992; Westhead 1997), we have specifically distinguished between three different types of activities performed by the university within each of these three types of interactions: knowledge co-creation, acting as conduit, and inter-organisational relationship building. In doing so, we have highlighted how the key basic roles of the university change as these different dimensions interact, thereby contributing to both literatures.

Further research is, however, required to provide a more finely grained understanding of the roles performed by the university within each of these relationships, and how their contributions can be optimized in different contexts. Our review also reveals a series of important gaps in the literatures on science parks and RIS. In terms of the science park literature, for example, no general theory of the science park was observed, because the origins of parks differ depending on a range of factors and on national context.

Whilst a broad identification of the roles of the university in the RIS actors-university-science park nexus is possible from the existing literature, an examination of these roles within specific national or regional contexts is therefore critical in order to identify how context affects the relative significance of specific roles. Our review also supports that of Lecluyse et al. (2019), which identified a need for more research into the relationship between the science park and the region in which it is located, in order to more fully explore the roles and contribution the science park can make. In addition, the impact of regional policies on the creation and development of science parks should also be analysed, as suggested by Mora-Valentín et al. (2018).

Our review therefore supports Hobbs et al. (2017), who argued that the science and technology park literature still needs further development, a situation that is particularly the case in the context of peripheral developing economies. More specifically, our review has revealed a strong imbalance in the geographic distribution of prior studies, with the majority having been conducted in developed countries and core regions. By contrast, developing countries and more peripheral regions have been relatively overlooked, and peripheral regions in developing economies particularly so.

Considering the multiplicity of roles performed within the RIS-University-Science park nexus, there is therefore a clear gap in the literature with respect to both university roles and potential contributions within nascent peripheral and developing economy RIS. In particular, there is a specific need to understand how universities are able to contribute to the development of RIS within developing countries, as well as identifying the activities they are less capable of performing due to shortcomings in the RIS as well as their own capabilities.

More broadly, although the university is found to be the crucial component in both RIS and science park literature, there were only a limited number of studies focused on the simultaneous roles played by the university during its relationships with these other actors. Consequently, future research needs to provide a more integrated approach to further our understanding of the simultaneous roles of universities within these relationships, which again will have specific national and regional contexts. This would also help further our understanding of the university’s role as conduit between the RIS and Science Park.

Methodologically there also appeared to be relatively few studies conducted longitudinally, or comparatively. We therefore suggest a need for comparative studies to better uncover the relative influence of regional contexts, specific policies, and capabilities of Universities on the specific roles they perform. This may include analysis of Science Parks where universities are/are not present, to further understand their roles, as well as examples of science parks where universities successfully contribute versus those where universities make less of a contribution.

Finally, few studies simultaneously linked the literature on the science park with that of the RIS, identifying the necessity to examine the roles of specific universities in science parks within the context of specific RIS. Overall, this identifies the need for contextual studies to explore these roles and unpick the impact of specific local and regional government initiatives on the roles and contributions of the University. Such research would help to inform future policy to enhance science park performance and ultimately the development of RIS in a manner appropriate for a particular context.

In conclusion, whilst universities can make several contributions within the RIS-University-Science park nexus, their ability to undertake activities that are closer to the market has been found to be limited in several respects, not only because of their own limitations but also because of the wider RIS context in which they operate. The takeaway message for policymakers, universities, and science park managers, therefore, is that universities will assist science parks to play different combinations of roles in relation to innovation, depending on the different RIS contexts in which they find themselves.

Acs, Z. J., Braunerhjelm, P., Audretsch, D. B., & Carlsson, B. (2009). The knowledge spillover theory of entrepreneurship. Small Business Economics, 32 (1), 15–30. https://doi.org/10.1007/s11187-008-9157-3 .

Albahari, A., Catalano, G., & Landoni, P. (2013). Evaluation of national science park systems: A theoretical framework and its application to the Italian and Spanish systems. Technology Analysis & Strategic Management, 25 (5), 599–614. https://doi.org/10.1080/09537325.2013.785508 .

Albahari, A., Pérez-Canto, S., Barge-Gil, A., & Modrego, A. (2017). Technology parks versus science parks: Does the university make the difference? Technological Forecasting and Social Change, 116, 13–28. https://doi.org/10.1016/j.techfore.2016.11.012 .

Almeida, A., Figueiredo, A., & Silva, M. R. (2011). From concept to policy: Building regional innovation systems in follower regions. European Planning Studies, 19 (7), 1331–1356. https://doi.org/10.1080/09654313.2011.573140 .

Alshumaimri, A., Aldridge, T., & Audretsch, D. B. (2017). The university technology transfer revolution in Saudi Arabia. In Universities and the Entrepreneurial Ecosystem (pp. 112–124). https://doi.org/10.1007/s10961-010-9176-5 .

Angelakis, A., & Galanakis, K. (2017). A science-based sector in the making: The formation of the biotechnology sector in two regions. Regional Studies, 51 (10), 1542–1552. https://doi.org/10.1080/00343404.2016.1215601 .

Appold, S. J. (2004). Research parks and the location of industrial research laboratories: An analysis of the effectiveness of a policy intervention. Research Policy, 33 (2), 225–243. https://doi.org/10.1016/S0048-7333(03)00124-0 .

Asheim, B. T., & Coenen, L. (2005). Knowledge bases and regional innovation systems: Comparing Nordic clusters. Research Policy, 34 (8), 1173–1190. https://doi.org/10.1016/j.respol.2005.03.013 .

Asheim, B. T., & Coenen, L. (2006). Contextualising regional innovation systems in a globalising learning economy: On knowledge bases and institutional frameworks. Journal of Technology Transfer, 31 (1), 163–173. https://doi.org/10.1007/s10961-005-5028-0 .

Asheim, B. T., & Isaksen, A. (2002). Regional innovation systems: The integration of local “sticky” and global “ubiquitous” knowledge. Journal of Technology Transfer, 27 (1), 77–86. https://doi.org/10.1023/A:1013100704794 .

Bakouros, Y. L., Mardas, D. C., & Varsakelis, N. C. (2002). Science park, a high tech fantasy?: An analysis of the science parks of Greece. Technovation, 22 (2), 123–128. https://doi.org/10.1016/S0166-4972(00)00087-0 .

Barbera, F., & Fassero, S. (2013). The place-based nature of technological innovation: The case of Sophia Antipolis. Journal of Technology Transfer, 38 (3), 216–234. https://doi.org/10.1007/s10961-011-9242-7 .

Barra, C., & Zotti, R. (2018). The contribution of university, private and public sector resources to Italian regional innovation system (in)efficiency. Journal of Technology Transfer, 43 (2), 432–457. https://doi.org/10.1007/s10961-016-9539-7 .

Bass, S. J. (1998). Japanese research parks: National policy and local development. Regional Studies, 32 (5), 391–403. https://doi.org/10.1080/00343409850116808 .

Bigliardi, B., Dormio, A. I., Nosella, A., & Petroni, G. (2006). Assessing science parks’ performances: Directions from selected Italian case studies. Technovation, 26 (4), 489–505. https://doi.org/10.1016/j.technovation.2005.01.002 .

Bosco, M. G. (2007). Innovation, R&D and technology transfer: Policies towards a regional innovation system The case of Lombardy. European Planning Studies, 15 (8), 1085–1111. https://doi.org/10.1080/09654310701448246 .

Boucke, C., Cantner, U., & Hanusch, H. (1994). “Technopolises” as a policy goal: A morphological study of the Wissenschaftsstadt Ulm. Technovation, 14 (6), 407–418. https://doi.org/10.1016/0166-4972(94)90019-1 .

Buesa, M., Heijs, J., Pellitero, M. M., & Baumert, T. (2006). Regional systems of innovation and the knowledge production function: The Spanish case. Technovation, 26 (4), 463–472. https://doi.org/10.1016/j.technovation.2004.11.007 .

Cabral, R. (1998). Refining the Cabral-Dahab science park management Paradigm. International Journal of Technology Management, 16 (8), 813–818. https://doi.org/10.1504/ijtm.1998.002694 .

Cantner, U., Meder, A., & Ter Wal, A. L. J. (2010). Innovator networks and regional knowledge base. Technovation, 30 (9–10), 496–507. https://doi.org/10.1016/j.technovation.2010.04.002 .

Chan, K. F., & Lau, T. (2005). Assessing technology incubator programs in the science park: The good, the bad and the ugly. Technovation , 25 (10), 1215–1228.

Chan, K. Y. A., Oerlemans, L. A. G., & Pretorius, M. W. (2010). Knowledge exchange behaviours of science park firms: The innovation hub case. Technology Analysis & Strategic Management, 22 (2), 207–228. https://doi.org/10.1080/09537320903498546 .

Chang, S. L., Lee, Y. H., Lin, C. Y., & Hu, T. S. (2010). Consideration of proximity in selection of residential location by science and technology workers: Case Study of Hsinchu, Taiwan. European Planning Studies, 18 (8), 1317–1342. https://doi.org/10.1080/09654313.2010.490651 .

Chen, C. J., Wu, H. L., & Lin, B. W. (2006). Evaluating the development of high-tech industries: Taiwan’s science park. Technological Forecasting and Social Change, 73 (4), 452–465. https://doi.org/10.1016/j.techfore.2005.04.003 .

Cheng, F., van Oort, F., Geertman, S., & Hooimeijer, P. (2014). Science parks and the co-location of high-tech small- and medium-sized firms in China’s Shenzhen. Urban Studies, 51 (5), 1073–1089. https://doi.org/10.1177/0042098013493020 .

Chordà, I. M. (1996). Towards the maturity stage: An insight into the performance of French technopoles. Technovation, 16 (3), 143–152. https://doi.org/10.1016/0166-4972(95)00042-9 .

Chung, S. (2002). Building a national innovation system through regional innovation systems. Technovation, 22 (8), 485–491. https://doi.org/10.1016/S0166-4972(01)00035-9 .

Colombo, M. G., & Delmastro, M. (2002). How effective are technology incubators? Research Policy, 31 (7), 1103–1122. https://doi.org/10.1016/s0048-7333(01)00178-0 .

Cooke, P. (2002a). Regional innovation systems, clusters, and the knowledge economy. Industrial and Corporate Change, 10 (4), 945–974. https://doi.org/10.1093/icc/10.4.945 .

Cooke, P. (2002b). Regional innovation systems: General findings and some new evidence from biotechnology clusters. Journal of Technology Transfer, 27 (1), 133–145. https://doi.org/10.1023/A:1013160923450 .

Cooke, P., Gomez Uranga, M., & Etxebarria, G. (2003). Regional innovation systems: Institutional and organisational dimensions. Research Policy, 26 (4–5), 475–491. https://doi.org/10.1016/s0048-7333(97)00025-5 .

Cooke, P., & Morgan, K. (1998). The associational economy: Firms, regions, and innovation . Oxford: Oxford University Press.

Corsaro, D., & Cantù, C. (2015). Actors’ heterogeneity and the context of interaction in affecting innovation networks. Journal of Business and Industrial Marketing, 30 (3–4), 246–258. https://doi.org/10.1108/JBIM-12-2014-0249 .

Dettwiler, P., Lindelöf, P., & Löfsten, H. (2006). Utility of location: A comparative survey between small new technology-based firms located on and off Science Parks—Implications for facilities management. Technovation, 26 (4), 506–517. https://doi.org/10.1016/j.technovation.2005.05.008 .

Diaz-Puente, J., Cazorla, A., & de los Rios, I. (2009). Policy support for the diffusion of innovation among SMEs: An evaluation study in the Spanish Region of Madrid. European Planning Studies, 17 (3), 365–387. https://doi.org/10.1080/09654310802618028 .

Díez-Vial, I., & Fernández-Olmos, M. (2015). Knowledge spillovers in science and technology parks: How can firms benefit most? Journal of Technology Transfer, 40 (1), 70–84. https://doi.org/10.1007/s10961-013-9329-4 .

Díez-Vial, I., & Fernández-Olmos, M. (2017). The effect of science and technology parks on firms’ performance: How can firms benefit most under economic downturns? Technology Analysis & Strategic Management, 29 (10), 1153–1166. https://doi.org/10.1080/09537325.2016.1274390 .

Díez-Vial, I., & Montoro-Sánchez, Á. (2016). How knowledge links with universities may foster innovation: The case of a science park. Technovation, 50–51, 41–52. https://doi.org/10.1016/j.technovation.2015.09.001 .

Doloreux, D. (2004). Regional innovation systems in Canada: A comparative study. Regional Studies, 38 (5), 481–494. https://doi.org/10.1080/0143116042000229267 .

Doloreux, D., & Dionne, S. (2008). Is regional innovation system development possible peripheral regions? Some evidence from the case La Pocatière, Canada. Entrepreneurship and Regional Development, 20 (3), 259–283. https://doi.org/10.1080/08985620701795525 .

Durão, D., Sarmento, M., Varela, V., & Maltez, L. (2005). Virtual and real-estate science and technology parks: A case study of Taguspark. Technovation, 25 (3), 237–244. https://doi.org/10.1016/S0166-4972(03)00110-X .

Eto, H. (2005). Obstacles to emergence of high/new technology parks, ventures and clusters in Japan. Technological Forecasting and Social Change, 72 (3), 359–373. https://doi.org/10.1016/j.techfore.2004.08.008 .

Etzkowitz, H., & Leydesdorff, L. (1996). Introduction: Universities in the Global Knowledge Economy: Triple Helix of University-Industry-Government Relations. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2932054 .

Etzkowitz, H., & Leydesdorff, L. (1999). The future location of research and technology transfer. The Journal of Technology Transfer, 24 (2–3), 111–123. https://doi.org/10.1023/A:1007807302841 .

Etzkowitz, H., & Leydesdorff, L. (2000). The dynamics of innovation: From National Systems and “mode 2” to a Triple Helix of university-industry-government relations. Research Policy, 29 (2), 109–123. https://doi.org/10.1016/S0048-7333(99)00055-4 .

Etzkowitz, H., Webster, A., Gebhardt, C., & Terra, B. R. C. (2000). The future of the university and the university of the future: Evolution of ivory tower to entrepreneurial paradigm. Research Policy, 29 (2), 313–330. https://doi.org/10.1016/S0048-7333(99)00069-4 .

Etzkowitz, H., & Zhou, C. (2018). Innovation incommensurability and the science park. R and D Management, 48, 73–87. https://doi.org/10.1111/radm.12266 .

Feldman, J. M. (2007). The managerial equation and innovation platforms: The case of Linköping and Berzelius science park. European Planning Studies, 15 (8), 1027–1045. https://doi.org/10.1080/09654310701448162 .

Feldman, M., Siegel, D., & Wright, M. (2019). New developments in innovation and entrepreneurial ecosystems. Industrial and Corporate Change, 28 (4), 817–826.

Felsenstein, D. (1994). University-related science parks—“seedbeds” or “enclaves” of innovation? Technovation, 14 (2), 93–110. https://doi.org/10.1016/0166-4972(94)90099-X .

Ferguson, R., & Olofsson, C. (2004). Science parks and the development of NTBFs—Location, survival and growth. The Journal of Technology Transfer, 29 (1), 5–17. https://doi.org/10.1023/b:jott.0000011178.44095.cd .

Fernández-Alles, M., Camelo-Ordaz, C., & Franco-Leal, N. (2014). Key resources and actors for the evolution of academic spin-offs. Journal of Technology Transfer, 40 (6), 976–1002. https://doi.org/10.1007/s10961-014-9387-2 .

Fikirkoca, A., & Saritas, O. (2012). Foresight for science parks: The case of Ankara University. Technology Analysis & Strategic Management, 24 (10), 1071–1085. https://doi.org/10.1080/09537325.2012.723688 .

Fukugawa, N. (2006). Science parks in Japan and their value-added contributions to new technology-based firms. International Journal of Industrial Organization, 24 (2), 381–400. https://doi.org/10.1016/j.ijindorg.2005.07.005 .

Fuller, D., Beynon, M., & Pickernell, D. (2019). Indexing third stream activities in UK universities: Exploring the entrepreneurial/enterprising university. Studies in Higher Education, 44 (1), 86–110.

Gebauer, A., Nam, C. W., & Parsche, R. (2005). Regional technology policy and factors shaping local innovation networks in small German cities. European Planning Studies . https://doi.org/10.1080/09654310500139301 .

Gerstlberger, W. (2004). Regional innovation systems and sustainability—Selected examples of international discussion. Technovation, 24 (9), 749–758. https://doi.org/10.1016/S0166-4972(02)00152-9 .

Giaretta, E. (2013). The trust “builders” in the technology transfer relationships: An Italian science park experience. Journal of Technology Transfer, 39 (5), 675–687. https://doi.org/10.1007/s10961-013-9313-z .

Gkypali, A., Kokkinos, V., Bouras, C., & Tsekouras, K. (2016). Science parks and regional innovation performance in fiscal austerity era: Less is more? Small Business Economics, 47 (2), 313–330. https://doi.org/10.1007/s11187-016-9717-x .

Guadix, J., Carrillo-Castrillo, J., Onieva, L., & Navascués, J. (2016). Success variables in science and technology parks. Journal of Business Research, 69 (11), 4870–4875. https://doi.org/10.1016/j.jbusres.2016.04.045 .

Gunasekara, C. (2006). Reframing the role of Universities in the development of regional innovation systems. Journal of Technology Transfer, 31 (1), 101–113. https://doi.org/10.1007/s10961-005-5016-4 .

Guy, I. (2002). A look at aston science park. Technovation, 16 (5), 217–218. https://doi.org/10.1016/0166-4972(96)00002-8 .

Gwebu, K. L., Sohl, J., & Wang, J. (2019). Differential performance of science park firms: An integrative model. Small Business Economics, 52 (1), 193–211. https://doi.org/10.1007/s11187-018-0025-5 .

Hansson, F., Husted, K., & Vestergaard, J. (2005). Second generation science parks: From structural holes jockeys to social capital catalysts of the knowledge society. Technovation, 25 (9), 1039–1049. https://doi.org/10.1016/j.technovation.2004.03.003 .

Harper, J. C., & Georghiou, L. (2005). Foresight in innovation policy: Shared visions for a science park and business-University links in a city region. Technology Analysis & Strategic Management, 17 (2), 147–160. https://doi.org/10.1080/09537320500088716 .

Helmers, C. (2019). Choose the neighbor before the house: Agglomeration externalities in a UK science park. Journal of Economic Geography, 19, 31–55. https://doi.org/10.1093/jeg/lbx042 .

Hobbs, K. G., Link, A. N., & Scott, J. T. (2017). Science and technology parks: An annotated and analytical literature review. Journal of Technology Transfer, 42 (4), 957–976. https://doi.org/10.1007/s10961-016-9522-3 .

Hommen, L., Doloreux, D., & Larsson, E. (2006). Emergence and growth of mjardevi science park in linkoping, Sweden. European Planning Studies, 14 (10), 1331–1361. https://doi.org/10.1080/09654310600852555 .

Hu, A. G. (2007). Technology parks and regional economic growth in China. Research Policy, 36 (1), 76–87. https://doi.org/10.1016/j.respol.2006.08.003 .

Hu, T.-S. (2008). Interaction among high-tech talent and its impact on innovation performance: A comparison of taiwanese science parks at different stages of development. European Planning Studies, 16 (2), 163–187. https://doi.org/10.1080/09654310701814462 .

Hu, T.-S., Lin, C.-Y., & Chang, S.-L. (2005). Technology-based regional development strategies and the emergence of technological communities: A case study of HSIP. Taiwan. Technovation, 25 (4), 367–380. https://doi.org/10.1016/j.technovation.2003.09.002 .

Huang, W. J., & Fernández-Maldonado, A. M. (2016). High-tech development and spatial planning: Comparing the Netherlands and Taiwan from an institutional perspective. European Planning Studies, 24 (9), 1662–1683. https://doi.org/10.1080/09654313.2016.1187717 .

Huang, K. F., Yu, C. M. J., & Seetoo, D. H. (2012). Firm innovation in policy-driven parks and spontaneous clusters: The smaller firm the better? Journal of Technology Transfer, 37 (5), 715–731. https://doi.org/10.1007/s10961-012-9248-9 .

Ishizaka, A., Pickernell, D., Huang, S., & Senyard, J. M. (2020). Examining knowledge transfer activities in UK universities: Advocating a PROMETHEE-based approach. International Journal of Entrepreneurial Behavior & Research. https://doi.org/10.1108/IJEBR-01-2020-0028 .

Jonsson, O. (2002). Innovation Processes and Proximity: The Case of IDEON Firms in Lund, Sweden. European Planning Studies, 10 (6), 705–722. https://doi.org/10.1080/0965431022000003771 .

Keupp, M. M., Palmié, M., & Gassmann, O. (2012). A reflective review of disruptive innovation theory. International Journal of Management Reviews, 14, 367–390.

Kihlgren, A. (2003). Promotion of innovation activity in Russia through the creation of science parks: The case of St Petersburg (1992–1998). Technovation, 23 (1), 65–76. https://doi.org/10.1016/S0166-4972(01)00077-3 .

Koh, F. C., Koh, W. T., & Tschang, F. T. (2005). An analytical framework for science parks and technology districts with an application to Singapore. Journal of business venturing , 20 (2), 217–239.

Kramer, J. P., Marinelli, E., Iammarino, S., & Diez, J. R. (2011). Intangible assets as drivers of innovation: Empirical evidence on multinational enterprises in German and UK regional systems of innovation. Technovation, 31 (9), 447–458. https://doi.org/10.1016/j.technovation.2011.06.005 .

Lai, H. C., & Shyu, J. Z. (2005). A comparison of innovation capacity at science parks across the Taiwan Strait: The case of Zhangjiang High-Tech Park and Hsinchu Science-based Industrial Park. Technovation, 25 (7), 805–813. https://doi.org/10.1016/j.technovation.2003.11.004 .

Lamperti, F., Mavilia, R., & Castellini, S. (2017). The role of Science Parks: A puzzle of growth, innovation and R&D investments. Journal of Technology Transfer, 42 (1), 158–183. https://doi.org/10.1007/s10961-015-9455-2 .

Layson, S. K., Leyden, D. P., & Neufeld, J. (2008). To admit or not to admit: The question of research park size. Economics of Innovation and New Technology, 17 (7–8), 691–699. https://doi.org/10.1080/10438590701785652 .

Lecluyse, L., Knockaert, M., & Spithoven, A. (2019). The contribution of science parks: A literature review and future research agenda. Journal of Technology Transfer, 44 (2), 559–595. https://doi.org/10.1007/s10961-018-09712-x .

Lee, W., & Yang, W. (2000). Cradle of Taiwan high technology industry development - Hsinchu Science Park (HSP). Technovation , 20 (1), 55–59.

Lenger, A. (2008). Regional innovation systems and the role of state: Institutional design and state universities in Turkey. European Planning Studies, 16 (8), 1101–1120. https://doi.org/10.1080/09654310802315781 .

Lew, Y. K., Khan, Z., & Cozzio, S. (2018). Gravitating toward the quadruple helix: International connections for the enhancement of a regional innovation system in Northeast Italy. R and D Management, 48, 44–59. https://doi.org/10.1111/radm.12227 .

Liberati, D., Marinucci, M., & Tanzi, G. M. (2016). Science and technology parks in Italy: Main features and analysis of their effects on the firms hosted. Journal of Technology Transfer, 41 (4), 694–729. https://doi.org/10.1007/s10961-015-9397-8 .

Lindelöf, P., & Löfsten, H. (2003). Science park location and new technology-based firms in Sweden—Implications for strategy and performance. Small Business Economics . https://doi.org/10.1023/A:1022861823493 .

Lindelöf, P., & Löfsten, H. (2004). Proximity as a resource base for competitive advantage: University-industry links for technology transfer. The Journal of Technology Transfer, 29 (3/4), 311–326. https://doi.org/10.1023/b:jott.0000034125.29979.ae .

Lindelöf, P., & Löfsten, H. (2006). Environmental hostility and firm behavior—An empirical examination of new technology-based firms on science parks. Journal of Small Business Management, 44 (3), 386–406. https://doi.org/10.1111/j.1540-627X.2006.00178.x .

Link, A., & Link, K. R. (2003). On the growth of US science parks. The Journal of Technology Transfer, 28 (1), 81–85. https://doi.org/10.1023/A:1021634904546 .

Link, A. N., & Scott, J. T. (2003a). The growth of research triangle park. Small Business Economics, 20 (2), 167–175. https://doi.org/10.1023/A:1022216116063 .

Link, A. N., & Scott, J. T. (2003b). US science parks: The diffusion of an innovation and its effects on the academic missions of universities. International Journal of Industrial Organization, 21 (9), 1323–1356. https://doi.org/10.1016/S0167-7187(03)00085-7 .

Link, A., & Scott, J. (2015). Research, Science, and Technology Parks: Vehicles for Technology Transfer, in The Chicago Handbook of University Technology Transfer and Academic Entrepreneurship (Eds: Link, Siegel, and Wright), The University of Chicago Press.

Link, A. N., & Scott, J. T. (2017). U.S. university research parks. In Universities and the Entrepreneurial Ecosystem (pp. 44–55). https://doi.org/10.1007/s11123-006-7126-x .

Löfsten, H., & Lindelöf, P. (2001). Science parks in Sweden—Industrial renewal and development ? R&D Management, 31 (3), 309–322. https://doi.org/10.1111/1467-9310.00219 .

Löfsten, H., & Lindelöf, P. (2002). Science Parks and the growth of new technology-based firms—Academic-industry links, innovation and markets. Research Policy, 31 (6), 859–876. https://doi.org/10.1016/S0048-7333(01)00153-6 .

Löfsten, H., & Lindelöf, P. (2003). Determinants for an entrepreneurial milieu: Science Parks and business policy in growing firms. Technovation, 23 (1), 51–64. https://doi.org/10.1016/S0166-4972(01)00086-4 .

Löfsten, H., & Lindelöf, P. (2005). R&D networks and product innovation patterns—Academic and non-academic new technology-based firms on Science Parks. Technovation, 25 (9), 1025–1037. https://doi.org/10.1016/j.technovation.2004.02.007 .

Looy, B. Van, Debackere, K., & Andries, P. (2003). Policies to stimulate regional innovation capabilities via university-industry collaboration: An analysis and an assessment. R and D Management, 33 (2), 209–229. https://doi.org/10.1111/1467-9310.00293 .

Macdonald, S. (2016). Milking the myth: innovation funding in theory and practice. R&D Management , 46 (2), 552–563.

Macpherson, A., & Holt, R. (2007). Knowledge, learning and small firm growth: A systematic review of the evidence. Research Policy, 36 (2), 172–192. https://doi.org/10.1016/j.respol.2006.10.001 .

Malairaja, C., & Zawdie, G. (2008). Science parks and university–industry collaboration in Malaysia. Technology Analysis & Strategic Management, 20 (6), 727–739. https://doi.org/10.1080/09537320802426432 .

Markman, G. D., Phan, P. H., Balkin, D. B., & Gianiodis, P. T. (2005). Entrepreneurship and university-based technology transfer. Journal of Business Venturing, 20 (2), 241–263. https://doi.org/10.1016/j.jbusvent.2003.12.003 .

Markman, G. D., Siegel, D. S., & Wright, M. (2008). Research and technology commercialization. Journal of Management Studies, 45 (8), 1401–1423. https://doi.org/10.1111/j.1467-6486.2008.00803.x .

Massey, D., & Wield, D. (2006). Science parks: A concept in science, society, and ‘space’ (A Realist Tale). Environment and Planning D: Society and Space, 10 (4), 411–422. https://doi.org/10.1068/d100411 .

McAdam, M., & McAdam, R. (2008). High tech start-ups in University Science Park incubators: The relationship between the start-up’s lifecycle progression and use of the incubator’s resources. Technovation, 28 (5), 277–290. https://doi.org/10.1016/j.technovation.2007.07.012 .

Mian, S., Lamine, W., & Fayolle, A. (2016). Technology business incubation: An overview of the state of knowledge. Technovation, 50–51, 1–12. https://doi.org/10.1016/j.technovation.2016.02.005 .

Mora-Valentín, E. M., Ortiz-de-Urbina-Criado, M., & Nájera-Sánchez, J. J. (2018). Mapping the conceptual structure of science and technology parks. Journal of Technology Transfer, 43 (5), 1410–1435. https://doi.org/10.1007/s10961-018-9654-8 .

Motohashi, K. (2013). The role of the science park in innovation performance of start-up firms: an empirical analysis of Tsinghua Science Park in Beijing. Asia Pacific Business Review, 19 (4), 578–599. https://doi.org/10.1080/13602381.2012.673841 .

Ng, W. K. B., Appel-Meulenbroek, R., Cloodt, M., & Arentze, T. (2019). Towards a segmentation of science parks: A typology study on science parks in Europe. Research Policy, 48 (3), 719–732. https://doi.org/10.1016/j.respol.2018.11.004 .

Pálmai, Z. (2004). An innovation park in Hungary: INNOTECH of the Budapest University of Technology and Economics. Technovation, 24 (5), 421–432. https://doi.org/10.1016/S0166-4972(02)00098-6 .

Park, S. C. (2004). The city of brain in South Korea: Daedeok science town. International Journal of Technology Management , 28 (3–6), 602–614.

Phan, P. H., Siegel, D. S., & Wright, M. (2005). Science parks and incubators: Observations, synthesis and future research. Journal of Business Venturing, 20 (2), 165–182. https://doi.org/10.1016/j.jbusvent.2003.12.001 .

Phillimore, J. (1999). Beyond the linear view of innovation in science park evaluation: An analysis of Western Australian Technology Park. Technovation, 19 (11), 673–680.

Pickernell, D., Ishizaka, A., Huang, S., & Senyard, J. (2019). Entrepreneurial university strategies in the UK context: Towards a research agenda. Management Decision .

Pilar Latorre, M., Hermoso, R., & Rubio, M. A. (2017). A novel network-based analysis to measure efficiency in science and technology parks: The ISA framework approach. Journal of Technology Transfer, 42 (6), 1255–1275. https://doi.org/10.1007/s10961-017-9585-9 .

Pittaway, L., Robertson, M., Munir, K., Denyer, D., & Neely, A. (2004). Networking and innovation: A systematic review of the evidence. International Journal of Management Reviews , 5 (3–4), 137–168.

Quintas, P., Wield, D., & Massey, D. (1992). Academic-industry links and innovation: Questioning the science park model. Technovation, 12 (3), 161–175. https://doi.org/10.1016/0166-4972(92)90033-E .

Radosevic, S., & Myrzakhmet, M. (2009). Between vision and reality: Promoting innovation through technoparks in an emerging economy. Technovation, 29 (10), 645–656. https://doi.org/10.1016/j.technovation.2009.04.001 .

Ramasamy, B., Chakrabarty, A., & Cheah, M. (2004). Malaysia’s leap into the future: An evaluation of the multimedia super corridor. Technovation, 24 (11), 871–883. https://doi.org/10.1016/S0166-4972(03)00049-X .

Ramirez, M., Li, X., & Chen, W. (2013). Comparing the impact of intra-and inter-regional labour mobility on problem-solving in a Chinese science park. Regional Studies , 47 (10), 1734–1751.

Ramírez-Alesón, M., & Fernández-Olmos, M. (2018). Unravelling the effects of Science Parks on the innovation performance of NTBFs. Journal of Technology Transfer, 43 (2), 482–505. https://doi.org/10.1007/s10961-017-9559-y .

Ratinho, T., & Henriques, E. (2010). The role of science parks and business incubators in converging countries: Evidence from Portugal. Technovation, 30 (4), 278–290. https://doi.org/10.1016/j.technovation.2009.09.002 .

Rip, A. (2002). Regional innovation systems and the advent of strategic science. Journal of Technology Transfer, 27 (1), 123–131. https://doi.org/10.1023/A:1013108906611 .

Salvador, E. (2011). Are science parks and incubators good “brand names” for spin-offs? The case study of Turin. Journal of Technology Transfer, 36 (2), 203–232. https://doi.org/10.1007/s10961-010-9152-0 .

Savino, T., Messeni Petruzzelli, A., & Albino, V. (2017). Search and recombination process to innovate: A review of the empirical evidence and a research agenda. International Journal of Management Reviews, 19 (1), 54–75.

Shearmur, R., & Doloreux, D. (2000). Science parks: Actors or reactors? Canadian science parks in their urban context. Environment and Planning A, 32 (6), 1065–1082. https://doi.org/10.1068/a32126 .

Siegel, D. S., Westhead, P., & Wright, M. (2003a). Assessing the impact of university science parks on research productivity: Exploratory firm-level evidence from the United Kingdom. International Journal of Industrial Organization, 21 (9), 1357–1369. https://doi.org/10.1016/S0167-7187(03)00086-9 .

Siegel, D. S., Westhead, P., & Wright, M. (2003b). Science parks and the performance of new technology-based firms: A review of recent UK Evidence and an Agenda for future research. Small Business Economics . https://doi.org/10.1023/A:1022268100133 .

Sofouli, E., & Vonortas, N. S. (2007). S&T Parks and business incubators in middle-sized countries: The case of Greece. Journal of Technology Transfer, 32 (5), 525–544. https://doi.org/10.1007/s10961-005-6031-1 .

Squicciarini, M. (2008). Science Parks’ tenants versus out-of-Park firms: Who innovates more? A duration model. Journal of Technology Transfer, 33 (1), 45–71. https://doi.org/10.1007/s10961-007-9037-z .

Staudt, E., Bock, J., & Muhlemeyer, P. (1994). Technology centres and science parks: Agents or competence centres for small businesses? International Journal of Technology Management , 9 (2), 213–226.

Storey, D. J., & Tether, B. S. (1998). Public policy measures to support new technology-based firms in the European Union. Research Policy, 26 (9), 1037–1057. https://doi.org/10.1016/S0048-7333(97)00058-9 .

Takeda, Y., Kajikawa, Y., Sakata, I., & Matsushima, K. (2008). An analysis of geographical agglomeration and modularized industrial networks in a regional cluster: A case study at Yamagata prefecture in Japan. Technovation, 28 (8), 531–539. https://doi.org/10.1016/j.technovation.2007.12.006 .

Tan, J. (2006). Growth of industry clusters and innovation: Lessons from Beijing Zhongguancun Science Park. Journal of Business Venturing, 21 (6), 827–850. https://doi.org/10.1016/j.jbusvent.2005.06.006 .

Tödtling, F., & Kaufmann, A. (2002). SMEs in regional innovation systems and the role of innovation support—The case of upper Austria. Journal of Technology Transfer, 27 (1), 15–26. https://doi.org/10.1023/A:1013140318907 .

Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. British Journal of Management, 14 (3), 207–222.

Tsai, M. C., Wen, C. H., & Chen, C. S. (2007). Demand choices of high-tech industry for logistics service providers-an empirical case of an offshore science park in Taiwan. Industrial Marketing Management, 36 (5), 617–626. https://doi.org/10.1016/j.indmarman.2006.03.002 .

Turner, N., Swart, J., & Maylor, H. (2013). Mechanisms for managing ambidexterity: A review and research agenda. International Journal of Management Reviews , 15 (3), 317–332.

Ubeda, F., Ortiz-de-Urbina-Criado, M., & Mora-Valentín, E. M. (2019). Do firms located in science and technology parks enhance innovation performance? The effect of absorptive capacity. Journal of Technology Transfer, 44 (1), 21–48. https://doi.org/10.1007/s10961-018-9686-0 .

Vaidyanathan, G. (2008). Technology parks in a developing country: The case of India. The Journal of Technology Transfer , 33 (3), 285–299.

Vásquez-Urriago, Á. R., Barge-Gil, A., & Modrego Rico, A. (2016). Science and Technology Parks and cooperation for innovation: Empirical evidence from Spain. Research Policy, 45 (1), 137–147. https://doi.org/10.1016/j.respol.2015.07.006 .

Vedovello, C. (2002). Science parks and university-industry interaction: Geographical proximity between the agents as a driving force. Technovation, 17 (9), 491–531. https://doi.org/10.1016/s0166-4972(97)00027-8 .

Watkins-Mathys, L., & Foster, M. J. (2006). Entrepreneurship: The missing ingredient in China’s STIPs? Entrepreneurship and Regional Development, 18 (3), 249–274. https://doi.org/10.1080/08985620600593161 .

Westhead, P. (1997). R&D “inputs” and “outputs” of technology-based firms located on and off Science Parks. R and D Management, 27 (1), 45–62. https://doi.org/10.1111/1467-9310.00041 .

Westhead, P., & Batstone, S. (1998). Independent technology-based firms: The perceived benefits of a science park location. Urban Studies, 35 (12), 2197–2219. https://doi.org/10.1080/0042098983845 .

Williams, M. (2002). Generalizations in qualitative research. In T. May (Ed.), Qualitative research in action (pp. 125–143). London: Sage.

Wonglimpiyarat, J. (2010). Commercialization strategies of technology: Lessons from Silicon Valley. The Journal of Technology Transfer , 35 (2), 225–236.

Wright, M, Link, A. N., & Amoroso, S. (2019). Lessons learned and a future and policy agenda on science parks in science and technology parks and regional economic development, (Eds: Amoroso, Link, Wright), Palgrave Advances in the Economics of Innovation and Technology. https://doi.org/10.1007/978-3-030-30963-3_12 .

Wright, M., Liu, X., Buck, T., & Filatotchev, I. (2008). Returnee entrepreneurs, science park location choice and performance: An analysis of high-technology SMEs in China. Entrepreneurship: Theory and Practice, 32 (1), 131–155. https://doi.org/10.1111/j.1540-6520.2007.00219.x .

Wright, M., Siegel, D., & Mustar, P. (2017). An emerging ecosystem for student start-ups. Journal of Technology Transfer, 42 (4), 909–922.

Xie, K., Song, Y., Zhang, W., Hao, J., Liu, Z., & Chen, Y. (2018). Technological entrepreneurship in science parks: A case study of Wuhan Donghu High-Tech Zone. Technological Forecasting and Social Change, 135, 156–168. https://doi.org/10.1016/j.techfore.2018.01.021 .

Yang, D. Y. R., Hsu, J. Y., & Ching, C. H. (2009a). Revisiting the silicon Island? The geographically varied “Strategic Coupling” in the development of high-technology parks in Taiwan. Regional Studies, 43 (3), 369–384. https://doi.org/10.1080/00343400902777067 .

Yang, C. H., Motohashi, K., & Chen, J. R. (2009b). Are new technology-based firms located on science parks really more innovative? Evidence from Taiwan. Research Policy, 38 (1), 77–85. https://doi.org/10.1016/j.respol.2008.09.001 .

Yoon, H., Yun, S., Lee, J., & Phillips, F. (2015). Entrepreneurship in East Asian regional innovation systems: Role of social capital. Technological Forecasting and Social Change, 100, 83–95. https://doi.org/10.1016/j.techfore.2015.06.028 .

Zhang, F. (2015). Building biotech in Shanghai: A perspective of regional innovation system. European Planning Studies, 23 (10), 2062–2078. https://doi.org/10.1080/09654313.2014.1001322 .

Zhu, D., & Tann, J. (2005). A regional innovation system in a small-sized region: A clustering model in Zhongguancun Science Park. Technology Analysis & Strategic Management, 17 (3), 375–390. https://doi.org/10.1080/09537320500211789 .

Zou, Y., & Zhao, W. (2013). Anatomy of Tsinghua University science park in China: Institutional evolution and assessment. Journal of Technology Transfer, 39 (5), 663–674. https://doi.org/10.1007/s10961-013-9314-y .

Author information

Authors and Affiliations

Strategy, Enterprise and Innovation Subject Group, Faculty of Business and Law, University of Portsmouth, Richmond Building, Portland Street, Portsmouth, PO13DE, UK

T. Theeranattapong, D. Pickernell & C. Simms

Corresponding author

Correspondence to C. Simms .

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Theeranattapong, T., Pickernell, D. & Simms, C. Systematic literature review paper: the regional innovation system-university-science park nexus. J Technol Transf 46 , 2017–2050 (2021). https://doi.org/10.1007/s10961-020-09837-y

Accepted : 08 December 2020

Published : 02 January 2021

Issue Date : December 2021

DOI : https://doi.org/10.1007/s10961-020-09837-y

  • Science park
  • Regional innovation system
  • Technology transfer
  • Open access
  • Published: 12 October 2020

A systematic literature review of researchers’ and healthcare professionals’ attitudes towards the secondary use and sharing of health administrative and clinical trial data

  • Elizabeth Hutchings   ORCID: orcid.org/0000-0002-6030-954X 1 ,
  • Max Loomes   ORCID: orcid.org/0000-0003-1042-0968 2 ,
  • Phyllis Butow   ORCID: orcid.org/0000-0003-3562-6954 2 , 3 , 4 &
  • Frances M. Boyle   ORCID: orcid.org/0000-0003-3798-1570 1 , 5  

Systematic Reviews volume 9, Article number: 240 (2020)

A systematic literature review of researchers' and healthcare professionals' attitudes towards the secondary use and sharing of health administrative and clinical trial data was conducted using electronic data searching. Eligible articles included those reporting qualitative or quantitative original research and published in English. No restrictions were placed on publication dates, study design, or disease setting. Two authors were involved in all stages of the review process; conflicts were resolved by consensus. Data were extracted independently using a pre-piloted data extraction template. Quality and bias were assessed using the QualSyst criteria for qualitative studies. Eighteen eligible articles were identified and categorised into four key themes: barriers, facilitators, access, and ownership; 14 subthemes were identified. While respondents were generally supportive of data sharing, concerns were expressed about access to data, data storage infrastructure, and consent. Perceptions of data ownership and acknowledgement, trust, and policy frameworks influenced sharing practice, as did age, discipline, professional focus, and world region. Younger researchers were less willing to share data, although they were willing to share in circumstances where they were acknowledged. While there is a general consensus that increased data sharing in health is beneficial to the wider scientific community, substantial barriers remain.

Systematic review registration

PROSPERO CRD42018110559

Healthcare systems generate large amounts of data; approximately 80 MB of data are generated per patient per year [ 1 ]. It is projected that this figure will continue to grow with an increasing reliance on technologies and diagnostic capabilities. Healthcare data provides an opportunity for secondary data analysis with the capacity to greatly influence medical research, service planning, and health policy.

There are many forms of data collected in the healthcare setting including administrative and clinical trial data which are the focus of this review. Administrative data collected during patients’ care in the primary, secondary, and tertiary settings can be analysed to identify systemic issues and service gaps, and used to inform improved health resourcing. Clinical trials play an essential role in furthering our understanding of disease, advancing new therapeutics, and developing improved supportive care interventions. However, clinical trials are expensive and can take several years to complete; a frequently quoted figure is that it takes 17 years for 14% of clinical research to benefit the patient [ 2 , 3 ].

Those who argue for increased data sharing in healthcare suggest that it may lead to improved treatment decisions based on all available information [ 4 , 5 ], improved identification of causes and clinical manifestations of disease [ 6 ], and provide increased research transparency [ 7 ]. In rare diseases, secondary data analysis may greatly accelerate the medical community’s understanding of the disease’s pathology and influence treatment.

Internationally, there are signs of movement towards greater transparency, particularly with regard to clinical research data. This change has been driven by governments [ 8 ], peak bodies [ 9 ], and clinician led initiatives [ 5 ]. One initiative led by the International Council of Medical Journal Editors (ICMJE) now requires a data sharing plan for all clinical research submitted for publication in a member scientific journal [ 9 ]. Further, international examples of data sharing can be seen in projects such as The Cancer Genome Atlas (TCGA) [ 10 ] dataset and the Surveillance, Epidemiology, and End Results (SEER) [ 11 ] database which have been used extensively for cancer research.

However, consent, data ownership, privacy, intellectual property rights, and potential for misinterpretation of data [ 12 ] remain areas of concern to individuals who are more circumspect about changing the data sharing norm. To date, there has been no published synthesis of views on data sharing from the perspectives of diverse professional stakeholders. Thus, we conducted a systematic review of the literature on the views of researchers and healthcare professionals regarding the sharing of health data.

This systematic literature review was part of a larger review of articles addressing data sharing, undertaken in accordance with the PRISMA statement for systematic reviews and meta-analysis [ 13 ]. The protocol was prospectively registered on PROSPERO (www.crd.york.ac.uk/PROSPERO, CRD42018110559).

The following databases were searched: EMBASE/MEDLINE, Cochrane Library, PubMed, CINAHL, Informit Health Collection, PROSPERO Database of Systematic Reviews, PsycINFO, and ProQuest. The final search was conducted on 21 October 2018. No date restrictions were placed on the search; key search terms are listed in Table 1 . Papers were considered eligible if they: were published in English; were published in a peer review journal; reported original research, either qualitative or quantitative with any study design, related to data sharing in any disease setting; and included subjects over 18 years of age. Systematic literature reviews were included in the wider search but were not included in the results. Reference list and hand searching were undertaken to identify additional papers. Papers were considered ineligible if they focused on electronic health records, biobanking, or personal health records or were review articles, opinion pieces/articles/letters, editorials, or theses from masters or doctoral research. Duplicates were removed and title and abstract and full-text screening were undertaken using the Cochrane systematic literature review program Covidence [ 14 ]. Two authors were involved in all stages of the review process; conflicts were resolved by consensus.
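
As an illustration only (not the authors' actual tooling), the eligibility criteria described above can be encoded as a simple filter over deduplicated records. The field names (language, peer_reviewed, is_original_research, publication_type, min_subject_age) are hypothetical assumptions; a real screening workflow in Covidence operates on titles, abstracts and full texts rather than structured fields.

```python
# Minimal sketch: applying the stated inclusion/exclusion criteria to records.
# Field names and example records are illustrative assumptions.

def is_eligible(record: dict) -> bool:
    """Return True if a record passes the inclusion criteria described above."""
    excluded_types = {"review", "opinion", "editorial", "letter", "thesis"}
    return (
        record.get("language") == "English"
        and record.get("peer_reviewed", False)
        and record.get("is_original_research", False)
        and record.get("publication_type") not in excluded_types
        and record.get("min_subject_age", 0) >= 18
    )

records = [
    {"title": "Attitudes to data sharing", "language": "English",
     "peer_reviewed": True, "is_original_research": True,
     "publication_type": "article", "min_subject_age": 18},
    {"title": "An editorial on data sharing", "language": "English",
     "peer_reviewed": True, "is_original_research": False,
     "publication_type": "editorial", "min_subject_age": 18},
]

eligible = [r for r in records if is_eligible(r)]
print(f"{len(eligible)} of {len(records)} records eligible for full-text screening")
```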

Quality and bias were assessed at a study level using the QualSyst system for quantitative and qualitative studies as described by Kmet et al. [ 15 ]. A maximum score of 20 is assigned to articles of high quality and low bias; the final QualSyst score is a proportion of the total, with a possible score ranging from 0.0 to 1.0 [ 15 ].
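
The scoring arithmetic is simple: the points awarded across the checklist items are divided by the maximum possible points, giving a proportion between 0.0 and 1.0. The sketch below makes this explicit; the item scores are invented, and per-item scoring of 0/1/2 over ten items is an assumption chosen so that the maximum matches the score of 20 mentioned above.

```python
# Minimal sketch of the QualSyst proportion: awarded points / maximum points.
# Item scores are hypothetical; 10 items scored 0-2 gives a maximum of 20.

def qualsyst_score(item_scores, max_per_item=2):
    """Return the summary score as a proportion of the maximum (0.0-1.0)."""
    total = sum(item_scores)
    maximum = max_per_item * len(item_scores)
    return round(total / maximum, 2)

example_items = [2, 2, 1, 2, 2, 1, 2, 2, 2, 1]  # hypothetical per-item scores
print(qualsyst_score(example_items))             # 0.85
```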

Data extraction was undertaken using a pre-piloted form in Microsoft Office Excel. Data points included author, country and year of study, study design and methodology, health setting, and key themes and results. Where available, detailed information on research participants was extracted including age, sex, clinical/academic employment setting, publication and grant history, career stage, and world region.
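
For readers who prefer to see the extraction template as a data structure, a minimal sketch is given below. The class and example values are hypothetical; the fields simply mirror the data points listed above, with optional fields for participant details that were extracted only where available.

```python
# Sketch of a pre-piloted extraction form as a typed record (illustrative only).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExtractionRecord:
    author: str
    country: str
    year: int
    study_design: str
    health_setting: str
    key_themes: list = field(default_factory=list)  # themes noted during extraction
    participant_age: Optional[str] = None           # extracted "where available"
    participant_sex: Optional[str] = None
    career_stage: Optional[str] = None
    world_region: Optional[str] = None

example = ExtractionRecord(
    author="Hypothetical et al.", country="Australia", year=2015,
    study_design="cross-sectional survey", health_setting="oncology",
    key_themes=["barriers", "access"],
)
print(example)
```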

Quantitative data were summarised using descriptive statistics. Synthesis of qualitative findings used a meta-ethnographic approach, in accordance with guidelines from Lockwood et al. [ 16 ]. The main themes of each qualitative study were first identified and then combined, if relevant, into categories of commonality. Using a constant comparative approach, higher order themes and subthemes were developed. Quantitative data relevant to each theme were then incorporated. Using a framework analysis approach as described by Gale et al. [ 17 ], the perspectives of different professional groups (researchers, healthcare professionals, data custodians, and ethics committees) towards data sharing were identified. Where differences occurred, they are highlighted in the results. Similarly, where systematic differences occurred according to other characteristics (such as age or years of experience), these are highlighted.
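
The bookkeeping behind grouping study-level themes into higher-order categories can be sketched in a few lines; the studies, themes and mapping below are invented for illustration and do not reproduce the review's actual coding frame.

```python
# Toy sketch: combining study-level themes into higher-order categories.
from collections import defaultdict

study_themes = {
    "Study A": ["fear of scooping", "privacy concerns"],
    "Study B": ["privacy concerns", "lack of time"],
    "Study C": ["desire for acknowledgement", "lack of time"],
}

# Analyst-defined mapping from study-level themes to higher-order themes
higher_order = {
    "fear of scooping": "barriers",
    "privacy concerns": "barriers",
    "lack of time": "barriers",
    "desire for acknowledgement": "facilitators",
}

grouped = defaultdict(set)
for study, themes in study_themes.items():
    for theme in themes:
        grouped[higher_order[theme]].add(study)

for category, studies in sorted(grouped.items()):
    print(category, "->", sorted(studies))
```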

This search identified 4019 articles, of which 241 underwent full-text screening; 73 articles met the inclusion criteria for the larger review. Five systematic literature reviews were excluded as was one article which presented duplicate results; this left a total of 67 articles eligible for review. See Fig. 1 for the PRISMA diagram describing study screening.
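
Laid out as a simple tally (a sketch using only the counts quoted in the text), the screening flow is:

```python
# PRISMA-style tally of the screening stages reported above.
stages = [
    ("Records screened (title/abstract)", 4019),
    ("Full-text articles assessed", 241),
    ("Articles meeting criteria for the larger review", 73),
    ("Eligible after excluding reviews and one duplicate", 67),
]
for label, n in stages:
    print(f"{label:50s} {n:>5d}")
```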

Figure 1: PRISMA flow diagram.

This systematic literature review was originally developed to identify attitudes towards secondary use and sharing of health administrative and clinical trial data in breast cancer. However, as there was a paucity of material identified specifically related to this group, we present the multidisciplinary results of this search, and where possible highlight results specific to breast cancer, and cancer more generally. We believe that the material identified in this search is relevant and reflective of the wider attitudes towards data sharing within the scientific and medical communities and can be used to inform data sharing strategies in breast cancer.

Eighteen [ 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ] of the 67 articles addressed the perspectives of clinical and scientific researchers, data custodians, and ethics committees and were analysed for this paper (Table 2 ). The majority ( n = 16) of articles focused on the views of researchers and health professionals [ 18 , 19 , 20 , 21 , 22 , 24 , 25 , 26 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ]; only one article each focused on data custodians [ 27 ] and ethics committees [ 23 ]. Four articles [ 18 , 19 , 21 , 35 ] included a discussion on the attitudes of both researchers and healthcare professionals and patients; only results relating to researchers/clinicians are included in this analysis (Fig. 1 ).

Study design, location, and disciplines

Several study methodologies were used, including surveys ( n = 11) [ 24 , 25 , 26 , 27 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ], interviews and focus groups ( n = 6) [ 18 , 19 , 20 , 21 , 22 , 23 ], and mixed methods ( n = 1) [ 28 ]. Studies were conducted in several countries and regions; a breakdown by country and study is available in Table 3 .

In addition to papers focusing on general health and sciences [ 18 , 21 , 22 , 24 , 25 , 26 , 29 , 30 , 31 , 32 , 33 , 34 ], two articles included views from both science and non-science disciplines [ 27 , 28 ]. Multiple sclerosis (MS) [ 19 ], mental health [ 35 ], and human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS)/tuberculosis (TB) [ 20 ] were each the subject of one article.

Study quality

Results of the quality assessment are provided in Table 2 . QualSyst [15] scores ranged from 0.7 to 1.0 (possible range 0.0 to 1.0). While none were blinded studies, most provided clear information on respondent selection, data analysis methods, and justifiable study design and methodology.

Four key themes (barriers, facilitators, access, and ownership) and 14 subthemes were identified. A graphical representation of article themes is presented in Fig. 2 . Two articles reflect the perspective of research ethics committees [ 23 ] and data custodians [ 27 ]; concerns noted by these groups are similar to those highlighted by researchers and healthcare professionals.

Figure 2: Graphic representation of key themes and subthemes identified.

Barriers and facilitators

Reasons for not sharing

Eleven articles identified barriers to data sharing [ 20 , 22 , 24 , 25 , 27 , 29 , 30 , 31 , 32 , 33 , 34 ]. Concerns cited by respondents included other researchers taking their results [ 24 , 25 ], having data misinterpreted or misattributed [ 24 , 27 , 31 , 32 ], loss of opportunities to maximise intellectual property [ 24 , 25 , 27 ], and loss of publication opportunities [ 24 , 25 ] or funding [ 25 ]. Results of a qualitative study showed respondents emphasised the competitive value of research data and its capacity to advance an individual’s career [ 20 ] and the potential for competitive disadvantage with data sharing [ 22 ]. Systematic issues related to increased data sharing were noted in several articles where it was suggested the barriers are ‘deeply rooted in the practices and culture of the research process as well as the researchers themselves’ [ 33 ] (p. 1), and that scientific competition and a lack of incentive in academia to share data remain barriers to increased sharing [ 30 ].

Insufficient time, lack of funding, limited storage infrastructure, and lack of procedural standards were also noted as barriers [ 33 ]. Quantitative results showed that some researchers did not have the right to make their data public, or that the study sponsor imposed no requirement to share [ 33 ]. Maintaining the balance between investigator and funder interests and the protection of research subjects was also cited as a barrier [ 31 ]. Concerns about privacy were noted in four articles [ 25 , 27 , 29 , 30 ]; one study indicated that clinical researchers were significantly more concerned with issues of privacy compared to scientific researchers [ 25 ]. The results of one qualitative study indicated that clinicians were more cautious than patients regarding the inclusion of personal information in a disease-specific registry; the authors suggest this may result from the potential for legal challenges in the setting of a lack of explicit consent and consistent guidelines [ 19 ]. Researchers, particularly clinical staff, indicated that they did not see sharing data in a repository as relevant to their work [ 29 ].

Trust was also identified as a barrier to greater data sharing [ 32 ]. Rathi et al. identified that researchers were likely to withhold data if they mistrusted the intent of the researcher requesting the information [ 32 ]. Ethical, moral, and legal issues were other potential barriers cited [ 19 , 22 ]. In one quantitative study, 74% of respondents ( N = 317) indicated that ensuring appropriate data use was a concern; other concerns included data not being appropriate for the requested purpose [ 32 ]. Concerns about data quality were also cited as a barrier to data reuse; some respondents suggested that there was a perceived negative association of data reuse among health scientists [ 30 ].

Reasons for sharing

Eleven articles [ 19 , 20 , 21 , 22 , 24 , 25 , 29 , 30 , 31 , 32 , 33 ] discussed the reasons identified by researchers and healthcare professionals for sharing health data; broadly the principle of data sharing was seen as a desirable norm [ 25 , 31 ]. Cited benefits included improvements to the delivery of care, communication and receipt of information, impacts on care and quality of life [ 19 ], contributing to the advancement of science [ 20 , 24 , 29 ], validating scientific outputs, reducing duplication of scientific effort and minimising research costs [ 20 ], and promoting open science [ 31 , 32 ]. Professional reasons for sharing data included academic benefit and recognition, networking and collaborative opportunities [ 20 , 24 , 29 , 31 ], and contributing to the visibility of their research [ 24 ]. Several articles noted the potential of shared data for enabling faster access to a wider pool of patients [ 21 ] for research, improved access to population data for longitudinal studies [ 22 ], and increased responsiveness to public health needs [ 20 ]. In one study, a small percentage of respondents indicated that there were no benefits from sharing their data [ 24 ].

Analysis of quantitative survey data indicated that the perceived usefulness of data was most strongly associated with reuse intention [ 30 ]. The lack of access to data generated by other researchers or institutions was seen as a major impediment to progress in science [ 33 ]. In a second study, quantitative data showed no significant differences in reasons for sharing by clinical trialists’ academic productivity, geographic location, trial funding source or size, or the journal in which the results were published [ 32 ]. Attitudes towards sharing in order to receive academic benefits or recognition differed significantly based on the respondent’s geographic location; those from Western Europe were more willing to share compared to respondents in the USA or Canada, and the rest of the world [ 32 ].

Views on sharing

Seven articles [ 19 , 20 , 21 , 29 , 31 , 33 , 34 ] discussed researchers’ and healthcare professionals’ views relating to sharing data, with a broad range of views noted. Two articles, both qualitative, discussed the role of national registries [ 21 ], and data repositories [ 31 ]. Generally, there was clear support for national research registers and an acceptance for their rationale [ 21 ], and some respondents believed that sharing de-identified data through data repositories should be required and that when requested, investigators should share data [ 31 ]. Sharing de-identified data for reasons beyond academic and public health benefit were cited as a concern [ 20 ]. Two quantitative studies noted a proportion of researchers who believed that data should not be made available [ 33 , 34 ]. Researchers also expressed differences in how shared data should be managed; the requirement for data to be ‘gate-kept’ was preferred by some, while others were happy to relinquish control of their data once curated or on release [ 20 ]. Quantitative results indicated that scientists were significantly more likely to rank data reuse as highly relevant to their work than clinicians [ 29 ], but not all scientists shared data equally or had the same views about data sharing or reuse [ 33 ]. Some respondents argued that not all data were equal and therefore should only be shared in certain circumstances. This was in direct contrast to other respondents who suggested that all data should be shared, all of the time [ 20 ].

Differences by age, background, discipline, professional focus, and world region

Differences in attitudes towards shared data were noted by age, professional focus, and world region [ 25 , 27 , 33 , 34 ]. Younger researchers, aged between 20–39 and 40–49 years, were less likely to share their data with others (39% and 38% respectively) compared to other age groups; respondents over 50 years of age were more willing (46%) to share [ 33 ]. Interestingly, while less willing to share, younger researchers also believed that the lack of access to data was a major impediment to science and their research [ 33 ]. Where younger researchers were able to place conditions on access to their data, rates of willingness to share were increased [ 33 ].

Respondents from the disciplines of education, medicine/health science, and psychology were more inclined than others to agree that their data should not be available for others to use in the first place [ 34 ]. However, results from one study indicated that researchers from the medical field and social sciences were less likely to share compared to other disciplines [ 33 ]. For example, results of a quantitative study showed that compared to biologists, who reported sharing 85% of their data, medical and social sciences reported sharing their data 65% and 58% of the time, respectively [ 33 ].

One of the primary reasons for controlling access to data, identified in a study of data custodians, was due to a desire to avoid data misuse; this was cited as a factor for all surveyed data repositories except those of an interdisciplinary nature [ 27 ]. Limiting access to certain types of research and ensuring attribution were not listed as a concern for sociology, humanities or interdisciplinary data collections [ 27 ]. Issues pertaining to privacy and sensitive data were only cited as concerns for data collections related to humanities, social sciences, and biology, ecology, and chemistry; concerns regarding intellectual property were also noted [ 27 ]. The disciplines of biology, ecology, and chemistry and social sciences had the most policy restrictions on the use of data held in their repositories [ 27 ].

Differences in data sharing practices were also noted by world region. Respondents not from North American and European countries were more willing to place their data on a central repository; however, they were also more likely to place conditions on the reuse of their data [ 33 , 34 ].

Experience of data sharing

The experience of data sharing among researchers was discussed in nine articles [ 20 , 24 , 25 , 26 , 28 , 29 , 30 , 31 , 32 , 33 ]. Data sharing arrangements were highly individual and ranged from ad hoc and informal processes to formal procedures enforced by institutional policies in the form of contractual agreements, with respondents indicating data sharing behaviour ranging from sharing no data to sharing all data [ 20 , 26 , 31 ]. Quantitative data from one study showed that researchers were more inclined to share data prior to publication with people that they knew compared to those they did not; post publication, these figures were similar between groups [ 24 ]. While many researchers were prepared to share data, results of a survey identified a preference of researchers to collect data themselves, followed by their team, or by close colleagues [ 26 ].

Differences in the stated rate of data sharing compared to the actual rate of sharing [ 25 ] were noted. In a large quantitative study ( N = 1329), nearly one third of respondents chose not to answer whether they make their data available to others; of those who responded to the question, 46% reported they do not make their data electronically available to others [ 33 ]. By discipline, differences in the rate of refusal to share were higher in chemistry compared to non-science disciplines such as sociology [ 25 ]. Respondents who were more academically productive (> 25 articles over the past 3 years) reported that they have or would withhold data to protect research subjects less frequently than those who were less academically productive or received industry funding [ 32 ].

Attitudes to sharing de-identified data via data repositories were discussed in two articles [ 29 , 31 ]. A majority of respondents in one study indicated that de-identified data should be shared via a repository and that it should be shared when requested. A lack of experience in uploading data to repositories was noted as a barrier [ 29 ]. When data was shared, most researchers included additional materials, such as metadata or a protocol description, to support their data [ 29 ].

Two articles [ 28 , 30 ] focused on processes and variables associated with sharing. Norms, data infrastructure/organisational support, and research communities were identified as important factors in a researcher’s attitude towards data sharing [ 28 , 30 ]. Only a moderate correlation between data reuse and data sharing was found, suggesting that these two variables are not strongly linked. Furthermore, sharing data and self-reported data reuse were also only moderately associated (Pearson’s correlation of 0.25 ( p ≤ 0.001)) [ 26 ].
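
For readers unfamiliar with the statistic quoted above, the sketch below computes a sample Pearson correlation coefficient from two sets of scores. The scores are fabricated for illustration only and do not reproduce the reviewed study's r = 0.25.

```python
# Sample Pearson correlation coefficient (illustrative data only).
from statistics import mean, stdev

def pearson_r(x, y):
    """r = sample covariance of x and y divided by the product of their standard deviations."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

data_sharing = [3, 4, 2, 5, 3, 4, 1, 2]  # hypothetical Likert-style scores
data_reuse   = [2, 3, 3, 4, 2, 5, 2, 1]
print(round(pearson_r(data_sharing, data_reuse), 2))
```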

Predictors of data sharing and norms

Two articles [ 26 , 30 ] discussed the role of social norms and an individual’s willingness to share health data. Perceived efficacy and efficiency of data reuse were strong predictors of data sharing [ 26 ] and the development of a ‘positive social norm towards data sharing support(s)[ed] researcher data reuse intention’ [ 30 ] (p. 400).

Policy framework

The establishment of clear policies and procedures to support data sharing was highlighted in two articles [ 22 , 28 ]. The presence of ambiguous data sharing policies was noted as a major limitation, particularly in primary care and the increased adoption of health informatics systems [ 22 ]. Policies that support an efficient exchange system allowing for the maximum amount of data sharing are preferred and may include incentives such as formal recognition and financial reimbursement; a framework for this is proposed in Fecher et al. [ 28 ].

Research funding

The requirement to share data funded by public monies was discussed in one article [ 25 ]. Some cases were reported of researchers refusing to share data funded by tax-payer funds; reasons for refusal included a potential reduction in future funding or publishing opportunities [ 25 ].

Access and ownership

Articles relating to access and ownership were grouped together and seven subthemes were identified.

Access, information systems, and metadata

Ten articles [ 19 , 20 , 21 , 22 , 26 , 27 , 29 , 33 , 34 , 35 ] discussed the themes of access, information systems, and the use of metadata. Ensuring privacy protections in a prospective manner was seen as important for data held in registries [ 19 ]. In the setting of mental health, researchers indicated that patients should have more choices for controlling access to shared registry data [ 35 ]. The use of guardianship committees [ 19 ] or gate-keepers [ 20 ] was seen as important in ensuring the security and access to data held in registries by some respondents; however, many suggested that a researcher should relinquish control of the data collection once curated or released, unless embargoed [ 20 ]. Reasons for maintaining control over registry data included ensuring attribution, restricting commercial research, protecting sensitive (non-personal) information, and limiting certain types of research [ 27 ]. Concerns about security and confidentiality were noted as important and assurances about these needed to be provided; accountability and transparency mechanisms also need to be included [ 21 ]. Many respondents believed that access to the registry data by pharmaceutical companies and marketing agencies was not considered appropriate [ 19 ].

Respondents to a survey from medicine and social sciences were less likely to agree to have all data included on a central repository with no restrictions [ 33 ]; notably, this was also reflected in the results of qualitative research which indicated that health professionals were more cautious than patients about the inclusion of personal data within a disease specific register [ 19 ].

While many researchers stated that they commonly shared data directly with other researchers, most did not have experience with uploading data to repositories [ 29 ]. Results from a survey indicated that younger respondents have more data access restrictions and thought that their data is easier to access significantly more than older respondents [ 34 ]. In the primary care setting, concerns were noted about the potential for practitioners to block patient involvement in a registry by refusing access to a patient’s personal data or by not giving permission for the data to be extracted from their clinical system [ 21 ]. There was also resistance in primary care towards health data amalgamation undertaken for an unspecified purpose [ 22 ]; respondents were not in favour of systems which included unwanted functionality (do not want/need), inadequate attributes (capability and receptivity) of the practice, or undesirable impact on the role of the general practitioner (autonomy, status, control, and workflow) [ 22 ].

Access to comprehensive metadata is needed ‘to support the correct interpretation of the data’ [ 26 ] (p. 4) at a later stage. When additional materials were shared, most researchers shared contextualising information or a description of the experimental protocol [ 29 ]. The use of metadata standards was not universal, with some respondents using their own [ 33 ].

Several articles highlighted the impact of data curation on researchers’ time [ 20 , 21 , 22 , 29 , 33 ] or finances [ 24 , 28 , 29 , 33 , 34 ]; these were seen as potential barriers to increased registry adoption [ 21 ]. Tasks required for curation included preparing data for dissemination in a usable format and uploading data to repositories. The importance of ensuring that the data is accurately preserved for future reuse was highlighted; it must be presented in a retrievable and auditable manner [ 20 ]. The amount of time required to curate data ranged from ‘no additional time’ to ‘greater than ten hours’ [ 29 ]. In one study, no clinical respondent had their data in a sharable format [ 29 ]. In the primary care setting, health information systems which promote sharing were not seen as being beneficial if they required standardisation of processes and/or sharing of clinical notes [ 22 ]. Further, spending time on non-medical issues in a time-poor environment [ 22 ] was identified as a barrier. Six articles described the provision of funding or technical support to ensure data storage, maintenance, and the ability to provide access to data when requested. All noted a lack of funding and time as a barrier to increased data sharing [ 20 , 24 , 28 , 29 , 33 , 34 ].

Results of qualitative research indicated a range of views regarding consent mechanisms for future data use [ 18 , 19 , 20 , 23 , 35 ]. Consenting for future research can be complex given that the exact nature of the study will be unknown, and therefore some respondents suggested that a broad statement on future data uses be included [ 19 , 20 ] during the consent process. In contrast, other participants indicated that the current consent processes were too broad and do not reflect patient preferences sufficiently [ 35 ]. The importance of respecting the original consent in all future research was noted [ 20 ]. It was suggested that seeking additional consent for future data use may discourage participation in the original study [ 20 ]. Differences in views regarding the provision of detailed information about sharing individual-level data were noted, suggesting that the researchers wanted to exert some control over data they had collected [ 20 ]. An opt-out consent process was considered appropriate in some situations [ 18 ] but not all; some respondents suggested that consent to use a patient’s medical records was not required [ 18 ]. There was support by some researchers to provide patients with the option to ‘opt-in’ to different levels of involvement in a registry setting [ 19 ]. Providing patients more granular choices when controlling access to their medical data [ 35 ] was seen as important.

The attitudes of ethics and review boards ( N = 30) towards the use of medical records for research was discussed in one article [ 23 ]. While 38% indicated that no further consent would be required, 47% required participant consent, and 10% said that the requirement for consent would depend on how the potentially identifying variables would be managed [ 23 ]. External researcher access to medical record data was associated with a requirement for consent [ 23 ].

Acknowledgement

The importance of establishing mechanisms which acknowledge the use of shared data was discussed in four articles [ 27 , 29 , 33 , 34 ]. A significant proportion of respondents to a survey believed it was fair to use other researchers’ data if they acknowledged the originator and the funding body in all disseminated work or as a formal citation in published works [ 33 ]. Other mechanisms for acknowledging the data originator included opportunities to collaborate on the project, reciprocal data sharing agreements, allowing the originator to review or comment on results, but not approve derivative works, or the provision of a list of products making use of the data and co-authorship [ 33 , 34 ]. In the setting of controlled data collections, survey results indicated that ensuring attribution was a motivator for controlled access [ 27 ]. Over half of respondents in one survey believed it was fair to disseminate results based in whole or in part on shared data without the data provider’s approval [ 33 ]. No significant differences in mechanisms for acknowledgement were noted between clinical and scientific participants; mechanisms included co-authorship, recognition in the acknowledgement section of publications, and citation in the bibliography [ 29 ]. No consistent method for acknowledging shared data reuse was identified [ 29 ].

Data ownership was identified as a potential barrier to increased data sharing in academic research [ 28 ]. In the setting of controlled data collections, survey respondents indicated that they wanted to maintain some control over the dataset, which is suggestive of researchers having a perceived ownership of their research data [ 28 ]. Examples of researchers extending ownership over their data include the right to publish first and the control of access to datasets [ 28 ]. Fecher et al. noted that the idea of data ownership by the researcher is not a position always supported legally; ‘the ownership and rights of use, privacy, contractual consent and copyright’ are subsumed [ 28 ] (p. 15). Rather, data sharing is restricted by privacy law, which applies to datasets containing data from individuals. The legal uncertainty about data ownership and the complexity of law can deter data sharing [ 28 ].

Promotion/professional criteria

The role of data sharing and its relation to promotion and professional criteria were discussed in two articles [ 24 , 28 ]. The requirement to share data is rarely a promotion or professional criterion, rather the systems are based on grants and publication history [ 24 , 28 ]. One study noted that while the traditional link between publication history and promotion remains, it is ‘likely that funders will continue to get sub-optimal returns on their investments, and that data will continue to be inefficiently utilised and disseminated’ [ 24 ] (p. 49).

This systematic literature review highlights the ongoing complexity associated with increasing data sharing across the sciences. No additional literature meeting the inclusion criteria were identified in the period between the data search and the submission of this manuscript. Data gaps identified include a paucity of information specifically related to the attitudes of breast cancer researchers and health professionals towards the secondary use and sharing of health administrative and clinical trial data.

While the majority of respondents believed the principles of data sharing were sound, significant barriers remain: issues of consent, privacy, information security, and ownership were key themes throughout the literature. Data ownership and acknowledgement, trust, and policy frameworks influenced sharing practice, as did age, discipline, professional focus, and world region.

Addressing concerns of privacy, trust, and information security in a technologically changing and challenging landscape is complex. Ensuring the balance between privacy and sharing data for the greater good will require the formation of policy and procedures, which promote both these ideals.

Establishing clear consent mechanisms would provide greater clarity for all parties involved in the data sharing debate. Ensuring that appropriate consent for future research, including secondary data analysis and sharing and linking of datasets, is gained at the point of data collection, would continue to promote research transparency and provide healthcare professionals and researchers with knowledge that an individual is aware that their data may be used for other research purposes. The establishment of policy which supports and promotes the secondary use of data and data sharing will assist in the normalisation of this type of health research. With the increased promotion of data sharing and secondary data analysis as an established tool in health research, over time barriers to its use, including perceptions of ownership and concerns regarding privacy and consent, will decrease.

The importance of establishing clear and formal processes associated with acknowledging the use of shared data has been underscored in the results presented. Initiatives such as the Bioresource Research Impact Factor/Framework (BRIF) [ 36 ] and the Citation of BioResources in journal Articles (CoBRA) [ 37 ] have sought to formalise the process. However, increased academic recognition of sharing data for secondary analysis requires further development and the allocation of funding to ensure that collected data is in a usable, searchable, and retrievable format. Further, there needs to be a shift away from the traditional criteria of academic promotion, which includes research outputs, to one which is inclusive of a researcher’s data sharing history and the availability of their research dataset for secondary analysis.

The capacity to identify and use already collected data was identified as a barrier. Moves to make data findable, accessible, interoperable, and reusable (FAIR) have been promoted as a means to encourage greater accessibility to data in a systematic way [ 38 ]. The FAIR principles focus on data characteristics and should be interpreted alongside the collective benefit, authority to control, responsibility, and ethics (CARE) principles established by the Global Indigenous Data Alliance (GIDA), which are people- and purpose-orientated [ 39 ].

Limitations

The papers included in this study were limited to those indexed on major databases. Some literature on this topic may have been excluded if it was not identified during the grey literature and hand searching phases.

Implications

Results of this systematic literature review indicate that while there is broad agreement for the principles of data sharing in medical research, there remain disagreements about the infrastructure and procedures associated with the data sharing process. Additional work is therefore required on areas such as acknowledgement, curation, and data ownership.

While the literature confirms that there is overall support for data sharing in medical and scientific research, there remain significant barriers to its uptake. These include concerns about privacy, consent, information security, and data ownership.

Availability of data and materials

All data generated or analysed during this study are included in this published article.

Abbreviations

BRIF: Bioresource Research Impact Factor/Framework
CARE: Collective benefit, authority to control, responsibility, and ethics
CoBRA: Citation of BioResources in journal Articles
FAIR: Findable, accessible, interoperable, and reusable
GIDA: Global Indigenous Data Alliance
HIV/AIDS: Human immunodeficiency virus/acquired immunodeficiency syndrome
ICMJE: International Council of Medical Journal Editors
MS: Multiple sclerosis
SEER: Surveillance, Epidemiology, and End Results
TB: Tuberculosis
TCGA: The Cancer Genome Atlas

Huesch MD, Mosher TJ. Using it or losing it? The case for data scientists inside health care. NEJM Catalyst. 2017.

Green LW. Closing the chasm between research and practice: evidence of and for change. Health Promot J Australia. 2014;25(1):25–9.

Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.

Goldacre B. Are clinical trial data shared sufficiently today? No. Br Med J. 2013;347:f1880.

Goldacre B, Gray J. OpenTrials: towards a collaborative open database of all available information on all clinical trials. Trials. 2016;17(1):164.

Kostkova P, Brewer H, de Lusignan S, Fottrell E, Goldacre B, Hart G, et al. Who owns the data? Open data for healthcare. Front Public Health. 2016;4.

Elliott M. Seeing through the lies: innovation and the need for transparency. Gresham College Lecture Series; 23 November 2016; Museum of London. 2016.

European Medicines Agency. Publication and access to clinical-trial data. London: European Medicines Agency; 2013.

Taichman DB, Backus J, Baethge C, Bauchner H, de Leeuw PW, Drazen JM, et al. Sharing clinical trial data: a proposal from the International Committee of Medical Journal Editors. J Am Med Assoc. 2016;315(5):467–8.

National Institue of Health (NIH). The Cancer Genome Atlas (TCGA): program overview United States of America: National Institue of Health (NIH); 2019 [Available from: https://cancergenome.nih.gov/abouttcga/overview ].

National Institue of Health (NIH). Surveillance, Epidemiology, and End Results (SEER) Program Washington: The Government of United States of Ameica; 2019 [Available from: https://seer.cancer.gov ].

Castellani J. Are clinical trial data shared sufficiently today? Yes. Br Med J. 2013;347:f1881.

Moher D, Liberati A, Tetzlaff J, Altman DG, Group P. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097–e.

Veritas Health Innovation. Covidence systematic review software. Melbourne: Cochrane Collaboration; 2018.

Kmet LM, Cook LS, Lee RC. Standard quality assessment criteria for evaluating primary research papers from a variety of fields; 2004.

Lockwood C, Munn Z, Porritt K. Qualitative research synthesis: methodological guidance for systematic reviewers utilizing meta-aggregation. Int J Evidence Based Healthcare. 2015;13(3):179–87.

Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13(1):117.

Asai A, Ohnishi M, Nishigaki E, Sekimoto M, Fukuhara S, Fukui T. Attitudes of the Japanese public and doctors towards use of archived information and samples without informed consent: preliminary findings based on focus group interviews. BMC Medical Ethics. 2002;3(1):1.

Baird W, Jackson R, Ford H, Evangelou N, Busby M, Bull P, et al. Holding personal information in a disease-specific register: the perspectives of people with multiple sclerosis and professionals on consent and access. J Med Ethics. 2009;35(2):92–6.

Denny SG, Silaigwana B, Wassenaar D, Bull S, Parker M. Developing ethical practices for public health research data sharing in South Africa: the views and experiences from a diverse sample of research stakeholders. J Empiric Res Human Res Ethics. 2015;10(3):290–301.

Grant A, Ure J, Nicolson DJ, Hanley J, Sheikh A, McKinstry B, et al. Acceptability and perceived barriers and facilitators to creating a national research register to enable 'direct to patient' enrolment into research: the Scottish Health Research register (SHARE). BMC Health Serv Res. 2013;13(1):422.

Knight J, Patrickson M, Gurd B. Understanding GP attitudes towards a data amalgamating health informatics system. Electron J Health Inform. 2008;3(2):12.

Willison DJ, Emerson C, Szala-Meneok KV, Gibson E, Schwartz L, Weisbaum KM, et al. Access to medical records for research purposes: varying perceptions across research ethics boards. J Med Ethics. 2008;34(4):308–14.

Bezuidenhout L, Chakauya E. Hidden concerns of sharing research data by low/middle-income country scientists. Glob Bioethics. 2018;29(1):39–54.

Ceci SJ. Scientists' attitudes toward data sharing. Sci Technol Human Values. 1988;13(1-2):45–52.

Curty RG, Crowston K, Specht A, Grant BW, Dalton ED. Attitudes and norms affecting scientists’ data reuse. PLoS One. 2017;12(12):e0189288.

Eschenfelder K, Johnson A. The limits of sharing: controlled data collections. Proc Am Soc Inf Sci Technol. 2011;48(1):1–10.

Fecher B, Friesike S, Hebing M. What drives academic data sharing? PLoS One. 2015;10(2):e0118053.

Federer LM, Lu Y-L, Joubert DJ, Welsh J, Brandys B. Biomedical data sharing and reuse: attitudes and practices of clinical and scientific research staff. PLoS One. 2015;10(6):e0129506.

Joo S, Kim S, Kim Y. An exploratory study of health scientists’ data reuse behaviors: examining attitudinal, social, and resource factors. Aslib J Inf Manag. 2017;69(4):389–407.

Rathi V, Dzara K, Gross CP, Hrynaszkiewicz I, Joffe S, Krumholz HM, et al. Sharing of clinical trial data among trialists: a cross sectional survey. Br Med J. 2012;345:e7570.

Rathi VK, Strait KM, Gross CP, Hrynaszkiewicz I, Joffe S, Krumholz HM, et al. Predictors of clinical trial data sharing: exploratory analysis of a cross-sectional survey. Trials. 2014;15(1):384.

Tenopir C, Allard S, Douglass K, Aydinoglu AU, Wu L, Read E, et al. Data sharing by scientists: practices and perceptions. PLoS One. 2011;6(6):e21101.

Tenopir C, Dalton ED, Allard S, Frame M, Pjesivac I, Birch B, et al. Changes in data sharing and data reuse practices and perceptions among scientists worldwide. PLoS One. 2015;10(8):e0134826.

Grando MA, Murcko A, Mahankali S, Saks M, Zent M, Chern D, et al. A study to elicit behavioral health patients' and providers' opinions on health records consent. J Law Med Ethics. 2017;45(2):238–59.

Howard HC, Mascalzoni D, Mabile L, Houeland G, Rial-Sebbag E, Cambon-Thomsen A. How to responsibly acknowledge research work in the era of big data and biobanks: ethical aspects of the bioresource research impact factor (BRIF). J Commun Genetics. 2018;9(2):169–76.

Bravo E, Calzolari A, De Castro P, Mabile L, Napolitani F, Rossi AM, et al. Developing a guideline to standardize the citation of bioresources in journal articles (CoBRA). BMC Med. 2015;13:33.

Boeckhout M, Zielhuis GA, Bredenoord AL. The FAIR guiding principles for data stewardship: fair enough? Eur J Human Genetics. 2018;26(7):931–6.

Global Indigenous Data Alliance (GIDA). CARE principles for indigenous data governance GIDA; 2019 [Available from: https://www.gida-global.org/care ].

Acknowledgements

The authors would like to thank Ms. Ngaire Pettit-Young, Information First, Sydney, NSW, Australia, for her assistance in developing the search strategy.

This project was supported by the Sydney Vital, Translational Cancer Research, through a Cancer Institute NSW competitive grant. The views expressed herein are those of the authors and are not necessarily those of the Cancer Institute NSW. FB is supported in her academic role by the Friends of the Mater Foundation.

Author information

Authors and affiliations

Northern Clinical School, Faculty of Medicine, University of Sydney, Sydney, Australia

Elizabeth Hutchings & Frances M. Boyle

Department of Psychology, The University of Sydney, Sydney, NSW, Australia

Max Loomes & Phyllis Butow

Centre for Medical Psychology & Evidence-Based Decision-Making (CeMPED), Sydney, Australia

Phyllis Butow

Psycho-Oncology Co-Operative Research Group (PoCoG), The University of Sydney, Sydney, NSW, Australia

Patricia Ritchie Centre for Cancer Care and Research, Mater Hospital, North Sydney, Sydney, Australia

Frances M. Boyle

Contributions

EH, PB, and FB were responsible for developing the study concept and the development of the protocol. EH and ML were responsible for the data extraction and data analysis. FB and PB supervised this research. All authors participated in interpreting the findings and contributed the intellectual content of the manuscript. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Elizabeth Hutchings .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Not applicable.

Competing interests

EH, ML, PB, and FB declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Hutchings, E., Loomes, M., Butow, P. et al. A systematic literature review of researchers’ and healthcare professionals’ attitudes towards the secondary use and sharing of health administrative and clinical trial data. Syst Rev 9 , 240 (2020). https://doi.org/10.1186/s13643-020-01485-5

  • Volume 24, Issue 2
  • Five tips for developing useful literature summary tables for writing review articles

  • http://orcid.org/0000-0003-0157-5319 Ahtisham Younas 1 , 2 ,
  • http://orcid.org/0000-0002-7839-8130 Parveen Ali 3 , 4
  • 1 Memorial University of Newfoundland , St John's , Newfoundland , Canada
  • 2 Swat College of Nursing , Pakistan
  • 3 School of Nursing and Midwifery , University of Sheffield , Sheffield , South Yorkshire , UK
  • 4 Sheffield University Interpersonal Violence Research Group , Sheffield University , Sheffield , UK
  • Correspondence to Ahtisham Younas, Memorial University of Newfoundland, St John's, NL A1C 5C4, Canada; ay6133{at}mun.ca

https://doi.org/10.1136/ebnurs-2021-103417

Introduction

Literature reviews offer a critical synthesis of empirical and theoretical literature to assess the strength of evidence, develop guidelines for practice and policymaking, and identify areas for future research. 1 It is often essential and usually the first task in any research endeavour, particularly in masters or doctoral level education. For effective data extraction and rigorous synthesis in reviews, the use of literature summary tables is of utmost importance. A literature summary table provides a synopsis of an included article. It succinctly presents its purpose, methods, findings and other relevant information pertinent to the review. The aim of developing these literature summary tables is to provide the reader with the information at a glance. Since there are multiple types of reviews (eg, systematic, integrative, scoping, critical and mixed methods) with distinct purposes and techniques, 2 there are various approaches for developing literature summary tables, making it a complex task, especially for novice researchers or reviewers. Here, we offer five tips for authors of review articles, relevant to all types of reviews, for creating useful and relevant literature summary tables. We also provide examples from our published reviews to illustrate how useful literature summary tables can be developed and what sort of information should be provided.

Tip 1: provide detailed information about frameworks and methods

Figure 1: Tabular literature summaries from a scoping review. Source: Rasheed et al. 3

The provision of information about conceptual and theoretical frameworks and methods is useful for several reasons. First, in quantitative (reviews synthesising the results of quantitative studies) and mixed reviews (reviews synthesising the results of both qualitative and quantitative studies to address a mixed review question), it allows the readers to assess the congruence of the core findings and methods with the adapted framework and tested assumptions. In qualitative reviews (reviews synthesising results of qualitative studies), this information is beneficial for readers to recognise the underlying philosophical and paradigmatic stance of the authors of the included articles. For example, imagine the authors of an article, included in a review, used phenomenological inquiry for their research. In that case, the review authors and the readers of the review need to know what kind of (transcendental or hermeneutic) philosophical stance guided the inquiry. Review authors should, therefore, include the philosophical stance in their literature summary for the particular article. Second, information about frameworks and methods enables review authors and readers to judge the quality of the research, which allows for discerning the strengths and limitations of the article. For example, suppose the authors of an included article intended to develop a new scale and test its psychometric properties, and to achieve this aim they used a convenience sample of 150 participants and performed exploratory (EFA) and confirmatory factor analysis (CFA) on the same sample. Such an approach would indicate a flawed methodology because EFA and CFA should not be conducted on the same sample. The review authors must include this information in their summary table. Omitting this information from a summary could lead to the inclusion of a flawed article in the review, thereby jeopardising the review’s rigour.

Tip 2: include strengths and limitations for each article

Critical appraisal of individual articles included in a review is crucial for increasing the rigour of the review. Despite using various templates for critical appraisal, authors often do not provide detailed information about each reviewed article’s strengths and limitations. Merely noting the quality score based on standardised critical appraisal templates is not adequate because the readers should be able to identify the reasons for assigning a weak or moderate rating. Many recent critical appraisal checklists (eg, Mixed Methods Appraisal Tool) discourage review authors from assigning a quality score and recommend noting the main strengths and limitations of included studies. It is also vital that methodological and conceptual limitations and strengths of the articles included in the review are provided because not all review articles include empirical research papers. Rather, some reviews synthesise the theoretical aspects of articles. Providing information about conceptual limitations is also important for readers to judge the quality of the foundations of the research. For example, if you included a mixed-methods study in the review, reporting the methodological and conceptual limitations about ‘integration’ is critical for evaluating the study’s strength. Suppose the authors only collected qualitative and quantitative data and did not state the intent and timing of integration. In that case, the study is weak: integration occurred only at the level of data collection and may not have occurred at the analysis, interpretation and reporting levels.

Tip 3: write conceptual contribution of each reviewed article

While reading and evaluating review papers, we have observed that many review authors only provide core results of the article included in a review and do not explain the conceptual contribution offered by the included article. We refer to conceptual contribution as a description of how the article’s key results contribute towards the development of potential codes, themes or subthemes, or emerging patterns that are reported as the review findings. For example, the authors of a review article noted that one of the research articles included in their review demonstrated the usefulness of case studies and reflective logs as strategies for fostering compassion in nursing students. The conceptual contribution of this research article could be that experiential learning is one way to teach compassion to nursing students, as supported by case studies and reflective logs. This conceptual contribution of the article should be mentioned in the literature summary table. Delineating each reviewed article’s conceptual contribution is particularly beneficial in qualitative reviews, mixed-methods reviews, and critical reviews that often focus on developing models and describing or explaining various phenomena. Figure 2 offers an example of a literature summary table. 4

Figure 2: Tabular literature summaries from a critical review. Source: Younas and Maddigan. 4

Tip 4: compose potential themes from each article during summary writing

While developing literature summary tables, many authors use themes or subthemes reported in the given articles as the key results of their own review. Such an approach prevents the review authors from understanding the article’s conceptual contribution, developing rigorous synthesis and drawing reasonable interpretations of results from an individual article. Ultimately, it affects the generation of novel review findings. For example, one of the articles about women’s healthcare-seeking behaviours in developing countries reported a theme ‘social-cultural determinants of health as precursors of delays’. Instead of using this theme as one of the review findings, the reviewers should read and interpret beyond the given description in an article, comparing and contrasting themes and findings from one article with those from other articles to find similarities and differences, and to understand and explain the bigger picture for their readers. Therefore, while developing literature summary tables, think twice before using the predeveloped themes. Including your themes in the summary tables (see figure 1 ) demonstrates to the readers that a robust method of data extraction and synthesis has been followed.

Tip 5: create your personalised template for literature summaries

Templates are often available for data extraction and the development of literature summary tables. They may take the form of a table, a chart or a structured framework that extracts essential information about every article, commonly including authors, purpose, methods, key results and quality scores. While extracting all relevant information is important, such templates should be tailored to the needs of the individual review. For example, for a review of the effectiveness of healthcare interventions, a literature summary table must include information about the intervention, its type, content, timing, duration, setting, effectiveness, negative consequences, and the receivers' and implementers' experiences of its use. Similarly, literature summary tables for articles included in a meta-synthesis must include information about the participants' characteristics, the research context and the conceptual contribution of each reviewed article, so that the reader can make an informed decision about the usefulness of each individual article and of the review as a whole.
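A personalised template can also be expressed as a small data structure whose fields match the review question, so that every article is extracted against the same headings. The sketch below assumes an intervention-effectiveness review; the class name and fields are illustrative and should be tailored further rather than reused as-is.

from dataclasses import dataclass
from typing import Optional

@dataclass
class InterventionExtraction:
    # Fields mirror the items suggested for an intervention review; add or
    # remove fields to match your own review question.
    authors: str
    purpose: str
    methods: str
    intervention: str
    intervention_type: str
    content: str
    timing: str
    duration: str
    setting: str
    effectiveness: str
    negative_consequences: Optional[str] = None
    receiver_experiences: Optional[str] = None
    implementer_experiences: Optional[str] = None
    conceptual_contribution: Optional[str] = None

For a meta-synthesis, the same idea applies with fields for participant characteristics, research context and conceptual contribution instead.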

In conclusion, narrative or systematic reviews are almost always conducted as part of an educational project (thesis or dissertation) or of academic or clinical research. Literature reviews are the foundation of research on a given topic, and robust, high-quality reviews play an instrumental role in guiding research, practice and policymaking. The quality of a review, however, is contingent on rigorous data extraction and synthesis, which in turn require useful literature summaries. We have outlined five tips that can enhance the quality of the data extraction and synthesis process through the development of such summaries.

Twitter @Ahtisham04, @parveenazamali

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Patient consent for publication Not required.

Provenance and peer review Not commissioned; externally peer reviewed.

Systematic Literature Review of Electronic Zakat Payment

Bambang Agus Pramuka

OIKONOMIKA: Jurnal Kajian Ekonomi dan Keuangan Syariah

Indonesia has a Muslim population of 235.53 million, making up 86.88% of the country's total population. In this regard, an online zakat financial instrument is helpful for ensuring equal income and alleviating poverty. This study aims to examine the effectiveness of electronic zakat payment and the factors that optimize it. This systematic literature review examines articles in a structured and systematic manner, using secondary data obtained from several previous studies. The study found that the effectiveness of online zakat payment remains inconclusive due to several factors, including an overly high collection target, a lack of socialization on technology use, and the absence of sharia regulation. Electronic zakat distribution could be more effective if more people adopted this transaction method.

Related Papers

Sagi Sagara

Nowadays everything is surrounded by information technology (IT), better known as the 'Internet of Things' (IoT). This increasing trend has affected all parts of our lives, including zakat collection. Zakat is supposed to promote the welfare of all people, but that is not what happens in reality. Two obstacles hamper Muslims in settling their zakat: first, whether their zakat will be given to the right people (the poor and needy), which raises the issue of accountability; and second, the ease of paying zakat. This paper aims to reveal the viewpoint on E-Zakat implementation, which has changed the way Muslims pay their zakat, in this case zakat al-fitr, and the impact it has on modern society. We sincerely hope to contribute our findings to the parties involved with E-Zakat: banks, academicians, givers, recipients and Muslims in Indonesia.

NORAINI SARO

Assoc Prof Dr Dodik Siswantoro

Zakat is increasingly important nowadays, especially during the covid-19 pandemic, which has created greater demand for zakat and other charitable funds. The pandemic has also accelerated the need for digitalization to support transaction processes, including the collection and distribution of zakat funds, in a safer and more convenient way. However, few studies discuss the literature related to zakat and digitalization. This study fills the gap by systematically reviewing the literature on zakat and digitalization, using a qualitative research method together with descriptive statistics and VOSviewer software to analyse the qualitative data. The study covers articles published in Mendeley and Scopus database journals during the 2016-2021 period and highlights several interesting trends in the digital zakat literature. First, the main topics related to digital zakat are institutions, zakat collection and efficiency. Second, most of the studies on digital zakat were published in 2016 and 2020, although more papers are likely to appear in 2021. Third, most studies on digital zakat use a qualitative research method. Fourth, Indonesia dominates the research on digital zakat, as most studies are discussed as case studies of Indonesia. Fifth, the subtopic most discussed within the digital zakat papers is the marketing aspect of digital zakat. Sixth, the proportions of female and male authors are roughly equal. Finally, citations of the papers are still relatively few, although the number is likely to increase significantly in the future. Overall, the study highlights the importance of the digital zakat issue in Indonesia. This may be influenced by the fact that the zakat potential in Indonesia is very high while the actual collection is very low, so digitalization of zakat is seen as an important strategy and solution to the problem. In line with this, the marketing aspect of digital zakat is found to be the most researched topic in the area. These results are expected to provide insight for all zakat stakeholders in developing digital zakat in the future.

Proceedings of the 2nd Borobudur International Symposium on Humanities and Social Sciences, BIS-HSS 2020, 18 November 2020, Magelang, Central Java, Indonesia

Samarah: Jurnal Hukum Keluarga dan Hukum Islam

Teuku Zulfikar

This study examines the digital-based zakat management information system and strategies for increasing ZIS fund income from the perspective of muamalah fiqh in NTB and Aceh. It focuses on the use of SimBaznas and its effect on the loyalty of zakat payments, the constraints encountered, and the strategies taken to increase zakat payments in the two regions. This is a mixed-methods study intended to obtain more comprehensive, reliable and objective data. The results showed that the implementation of SimBaznas at Baznas in NTB was carried out only for reporting zakat collection, which had been done properly, while other reports and asset reports had not been well recorded in SimBaznas. Meanwhile, in Aceh Province, none of the SimBaznas features has been effective. Quantitative analysis shows that ease of use and the availability of facilities and infrastructure do not guarantee a correlation with the implementation of SimBaznas in the two provinces. In addition, the seriousness of the interest of SimBaznas users...

The International Journal of Business & Management

Randa Elsobky

International Journal of Academic Research in Business and Social Sciences

AHMAD ARIF ZULKEFLI

Journal of Islamic Monetary Economics and Finance

rahmatina kasri

Jurnal Ekonomi & Keuangan Islam

Sri Maulida

Purpose – This study aims (i) to analyze the readiness of zakat management institutions for zakat digitalization and (ii) to analyze the problems and solutions in managing zakat funds through digital platforms. Methodology – The study used two methods, the interview and the Delphi-ANP methods. The data were the results of interviews with zakat managers (OPZ) in South Kalimantan (BAZNAS and LAZNAS); besides practitioners, the study also involved experts from various universities in South Kalimantan. Findings – The results showed that most zakat institutions in South Kalimantan have a good understanding of, and readiness for, the shift to digital platforms. Based on the analysis of problems and solutions in using digital platforms for zakat management, the study found alternative priority problems and solutions for zakat institutions. The problems and solutions covered human resources, IT, institution management and socialization and communication, muzakk...

International Conference of Zakat

deasy tantriana

This paper analyzes the preferences of muzzaki in Surabaya in selecting zakat payment methods in the digital era. One development of zakat in the digital era is the ease of paying zakat: a muzzaki no longer needs to come to BAZ or LAZ, because payment can be made digitally. Many digital zakat applications have been issued by BAZ or LAZ, and even e-commerce institutions have issued digital zakat applications. Surabaya, as the second largest city after Jakarta, can provide a picture of whether urban people tend to choose digital or conventional zakat. This is quantitative research using the path analysis method with LISREL 8.80, assisted by MSI (Method of Successive Interval) and SPSS 20.0 for the data feasibility test. The population of this research is muzzaki in Surabaya. The results of this study are useful in providing BAZNAS with information on the effectiveness of digital zakat and on the digital zakat application model most popular among muzzaki in Surabaya.

