

Brian M. Belcher, Katherine E. Rasmussen, Matthew R. Kemshaw, Deborah A. Zornes, Defining and assessing research quality in a transdisciplinary context, Research Evaluation, Volume 25, Issue 1, January 2016, Pages 1–17, https://doi.org/10.1093/reseval/rvv025


Abstract

Research increasingly seeks both to generate knowledge and to contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient—there is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context. We conducted a systematic review to help answer the question: What are appropriate principles and criteria for defining and assessing TDR quality? Articles were selected and reviewed seeking: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, proposed principles of research quality, proposed criteria for research quality assessment, proposed indicators and measures of research quality, and proposed processes for evaluating TDR. We used the information from the review and our own experience in two research organizations that employ TDR approaches to develop a prototype TDR quality assessment framework, organized as an evaluation rubric. We provide an overview of the relevant literature and summarize the main aspects of TDR quality identified there. Four main principles emerge: relevance, including social significance and applicability; credibility, including criteria of integration and reflexivity, added to traditional criteria of scientific rigor; legitimacy, including criteria of inclusion and fair representation of stakeholder interests; and effectiveness, with criteria that assess actual or potential contributions to problem solving and social change.

1. Introduction

Contemporary research in the social and environmental realms places strong emphasis on achieving ‘impact’. Research programs and projects aim to generate new knowledge but also to promote and facilitate the use of that knowledge to enable change, solve problems, and support innovation ( Clark and Dickson 2003 ). Reductionist and purely disciplinary approaches are being augmented or replaced with holistic approaches that recognize the complex nature of problems and that actively engage within complex systems to contribute to change ‘on the ground’ ( Gibbons et al. 1994 ; Nowotny, Scott and Gibbons 2001 , 2003 ; Klein 2006 ; Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Erno-Kjolhede and Hansson 2011 ). Emerging fields such as sustainability science have developed out of a need to address complex and urgent real-world problems ( Komiyama and Takeuchi 2006 ). These approaches are inherently applied and transdisciplinary, with explicit goals to contribute to real-world solutions and strong emphasis on context and social engagement ( Kates 2000 ).

While there is an ongoing conceptual and theoretical debate about the nature of the relationship between science and society (e.g. Hessels 2008 ), we take a more practical starting point based on the authors’ experience in two research organizations. The first author has been involved with the Center for International Forestry Research (CIFOR) for almost 20 years. CIFOR, as part of the Consultative Group on International Agricultural Research (CGIAR), began a major transformation in 2010 that shifted the emphasis from a primary focus on delivering high-quality science to a focus on ‘…producing, assembling and delivering, in collaboration with research and development partners, research outputs that are international public goods which will contribute to the solution of significant development problems that have been identified and prioritized with the collaboration of developing countries.’ ( CGIAR 2011 ). It was always intended that CGIAR research would be relevant to priority development and conservation issues, with emphasis on high-quality scientific outputs. The new approach puts much stronger emphasis on welfare and environmental results; research centers, programs, and individual scientists now assume shared responsibility for achieving development outcomes. This requires new ways of working, with more and different kinds of partnerships and more deliberate and strategic engagement in social systems.

Royal Roads University (RRU), the home institute of all four authors, is a relatively new (created in 1995) public university in Canada. It is deliberately interdisciplinary by design, with just two faculties (Faculty of Social and Applied Science; Faculty of Management) and strong emphasis on problem-oriented research. Faculty and student research is typically ‘applied’ in the Organization for Economic Co-operation and Development (2012) sense of ‘original investigation undertaken in order to acquire new knowledge … directed primarily towards a specific practical aim or objective’.

An increasing amount of the research done within both of these organizations can be classified as transdisciplinary research (TDR). TDR crosses disciplinary and institutional boundaries, is context specific, and is problem oriented ( Klein 2006 ; Carew and Wickson 2010 ). It combines and blends methodologies from different theoretical paradigms, includes a diversity of both academic and lay actors, and is conducted with a range of research goals, organizational forms, and outputs ( Klein 2006 ; Boix-Mansilla 2006a ; Erno-Kjolhede and Hansson 2011 ). The problem-oriented nature of TDR and the importance placed on societal relevance and engagement are broadly accepted as defining characteristics of TDR ( Carew and Wickson 2010 ).

The experience developing and using TDR approaches at CIFOR and RRU highlights the need for a parallel evolution of principles and criteria for evaluating research quality in a TDR context. Scientists appreciate and often welcome the need and the opportunity to expand the reach of their research, to contribute more effectively to change processes. At the same time, they feel the pressure of added expectations and are looking for guidance.

In any activity, we need principles, guidelines, criteria, or benchmarks that can be used to design the activity, assess its potential, and evaluate its progress and accomplishments. Effective research quality criteria are necessary to guide the funding, management, ongoing development, and advancement of research methods, projects, and programs. The lack of quality criteria to guide and assess research design and performance is seen as hindering the development of transdisciplinary approaches ( Bergmann et al. 2005 ; Feller 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2008 ; Carew and Wickson 2010 ; Jahn and Keil 2015 ). Appropriate quality evaluation is essential to ensure that research receives support and funding, and to guide and train researchers and managers to realize high-quality research ( Boix-Mansilla 2006a ; Klein 2008 ; Aagaard-Hansen and Svedin 2009 ; Carew and Wickson 2010 ).

Traditional disciplinary research is built on well-established methodological and epistemological principles and practices. Within disciplinary research, quality has been defined narrowly, with the primary criteria being scientific excellence and scientific relevance ( Feller 2006 ; Chataway, Smith and Wield 2007 ; Erno-Kjolhede and Hansson 2011 ). Disciplines have well-established (often implicit) criteria and processes for the evaluation of quality in research design ( Erno-Kjolhede and Hansson 2011 ). TDR that is highly context specific, problem oriented, and includes nonacademic societal actors in the research process is challenging to evaluate ( Wickson, Carew and Russell 2006 ; Aagaard-Hansen and Svedin 2009 ; Andrén 2010 ; Carew and Wickson 2010 ; Huutoniemi 2010 ). There is no one definition or understanding of what constitutes quality, nor a set guide for how to do TDR ( Lincoln 1995 ; Morrow 2005 ; Oberg 2008 ; Andrén 2010 ; Huutoniemi 2010 ). When epistemologies and methods from more than one discipline are used, disciplinary criteria may be insufficient and criteria from more than one discipline may be contradictory; cultural conflicts can arise as a range of actors use different terminology for the same concepts or the same terminology for different concepts ( Chataway, Smith and Wield 2007 ; Oberg 2008 ).

Current research evaluation approaches as applied to individual researchers, programs, and research units are still based primarily on measures of academic outputs (publications and the prestige of the publishing journal), citations, and peer assessment ( Boix-Mansilla 2006a ; Feller 2006 ; Erno-Kjolhede and Hansson 2011 ). While these indicators of research quality remain relevant, additional criteria are needed to address the innovative approaches and the diversity of actors, outputs, outcomes, and long-term social impacts of TDR. It can be difficult to find appropriate outlets for TDR publications simply because the research does not meet the expectations of traditional discipline-oriented journals. Moreover, a wider range of inputs and of outputs means that TDR may result in fewer academic outputs. This has negative implications for transdisciplinary researchers, whose performance appraisals and long-term career progression are largely governed by traditional publication and citation-based metrics of evaluation. Research managers, peer reviewers, academic committees, and granting agencies all struggle with how to evaluate and how to compare TDR projects ( ex ante or ex post ) in the absence of appropriate criteria to address epistemological and methodological variability. The extent of engagement of stakeholders 1 in the research process will vary by project, from information sharing through to active collaboration ( Brandt et al. 2013) , but at any level, the involvement of stakeholders adds complexity to the conceptualization of quality. We need to know what ‘good research’ is in a transdisciplinary context.

As Tijssen ( 2003 : 93) put it: ‘Clearly, in view of its strategic and policy relevance, developing and producing generally acceptable measures of “research excellence” is one of the chief evaluation challenges of the years to come’. Clear criteria are needed for research quality evaluation to foster excellence while supporting innovation: ‘A principal barrier to a broader uptake of TD research is a lack of clarity on what good quality TD research looks like’ ( Carew and Wickson 2010 : 1154). In the absence of alternatives, many evaluators, including funding bodies, rely on conventional, discipline-specific measures of quality which do not address important aspects of TDR.

There is an emerging literature that reviews, synthesizes, or empirically evaluates knowledge and best practice in research evaluation in a TDR context and that proposes criteria and evaluation approaches ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Klein 2008 ; Carew and Wickson 2010 ; ERIC 2010; de Jong et al. 2011 ; Spaapen and Van Drooge 2011 ). Much of it comes from a few fields, including health care, education, and evaluation; little comes from the natural resource management and sustainability science realms, despite these areas needing guidance. National-scale reviews have begun to recognize the need for broader research evaluation criteria but have made little progress in addressing it ( Donovan 2008 ; KNAW 2009 ; REF 2011 ; ARC 2012 ; TEC 2012 ). A summary of the national review exercises examined in the development of this research is provided in Supplementary Appendix 1 . While there are some published evaluation schemes for TDR and interdisciplinary research (IDR), there is ‘substantial variation in the balance different authors achieve between comprehensiveness and over-prescription’ ( Wickson and Carew 2014 : 256) and still a need to develop standardized quality criteria that are ‘uniquely flexible to provide valid, reliable means to evaluate and compare projects, while not stifling the evolution and responsiveness of the approach’ ( Wickson and Carew 2014 : 256).

There is a need and an opportunity to synthesize current ideas about how to define and assess quality in TDR. To address this, we conducted a systematic review of the literature that discusses the definitions of research quality as well as the suggested principles and criteria for assessing TDR quality. The aim is to identify appropriate principles and criteria for defining and measuring research quality in a transdisciplinary context and to organize those principles and criteria as an evaluation framework.

The review question was: What are appropriate principles, criteria, and indicators for defining and assessing research quality in TDR?

This article presents the method used for the systematic review and our synthesis, followed by key findings. It presents theoretical concepts about why new principles and criteria are needed for TDR, along with associated discussions of the evaluation process. A framework of principles and criteria for TDR quality evaluation, derived from our synthesis of the literature, is then presented along with guidance on its application. Finally, recommendations for next steps in this research and needs for future research are discussed.

2.1 Systematic review

Systematic review is a rigorous, transparent, and replicable methodology that has become widely used to inform evidence-based policy, management, and decision making ( Pullin and Stewart 2006 ; CEE 2010). Systematic reviews follow a detailed protocol with explicit inclusion and exclusion criteria to ensure a repeatable and comprehensive review of the target literature. Review protocols are shared and often published as peer-reviewed articles before undertaking the review to invite critique and suggestions. Systematic reviews are most commonly used to synthesize knowledge on an empirical question by collating data and analyses from a series of comparable studies, though methods used in systematic reviews are continually evolving and are increasingly being developed to explore a wider diversity of questions ( Chandler 2014 ). The current study question is theoretical and methodological, not empirical. Nevertheless, with a diverse and diffuse literature on the quality of TDR, a systematic review approach provides a method for a thorough and rigorous review. The protocol is published and available at http://www.cifor.org/online-library/browse/view-publication/publication/4382.html . A schematic diagram of the systematic review process is presented in Fig. 1 .

Figure 1. Search process.

2.2 Search terms

Search terms were designed to identify publications that discuss the evaluation or assessment of quality or excellence 2 of research 3 that is done in a TDR context. Search terms are listed online in Supplementary Appendices 2 and 3 . The search strategy favored sensitivity over specificity to ensure that we captured the relevant information.

2.3 Databases searched

ISI Web of Knowledge (WoK) and Scopus were searched between 26 June 2013 and 6 August 2013. The combined searches yielded 15,613 unique citations. Additional searches to update the first searches were carried out in June 2014 and March 2015, for a total of 19,402 titles scanned. Google Scholar (GS) was searched separately by two reviewers during each search period. The first reviewer’s search was done on 2 September 2013 (Search 1) and 3 September 2013 (Search 2), yielding 739 and 745 titles, respectively. The second reviewer’s search was done on 19 November 2013 (Search 1) and 25 November 2013 (Search 2), yielding 769 and 774 titles, respectively. A third search done on 17 March 2015 by one reviewer yielded 98 new titles. Reviewers found high redundancy between the WoK/Scopus searches and the GS searches.

2.4 Targeted journal searches

Highly relevant journals, including Research Evaluation, Evaluation and Program Planning, Scientometrics, Research Policy, Futures, American Journal of Evaluation, Evaluation Review, and Evaluation, were comprehensively searched using broader, more inclusive search strings that would have been unmanageable for the main database search.

2.5 Supplementary searches

References in included articles were reviewed to identify additional relevant literature. td-net’s ‘Tour d’Horizon of Literature’ lists important inter- and transdisciplinary publications collected through an invitation to experts in the field to submit publications ( td-net 2014 ). Six additional articles were identified via these supplementary searches.

2.6 Limitations of coverage

The review was limited to English-language published articles and material available through internet searches. There was no systematic way to search the gray (unpublished) literature, but relevant material identified through supplementary searches was included.

2.7 Inclusion of articles

This study sought articles that review, critique, discuss, and/or propose principles, criteria, indicators, and/or measures for the evaluation of quality relevant to TDR. As noted, this yielded a large number of titles. We then selected only those articles with an explicit focus on the meaning of IDR and/or TDR quality and how to achieve, measure or evaluate it. Inclusion and exclusion criteria were developed through an iterative process of trial article screening and discussion within the research team. Through this process, inter-reviewer agreement was tested and strengthened. Inclusion criteria are listed in Tables 1 and 2 .

Table 1. Inclusion criteria for title and abstract screening

Table 2. Inclusion criteria for abstract and full article screening

Article screening was done in parallel by two reviewers in three rounds: (1) title, (2) abstract, and (3) full article. In cases of uncertainty, papers were included to the next round. Final decisions on inclusion of contested papers were made by consensus among the four team members.
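To illustrate the screening logic, the decision rule can be sketched as follows. This is an illustration only; the function name and verdict labels are ours, not part of the published protocol.

```python
def screen_round(verdict_a: str, verdict_b: str) -> str:
    """Combine two parallel reviewers' verdicts for one screening round.

    Verdicts are 'include', 'exclude', or 'unsure'. A paper is dropped only
    when both reviewers exclude it; mutual inclusion advances it, and any
    uncertainty or disagreement carries it forward as contested, to be
    resolved by consensus among the four team members.
    """
    if verdict_a == "exclude" and verdict_b == "exclude":
        return "excluded"
    if verdict_a == "include" and verdict_b == "include":
        return "advanced"
    return "contested"  # included to the next round pending consensus

# The same rule applies at each round: title, abstract, full article.
print(screen_round("include", "unsure"))  # -> contested
```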

2.8 Critical appraisal

In typical systematic reviews, individual articles are appraised to ensure that they are adequate for answering the research question and to assess the methods of each study for susceptibility to bias that could influence the outcome of the review (Petticrew and Roberts 2006). Most papers included in this review are theoretical and methodological papers, not empirical studies. Most do not have explicit methods that can be appraised with existing quality assessment frameworks. Our critical appraisal considered four criteria adapted from Spencer et al. (2003): (1) relevance to the review question, (2) clarity and logic of how information in the paper was generated, (3) significance of the contribution (are new ideas offered?), and (4) generalizability (is the context specified; do the ideas apply in other contexts?). Disagreements were discussed to reach consensus.

2.9 Data extraction and management

The review sought information on: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, principles of research quality, criteria for research quality assessment, indicators and measures of research quality, and processes for evaluating TDR. Four reviewers independently extracted data from selected articles using the parameters listed in Supplementary Appendix 4 .

2.10 Data synthesis and TDR framework design

Our aim was to synthesize ideas, definitions, and recommendations for TDR quality criteria into a comprehensive and generalizable framework for the evaluation of quality in TDR. Key ideas were extracted from each article and summarized in an Excel database. We classified these ideas into themes and ultimately into overarching principles and associated criteria of TDR quality organized as a rubric ( Wickson and Carew 2014 ). Definitions of each principle and criterion were developed and rubric statements formulated based on the literature and our experience. These criteria (adjusted appropriately to be applied ex ante or ex post ) are intended to be used to assess a TDR project. The reviewer should consider whether the project fully satisfies, partially satisfies, or fails to satisfy each criterion. More information on application is provided in Section 4.3 below.

We tested the framework on a set of completed RRU graduate theses that used transdisciplinary approaches, with an explicit problem orientation and intent to contribute to social or environmental change. Three rounds of testing were done, with revisions after each round to refine and improve the framework.

3.1 Overview of the selected articles

Thirty-eight papers satisfied the inclusion criteria. A wide range of terms are used in the selected papers, including: cross-disciplinary; interdisciplinary; transdisciplinary; methodological pluralism; mode 2; triple helix; and supradisciplinary. Eight included papers specifically focused on sustainability science or TDR in natural resource management, or identified sustainability research as a growing TDR field that needs new forms of evaluation ( Cash et al. 2002 ; Bergmann et al. 2005 ; Chataway, Smith and Wield 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Andrén 2010 ; Carew and Wickson 2010 ; Lang et al. 2012 ; Gaziulusoy and Boyle 2013 ). Wickson and Carew (2014) build on the experience in the TDR realm to propose criteria and indicators of quality for ‘responsible research and innovation’.

The selected articles are written from three main perspectives. One set is primarily interested in advancing TDR approaches. These papers recognize the need for new quality measures to encourage and promote high-quality research and to overcome perceived biases against TDR approaches in research funding and publishing. A second set of papers is written from an evaluation perspective, with a focus on improving evaluation of TDR. The third set is written from the perspective of qualitative research characterized by methodological pluralism, with many characteristics and issues relevant to TDR approaches.

The majority of the articles focus at the project scale, some at the organization level, and some do not specify. Some articles explicitly focus on ex ante evaluation (e.g. proposal evaluation), others on ex post evaluation, and many are not explicit about the project stage they are concerned with. The methods used in the reviewed articles include authors’ reflection and opinion, literature review, expert consultation, document analysis, and case study. Summaries of report characteristics are available online ( Supplementary Appendices 5–8 ). Eight articles provide comprehensive evaluation frameworks and quality criteria specifically for TDR and research-in-context. The rest of the articles discuss aspects of quality related to TDR and recommend quality definitions, criteria, and/or evaluation processes.

3.2 The need for quality criteria and evaluation methods for TDR

Many of the selected articles highlight the lack of widely agreed principles and criteria of TDR quality. They note that, in the absence of TDR quality frameworks, disciplinary criteria are used ( Morrow 2005 ; Boix-Mansilla 2006a , b ; Feller 2006 ; Klein 2006 , 2008 ; Wickson, Carew and Russell 2006 ; Scott 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Oberg 2008 ; Erno-Kjolhede and Hansson 2011 ), and evaluations are often carried out by reviewers who lack cross-disciplinary experience and do not have a shared understanding of quality ( Aagaard-Hansen and Svedin 2009 ). Quality is discussed by many as a relative concept, developed within disciplines, and therefore defined and understood differently in each field ( Morrow 2005 ; Klein 2006 ; Oberg 2008 ; Mitchell and Willets 2009 ; Huutoniemi 2010 ; Hellstrom 2011 ). Jahn and Keil (2015) point out the difficulty of creating a common set of quality criteria for TDR in the absence of a standard agreed-upon definition of TDR. Many of the selected papers argue the need to move beyond narrowly defined ideas of ‘scientific excellence’ to incorporate a broader assessment of quality which includes societal relevance ( Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ). This shift includes greater focus on research organization, research process, and continuous learning, rather than primarily on research outputs ( Hemlin and Rasmussen 2006 ; de Jong et al. 2011 ; Wickson and Carew 2014 ; Jahn and Keil 2015 ). This responds to and reflects societal expectations that research should be accountable and have demonstrated utility ( Cloete 1997 ; Defila and Di Giulio 1999 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Stige 2009 ).

A central aim of TDR is to achieve socially relevant outcomes, and TDR quality criteria should demonstrate accountability to society ( Cloete 1997 ; Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; de Jong et al. 2011 ). Integration and mutual learning are core elements of TDR; it is not enough to transcend boundaries and incorporate societal knowledge but, as Carew and Wickson ( 2010 : 1147) summarize: ‘…the TD researcher needs to put effort into integrating these potentially disparate knowledges with a view to creating useable knowledge. That is, knowledge that can be applied in a given problem context and has some prospect of producing desired change in that context’. The inclusion of societal actors in the research process, the unique and often dispersed organization of research teams, and the deliberate integration of different traditions of knowledge production all fall outside of conventional assessment criteria ( Feller 2006 ).

Not only does the range of criteria need to be updated, expanded, and agreed upon, with assumptions made explicit ( Boix-Mansilla 2006a ; Klein 2006 ; Scott 2007 ), but, given the specific problem orientation of TDR, reviewers beyond disciplinary academic peers need to be included in the assessment of quality ( Cloete 1997 ; Scott 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ). Several authors discuss the lack of reviewers with strong cross-disciplinary experience ( Aagaard-Hansen and Svedin 2009 ) and the lack of common criteria, philosophical foundations, and language for use by peer reviewers ( Klein 2008 ; Aagaard-Hansen and Svedin 2009 ). Peer review of TDR could be improved with explicit TDR quality criteria and with appropriate processes in place to ensure clear dialog between reviewers.

Finally, there is the need for increased emphasis on evaluation as part of the research process ( Bergmann et al. 2005 ; Hemlin and Rasmussen 2006 ; Meyrick 2006 ; Chataway, Smith and Wield 2007 ; Stige, Malterud and Midtgarden 2009 ; Hellstrom 2011 ; Lang et al. 2012 ; Wickson and Carew 2014 ). This is particularly true in large, complex, problem-oriented research projects. Ongoing monitoring of the research organization and process contributes to learning and adaptive management while research is underway and so helps improve quality. As stated by Wickson and Carew ( 2014 : 262): ‘We believe that in any process of interpreting, rearranging and/or applying these criteria, open negotiation on their meaning and application would only positively foster transformative learning, which is a valued outcome of good TD processes’.

3.3 TDR quality criteria and assessment approaches

Many of the papers provide quality criteria and/or describe constituent parts of quality. Aagaard-Hansen and Svedin (2009) define three key aspects of quality: societal relevance, impact, and integration. Meyrick (2006) states that quality research is transparent and systematic. Boaz and Ashby (2003) describe quality in four dimensions: methodological quality, quality of reporting, appropriateness of methods, and relevance to policy and practice. Although each article deconstructs quality in different ways and with different foci and perspectives, there is significant overlap, with recurring themes across the papers reviewed. There is a broadly shared perspective that TDR quality is a multidimensional concept shaped by the specific context within which research is done ( Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ), making a universal definition of TDR quality difficult or impossible ( Huutoniemi 2010 ).

Huutoniemi (2010) identifies three main approaches to conceptualizing quality in IDR and TDR: (1) using existing disciplinary standards adapted as necessary for IDR; (2) building on the quality standards of disciplines while fundamentally incorporating ways to deal with epistemological integration, problem focus, context, stakeholders, and process; and (3) radical departure from any disciplinary orientation in favor of external, emergent, context-dependent quality criteria that are defined and enacted collaboratively by a community of users.

The first approach is prominent in current research funding and evaluation protocols. Conservative approaches of this kind are criticized for privileging disciplinary research and for failing to provide guidance and quality control for transdisciplinary projects. The third approach would ‘undermine the prevailing status of disciplinary standards in the pursuit of a non-disciplinary, integrated knowledge system’ ( Huutoniemi 2010 : 313). No predetermined quality criteria are offered, only contextually embedded criteria that need to be developed within a specific research project. To some extent, this is the approach taken by Spaapen, Dijstelbloem and Wamelink (2007) and de Jong et al. (2011) . Such a sui generis approach cannot be used to compare across projects. Most of the reviewed papers take the second approach, and recommend TDR quality criteria that build on a disciplinary base.

Eight articles present comprehensive frameworks for quality evaluation, each with a unique approach, perspective, and goal. Two of these build comprehensive lists of criteria with associated questions to be chosen based on the needs of the particular research project ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ). Carew and Wickson (2010) develop a reflective heuristic tool with questions to guide researchers through ongoing self-evaluation; they also list criteria for external evaluation and for comparison between projects. Spaapen, Dijstelbloem and Wamelink (2007) design an approach to evaluate a research project against its own goals; it is not meant to compare between projects. Wickson and Carew (2014) developed a comprehensive rubric for the evaluation of research and innovation that builds on their extensive previous work in TDR. Finally, Lang et al. (2012) , Mitchell and Willets (2009) , and Jahn and Keil (2015) develop criteria checklists that can be applied across transdisciplinary projects.

Bergmann et al. (2005) and Carew and Wickson (2010) organize their frameworks into managerial elements of the research project, concerning problem context, participation, management, and outcomes. Lang et al. (2012) and Defila and Di Giulio (1999) focus on the chronological stages in the research process and identify criteria at each stage. Mitchell and Willets (2009) , with a focus on doctoral studies, adapt standard dissertation evaluation criteria to accommodate broader, pluralistic, and more complex studies. Spaapen, Dijstelbloem and Wamelink (2007) focus on evaluating ‘research-in-context’. Wickson and Carew (2014) created a rubric based on criteria that span the research process stages and all included actors. Jahn and Keil (2015) organized their quality criteria into three categories: quality of the research problems, quality of the research process, and quality of the research results.

The remaining papers highlight key themes that must be considered in TDR evaluation. Dominant themes include: engagement with problem context, collaboration and inclusion of stakeholders, heightened need for explicit communication and reflection, integration of epistemologies, recognition of diverse outputs, the focus on having an impact, and reflexivity and adaptation throughout the process. The focus on societal problems in context and the increased engagement of stakeholders in the research process introduce higher levels of complexity that cannot be accommodated by disciplinary standards ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ).

Finally, authors discuss process ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Boix-Mansilla 2006b ; Spaapen, Dijstelbloem and Wamelink 2007 ) and utilitarian values ( Hemlin 2006 ; Erno-Kjolhede and Hansson 2011 ; Bornmann 2013 ) as essential aspects of quality in TDR. Common themes include: (1) the importance of formative and process-oriented evaluation ( Bergmann et al. 2005 ; Hemlin 2006 ; Stige 2009 ); (2) emphasis on the evaluation process itself (not just criteria or outcomes) and reflexive dialog for learning ( Bergmann et al. 2005 ; Boix-Mansilla 2006b ; Klein 2008 ; Oberg 2008 ; Stige, Malterud and Midtgarden 2009 ; Aagaard-Hansen and Svedin 2009 ; Carew and Wickson 2010 ; Huutoniemi 2010 ); (3) the need for peers who are experienced and knowledgeable about TDR for fair peer review ( Boix-Mansilla 2006a , b ; Klein 2006 ; Hemlin 2006 ; Scott 2007 ; Aagaard-Hansen and Svedin 2009 ); (4) the inclusion of stakeholders in the evaluation process ( Bergmann et al. 2005 ; Scott 2007 ; Andrén 2010 ); and (5) the importance of evaluations that are built in-context ( Defila and Di Giulio 1999 ; Feller 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; de Jong et al. 2011 ).

While each reviewed approach offers helpful insights, none adequately fulfills the need for a broad and adaptable framework for assessing TDR quality. Wickson and Carew ( 2014 : 257) highlight the need for quality criteria that achieve balance between ‘comprehensiveness and over-prescription’: ‘any emerging quality criteria need to be concrete enough to provide real guidance but flexible enough to adapt to the specificities of varying contexts’. Based on our experience, such a framework should be:

Comprehensive: It should accommodate the main aspects of TDR, as identified in the review.

Time/phase adaptable: It should be applicable across the project cycle.

Scalable: It should be useful for projects of different scales.

Versatile: It should be useful to researchers and collaborators as a guide to research design and management, and to internal and external reviews and assessors.

Comparable: It should allow comparison of quality between and across projects/programs.

Reflexive: It should encourage and facilitate self-reflection and adaptation based on ongoing learning.

4. Synthesis

In this section, we synthesize the key principles and criteria of quality in TDR that were identified in the reviewed literature. Principles are the essential elements of high-quality TDR. Criteria are the conditions that need to be met in order to achieve a principle. We conclude by providing a framework for the evaluation of quality in TDR ( Table 3 ) and guidance for its application.

Table 3. Transdisciplinary research quality assessment framework

a Research problems are the particular topic, area of concern, question to be addressed, challenge, opportunity, or focus of the research activity. Research problems are related to the societal problem but take on a specific focus, or framing, within a societal problem.

b Problem context refers to the social and environmental setting(s) that gives rise to the research problem, including aspects of: location; culture; scale in time and space; social, political, economic, and ecological/environmental conditions; resources and societal capacity available; uncertainty, complexity, and novelty associated with the societal problem; and the extent of agency that is held by stakeholders ( Carew and Wickson 2010 ).

c Words such as ‘appropriate’, ‘suitable’, and ‘adequate’ are used deliberately to allow for quality criteria to be flexible and specific enough to the needs of individual research projects ( Oberg 2008 ).

d Research process refers to the series of decisions made and actions taken throughout the entire duration of the research project and encompassing all aspects of the research project.

e Reflexivity refers to an iterative process of formative, critical reflection on the important interactions and relationships between a research project’s process, context, and product(s).

f In an ex ante evaluation, ‘evidence of’ would be replaced with ‘potential for’.

4.1 Principles of TDR quality

There is a strong trend in the reviewed articles to recognize the need for appropriate measures of scientific quality (usually adapted from disciplinary antecedents), but also to consider broader sets of criteria regarding the societal significance and applicability of research, and the need for engagement and representation of stakeholder values and knowledge. Cash et al. (2002) nicely conceptualize three key aspects of effective sustainability research as: salience (or relevance), credibility, and legitimacy. These are presented as necessary attributes for research to successfully produce transferable, useful information that can cross boundaries between disciplines, across scales, and between science and society. Many of the papers also refer to the principle that high-quality TDR should be effective in terms of contributing to the solution of problems. These four principles are discussed in the following sections.

4.1.1 Relevance

Relevance is the importance, significance, and usefulness of the research project's objectives, process, and findings to the problem context and to society. This includes the appropriateness of the timing of the research, the questions being asked, the outputs, and the scale of the research in relation to the societal problem being addressed. Good-quality TDR addresses important social/environmental problems and produces knowledge that is useful for decision making and problem solving ( Cash et al. 2002 ; Klein 2006 ). As Erno-Kjolhede and Hansson ( 2011 : 140) explain, quality ‘is first and foremost about creating results that are applicable and relevant for the users of the research’. Researchers must demonstrate an in-depth knowledge of and ongoing engagement with the problem context in which their research takes place ( Wickson, Carew and Russell 2006 ; Stige, Malterud and Midtgarden 2009 ; Mitchell and Willets 2009 ). From the early steps of problem formulation and research design through to the appropriate and effective communication of research findings, the applicability and relevance of the research to the societal problem must be explicitly stated and incorporated.

4.1.2 Credibility

Credibility refers to whether or not the research findings are robust and the knowledge produced is scientifically trustworthy. This includes clear demonstration that the data are adequate, with well-presented methods and logical interpretations of findings. High-quality research is authoritative, transparent, defensible, believable, and rigorous. This is the traditional purview of science, and traditional disciplinary criteria can be applied in TDR evaluation to an extent. Additional and modified criteria are needed to address the integration of epistemologies and methodologies and the development of novel methods through collaboration, the broad preparation and competencies required to carry out the research, and the need for reflection and adaptation when operating in complex systems. Having researchers actively engaged in the problem context and including extra-scientific actors as part of the research process helps to achieve relevance and legitimacy of the research; it also adds complexity and heightened requirements of transparency, reflection, and reflexivity to ensure objective, credible research is carried out.

Active reflexivity is a criterion of credibility of TDR that may seem to contradict more rigid disciplinary methodological traditions ( Carew and Wickson 2010 ). Practitioners of TDR recognize that credible work in these problem-oriented fields requires active reflexivity, epitomized by ongoing learning, flexibility, and adaptation to ensure the research approach and objectives remain relevant and fit-to-purpose ( Lincoln 1995 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Mitchell and Willets 2009 ; Andrén 2010 ; Carew and Wickson 2010 ; Wickson and Carew 2014 ). Changes made during the research process must be justified and reported transparently and explicitly to maintain credibility.

The need for critical reflection on potential bias and limitations becomes more important to maintain credibility of research-in-context ( Lincoln 1995 ; Bergmann et al. 2005 ; Mitchell and Willets 2009 ; Stige, Malterud and Midtgarden 2009 ). Transdisciplinary researchers must ensure they maintain a high level of objectivity and transparency while actively engaging in the problem context. This point demonstrates the fine balance between different aspects of quality, in this case relevance and credibility, and the need to be aware of tensions and to seek complementarities ( Cash et al. 2002 ).

4.1.3 Legitimacy

Legitimacy refers to whether the research process is perceived as fair and ethical by end-users. In other words, is it acceptable and trustworthy in the eyes of those who will use it? This requires the appropriate inclusion and consideration of diverse values and interests, and the ethical and fair representation of all involved. Legitimacy may be achieved in part through the genuine inclusion of stakeholders in the research process. Whereas credibility refers to technical aspects of sound research, legitimacy deals with sociopolitical aspects of the knowledge production process and products of research. Do stakeholders trust the researchers and the research process, including funding sources and other sources of potential bias? Do they feel represented? Legitimate TDR ‘considers appropriate values, concerns, and perspectives of different actors’ ( Cash et al. 2002 : 2) and incorporates these perspectives into the research process through collaboration and mutual learning ( Bergmann et al. 2005 ; Chataway, Smith and Wield 2007 ; Andrén 2010 ; Huutoniemi 2010 ). A fair and ethical process is important to uphold standards of quality in all research. However, there are additional considerations that are unique to TDR.

Because TDR happens in-context and often in collaboration with societal actors, the disclosure of researcher perspective and a transparent statement of all partnerships, financing, and collaboration is vital to ensure an unbiased research process ( Lincoln 1995 ; Defila and Di Giulio 1999 ; Boaz and Ashby 2003 ; Barker and Pistrang 2005 ; Bergmann et al. 2005 ). The disclosure of perspective has both internal and external aspects, on one hand ensuring the researchers themselves explicitly reflect on and account for their own position, potential sources of bias, and limitations throughout the process, and on the other hand making the process transparent to those external to the research group who can then judge the legitimacy based on their perspective of fairness ( Cash et al. 2002 ).

TDR includes the engagement of societal actors along a continuum of participation from consultation to co-creation of knowledge ( Brandt et al. 2013 ). Regardless of the depth of participation, all processes that engage societal actors must ensure that inclusion/engagement is genuine, roles are explicit, and processes for effective and fair collaboration are present ( Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Hellstrom 2012 ). Important considerations include: the accurate representation of those involved; explicit and agreed-upon roles and contributions of actors; and adequate planning and procedures to ensure all values, perspectives, and contexts are adequately and appropriately incorporated. Mitchell and Willets (2009) consider cultural competence as a key criterion that can support researchers in navigating diverse epistemological perspectives. This is similar to what Morrow (2005) terms ‘social validity’, a criterion that asks researchers to be responsive to and critically aware of the diversity of perspectives and cultures influenced by their research. Several authors highlight that in order to develop this critical awareness of the diversity of cultural paradigms that operate within a problem situation, researchers should practice responsive, critical, and/or communal reflection ( Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Mitchell and Willets 2009 ; Carew and Wickson 2010 ). Reflection and adaptation are important quality criteria that cut across multiple principles and facilitate learning throughout the process, which is a key foundation to TD inquiry.

4.1.4 Effectiveness

We define effective research as research that contributes to positive change in the social, economic, and/or environmental problem context. Transdisciplinary inquiry is rooted in the objective of solving real-world problems ( Klein 2008 ; Carew and Wickson 2010 ) and must have the potential to ( ex ante ) or actually ( ex post ) make a difference if it is to be considered of high quality ( Erno-Kjolhede and Hansson 2011 ). Potential research effectiveness can be indicated and assessed at the proposal stage and during the research process through: a clear and stated intention to address and contribute to a societal problem, the establishment of the research process and objectives in relation to the problem context, and the continuous reflection on the usefulness of the research findings and products to the problem ( Bergmann et al. 2005 ; Lahtinen et al. 2005 ; de Jong et al. 2011 ).

Assessing research effectiveness ex post remains a major challenge, especially in complex transdisciplinary approaches. Conventional and widely used measures of ‘scientific impact’ count outputs such as journal articles and other publications and citations of those outputs (e.g. h-index; i10-index). While these are useful indicators of scholarly influence, they are insufficient and inappropriate measures of research effectiveness where research aims to contribute to social learning and change. We need to also (or alternatively) focus on other kinds of research and scholarship outputs and outcomes and the social, economic, and environmental impacts that may result.

For many authors, contributing to learning and building of societal capacity are central goals of TDR ( Defila and Di Giulio 1999 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Carew and Wickson 2010 ; Erno-Kjolhede and Hansson 2011 ; Hellstrom 2011 ), and so are considered part of TDR effectiveness. Learning can be characterized as changes in knowledge, attitudes, or skills and can be assessed directly, or through observed behavioral changes and network and relationship development. Some evaluation methodologies (e.g. Outcome Mapping ( Earl, Carden and Smutylo 2001 )) specifically measure these kinds of changes. Other evaluation methodologies consider the role of research within complex systems and assess effectiveness in terms of contributions to changes in policy and practice and resulting social, economic, and environmental benefits ( ODI 2004 , 2012 ; White and Phillips 2012 ; Mayne et al. 2013 ).

4.2 TDR quality criteria

TDR quality criteria and their definitions (explicit or implicit) were extracted from each article and summarized in an Excel database. These criteria were classified into themes corresponding to the four principles identified above, sorted and refined to develop sets of criteria that are comprehensive, mutually exclusive, and representative of the ideas presented in the reviewed articles. Within each principle, the criteria are organized roughly in the sequence of a typical project cycle (e.g. with research design following problem identification and preceding implementation). Definitions of each criterion were developed to reflect the concepts found in the literature, tested and refined iteratively to improve clarity. Rubric statements were formulated based on the literature and our own experience.

The complete set of principles, criteria, and definitions is presented as the TDR Quality Assessment Framework ( Table 3 ).
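For readers who wish to operationalize the framework in software, the following minimal sketch shows one way the principle–criterion–rubric structure could be represented. The four principle names come from this synthesis; the example criterion text is an illustrative placeholder, as Table 3 itself is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str        # short criterion label
    definition: str  # concise explanation (second column of Table 3)
    rubric: str      # absolute rubric statement, phrased ex post (third column)

# The four principles are from the review; the single criterion shown is an
# illustrative placeholder, not a quotation from Table 3.
framework: dict[str, list[Criterion]] = {
    "Relevance": [
        Criterion(
            name="Engagement with problem context",
            definition="The research demonstrates in-depth knowledge of, and "
                       "ongoing engagement with, the problem context.",
            rubric="The project engaged with the problem context in an "
                   "intentional, appropriate, explicit, and thorough way.",
        ),
    ],
    "Credibility": [],    # scientific rigor, integration, active reflexivity
    "Legitimacy": [],     # fair, ethical, inclusive research process
    "Effectiveness": [],  # actual or potential contribution to change
}
```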

4.3 Guidance on the application of the framework

4.3.1 Timing

Most criteria can be applied at each stage of the research process, ex ante , mid term, and ex post , using appropriate interpretations at each stage. Ex ante (i.e. proposal) assessment should focus on a project’s explicitly stated intentions and approaches to address the criteria. Mid-term indicators will focus on the research process and whether or not it is being implemented in a way that will satisfy the criteria. Ex post assessment should consider whether the research has been done appropriately for the purpose and that the desired results have been achieved.

4.3.2 New meanings for familiar terms

Many of the terms used in the framework are extensions of disciplinary criteria and share the same or similar names, with similar but nuanced meanings. The principles and criteria used here extend beyond disciplinary antecedents and include new concepts and understandings that encapsulate the unique characteristics and needs of TDR and allow for evaluation and definition of quality in TDR. This is especially true of the criteria related to credibility. These criteria are analogous to traditional disciplinary criteria, but with much stronger emphasis on grounding in both the scientific and the social/environmental contexts. We urge readers to pay close attention to the definitions provided in Table 3 as well as the detailed descriptions of the principles in Section 4.1.

4.3.3 Using the framework

The TDR quality framework ( Table 3 ) is designed to be used to assess TDR research according to a project’s purpose; i.e. the criteria must be interpreted with respect to the context and goals of an individual research activity. The framework ( Table 3 ) lists the main criteria synthesized from the literature and our experience, organized within the principles of relevance, credibility, legitimacy, and effectiveness. The table presents the criteria within each principle, ordered to approximate a typical process of identifying a research problem and designing and implementing research. We recognize that the actual process in any given project will be iterative and will not necessarily follow this sequence, but this provides a logical flow. A concise definition is provided in the second column to explain each criterion. We then provide a rubric statement in the third column, phrased to be applied when the research has been completed. In most cases, the same statement can be used at the proposal stage with a simple tense change or other minor grammatical revision, except for the criteria relating to effectiveness. As discussed above, assessing effectiveness in terms of outcomes and/or impact requires evaluation research. At the proposal stage, it is only possible to assess potential effectiveness.

Many rubrics offer a set of statements for each criterion that represent progressively higher levels of achievement; the evaluator is asked to select the best match. In practice, this often results in vague and relative statements of merit that are difficult to apply. We have opted to present a single rubric statement in absolute terms for each criterion. The assessor can then rank how well a project satisfies each criterion using a simple three-point Likert scale. If a project fully satisfies a criterion—that is, if there is evidence that the criterion has been addressed in a way that is coherent, explicit, sufficient, and convincing—it should be ranked as a 2 for that criterion. A score of 2 means that the evaluator is persuaded that the project addressed that criterion in an intentional, appropriate, explicit, and thorough way. A score of 1 would be given when there is some evidence that the criterion was considered, but it is lacking completion, intention, and/or is not addressed satisfactorily. For example, a score of 1 would be given when a criterion is explicitly discussed but poorly addressed, or when there is some indication that the criterion has been considered and partially addressed but it has not been treated explicitly, thoroughly, or adequately. A score of 0 indicates that there is no evidence that the criterion was addressed or that it was addressed in a way that was misguided or inappropriate.
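As a worked illustration of the scoring procedure, consider the sketch below. The criterion labels and scores are hypothetical, and the per-principle averaging is our own illustrative summary rather than something the framework prescribes; the framework itself only ranks each criterion on the 0–2 scale.

```python
# Scores per criterion: 2 = fully satisfies, 1 = partially satisfies,
# 0 = no evidence (or misguided/inappropriate treatment).

def summarize(scores: dict[str, dict[str, int]]) -> dict[str, float]:
    """Mean criterion score per principle for one assessed project."""
    return {p: sum(c.values()) / len(c) for p, c in scores.items() if c}

# Hypothetical ex post assessment of a small thesis project, scored against
# the project's own objectives and context.
project = {
    "Relevance": {"problem engagement": 2, "applicability of outputs": 2},
    "Credibility": {"appropriate methods": 1, "active reflexivity": 1},
    "Legitimacy": {"disclosure of perspective": 2, "genuine inclusion": 0},
    "Effectiveness": {"contribution to learning": 1},
}
print(summarize(project))
# -> {'Relevance': 2.0, 'Credibility': 1.0, 'Legitimacy': 1.0, 'Effectiveness': 1.0}
```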

It is critical that the evaluation be done in context, keeping in mind the purpose, objectives, and resources of the project, as well as other contextual information, such as the intended purpose of grant funding or relevant partnerships. Each project will be unique in its complexities; what is sufficient or adequate in one criterion for one research project may be insufficient or inappropriate for another. Words such as ‘appropriate’, ‘suitable’, and ‘adequate’ are used deliberately to encourage application of criteria to suit the needs of individual research projects ( Oberg 2008 ). Evaluators must consider the objectives of the research project and the problem context within which it is carried out as the benchmark for evaluation. For example, we tested the framework with RRU master’s theses. These are typically small projects with limited scope, carried out by a single researcher. Expectations for ‘effective communication’ or ‘competencies’ or ‘effective collaboration’ are much different in these kinds of projects than in a multi-year, multi-partner CIFOR project. All criteria should be evaluated through the lens of the stated research objectives, research goals, and context.

5. Conclusions

The systematic review identified relevant articles from a diverse literature that nonetheless share a strong central focus. Collectively, they highlight the complexity of contemporary social and environmental problems and emphasize that addressing such issues requires combinations of new knowledge and innovation, action, and engagement. Traditional disciplinary research has often failed to provide solutions because it cannot adequately cope with complexity. New forms of research are proliferating, crossing disciplinary and academic boundaries, integrating methodologies, and engaging a broader range of research participants, as a way to make research more relevant and effective. Theoretically, such approaches appear to offer great potential to contribute to transformative change. However, because these approaches are new and because they are multidimensional, complex, and often unique, it has been difficult to know what works, how, and why. In the absence of the kinds of methodological and quality standards that guide disciplinary research, there are no generally agreed criteria for evaluating such research.

Criteria are needed to guide and to help ensure that TDR is of high quality, to inform the teaching and learning of new researchers, and to encourage and support the further development of transdisciplinary approaches. The lack of a standard and broadly applicable framework for the evaluation of quality in TDR is perceived to cause an implicit or explicit devaluation of high-quality TDR or may prevent quality TDR from being done. There is a demonstrated need for an operationalized understanding of quality that addresses the characteristics, contributions, and challenges of TDR. The reviewed articles approach the topic from different perspectives and fields of study, using different terminology for similar concepts, or the same terminology for different concepts, and with unique ways of organizing and categorizing the dimensions and quality criteria. We have synthesized and organized these concepts as key TDR principles and criteria in a TDR Quality Framework, presented as an evaluation rubric. We have tested the framework on a set of master’s theses and found it to be broadly applicable, usable, and useful for analyzing individual projects and for comparing projects within the set. We anticipate that further testing with a wider range of projects will help further refine and improve the definitions and rubric statements. We found that the three-point Likert scale (0–2) offered sufficient variability for our purposes, and rating is less subjective than with relative rubric statements. It may be possible to increase the rating precision with more points on the scale to increase the sensitivity for comparison purposes, for example in a review of proposals for a particular grant application.

Many of the articles we reviewed emphasize the importance of the evaluation process itself. The formative, developmental role of evaluation in TDR is seen as essential to the goals of mutual learning and to ensuring that research remains responsive and adaptive to the problem context. To evaluate quality in TDR adequately, the process, including who carries out the evaluations, when, and in what manner, must be adapted to suit the unique characteristics and objectives of TDR. We offer this review and synthesis, along with a proposed TDR quality evaluation framework, as a contribution to an important conversation. We hope that it will be useful to researchers and research managers in guiding research design, implementation, and reporting, and to the wider community of research organizations, funders, and society at large. As underscored in the literature review, there is a need for an adapted research evaluation process that will help advance problem-oriented research in complex systems, ultimately to improve research effectiveness.

This work was supported by funding from the Canada Research Chairs program. Funding support from the Canadian Social Sciences and Humanities Research Council (SSHRC) and technical support from the Evidence Based Forestry Initiative of the Centre for International Forestry Research (CIFOR), funded by UK DfID are also gratefully acknowledged.

Supplementary data are available at Research Evaluation online.

The authors thank Barbara Livoreil and Stephen Dovers for valuable comments and suggestions on the protocol and Gillian Petrokofsky for her review of the protocol and a draft version of the manuscript. Two anonymous reviewers and the editor provided insightful critique and suggestions in two rounds that have helped to substantially improve the article.

Conflict of interest statement. None declared.

1. ‘Stakeholders’ refers to individuals and groups of societal actors who have an interest in the issue or problem that the research seeks to address.

2. The terms ‘quality’ and ‘excellence’ are often used in the literature with similar meaning. Technically, ‘excellence’ is a relative concept, referring to the superiority of a thing compared to other things of its kind. Quality is an attribute or a set of attributes of a thing. We are interested in what these attributes are or should be in high-quality research. Therefore, the term ‘quality’ is used in this discussion.

3. The terms ‘science’ and ‘research’ are not always clearly distinguished in the literature. We take the position that ‘science’ is a more restrictive term that is properly applied to systematic investigations using the scientific method. ‘Research’ is a broader term for systematic investigations using a range of methods, including but not restricted to the scientific method. We use the term ‘research’ in this broad sense.

Aagaard-Hansen, J. and Svedin, U. (2009) ‘Quality Issues in Cross-disciplinary Research: Towards a Two-pronged Approach to Evaluation’, Social Epistemology, 23/2: 165–76. DOI: 10.1080/02691720902992323

Andrén, S. (2010) ‘A Transdisciplinary, Participatory and Action-Oriented Research Approach: Sounds Nice but What do you Mean?’ [unpublished working paper]. Human Ecology Division: Lund University, 1–21. <https://lup.lub.lu.se/search/publication/1744256>

Australian Research Council (ARC) (2012) ERA 2012 Evaluation Handbook: Excellence in Research for Australia. Australia: ARC. <http://www.arc.gov.au/pdf/era12/ERA%202012%20Evaluation%20Handbook_final%20for%20web_protected.pdf>

Balsiger, P. W. (2004) ‘Supradisciplinary Research Practices: History, Objectives and Rationale’, Futures, 36/4: 407–21.

Bantilan, M. C. et al. (2004) ‘Dealing with Diversity in Scientific Outputs: Implications for International Research Evaluation’, Research Evaluation, 13/2: 87–93.

Barker, C. and Pistrang, N. (2005) ‘Quality Criteria under Methodological Pluralism: Implications for Conducting and Evaluating Research’, American Journal of Community Psychology, 35/3–4: 201–12.

Bergmann, M. et al. (2005) Quality Criteria of Transdisciplinary Research: A Guide for the Formative Evaluation of Research Projects. Central report of Evalunet – Evaluation Network for Transdisciplinary Research. Frankfurt am Main, Germany: Institute for Social-Ecological Research. <http://www.isoe.de/ftp/evalunet_guide.pdf>

Boaz, A. and Ashby, D. (2003) Fit for Purpose? Assessing Research Quality for Evidence Based Policy and Practice.

Boix-Mansilla, V. (2006a) ‘Symptoms of Quality: Assessing Expert Interdisciplinary Work at the Frontier: An Empirical Exploration’, Research Evaluation, 15/1: 17–29.

Boix-Mansilla, V. (2006b) ‘Conference Report: Quality Assessment in Interdisciplinary Research and Education’, Research Evaluation, 15/1: 69–74.

Bornmann, L. (2013) ‘What is Societal Impact of Research and How can it be Assessed? A Literature Survey’, Journal of the American Society for Information Science and Technology, 64/2: 217–33.

Brandt, P. et al. (2013) ‘A Review of Transdisciplinary Research in Sustainability Science’, Ecological Economics, 92: 1–15.

Cash, D., Clark, W. C., Alcock, F., Dickson, N. M., Eckley, N. and Jäger, J. (2002) Salience, Credibility, Legitimacy and Boundaries: Linking Research, Assessment and Decision Making. KSG Working Papers Series RWP02-046. Available at SSRN: <http://ssrn.com/abstract=372280>

Carew, A. L. and Wickson, F. (2010) ‘The TD Wheel: A Heuristic to Shape, Support and Evaluate Transdisciplinary Research’, Futures, 42/10: 1146–55.

Collaboration for Environmental Evidence (CEE) (2013) Guidelines for Systematic Review and Evidence Synthesis in Environmental Management. Version 4.2. Environmental Evidence. <www.environmentalevidence.org/Documents/Guidelines/Guidelines4.2.pdf>

Chandler, J. (2014) Methods Research and Review Development Framework: Policy, Structure, and Process. <http://methods.cochrane.org/projects-developments/research>

Chataway, J., Smith, J. and Wield, D. (2007) ‘Shaping Scientific Excellence in Agricultural Research’, International Journal of Biotechnology, 9/2: 172–87.

Clark, W. C. and Dickson, N. (2003) ‘Sustainability Science: The Emerging Research Program’, PNAS, 100/14: 8059–61.

Consultative Group on International Agricultural Research (CGIAR) (2011) A Strategy and Results Framework for the CGIAR. <http://library.cgiar.org/bitstream/handle/10947/2608/Strategy_and_Results_Framework.pdf?sequence=4>

Cloete, N. (1997) ‘Quality: Conceptions, Contestations and Comments’, African Regional Consultation Preparatory to the World Conference on Higher Education, Dakar, Senegal, 1–4 April 1997.

Defila, R. and DiGiulio, A. (1999) ‘Evaluating Transdisciplinary Research’, Panorama: Swiss National Science Foundation Newsletter, 1: 4–27. <www.ikaoe.unibe.ch/forschung/ip/Specialissue.Pano.1.99.pdf>

Donovan, C. (2008) ‘The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental, and Cultural Returns of Publicly Funded Research. Reforming the Evaluation of Research’, New Directions for Evaluation, 118: 47–60.

Earl, S., Carden, F. and Smutylo, T. (2001) Outcome Mapping: Building Learning and Reflection into Development Programs. Ottawa, ON: International Development Research Center.

Ernø-Kjølhede, E. and Hansson, F. (2011) ‘Measuring Research Performance during a Changing Relationship between Science and Society’, Research Evaluation, 20/2: 130–42.

Feller, I. (2006) ‘Assessing Quality: Multiple Actors, Multiple Settings, Multiple Criteria: Issues in Assessing Interdisciplinary Research’, Research Evaluation, 15/1: 5–15.

Gaziulusoy, A. İ. and Boyle, C. (2013) ‘Proposing a Heuristic Reflective Tool for Reviewing Literature in Transdisciplinary Research for Sustainability’, Journal of Cleaner Production, 48: 139–47.

Gibbons, M. et al. (1994) The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage Publications.

Hellstrom, T. (2011) ‘Homing in on Excellence: Dimensions of Appraisal in Center of Excellence Program Evaluations’, Evaluation, 17/2: 117–31.

Hellstrom, T. (2012) ‘Epistemic Capacity in Research Environments: A Framework for Process Evaluation’, Prometheus, 30/4: 395–409.

Hemlin, S. and Rasmussen, S. B. (2006) ‘The Shift in Academic Quality Control’, Science, Technology & Human Values, 31/2: 173–98.

Hessels, L. K. and Van Lente, H. (2008) ‘Re-thinking New Knowledge Production: A Literature Review and a Research Agenda’, Research Policy, 37/4: 740–60.

Huutoniemi, K. (2010) ‘Evaluating Interdisciplinary Research’, in Frodeman, R., Klein, J. T. and Mitcham, C. (eds) The Oxford Handbook of Interdisciplinarity, pp. 309–20. Oxford: Oxford University Press.

de Jong, S. P. L. et al. (2011) ‘Evaluation of Research in Context: An Approach and Two Cases’, Research Evaluation, 20/1: 61–72.

Jahn, T. and Keil, F. (2015) ‘An Actor-Specific Guideline for Quality Assurance in Transdisciplinary Research’, Futures, 65: 195–208.

Kates, R. (2000) ‘Sustainability Science’, World Academies Conference Transition to Sustainability in the 21st Century, 18 May 2000, Tokyo, Japan.

Klein, J. T. (2006) ‘Afterword: The Emergent Literature on Interdisciplinary and Transdisciplinary Research Evaluation’, Research Evaluation, 15/1: 75–80.

Klein, J. T. (2008) ‘Evaluation of Interdisciplinary and Transdisciplinary Research: A Literature Review’, American Journal of Preventive Medicine, 35/2 Suppl: S116–23. DOI: 10.1016/j.amepre.2008.05.010

Royal Netherlands Academy of Arts and Sciences, Association of Universities in the Netherlands, Netherlands Organization for Scientific Research (KNAW) (2009) Standard Evaluation Protocol 2009–2015: Protocol for Research Assessment in the Netherlands. Netherlands: KNAW. <www.knaw.nl/sep>

Komiyama, H. and Takeuchi, K. (2006) ‘Sustainability Science: Building a New Discipline’, Sustainability Science, 1: 1–6.

Lahtinen, E. et al. (2005) ‘The Development of Quality Criteria for Research: A Finnish Approach’, Health Promotion International, 20/3: 306–15.

Lang, D. J. et al. (2012) ‘Transdisciplinary Research in Sustainability Science: Practice, Principles, and Challenges’, Sustainability Science, 7/S1: 25–43.

Lincoln, Y. S. (1995) ‘Emerging Criteria for Quality in Qualitative and Interpretive Research’, Qualitative Inquiry, 1/3: 275–89.

Mayne, J. and Stern, E. (2013) Impact Evaluation of Natural Resource Management Research Programs: A Broader View. Canberra: Australian Centre for International Agricultural Research.

Meyrick, J. (2006) ‘What is Good Qualitative Research? A First Step Towards a Comprehensive Approach to Judging Rigour/Quality’, Journal of Health Psychology, 11/5: 799–808.

Mitchell, C. A. and Willetts, J. R. (2009) ‘Quality Criteria for Inter- and Trans-Disciplinary Doctoral Research Outcomes’, prepared for ALTC Fellowship: Zen and the Art of Transdisciplinary Postgraduate Studies. Sydney: Institute for Sustainable Futures, University of Technology.

Morrow, S. L. (2005) ‘Quality and Trustworthiness in Qualitative Research in Counseling Psychology’, Journal of Counseling Psychology, 52/2: 250–60.

Nowotny, H., Scott, P. and Gibbons, M. (2001) Re-Thinking Science. Cambridge: Polity.

Nowotny, H., Scott, P. and Gibbons, M. (2003) ‘‘Mode 2’ Revisited: The New Production of Knowledge’, Minerva, 41: 179–94.

Öberg, G. (2008) ‘Facilitating Interdisciplinary Work: Using Quality Assessment to Create Common Ground’, Higher Education, 57/4: 405–15.

Ozga, J. (2007) ‘Co-production of Quality in the Applied Education Research Scheme’, Research Papers in Education, 22/2: 169–81.

Ozga, J. (2008) ‘Governing Knowledge: Research Steering and Research Quality’, European Educational Research Journal, 7/3: 261–72.

OECD (2012) Frascati Manual, 6th edn. <http://www.oecd.org/innovation/inno/frascatimanualproposedstandardpracticeforsurveysonresearchandexperimentaldevelopment6thedition>

Overseas Development Institute (ODI) (2004) ‘Bridging Research and Policy in International Development: An Analytical and Practical Framework’, ODI Briefing Paper. <http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/198.pdf>

Overseas Development Institute (ODI) (2012) RAPID Outcome Assessment Guide. <http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/7815.pdf>

Pullin, A. S. and Stewart, G. B. (2006) ‘Guidelines for Systematic Review in Conservation and Environmental Management’, Conservation Biology, 20/6: 1647–56.

Research Excellence Framework (REF) (2011) Research Excellence Framework 2014: Assessment Framework and Guidance on Submissions. Reference REF 02.2011. UK: REF. <http://www.ref.ac.uk/pubs/2011-02/>

Scott, A. (2007) ‘Peer Review and the Relevance of Science’, Futures, 39/7: 827–45.

Spaapen, J., Dijstelbloem, H. and Wamelink, F. (2007) Evaluating Research in Context: A Method for Comprehensive Assessment. Netherlands: Consultative Committee of Sector Councils for Research and Development. <http://www.qs.univie.ac.at/fileadmin/user_upload/qualitaetssicherung/PDF/Weitere_Aktivit%C3%A4ten/Eric.pdf>

Spaapen, J. and Van Drooge, L. (2011) ‘Introducing “Productive Interactions” in Social Impact Assessment’, Research Evaluation, 20: 211–18.

Stige, B., Malterud, K. and Midtgarden, T. (2009) ‘Toward an Agenda for Evaluation of Qualitative Research’, Qualitative Health Research, 19/10: 1504–16.

td-net (2014) td-net. <www.transdisciplinarity.ch/e/Bibliography/new.php>

Tertiary Education Commission (TEC) (2012) Performance-based Research Fund: Quality Evaluation Guidelines 2012. New Zealand: TEC. <http://www.tec.govt.nz/Documents/Publications/PBRF-Quality-Evaluation-Guidelines-2012.pdf>

Tijssen, R. J. W. (2003) ‘Quality Assurance: Scoreboards of Research Excellence’, Research Evaluation, 12: 91–103.

White, H. and Phillips, D. (2012) ‘Addressing Attribution of Cause and Effect in Small n Impact Evaluations: Towards an Integrated Framework’, Working Paper 15. New Delhi: International Initiative for Impact Evaluation.

Wickson, F. and Carew, A. (2014) ‘Quality Criteria and Indicators for Responsible Research and Innovation: Learning from Transdisciplinarity’, Journal of Responsible Innovation, 1/3: 254–73.

Wickson, F., Carew, A. and Russell, A. W. (2006) ‘Transdisciplinary Research: Characteristics, Quandaries and Quality’, Futures, 38/9: 1046–59.



How to Write a Research Paper | A Beginner's Guide

A research paper is a piece of academic writing that provides analysis, interpretation, and argument based on in-depth independent research.

Research papers are similar to academic essays, but they are usually longer and more detailed assignments, designed to assess not only your writing skills but also your skills in scholarly research. Writing a research paper requires you to demonstrate a strong knowledge of your topic, engage with a variety of sources, and make an original contribution to the debate.

This step-by-step guide takes you through the entire writing process, from understanding your assignment to proofreading your final draft.


Understand the assignment

Completing a research paper successfully means accomplishing the specific tasks set out for you. Before you start, make sure you thoroughly understand the assignment task sheet:

  • Read it carefully, looking for anything confusing you might need to clarify with your professor.
  • Identify the assignment goal, deadline, length specifications, formatting, and submission method.
  • Make a bulleted list of the key points, then go back and cross completed items off as you’re writing.

Carefully consider your timeframe and word limit: be realistic, and plan enough time to research, write, and edit.


Choose a research paper topic

There are many ways to generate an idea for a research paper, from brainstorming with pen and paper to talking it through with a fellow student or professor.

You can try free writing, which involves taking a broad topic and writing continuously for two or three minutes to identify absolutely anything relevant that could be interesting.

You can also gain inspiration from other research. The discussion or recommendations sections of research papers often include ideas for other specific topics that require further examination.

Once you have a broad subject area, narrow it down to choose a topic that interests you, meets the criteria of your assignment, and is possible to research. Aim for ideas that are both original and specific:

  • A paper following the chronology of World War II would not be original or specific enough.
  • A paper on the experience of Danish citizens living close to the German border during World War II would be specific and could be original enough.

Conduct preliminary research

Note any discussions that seem important to the topic, and try to find an issue that you can focus your paper around. Use a variety of sources, including journals, books, and reliable websites, to ensure you do not miss anything glaring.

Do not only verify the ideas you have in mind, but look for sources that contradict your point of view.

  • Is there anything people seem to overlook in the sources you research?
  • Are there any heated debates you can address?
  • Do you have a unique take on your topic?
  • Have there been some recent developments that build on the extant research?

In this stage, you might find it helpful to formulate some research questions to help guide you. To write research questions, try to finish the following sentence: “I want to know how/what/why…”

Develop a thesis statement

A thesis statement is a statement of your central argument: it establishes the purpose and position of your paper. If you started with a research question, the thesis statement should answer it. It should also show what evidence and reasoning you’ll use to support that answer.

The thesis statement should be concise, contentious, and coherent. That means it should briefly summarize your argument in a sentence or two, make a claim that requires further evidence or analysis, and make a coherent point that relates to every part of the paper.

You will probably revise and refine the thesis statement as you do more research, but it can serve as a guide throughout the writing process. Every paragraph should aim to support and develop this central claim.


Create a research paper outline

A research paper outline is essentially a list of the key topics, arguments, and evidence you want to include, divided into sections with headings so that you know roughly what the paper will look like before you start writing.

A structured outline can help make the writing process much more efficient, so it’s worth dedicating some time to create one.

Write a first draft of the research paper

Your first draft won’t be perfect; you can polish it later on. Your priorities at this stage are as follows:

  • Maintaining forward momentum — write now, perfect later.
  • Paying attention to clear organization and logical ordering of paragraphs and sentences, which will help when you come to the second draft.
  • Expressing your ideas as clearly as possible, so you know what you were trying to say when you come back to the text.

You do not need to start by writing the introduction. Begin where it feels most natural for you — some prefer to finish the most difficult sections first, while others choose to start with the easiest part. If you created an outline, use it as a map while you work.

Do not delete large sections of text. If you begin to dislike something you have written or find it doesn’t quite fit, move it to a different document, but don’t lose it completely — you never know if it might come in useful later.

Paragraph structure

Paragraphs are the basic building blocks of research papers. Each one should focus on a single claim or idea that helps to establish the overall argument or purpose of the paper.

Example paragraph

George Orwell’s 1946 essay “Politics and the English Language” has had an enduring impact on thought about the relationship between politics and language. This impact is particularly obvious in light of the various critical review articles that have recently referenced the essay. For example, consider Mark Falcoff’s 2009 article in The National Review Online, “The Perversion of Language; or, Orwell Revisited,” in which he analyzes several common words (“activist,” “civil-rights leader,” “diversity,” and more). Falcoff’s close analysis of the ambiguity built into political language intentionally mirrors Orwell’s own point-by-point analysis of the political language of his day. Even 63 years after its publication, Orwell’s essay is emulated by contemporary thinkers.

Citing sources

It’s also important to keep track of citations at this stage to avoid accidental plagiarism. Each time you use a source, make sure to take note of where the information came from.


Write the introduction

The research paper introduction should address three questions: What, why, and how? After finishing the introduction, the reader should know what the paper is about, why it is worth reading, and how you’ll build your arguments.

What? Be specific about the topic of the paper, introduce the background, and define key terms or concepts.

Why? This is the most important, but also the most difficult, part of the introduction. Try to provide brief answers to the following questions: What new material or insight are you offering? What important issues does your essay help define or answer?

How? To let the reader know what to expect from the rest of the paper, the introduction should include a “map” of what will be discussed, briefly presenting the key elements of the paper in chronological order.

Write a compelling body of text

The major struggle faced by most writers is how to organize the information presented in the paper, which is one reason an outline is so useful. However, remember that the outline is only a guide and, when writing, you can be flexible with the order in which the information and arguments are presented.

One way to stay on track is to use your thesis statement and topic sentences. Check:

  • topic sentences against the thesis statement;
  • topic sentences against each other, for similarities and logical ordering;
  • and each sentence against the topic sentence of that paragraph.

Be aware of paragraphs that seem to cover the same things. If two paragraphs discuss something similar, they must approach that topic in different ways. Aim to create smooth transitions between sentences, paragraphs, and sections.

Write the conclusion

The research paper conclusion is designed to guide your reader out of the paper’s argument, giving them a sense of finality.

Trace the course of the paper, emphasizing how it all comes together to prove your thesis statement. Give the paper a sense of finality by making sure the reader understands how you’ve settled the issues raised in the introduction.

You might also discuss the more general consequences of the argument, outline what the paper offers to future students of the topic, and suggest any questions the paper’s argument raises but cannot or does not try to answer.

You should not:

  • Offer new arguments or essential information
  • Take up any more space than necessary
  • Begin with stock phrases that signal you are ending the paper (e.g. “In conclusion”)

The second draft

There are four main considerations when it comes to the second draft.

  • Check how your vision of the paper lines up with the first draft and, more importantly, that your paper still answers the assignment.
  • Identify any assumptions that might require (more substantial) justification, keeping your reader’s perspective foremost in mind. Remove these points if you cannot substantiate them further.
  • Be open to rearranging your ideas. Check whether any sections feel out of place and whether your ideas could be better organized.
  • If you find that old ideas do not fit as well as you anticipated, you should cut them out or condense them. You might also find that new and well-suited ideas occurred to you during the writing of the first draft — now is the time to make them part of the paper.

The revision process

The goal during the revision and proofreading process is to ensure you have completed all the necessary tasks and that the paper is as well-articulated as possible.

Global concerns

  • Confirm that your paper completes every task specified in your assignment sheet.
  • Check for logical organization and flow of paragraphs.
  • Check paragraphs against the introduction and thesis statement.

Fine-grained details

Check the content of each paragraph, making sure that:

  • each sentence helps support the topic sentence.
  • no unnecessary or irrelevant information is present.
  • all technical terms your audience might not know are identified.

Next, think about sentence structure, grammatical errors, and formatting. Check that you have correctly used transition words and phrases to show the connections between your ideas. Look for typos, cut unnecessary words, and check for consistency in aspects such as heading formatting and spelling.

Finally, you need to make sure your paper is correctly formatted according to the rules of the citation style you are using. For example, you might need to include an MLA heading or create an APA title page.


Checklist: Research paper

I have followed all instructions in the assignment sheet.

My introduction presents my topic in an engaging way and provides necessary background information.

My introduction presents a clear, focused research problem and/or thesis statement.

My paper is logically organized using paragraphs and (if relevant) section headings.

Each paragraph is clearly focused on one central idea, expressed in a clear topic sentence.

Each paragraph is relevant to my research problem or thesis statement.

I have used appropriate transitions to clarify the connections between sections, paragraphs, and sentences.

My conclusion provides a concise answer to the research question or emphasizes how the thesis has been supported.

My conclusion shows how my research has contributed to knowledge or understanding of my topic.

My conclusion does not present any new points or information essential to my argument.

I have provided an in-text citation every time I refer to ideas or information from a source.

I have included a reference list at the end of my paper, consistently formatted according to a specific citation style.

I have thoroughly revised my paper and addressed any feedback from my professor or supervisor.

I have followed all formatting guidelines (page numbers, headers, spacing, etc.).



International Conference on Intelligent Systems Design and Applications

ISDA 2022: Intelligent Systems Design and Applications, pp. 374–383

A Step-To-Step Guide to Write a Quality Research Article

  • Amit Kumar Tyagi (ORCID: orcid.org/0000-0003-2657-8700)
  • Rohit Bansal
  • Anshu
  • Sathian Dananjayan (ORCID: orcid.org/0000-0002-6103-7267)

Conference paper. First Online: 01 June 2023.

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 717)

Today, publishing articles is a trend around the world in almost every university. Millions of research articles are published in thousands of journals annually across many streams/sectors such as medicine, engineering, and science. But few researchers follow the proper and fundamental criteria to write a quality research article. Many published articles over the web become just irrelevant information with duplicate information, which is a waste of available resources. This is because many authors/researchers do not know, or do not follow, the correct approach for writing a valid/influential paper. So, keeping such issues in mind for new or existing researchers in many sectors, we feel motivated to write an article and present some systematic work/approach that can help researchers produce a quality research article. Also, the authors can publish their work in international conferences like CVPR, ICML, NeurIPS, etc., or in international journals with high impact factors, or as a white paper. Publishing good articles improves the profile of researchers around the world, and future researchers can refer to their work as references to advance the respective research to a certain level. Hence, this article will provide sufficient information for researchers to write a simple, effective/impressive and qualitative research article on their area of interest.

  • Quality Research
  • Research Paper
  • Qualitative Research
  • Quantitative Research
  • Problem Statement



Acknowledgement

We want to thank the anonymous reviewers and our colleagues who helped us to complete this work.

Author information

Authors and Affiliations

Department of Fashion Technology, National Institute of Fashion Technology, New Delhi, India
Amit Kumar Tyagi

Department of Management Studies, Vaish College of Engineering, Rohtak, India
Rohit Bansal

Faculty of Management and Commerce (FOMC), Baba Mastnath University, Asthal Bohar, Rohtak, India
Anshu

School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamilnadu, 600127, India
Sathian Dananjayan

Contributions

Amit Kumar Tyagi & Sathian Dananjayan have drafted and approved this manuscript for final publication.

Corresponding author

Correspondence to Amit Kumar Tyagi.


Ethics declarations

Conflict of interest

The authors declare that no conflict exists regarding the publication of this paper.

Scope of the Work

As the authors belong to the computer science stream, they have tried to make this article useful for all streams, but most of the examples used (situations, languages, datasets, etc.) are from computer science-related disciplines only. This work can be used as a reference for writing good quality papers for international conferences and journals.

Disclaimer. Links and papers provided in the work are only given as examples. Any omission of a citation or link is unintentional.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Tyagi, A.K., Bansal, R., Anshu, Dananjayan, S. (2023). A Step-To-Step Guide to Write a Quality Research Article. In: Abraham, A., Pllana, S., Casalino, G., Ma, K., Bajaj, A. (eds) Intelligent Systems Design and Applications. ISDA 2022. Lecture Notes in Networks and Systems, vol 717. Springer, Cham. https://doi.org/10.1007/978-3-031-35510-3_36

DOI: https://doi.org/10.1007/978-3-031-35510-3_36

Published: 01 June 2023

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-35509-7

Online ISBN: 978-3-031-35510-3

eBook Packages: Intelligent Technologies and Robotics (R0)



Assessing the quality of research

This article has a correction. Please see:

  • Errata - September 09, 2004
  • Paul Glasziou (paul.glasziou@dphpc.ox.ac.uk), reader 1
  • Jan Vandenbroucke, professor of clinical epidemiology 2
  • Iain Chalmers, editor, James Lind Library 3
  • 1 Department of Primary Health Care, University of Oxford, Oxford OX3 7LF
  • 2 Leiden University Medical School, Leiden 9600 RC, Netherlands
  • 3 James Lind Initiative, Oxford OX2 7LG
  • Correspondence to: P Glasziou
  • Accepted 20 October 2003

Inflexible use of evidence hierarchies confuses practitioners and irritates researchers. So how can we improve the way we assess research?

The widespread use of hierarchies of evidence that grade research studies according to their quality has helped to raise awareness that some forms of evidence are more trustworthy than others. This is clearly desirable. However, the simplifications involved in creating and applying hierarchies have also led to misconceptions and abuses. In particular, criteria designed to guide inferences about the main effects of treatment have been uncritically applied to questions about aetiology, diagnosis, prognosis, or adverse effects. So should we assess evidence the way Michelin guides assess hotels and restaurants? We believe five issues should be considered in any revision or alternative approach to helping practitioners to find reliable answers to important clinical questions.

Different types of question require different types of evidence

Ever since two American social scientists introduced the concept in the early 1960s, 1 hierarchies have been used almost exclusively to determine the effects of interventions. This initial focus was appropriate but has also engendered confusion. Although interventions are central to clinical decision making, practice relies on answers to a wide variety of types of clinical questions, not just the effect of interventions. 2 Other hierarchies might be necessary to answer questions about aetiology, diagnosis, disease frequency, prognosis, and adverse effects. 3 Thus, although a systematic review of randomised trials would be appropriate for answering questions about the main effects of a treatment, it would be ludicrous to attempt to use it to ascertain the relative accuracy of computerised versus human reading of cervical smears, the natural course of prion diseases in humans, the effect of carriership of a mutation on the risk of venous thrombosis, or the rate of vaginal adenocarcinoma in the daughters of pregnant women given diethylstilboestrol. 4

To answer their everyday questions, practitioners need to understand the “indications and contraindications” for different types of research evidence. 5 Randomised trials can give good estimates of treatment effects but poor estimates of overall prognosis; comprehensive non-randomised inception cohort studies with prolonged follow up, however, might provide the reverse.

Systematic reviews of research are always preferred

With rare exceptions, no study, whatever the type, should be interpreted in isolation. Systematic reviews are required of the best available type of study for answering the clinical question posed. 6 A systematic review does not necessarily involve quantitative pooling in a meta-analysis.

Although case reports are a less than perfect source of evidence, they are important in alerting us to potential rare harms or benefits of an effective treatment. 7 Standardised reporting is certainly needed, 8 but too few people know about a study showing that more than half of suspected adverse drug reactions were confirmed by subsequent, more detailed research. 9 For reliable evidence on rare harms, therefore, we need a systematic review of case reports rather than a haphazard selection of them. 10 Qualitative studies can also be incorporated in reviews—for example, the systematic compilation of the reasons for non-compliance with hip protectors derived from qualitative research. 11

Level alone should not be used to grade evidence

The first substantial use of a hierarchy of evidence to grade health research was by the Canadian Task Force on the Periodic Health Examination. 12 Although such systems are preferable to ignoring research evidence or failing to provide justification for selecting particular research reports to support recommendations, they have three big disadvantages. Firstly, the definitions of the levels vary within hierarchies so that level 2 will mean different things to different readers. Secondly, novel or hybrid research designs are not accommodated in these hierarchies—for example, reanalysis of individual data from several studies or case crossover studies within cohorts. Thirdly, and perhaps most importantly, hierarchies can lead to anomalous rankings. For example, a statement about one intervention may be graded level 1 on the basis of a systematic review of a few, small, poor quality randomised trials, whereas a statement about an alternative intervention may be graded level 2 on the basis of one large, well conducted, multicentre, randomised trial.

This ranking problem arises because of the objective of collapsing the multiple dimensions of quality (design, conduct, size, relevance, etc) into a single grade. For example, randomisation is a key methodological feature in research into interventions, 13 but reducing the quality of evidence to a single level reflecting proper randomisation ignores other important dimensions of randomised clinical trials. These might include:

  • Other design elements, such as the validity of measurements and blinding of outcome assessments
  • Quality of the conduct of the study, such as loss to follow up and success of blinding
  • Absolute and relative size of any effects seen
  • Confidence intervals around the point estimates of effects.

None of the current hierarchies of evidence includes all these dimensions, and recent methodological research suggests that it may be difficult for them to do so. 14 Moreover, some dimensions are more important for some clinical problems and outcomes than for others, which necessitates a tailored approach to appraising evidence. 15 Thus, for important recommendations, it may be preferable to present a brief summary of the central evidence (such as “double-blind randomised controlled trials with a high degree of follow up over three years showed that…”), coupled with a brief appraisal of why particular quality dimensions are important. This broader approach to the assessment of evidence applies not only to randomised trials but also to observational studies. In the final recommendations, there will also be a role for other types of scientific evidence—for example, on aetiological and pathophysiological mechanisms—because concordance between theoretical models and the results of empirical investigations will increase confidence in the causal inferences. 16 17

What to do when systematic reviews are not available

Although hierarchies can be misleading as a grading system, they can help practitioners find the best relevant evidence among a plethora of studies of diverse quality. For example, to answer a therapeutic question, the hierarchy would suggest first looking for a systematic review of randomised controlled trials. However, only a fraction of the hundreds of thousands of reports of randomised trials have been considered for possible inclusion in systematic reviews. 18 So when there is no existing review, a busy clinician might next try to identify the best of several randomised trials. If the search fails to identify any randomised trials, non-randomised cohort studies might be informative. For non-therapeutic questions, however, search strategies should accommodate the need for observational designs that answer questions about aetiology, prognosis, or adverse effects. 19 Whatever evidence is found, this should be clearly described rather than simply assigned to a level. Such considerations have led the authors of the BMJ's Clinical Evidence to use a hierarchy for finding evidence but to forgo grading evidence into levels. Instead, they make explicit the type of evidence on which their conclusions are based.

Balanced assessments should draw on a variety of types of research

For interventions, the best available evidence for each outcome of potential importance to patients is needed. 20 Often this will require systematic reviews of several different types of study. As an example, consider a woman interested in oral contraceptives. Evidence is available from controlled trials showing their contraceptive effectiveness. Although contraception is the main intended beneficial effect, some women will also be interested in the effects of oral contraceptives on acne or dysmenorrhoea. These may have been assessed in short term randomised controlled trials comparing different contraceptives. Any beneficial intended effect needs to be weighed against possible harms, such as increases in thromboembolism and breast cancer. The best evidence for such potential harms is likely to come from non-randomised cohort studies or case-control studies. For example, fears about negative consequences on fertility after long term use of oral contraceptives were allayed by such non-randomised studies. The figure gives an example of how all this information might be amalgamated into a balance sheet. 21 22

Example of possible evidence table for short and long term effects of oral contraceptives. (Absolute effects will vary with age and other risk factors such as smoking and blood pressure. RCT = randomised controlled trial)
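The balance-sheet idea lends itself to a simple tabular structure. The sketch below is a minimal illustration in Python, assuming hypothetical field names (outcome, direction, best_evidence); the rows restate only the pairings of outcomes and evidence types described in the text above, not the figure's actual numbers.

```python
# A minimal sketch of the evidence "balance sheet": each outcome of
# potential importance to the patient is paired with the type of study
# best suited to estimate it. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EvidenceRow:
    outcome: str        # outcome of potential importance to patients
    direction: str      # intended benefit or possible harm
    best_evidence: str  # study type best suited to estimate the effect

balance_sheet = [
    EvidenceRow("contraceptive effectiveness", "benefit", "randomised controlled trials"),
    EvidenceRow("acne / dysmenorrhoea", "benefit", "short term randomised trials"),
    EvidenceRow("thromboembolism", "harm", "cohort and case-control studies"),
    EvidenceRow("breast cancer", "harm", "cohort and case-control studies"),
]

# Print the balance sheet as a simple aligned table.
for row in balance_sheet:
    print(f"{row.outcome:30} {row.direction:8} {row.best_evidence}")
```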


Sometimes, rare, dramatic adverse effects detected with case reports or case control studies prompt further investigation and follow up of existing randomised cohorts to detect related but less severe adverse effects. For example, the case reports and case-control studies showing that intrauterine exposure to diethylstilboestrol could cause vaginal adenocarcinoma led to further investigation and follow up of the mothers and children (male as well as female) who had participated in the relevant randomised trials. These investigations showed several less serious but more frequent adverse effects of diethylstilboestrol that would have otherwise been difficult to detect. 4

Conclusions

Given the flaws in evidence hierarchies that we have described, how should we proceed? We suggest that there are two broad options: firstly, to extend, improve, and standardise current evidence hierarchies 22 ; and, secondly, to abolish the notion of evidence hierarchies and levels of evidence, and concentrate instead on teaching practitioners general principles of research so that they can use these principles to appraise the quality and relevance of particular studies. 5

We have been unable to reach a consensus on which of these approaches is likely to serve the current needs of practitioners more effectively. Practitioners who seek immediate answers cannot embark on a systematic review every time a new question arises in their practice. Clinical guidelines are increasingly prepared professionally—for example, by organisations of general practitioners and of specialist physicians or the NHS National Institute for Clinical Excellence—and this work draws on the results of systematic reviews of research evidence. Such organisations might find it useful to reconsider their approach to evidence and broaden the type of problems that they examine, especially when they need to balance risks and benefits. Most importantly, however, the practitioners who use their products should understand the approach used and be able to judge easily whether a review or a guideline has been prepared reliably.

Evidence hierarchies with the randomised trial at the apex have been pivotal in the ascendancy of numerical reasoning in medicine over the past quarter century. 17 Now that this principle is widely appreciated, however, we believe that it is time to broaden the scope by which evidence is assessed, so that the principles of other types of research, addressing questions on aetiology, diagnosis, prognosis, and unexpected effects of treatment, will become equally widely understood. Indeed, maybe we do have something to learn from Michelin guides: they have separate grading systems for hotels and restaurants, provide the details of the several quality dimensions behind each star rating, and add a qualitative commentary ( http://www.viamichelin.com/ ).

Summary points

  • Different types of research are needed to answer different types of clinical questions
  • Irrespective of the type of research, systematic reviews are necessary
  • Adequate grading of quality of evidence goes beyond the categorisation of research design
  • Risk-benefit assessments should draw on a variety of types of research
  • Clinicians need efficient search strategies for identifying reliable clinical research

Acknowledgments

We thank Andy Oxman and Mike Rawlins for helpful suggestions.

Contributors As a general practitioner, PG uses his own and others' evidence assessments, and as a teacher of evidence based medicine helps others find and appraise research. JV is an internist and epidemiologist by training; he has extensively collaborated in clinical research, which made him strongly aware of the diverse types of evidence that clinicians use and need. IC's interest in these issues arose from witnessing the harm done to patients from eminence based medicine.

Competing interests None declared.


What is quality research? A guide to identifying the key features and achieving success

Every researcher worth their salt strives for quality. But in research, what does quality mean?

Simply put, quality research is thorough, accurate, original and relevant. And to achieve this, you need to follow specific standards. You need to make sure your findings are reliable and valid. And when you know they're quality assured, you can share them with absolute confidence.

You’ll be able to draw accurate conclusions from your investigations and contribute to the wider body of knowledge in your field.

Importance of quality research

Quality research helps us better understand complex problems. It enables us to make decisions based on facts and evidence. And it empowers us to solve real-world issues. Without quality research, we can't advance knowledge or identify trends and patterns. We also can’t develop new theories and approaches to solving problems.

With rigorous and transparent research methods, you’ll produce reliable findings that other researchers can replicate. This leads to the development of new theories and interventions. On the other hand, low-quality research can hinder progress by producing unreliable findings that can’t be replicated, wasting resources and impeding advancements in the field.

In all cases, quality control is critical. It ensures that decisions are based on evidence rather than gut feeling or bias.

Standards for quality research

Over the years, researchers, scientists and authors have reached a broad consensus on the standards used to check the quality of research. Drawn from empirical observation, theory and the philosophy of science, these include:

1. Having a well-defined research topic and a clear hypothesis

This is essential to verify that the research is focused and the results are relevant and meaningful. The research topic should be well-scoped and the hypothesis should be clearly stated and falsifiable.

For example, in a quantitative study about the effects of social media on behavior, a well-defined research topic could be, "Does the use of TikTok reduce attention span in American adolescents?"

This is good because:

  • The research topic focuses on a particular social media platform (TikTok) and a specific group of people (American adolescents).
  • The research question is clear and straightforward, making it easier to design the study and collect relevant data.
  • You can test the hypothesis, and a research team can evaluate it easily, through various research methods such as survey research, experiments or observational studies.
  • The hypothesis is focused on a specific outcome (attention span) that can be measured and compared against control groups or previous studies, as the sketch below illustrates.
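
To make this concrete, here is a minimal Python sketch of how such a hypothesis could be tested. The scores, group sizes and 5% threshold are all invented for illustration, not taken from any real study:

from scipy import stats

# Hypothetical attention-span scores in seconds (invented for illustration)
tiktok_users = [42, 38, 51, 35, 44, 39, 47, 33, 40, 36]
non_users = [55, 49, 61, 52, 58, 47, 63, 50, 54, 57]

# Two-sample t-test; H0: no difference in mean attention span
t_stat, p_value = stats.ttest_ind(tiktok_users, non_users)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# The hypothesis is falsifiable: a p-value at or above the chosen
# threshold would mean the data do not support the claimed effect.
if p_value < 0.05:
    print("Statistically significant difference at the 5% level.")
else:
    print("No statistically significant difference detected.")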

2. Ensuring transparency

Transparency is crucial when conducting research. You need to be upfront about the methods you used, such as:

  • How you recruited the participants.
  • How you communicated with them.
  • How they were incentivized.

You also need to explain how you analyzed the data, so other researchers can replicate your results if necessary. Pre-registering your study is a great way to be as transparent in your research as possible. This involves publicly documenting your study design, methods and analysis plan before conducting the research, which reduces the risk of selective reporting and increases the credibility of your findings.

3. Using appropriate research methods

Depending on the topic, some research methods are better suited than others for collecting data. To use our TikTok example, a quantitative research approach, such as a behavioral test that measures the participants' ability to focus on tasks, might be the most appropriate.

On the other hand, for topics that require a more in-depth understanding of individuals' experiences or perspectives, a qualitative research approach, such as interviews or focus groups, might be more suitable. These methods can provide rich and detailed information that you can’t capture through quantitative data alone.

4. Assessing limitations and the possible impact of systematic bias

When you present your research, it's important to consider how the limitations of your study could affect the results. This could be systematic bias in the sampling procedure or data analysis, for instance. Let's say you only study a small sample of participants from one school district. This would limit the generalizability (external validity) of your findings.

5. Conducting accurate reporting

This is an essential aspect of any research project. You need to be able to clearly communicate the findings and implications of your study, and provide citations for any claims made in your report. When you present your work, it's vital that you accurately describe the variables involved in your study and how you measured them.


How to identify credible research findings

To determine whether a published study is trustworthy, consider the following:

  • Peer review: If a study has been peer-reviewed by recognized experts, that's a strong signal of reliability. Peer review means that other scholars have read and scrutinized the study before publication.
  • Researcher's qualifications: If they're an expert in the field, that’s a good sign that you can trust their findings. However, if they aren't, it doesn’t necessarily mean that the study's information is unreliable. It simply means that you should be extra cautious about accepting its conclusions as fact.
  • Study design: The design of a study can make or break its reliability. Consider factors like sample size and methodology.
  • Funding source: Studies funded by organizations with a vested interest in a particular outcome may be less credible than those funded by independent sources.
  • Statistical significance: You've heard the phrase "numbers don't lie," right? That's what statistical significance is all about. It refers to how unlikely it is that the results of a study occurred by chance alone. Statistically significant results are less likely to be chance findings and so carry more weight.

Achieve quality research with Prolific

Want to ensure your research is high-quality? Prolific can help.

Our platform gives you access to a carefully vetted pool of participants. We make sure they're attentive, honest, and ready to provide rich and detailed answers where needed. This helps to ensure that the data you collect through Prolific is of the highest quality.

With Prolific, you can streamline your research process and feel confident in the results you receive. Our minimum pay threshold and commitment to fair compensation motivate participants to provide valuable responses and give their best effort. This ensures the quality of your research and helps you get the results you need. Sign up as a researcher today to get started!


What makes a high quality clinical research paper?


The quality of a research paper depends primarily on the quality of the research study it reports. However, there is also much that authors can do to maximise the clarity and usefulness of their papers. Journals' instructions for authors often focus on the format, style, and length of articles but do not always emphasise the need to clearly explain the work's science and ethics: so this review reminds researchers that transparency is important too. The research question should be stated clearly, along with an explanation of where it came from and why it is important. The study methods must be reported fully and, where appropriate, in line with an evidence based reporting guideline such as the CONSORT statement for randomised controlled trials. If the study was a trial the paper should state where and when the study was registered and state its registration identifier. Finally, any relevant conflicts of interest should be declared.



How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated data: Not applicable.

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as "the study of the nature of phenomena", including "their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived", but excluding "their range, frequency and place in an objectively determined chain of cause and effect" [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in " research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...) " [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as " a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ] . Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods , including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it : “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig.  1 , this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaption and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

[Figure 1: Iterative research process]

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as "an exchange with an informal character, a conversation with a goal" [ 19 ]. Interviews are used to gain insights into a person's subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

[Figure 2: Possible combination of data collection methods. Icon attributions: "Book" by Serhii Smirnov, "Interview" by Adrien Coquet, FR, "Magnifying Glass" by anggun, ID, "Business communication" by Vectors Market; all from the Noun Project.]

The combination of multiple data sources as described for this example can be referred to as "triangulation", in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as "connecting the raw data with "theoretical" terms" [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].
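
As a toy illustration of how coding makes raw data sortable, the following Python sketch indexes invented transcript segments by code; real projects would use the dedicated software named above rather than a hand-rolled script:

from collections import defaultdict

# Invented (source, segment, codes) triples standing in for coded transcripts
coded_segments = [
    ("SOP_stroke_unit.txt", "Consult neurology via tele-link before transfer.", ["tele-neurology"]),
    ("interview_nurse01.txt", "The video cart is often missing from the ER.", ["tele-neurology", "equipment"]),
    ("observation_ER_day.txt", "Physician waited 12 minutes for the tele-consult to connect.", ["tele-neurology", "delay"]),
]

# Build a code -> segments index, so all segments on one topic can be extracted
index = defaultdict(list)
for source, text, codes in coded_segments:
    for code in codes:
        index[code].append((source, text))

# Pull every segment describing a tele-neurology consultation, across sources
for source, text in index["tele-neurology"]:
    print(f"{source}: {text}")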

[Figure 3: From data collection to data analysis. Icon attributions: see Fig. 2; also "Speech to text" by Trevor Dsouza, "Field Notes" by Mike O'Brien, US, "Voice Record" by ProSymbols, US, "Inspection" by Made, AU, and "Cloud" by Graphic Tigers; all from the Noun Project.]

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

[Figure 4: Three common mixed methods designs]

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents' subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study were used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on the topics about which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.
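
As a toy sketch of the convergent parallel design's interpretation stage, the following Python compares invented registry door-to-needle times with equally invented interview impressions; all case names and numbers are hypothetical:

import statistics

registry_door_to_needle = {"case01": 38, "case02": 95, "case03": 41, "case04": 110}  # minutes
interview_impressions = {"case01": "felt fine", "case02": "felt slow", "case04": "felt slow"}

median_time = statistics.median(registry_door_to_needle.values())
for case, minutes in sorted(registry_door_to_needle.items()):
    flag = "above median" if minutes > median_time else "at/below median"
    impression = interview_impressions.get(case, "no interview")
    print(f"{case}: {minutes} min ({flag}); qualitative: {impression}")

# Agreement between "felt slow" and above-median times would corroborate the
# qualitative findings; disagreement would prompt further inquiry.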

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting is relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample "to see the issue and its meanings from as many angles as possible" [ 1 , 16 , 19 , 20 , 27 ], and to ensure "information-richness" [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].
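
Schematically, this stopping rule is a loop rather than a fixed plan. The following Python sketch assumes a hypothetical collect_batch step standing in for one round of interviewing and coding (not a real library function; the simulated codes are invented):

def collect_batch(round_number):
    # Placeholder: in practice this is a round of data collection plus coding.
    simulated = [{"transport", "staffing"}, {"transport", "intranet"}, set()]
    return simulated[min(round_number, len(simulated) - 1)]

known_codes, rounds = set(), 0
while True:
    new_codes = collect_batch(rounds) - known_codes
    if not new_codes:  # no relevant new information: saturation reached
        break
    known_codes |= new_codes  # analyse, then sample further variants
    rounds += 1

print(f"Saturation after {rounds} rounds; codes found: {sorted(known_codes)}")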

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as "purposive sampling", in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Piloting

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or which is the best length of an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Co-coding

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree) and a common meaning of individual codes [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation, refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents' feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting "quiet, uncooperative or inarticulate individuals" [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of "interrater reliability" is sometimes used in qualitative research to assess the extent to which the coding overlaps between two co-coders. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. Such scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but they are not a requirement. Relatedly, it is not necessary for the quality or "objectivity" of qualitative research that the people who recruit the study participants be different from those who collect and analyse the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context information for interpretation of the data, e.g. on whether something might have been meant as a joke [ 18 ].
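
For researchers who nevertheless choose to report such a score, Cohen's kappa is one common choice. Here is a minimal sketch using scikit-learn, with invented labels for ten segments coded by two coders (the codes and values carry no meaning beyond illustration):

from sklearn.metrics import cohen_kappa_score

coder_a = ["delay", "delay", "equipment", "staffing", "delay",
           "equipment", "staffing", "delay", "equipment", "delay"]
coder_b = ["delay", "equipment", "equipment", "staffing", "delay",
           "equipment", "delay", "delay", "equipment", "delay"]

# Kappa corrects raw percentage agreement for agreement expected by chance:
# 1.0 = perfect agreement, 0 = chance-level agreement.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")

As the paragraph above notes, such a number says little by itself; if reported, it should be accompanied by a discussion of what the disagreements were and how they were resolved.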

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is applied irrespective of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In that case, the same criterion should be applied to quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the "fit" between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Take-away points

Acknowledgements

Authors' contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Funding

No external funding.

Availability of data and materials

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

ScienceDaily

Using CO2 and biomass, researchers find path to more environmentally friendly recyclable plastics

Modern life relies on plastic. This lightweight, adaptable product is a cornerstone of packaging, medical equipment, the aerospace and automotive industries and more. But plastic waste remains a problem as it accumulates in landfills and pollutes oceans.

FAMU-FSU College of Engineering researchers have created a potential alternative to traditional petroleum-based plastic that is made from carbon dioxide (CO 2 ) and lignin, a component of wood that is a low-cost byproduct of paper manufacturing and biofuel production. Their research was published in Advanced Functional Materials .

"Our study takes the harmful greenhouse gas CO 2 and makes it into a useful raw material to produce degradable polymers or plastics," said Hoyong Chung, an associate professor in chemical and biomedical engineering at the college. "We are not only reducing CO 2 emissions, but we are producing a sustainable polymer product using the CO 2 ."

This study is the first to demonstrate the direct synthesis of what's known as a cyclic carbonate monomer -- a molecule made of carbon and oxygen atoms that can be linked with other molecules -- made from CO 2 and lignin.

By linking multiple monomers together, scientists can create synthetic polymers, long-chained molecules that can be designed to fill all manner of applications.

The material developed by Chung and his research team is fully degradable at the end of its life without producing microplastics and toxic substances. It can be synthesized at lower pressures and temperatures. And the polymer can be recycled without losing its original properties.

Using depolymerization, the researchers can convert polymers to pure monomers, which are the building blocks of polymers. This is the key to the high quality of the recycled material. The monomers can be recycled indefinitely and produce a high-quality polymer as good as the original, an improvement over previously developed and currently used polymer materials in which repeated heat exposure from melting reduces quality and allows for limited recycling.

"We can readily degrade the polymer via depolymerization, and the degraded product can synthesize the same polymer again," Chung said. "This is more cost effective and keeps it from losing original properties of polymers over multiple recycling. This is considered a breakthrough in material science, as it enables the realization of a true circular economy."

The newly developed material could be used for low-cost, short lifespan plastic products in such sectors as construction, agriculture, packaging, cosmetics, textiles, diapers and disposable kitchenware. With further development, Chung anticipates its use in highly specialized polymers for biomedical and energy storage applications.

The FSU Office of Commercialization provided valuable foundational support for Chung's research. Support from an internal funding program helped previous work with lignin-based polymers, and with the help of the office, he has received patents for other polymer research.

The project was supported by federal funds awarded to the State of Florida from the United States Department of Agriculture, National Institute of Food and Agriculture and support from the FAMU-FSU College of Engineering. Postdoctoral researcher Arijit Ghorai was the lead author of the study.


Story Source:

Materials provided by Florida State University. Original written by Trisha Radulovich. Note: Content may be edited for style and length.

Journal Reference :

  • Arijit Ghorai, Hoyong Chung. CO2 and Lignin-Based Sustainable Polymers with Closed-Loop Chemical Recycling. Advanced Functional Materials, 2024; DOI: 10.1002/adfm.202403035



The best AI image generators to try right now


If you've ever searched Google high and low to find an image you needed to no avail, artificial intelligence (AI) may be able to help. 

With AI image generators, you can type in a prompt as detailed or vague as you'd like to fit an array of purposes and have the image you were thinking of instantly pop up on your screen. These tools can help with branding, social media content creation, and making invitations, flyers, business cards, and more.

Also: ChatGPT no longer requires a login, but you might want one anyway. Here's why

Even if you have no professional use for AI, don't worry -- the process is so fun that anyone can (and should) try it out.

OpenAI's DALL-E 2 made a huge splash because of its advanced capabilities as the first mainstream AI image generator. However, since its initial launch, there have been many developments. Other companies have released models that rival DALL-E 2, and OpenAI even released a more advanced model known as DALL-E 3 , discontinuing its predecessor. 

To help you discover which models are the best for different tasks, I put the image generators to the test by giving each tool the same prompt: "Two Yorkies sitting on a beach that is covered in snow". I also included screenshots to help you decide which is best. 

Also: DALL-E adds new ways to edit and create AI-generated images. Learn how to use it

While I found the best overall AI image generator is Image Creator from Microsoft Designer , due to its free-of-charge, high-quality results, other AI image generators perform better for specific needs. For the full roundup of the best AI image generators, keep reading. 

The best AI image generators of 2024

Image Creator from Microsoft Designer (formerly Bing Image Creator)

Best AI image generator overall

  • Powered by DALL-E 3
  • Convenient to access
  • Need a Microsoft account
  • In preview stage

Image Creator from Microsoft Designer is powered by DALL-E 3, OpenAI's most advanced image-generating model. As a result, it produces the same quality results as DALL-E 3 while remaining free to use, as opposed to the $20-per-month subscription required to use DALL-E 3 directly.

All you need to do to access the image generator is visit the Image Creator website and sign in with a Microsoft account. 

Another major perk of this AI generator is that you can also access it in the same place where you can access Microsoft's AI chatbot, Copilot (formerly Bing Chat).

This capability means that in addition to visiting Image Creator on its standalone site, you can ask it to generate images for you in Copilot. To render an image, all you have to do is conversationally ask Copilot to draw you any image you'd like. 

Also:   How to use Image Creator from Microsoft Designer (formerly Bing Image Creator)

This feature is so convenient because you can satisfy all your image-generating and AI-chatting needs in the same place for free. This combination facilitates tasks that could benefit from image and text generation, such as party planning, as you can ask the chatbot to generate themes for your party and then ask it to create images that follow the theme.

Image Creator from Microsoft Designer features: Powered by: DALL-E 3 | Access via: Copilot, browser, mobile | Output: 4 images per prompt | Price: Free

DALL-E 3 by OpenAI

Best AI image generator if you want to experience the inspiration

  • Not copyrighted
  • Accurate depictions
  • Confusing credits

OpenAI, the AI research company behind ChatGPT, launched DALL-E 2 in 2022. The tool quickly became the most popular AI image generator on the market. However, after launching its most advanced image generator, DALL-E 3, OpenAI discontinued DALL-E 2.

DALL-E 3 is even more capable than the original model, but this ability comes at a cost. To access DALL-E 3 you must be a ChatGPT Plus subscriber, and the membership costs $20 per month per user. You can access DALL-E 3 via ChatGPT or the ChatGPT app.

Using DALL-E 3 is very intuitive. Type in whatever prompt you'd like, specifying as much detail as necessary to bring your vision to life, and then DALL-E 3 will generate four images from your prompt. As you can see in the image at the top of the article, the renditions are high quality and very realistic.

OpenAI even recently added new ways to edit an image generated by the chatbot, including easy conversational text prompts and the ability to click on parts of the image you want to edit. 

Like with Copilot, you can chat and render your images on the same platform, making it convenient to work on projects that depend on image and text generation. If you don't want to shell out the money,  Image Creator by Designer  is a great alternative since it's free, uses DALL-E 3, and can be accessed via Copilot.

DALL-E 3 features: Powered by: DALL-E 3 by OpenAI | Access via: ChatGPT website and app | Output: 4 images per credit | Price: ChatGPT Plus subscription, $20 per month
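
Besides the ChatGPT interface described above, developers can reach DALL-E 3 programmatically through OpenAI's Images API, which is billed separately from ChatGPT Plus. Here is a minimal sketch using the official Python SDK (pip install openai), assuming an OPENAI_API_KEY environment variable is set:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt="Two Yorkies sitting on a beach that is covered in snow",
    size="1024x1024",
    n=1,  # DALL-E 3 generates a single image per request
)
print(result.data[0].url)  # temporary URL of the generated image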

ImageFX by Google

The best AI image generator for beginners

  • Easy-to-use
  • High-quality results
  • Expressive chips
  • Need a Google account
  • Strict guardrails can be limiting

Google's ImageFX was a dark horse, entering the AI image generator space much later than its competition, over a year after DALL-E 2 launched. However, the generator's performance seems to have been worth the wait. The image generator can produce high-quality, realistic outputs, even objects that are difficult to render, such as hands. 

Also: I just tried Google's ImageFX AI image generator, and I'm shocked at how good it is

The tool boasts a unique feature, expressive chips, that make it easier to refine your prompts or generate new ones via dropdowns, which highlight parts of your prompt and suggest different word changes to modify your output.

ImageFX also includes suggestions for the style you'd like your image rendered in, such as photorealistic, 35mm film, minimal, sketch, handmade, and more. This combination of features makes ImageFX perfect for beginners who want to experiment.

ImageFX from Google features: Powered by: Imagen 2 | Access via: Website | Output: 4 images | Price: Free

DreamStudio by Stability AI

Best AI image generator for customization

  • Accepts specific instruction
  • Open source
  • More entries for customization
  • Paid credits
  • Need to create an account

Stability AI created the massively popular, open-sourced, text-to-image generator, Stable Diffusion. Users can download the tool and use it at no cost. However, using this tool typically requires technical skill. 

Also :  How to use Stable Diffusion AI to create amazing images

To make the technology readily accessible to everyone (regardless of skill level), Stability AI created DreamStudio, which incorporates Stable Diffusion in a UI that is easy to understand and use. 

One of the platform's standout features is that it includes many different entries for customization, including a "negative prompt" where you can delineate the specifics of what you'd like to avoid in the final image. You can also easily change the aspect ratio -- a key feature, as most AI image generators automatically deliver 1:1 images.
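
These UI controls map onto parameters the underlying open-source pipeline exposes. Continuing the diffusers sketch from the Stable Diffusion section above (the parameter names below are from diffusers, not from DreamStudio's own interface):

# negative_prompt mirrors DreamStudio's "negative prompt" field; explicit
# width/height mirror its aspect-ratio control (values should be multiples of 8).
image = pipe(
    "A baby Yorkie on a comfy couch",
    negative_prompt="blurry, low quality, extra limbs",
    width=768,   # 3:2 output instead of the default 1:1 square
    height=512,
).images[0]
image.save("yorkie_wide.png")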

DreamStudio features: Powered by: SDXL 1.0 by Stability AI | Access via: Website | Output: 1 image per 2 credits | Price: $1 per 100 credits | Credits: 25 free credits when you open an account; purchase more once you run out

Dream by WOMBO

Best AI image generator for your phone.

  • Remix your own images
  • Multiple templates
  • One image per prompt
  • Subscription cost for full access

This app took the first-place spot for best overall app in Google Play's 2022 awards, and it has five stars on Apple's App Store with 141.6K ratings. With the app, you can create art and images with the simple input of a quick prompt.

An added plus is that this AI image generator lets you pick different design styles, such as realistic, expressionist, comic, abstract, fantastical, ink, and more.

Also :  How to use Dream by WOMBO to generate artwork in any style

In addition to the app, the tool has a free desktop version that is simple to use. If you want to take your use of the app to the next level, you can pay $90 per year or $10 per month.

Dream by WOMBO features: Powered by: WOMBO AI's machine-learning algorithm | Access via: Mobile and desktop versions | Output: 1 image with the free version, 4 with a paid plan | Price: Free limited access

Craiyon

Best no-frills AI image generator

  • Unlimited access
  • Simple to use
  • Longer wait
  • Inconsistent images

Despite originally being named DALL-E mini, this AI image generator is NOT affiliated with OpenAI or DALL-E 2. Rather, it is an open-source alternative. However, the original name was somewhat fitting, as the tool does everything DALL-E 2 does, just with less precise renditions.

Also :  How to use Craiyon AI (formerly known as DALL-E mini)

Craiyon's outputs lack the quality of DALL-E 2's and take longer to render (approximately a minute). However, because you have unlimited prompts, you can keep tweaking your prompt until you get your exact vision. The site is also simple to use, making it perfect for someone who wants to experiment with AI image generators. It also generates six images per prompt, more than any other generator listed.

Craiyon features: Powered by: Craiyon's own model | Access via: Craiyon website | Output: 6 images per prompt | Price: Free, unlimited prompts

Midjourney

Best AI image generator for the highest-quality photos

  • Very high-quality outputs
  • Discord community
  • Monthly cost
  • Confusing to set up

I often play around with AI image generators because they make it fun and easy to create digital artwork. Despite all my experiences with different AI generators, nothing could have prepared me for Midjourney -- in the best way. 

The output was so crystal clear that I had a hard time believing it wasn't an actual photograph of what I had described. This software is so good that it has produced award-winning art .

However, Midjourney isn't the most user-friendly tool, and getting started can be confusing. If you need extra direction, check out our step-by-step how-to here: How to use Midjourney to generate amazing images and art .

Another drawback is that you can no longer access the tool for free. When I tried to render images, I got this error message: "Due to extreme demand, we can't provide a free trial right now. Please subscribe to create images with Midjourney."

To show you the quality of renditions, I've included a close-up below from a previous time I tested the generator. The prompt was: "A baby Yorkie sitting on a comfy couch in front of the NYC skyline." 

Midjourney features: Powered by: Midjourney; utilizes Discord | Access via: Discord | Output: 4 images per prompt | Price: Starts at $10/month

Adobe Firefly

Best AI image generator if you have a reference photo.

  • Structure and Style Reference
  • Commercial-safe
  • Longer lag than other generators
  • More specific prompts required

Adobe has been a leader in developing tools for creative professionals for decades, so it's no surprise that its image generator is impressive. Accessing the generator is easy: just visit the website and type the prompt for the image you'd like generated.

Also: This new AI tool from Adobe makes generating the images you need even simpler

As you can see above, the images rendered of the Yorkies are high-quality, realistic, and detailed. Additionally, this generator's biggest standouts are its Structure Reference and Style Reference features.

Structure Reference lets users input an image they want the AI model to use as a template. The model then uses this structure to create a new image with the same layout and composition. Style Reference uses an image as a reference to generate a new image in the same style. 

These features are useful if you have an image you'd like the new, generated image to resemble, for example, a quick sketch you drew or even a business logo or style you'd like to keep consistent. 

Another perk is that Adobe Firefly was trained on Adobe Stock images, openly licensed content, and public domain content, making all the images generated safe for commercial use and addressing the ethics issue of image generators. 

Adobe Firefly features: Powered by: Firefly Image 2 | Access via: Website | Output: 4 images per prompt | Price: Free

Generative AI by Getty Images

Best AI image generator for businesses.

  • Commercially safe
  • Contributor compensation program
  • Personalized stock photos
  • Not clear about pricing
  • Not individual-friendly

One of the biggest issues with AI image generators is that they typically train their generators on content from the entirety of the internet, which means the generators use aspects of creators' art without compensation. This approach also puts businesses that use generators at risk of copyright infringement. 

Generative AI by Getty Images tackles that issue by generating images with content solely from Getty Images' vast creative library with full indemnification for commercial use. The generated images will have Getty Images' standard royalty-free license, assuring customers that their content is fair to use without fearing legal repercussions.

Another pro is that contributors whose content was used to train the models will be compensated for their inclusion in the training set. This is a great solution for businesses that want stock photos that match their creative vision but do not want to deal with copyright-related issues. 

ZDNET's Tiernan Ray went hands-on with the AI image generator. Although the tool did not generate the most vivid images, especially compared to DALL-E, it did create accurate, reliable, and usable stock images.

Generative AI by Getty Images features: Powered by: NVIDIA Picasso | Access via: Website | Output: 4 images per prompt | Price: Paid (price undisclosed; you have to contact the team)

What is the best AI image generator?

Image Creator from Microsoft Designer is the best overall AI image generator. Like DALL-E 3, Image Creator from Microsoft Designer combines accuracy, speed, and cost-effectiveness, and can generate high-quality images in seconds. However, unlike DALL-E 3, this Microsoft version is entirely free.

Whether you want to generate images of animals, objects, or even abstract concepts, Image Creator from Microsoft Designer can produce accurate depictions that meet your expectations. It is highly efficient, user-friendly, and cost-effective.

Note: Prices and features are subject to change.

Which is the right AI image generator for you?

Although I crowned Image Creator from Microsoft Designer the best AI image generator overall, other AI image generators perform better for specific needs. For example, if you are a professional using AI image generation for your business, you may need a tool like Generative AI by Getty Images, which renders images safe for commercial use.

On the other hand, if you want to play with AI art generation for entertainment purposes, Craiyon might be the best option because it's free, unlimited, and easy to use.

How did I choose these AI image generators?

To find the best AI image generators, I tested each generator listed and compared their performance. The factors that went into testing performance included UI/UX, image results, cost, speed, and availability. Each AI image generator had different strengths and weaknesses, making each one the ideal fit for individuals as listed next to my picks. 

What is an AI image generator?

An AI image generator is software that uses AI to create images from user text inputs, usually within seconds. The images vary in style depending on the capabilities of the software, but can typically render an image in any style you want, including 3D, 2D, cinematic, modern, Renaissance, and more. 

How do AI image generators work?

Like any other AI model, AI image generators learn from the data they are trained on. Typically, these models are trained on billions of images, which they analyze for patterns and characteristics. The models then use these insights to create new images.
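
Most of the generators above are diffusion models: they learn to turn random noise into an image, one small denoising step at a time. The toy sketch below shows only the shape of that loop; a real system replaces the trivial denoise_step with a large neural network conditioned on your text prompt:

import numpy as np

rng = np.random.default_rng(0)
target = np.zeros((8, 8))        # stand-in for the "clean" image the model aims for
image = rng.normal(size=(8, 8))  # generation starts from pure Gaussian noise

def denoise_step(x):
    # A real model predicts and removes noise, guided by the text prompt;
    # this stand-in just nudges the sample toward the target.
    return x + 0.1 * (target - x)

for _ in range(50):              # production samplers also use a few dozen steps
    image = denoise_step(image)

print(f"mean residual noise: {np.abs(image - target).mean():.4f}")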

Are there ethical implications with AI image generators?

AI image generators are trained on billions of images found throughout the internet. These images are often artworks that belong to specific artists, which are then reimagined and repurposed by AI to generate your image. Although the output is not the same image, the new image contains elements of an artist's original work without crediting them.

Are there DALL-E 3 alternatives worth considering?

Contrary to what you might think, there are many AI image generators other than DALL-E 3. Some tools produce even better results than OpenAI's software. If you want to try something different, check out one of our alternatives above or the additional options below.

Nightcafe is a multi-purpose AI image generator. The tool is worth trying because it allows users to create unique and original artwork using different inputs and styles, including abstract, impressionism, expressionism, and more.

Canva is a versatile and powerful AI image generator that offers a wide range of options within its design platform. It allows users to create professional-looking designs for different marketing channels, including social media posts, ads, flyers, brochures, and more. 


ORIGINAL RESEARCH article

The Impact of ESG Ratings on the Quality and Quantity of Green Innovation of New Energy Enterprises. Haiwen Liu1*, Yuanze Xu2. Provisionally accepted.

  • 1 Faculty of Business, City University of Macau, China
  • 2 School of Economics and Management, Beijing Jiaotong University, China

The final, formatted version of the article will be published soon.

Amid growing environmental challenges linked to coal dependence, fostering green innovation in new energy enterprises is vital for sustainable development in China. Although there have been studies on green innovation in new energy enterprises in recent years, few have been conducted from the perspective of ESG, and whether informal environmental regulation represented by ESG can stimulate green innovation in new energy enterprises is of great significance for China's construction of a low-carbon and secure energy system. In this paper, from the perspective of informal environmental regulation, we use SynTao Green Finance's first public ESG ratings of new energy listed companies as an exogenous shock and take A-share new energy listed companies from 2010-2021 as a sample to empirically verify the effect and mechanism of ESG ratings on the quantity and quality of green innovation in new energy companies, using a staggered difference-in-differences (DID) model. The findings demonstrate that ESG ratings greatly enhance both the number and the quality of new energy enterprises' green patents. However, there is clear heterogeneity in this green innovation effect, which is particularly visible in new energy firms that are state-owned, larger in scale, and more highly digitized. The mechanism analysis suggests that ESG ratings drive green innovation by alleviating financial constraints, reducing agency risk, and boosting R&D, thus providing empirical evidence for the development of a green innovation ecosystem in the new energy industry.

Keywords: new energy enterprises, ESG ratings, green innovation, informal environmental regulation, staggered difference-in-differences

Received: 05 Feb 2024; Accepted: 10 Apr 2024.

Copyright: © 2024 Liu and Xu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mx. Haiwen Liu, Faculty of Business, City University of Macau, Macau, China

COMMENTS

  1. Research quality: What it is, and how to achieve it

    2) Initiating research stream: The researcher(s) must be able to assemble a research team that can achieve the identified research potential. The team should be motivated to identify research opportunities and insights, as well as to produce top-quality articles, which can reach the highest-level journals.

  2. How do you determine the quality of a journal article?

    The journal (academic publication) where the article is published says something about the quality of the article. Journals are ranked in the Journal Quality List (JQL). If the journal you used is ranked at the top of your professional field in the JQL, then you can assume that the quality of the article is high.

  3. Criteria for Good Qualitative Research: A Comprehensive Review

    Fundamental Criteria: General Research Quality. Various researchers have put forward criteria for evaluating qualitative research, which have been summarized in Table 3. Also, the criteria outlined in Table 4 effectively deliver the various approaches to evaluate and assess the quality of qualitative work. The entries in Table 4 are based on Tracy's "Eight big-tent criteria for excellent ...

  4. Defining and assessing research quality in a transdisciplinary context

    Effective research quality criteria are necessary to guide the funding, management, ongoing development, and advancement of research methods, projects, and programs. ... These papers recognize the need for new quality measures to encourage and promote high-quality research and to overcome perceived biases against TDR approaches in research ...

  5. Assessing the quality of research

    Systematic reviews of research are always preferred, but level alone should not be used to grade evidence: other design elements matter, such as the validity of measurements and blinding of outcome assessments, as does the quality of the conduct of the study, such as loss to follow-up and success of blinding.

  6. How to Write a Research Paper

    A research paper is a piece of academic writing that provides analysis, interpretation, and argument based on in-depth independent research. Research papers are similar to academic essays, but they are usually longer and more detailed assignments, designed to assess not only your writing skills but also your skills in scholarly research ...

  7. Research quality: What it is, and how to achieve it

    What research quality historically is. Research assessment plays an essential role in academic appointments, in annual performance reviews, in promotions, and in national research assessment exercises such as the Excellence in Research for Australia (ERA), the Research Excellence Framework (REF) in the United Kingdom, the Standard Evaluation Protocol (SEP) in the Netherlands ...

  8. Quality in Research: Asking the Right Question

    This column is about research questions, the beginning of the researcher's process. For the reader, the question driving the researcher's inquiry is the first place to start when examining the quality of their work because if the question is flawed, the quality of the methods and soundness of the researchers' thinking does not matter.

  9. How to Write and Publish a Research Paper for a Peer ...

    Communicating research findings is an essential step in the research process. Often, peer-reviewed journals are the forum for such communication, yet many researchers are never taught how to write a publishable scientific paper. In this article, we explain the basic structure of a scientific paper and describe the information that should be included in each section. We also identify common ...

  10. A Step-To-Step Guide to Write a Quality Research Article

    These research papers will be of more use to you in the process of preparing a high-quality research paper. On the other hand, the majority of reputable journals advise against citing more than two publications from the pre-print or ArXiv database in a single paper. We are only permitted to refer to articles that have been published by ...

  11. Assessing the quality of research

    The widespread use of hierarchies of evidence that grade research studies according to their quality has helped to raise awareness that some forms of evidence are more trustworthy than others. This is clearly desirable. However, the simplifications involved in creating and applying hierarchies have also led to misconceptions and abuses.

  12. A Review of the Quality Indicators of Rigor in Qualitative Research

    Abstract. Attributes of rigor and quality and suggested best practices for qualitative research design as they relate to the steps of designing, conducting, and reporting qualitative research in health professions educational scholarship are presented. A research question must be clear and focused and supported by a strong conceptual framework ...

  13. PDF How to GRADE the quality of the evidence

    about rating the quality of the evidence in the table ('comments' column). The reasons for your decisions about the quality of the evidence form a critical part of the GRADE assessment and must be reported, either as part of the SoF table (footnotes) or in the review if a SoF table is not included. d.

  14. Full article: Quality 2030: quality management for the future

    The paper is also an attempt to initiate research for the emerging 2030 agenda for QM, here referred to as 'Quality 2030'. This article is based on extensive data gathered during a workshop process conducted in two main steps: (1) a collaborative brainstorming workshop with 22 researchers and practitioners (spring 2019) and (2) an ...

  15. What is quality research? A guide to identifying the key features and

    Standards for quality research. Over the years, researchers, scientists and authors have come to a consensus about the standards used to check the quality of research. Determined through empirical observation, theoretical underpinnings and philosophy of science, these include: 1. Having a well-defined research topic and a clear hypothesis

  16. What makes a high quality clinical research paper?

    The quality of a research paper depends primarily on the quality of the research study it reports. However, there is also much that authors can do to maximise the clarity and usefulness of their papers. Journals' instructions for authors often focus on the format, style, and length of articles but d …

  17. Assessing the Quality of Education Research Through Its Relevance to

    What constitutes "quality" in education research? Consensus on assessing the quality of education research has been elusive. There are various different criteria for assessing research related to a host of methodological and research approaches employed in education research, and the effective adoption and use of quality standards is unclear (Boaz & Ashby, 2003; Moss et al., 2009; Tijssen ...

  18. (PDF) Assessment of Research Quality

    This paper considers assessment of research quality by focusing on definition and solution of research problems. We develop and discuss, across different classes of problems, a set of general ...

  19. How to use and assess qualitative research methods

    Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [15, 17, 23].

  20. Tips on Writing a Good Research Paper

    While writing a paper can be a time-consuming process, selecting a good thesis statement and employing the right research strategies may help students efficiently gather information, maximize their time, and produce a high-quality research paper. Based on my experience, here are several tips to help with writing a high-quality research paper.

  21. (PDF) Writing Quality Research Papers

    This book is about a thorough understanding of the essentials and the way to write quality research papers. It explores the techniques and standard sentence formation along with grammar tenses for ...

  22. The Many Meanings of Quality: Towards a Definition in Support of

    1. Introduction. Quality is a multi-faceted and intangible construct (Charantimath, 2011; Zhang, 2001) that has been subject to many interpretations and perspectives in our everyday life, in academia, as well as in industry and the public domain. In industry, most organisations have well-established quality departments (Sousa & Voss, 2002), but the method of ...

  23. How to Start Getting Published in Medical and Scientific Journals

    However, Lasky-Su (whose papers make use of terabytes of molecular data) says that any research that utilizes large language models will require someone to interpret and make sense of the data in a macro way. "We are in a computational place where the output and the quality of the output that we have are dramatically different.

  24. Using CO2 and biomass, researchers find path to more ...

    Researchers have created a potential alternative to traditional petroleum-based plastic that is made from carbon dioxide (CO2) and lignin, a component of wood that is a low-cost byproduct of paper ...

  25. The best AI image generators of 2024: Tested and reviewed

    DALL-E 3. An upgraded version of the original best AI image generator that combines accuracy, speed, and cost-effectiveness. It allows users to generate high-quality images quickly and easily ...

  26. Frontiers

    In this paper, from the perspective of informal environmental regulation, based on the ESG ratings of SynTao Green Finance's first public new energy listed companies as an exogenous shock, and taking A-share new energy listed companies as a sample from 2010-2021, we empirically verified the effect and mechanism of ESG ratings on the green ...

  27. Linking Export Activities to Productivity and Wage Rate Growth

    Abstract: This paper examines the relationship between trade and job quality, using productivity and wage rate data for export and non-export activities in a sample of 60 countries across all income levels and 45 sectors spanning the whole economy over 1995-2019. First, the analysis finds that workers involved in export activities are more ...

  28. Mandating indoor air quality for public buildings

    Vol 383, Issue 6690. pp. 1418 - 1420. DOI: 10.1126/science.adl0677. People living in urban and industrialized societies, which are expanding globally, spend more than 90% of their time in the indoor environment, breathing indoor air (IA). Despite decades of research and advocacy, most countries do not have legislated indoor air quality (IAQ ...