• Methodology
  • Open access
  • Published: 11 October 2016

Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research

  • Stephen J. Gentles 1,4,
  • Cathy Charles 1,
  • David B. Nicholas 2,
  • Jenny Ploeg 3 &
  • K. Ann McKibbon 1

Systematic Reviews volume 5, Article number: 172 (2016)

Background

Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews, might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research.

Results

The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision-making at various stages of the review process and a rigorous qualitative approach to analysis are necessary features of this review type.

Conclusions

We believe that the principles and strategies provided here will be useful to anyone choosing to undertake a systematic methods overview. This paper represents an initial effort to promote high-quality critical evaluations of the literature regarding problematic methods topics, which have the potential to promote clearer, shared understandings and accelerate advances in research methods. Further work is warranted to develop more definitive guidance.

Background

While reviews of methods are not new, they represent a distinct review type whose methodology remains relatively under-addressed in the literature despite the clear implications for unique review procedures. One of the few examples to describe it is a chapter containing reflections of two contributing authors in a book of 21 reviews on methodological topics compiled for the British National Health Service, Health Technology Assessment Program [ 1 ]. Notably, they observe that the differences between methods reviews and conventional quantitative systematic reviews, attributable to their differing content and purpose, have implications for defining what qualifies as systematic. While the authors describe general aspects of “systematicity” (including rigorous application of a methodical search, abstraction, and analysis), they also describe a high degree of variation within the category of methods reviews itself and so offer little in the way of concrete guidance. In this paper, we present tentative concrete guidance, in the form of a preliminary set of proposed principles and optional strategies, for a rigorous systematic approach to reviewing and evaluating the literature on quantitative or qualitative methods topics. For purposes of this article, we have used the term systematic methods overview to emphasize the notion of a systematic approach to such reviews.

The conventional focus of rigorous literature reviews (i.e., review types for which systematic methods have been codified, including the various approaches to quantitative systematic reviews [ 2 – 4 ], and the numerous forms of qualitative and mixed methods literature synthesis [ 5 – 10 ]) is to synthesize empirical research findings from multiple studies. By contrast, the focus of overviews of methods, including the systematic approach we advocate, is to synthesize guidance on methods topics. The literature consulted for such reviews may include the methods literature, methods-relevant sections of empirical research reports, or both. Thus, this paper adds to previous work published in this journal—namely, recent preliminary guidance for conducting reviews of theory [ 11 ]—that has extended the application of systematic review methods to novel review types that are concerned with subject matter other than empirical research findings.

Published examples of methods overviews illustrate the varying objectives they can have. One objective is to establish methodological standards for appraisal purposes. For example, reviews of existing quality appraisal standards have been used to propose universal standards for appraising the quality of primary qualitative research [ 12 ] or evaluating qualitative research reports [ 13 ]. A second objective is to survey the methods-relevant sections of empirical research reports to establish current practices in methods use and reporting, which Moher and colleagues [ 14 ] recommend as a means for establishing the needs to be addressed in reporting guidelines (see, for example [ 15 , 16 ]). A third objective for a methods review is to offer clarity and enhance collective understanding regarding a specific methods topic that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness within the available methods literature. An example of this is an overview whose objective was to review the inconsistent definitions of intention-to-treat analysis (the methodologically preferred approach to analyzing randomized controlled trial data) offered in the methods literature and to propose a solution for improving conceptual clarity [ 17 ]. Such reviews are warranted because students and researchers who must learn or apply research methods typically lack the time to systematically search, retrieve, review, and compare the available literature to develop a thorough and critical sense of the varied approaches regarding certain controversial or ambiguous methods topics.

While systematic methods overviews , as a review type, include both reviews of the methods literature and reviews of methods-relevant sections from empirical study reports, the guidance provided here is primarily applicable to reviews of the methods literature since it was derived from the experience of conducting such a review [ 18 ], described below. To our knowledge, there are no well-developed proposals on how to rigorously conduct such reviews. Such guidance would have the potential to improve the thoroughness and credibility of critical evaluations of the methods literature, which could increase their utility as a tool for generating understandings that advance research methods, both qualitative and quantitative. Our aim in this paper is thus to initiate discussion about what might constitute a rigorous approach to systematic methods overviews. While we hope to promote rigor in the conduct of systematic methods overviews wherever possible, we do not wish to suggest that all methods overviews need be conducted to the same standard. Rather, we believe that the level of rigor may need to be tailored pragmatically to the specific review objectives, which may not always justify the resource requirements of an intensive review process.

The example systematic methods overview on sampling in qualitative research

The principles and strategies we propose in this paper are derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research [ 18 ]. The main objective of that methods overview was to bring clarity to, and deepen understanding of, the prominent concepts related to sampling in qualitative research (purposeful sampling strategies, saturation, etc.). Specifically, we interpreted the available guidance, commenting on areas lacking clarity, consistency, or comprehensiveness (without proposing any recommendations on how to do sampling). This was achieved by a comparative and critical analysis of publications representing the most influential (i.e., highly cited) guidance across several methodological traditions in qualitative research.

The specific methods and procedures for the overview on sampling [ 18 ] from which our proposals are derived were developed both after soliciting initial input from local experts in qualitative research and an expert health librarian (KAM) and through ongoing careful deliberation throughout the review process. To summarize, in that review, we employed a transparent and rigorous approach to search the methods literature, selected publications for inclusion according to a purposeful and iterative process, abstracted textual data using structured abstraction forms, and analyzed (synthesized) the data using a systematic multi-step approach featuring abstraction of text, summary of information in matrices, and analytic comparisons.

For this article, we reflected on both the problems and challenges encountered at different stages of the review and our means for selecting justifiable procedures to deal with them. Several principles were then derived by considering the generic nature of these problems, while the generalizable aspects of the procedures used to address them formed the basis of optional strategies. Further details of the specific methods and procedures used in the overview on qualitative sampling are provided below to illustrate both the types of objectives and challenges that reviewers will likely need to consider and our approach to implementing each of the principles and strategies.

Organization of the guidance into principles and strategies

For the purposes of this article, principles are general statements outlining what we propose are important aims or considerations within a particular review process, given the unique objectives or challenges to be overcome with this type of review. These statements follow the general format, “considering the objective or challenge of X, we propose Y to be an important aim or consideration.” Strategies are optional and flexible approaches for implementing the previous principle outlined. Thus, generic challenges give rise to principles, which in turn give rise to strategies.

We organize the principles and strategies below into three sections corresponding to processes characteristic of most systematic literature synthesis approaches: literature identification and selection; data abstraction from the publications selected for inclusion; and analysis, including critical appraisal and synthesis of the abstracted data. Within each section, we also describe the specific methodological decisions and procedures used in the overview on sampling in qualitative research [ 18 ] to illustrate how the principles and strategies for each review process were applied and implemented in a specific case. We expect this guidance and accompanying illustrations will be useful for anyone considering engaging in a methods overview, particularly those who may be familiar with conventional systematic review methods but may not yet appreciate some of the challenges specific to reviewing the methods literature.

Results and discussion

Literature identification and selection

The identification and selection process includes search and retrieval of publications and the development and application of inclusion and exclusion criteria to select the publications that will be abstracted and analyzed in the final review. Literature identification and selection for overviews of the methods literature is challenging and potentially more resource-intensive than for most reviews of empirical research. This is true for several reasons that we describe below, alongside discussion of the potential solutions. Additionally, we suggest in this section how the selection procedures can be chosen to match the specific analytic approach used in methods overviews.

Delimiting a manageable set of publications

One aspect of methods overviews that can make identification and selection challenging is the fact that the universe of literature containing potentially relevant information regarding most methods-related topics is expansive, and often unmanageably so. Reviewers are faced with two large categories of literature: the methods literature, where the possible publication types include journal articles, books, and book chapters; and the methods-relevant sections of empirical study reports, where the possible publication types include journal articles, monographs, books, theses, and conference proceedings. In our systematic overview of sampling in qualitative research, exhaustively searching (including retrieval and first-pass screening) all publication types across both categories of literature for information on a single methods-related topic was too burdensome to be feasible. The following proposed principle follows from the need to delimit a manageable set of literature for the review.

Principle #1:

Considering the broad universe of potentially relevant literature, we propose that an important objective early in the identification and selection stage is to delimit a manageable set of methods-relevant publications in accordance with the objectives of the methods overview.

Strategy #1:

To limit the set of methods-relevant publications that must be managed in the selection process, reviewers have the option to initially review only the methods literature, and exclude the methods-relevant sections of empirical study reports, provided this aligns with the review’s particular objectives.

We propose that reviewers are justified in choosing to select only the methods literature when the objective is to map out the range of recognized concepts relevant to a methods topic, to summarize the most authoritative or influential definitions or meanings for methods-related concepts, or to demonstrate a problematic lack of clarity regarding a widely established methods-related concept and potentially make recommendations for a preferred approach to the methods topic in question. For example, in the case of the methods overview on sampling [ 18 ], the primary aim was to define areas lacking in clarity for multiple widely established sampling-related topics. In the review on intention-to-treat in the context of missing outcome data [ 17 ], the authors identified a lack of clarity based on multiple inconsistent definitions in the literature and went on to recommend separating the issue of how to handle missing outcome data from the issue of whether an intention-to-treat analysis can be claimed.

In contrast to strategy #1, it may be appropriate to select the methods-relevant sections of empirical study reports when the objective is to illustrate how a methods concept is operationalized in research practice or reported by authors. For example, one could review all the publications in 2 years’ worth of issues of five high-impact field-related journals to answer questions about how researchers describe implementing a particular method or approach, or to quantify how consistently they define or report using it. Such reviews are often used to highlight gaps in the reporting practices regarding specific methods, which may be used to justify items to address in reporting guidelines (for example, [ 14 – 16 ]).

It is worth recognizing that other authors have advocated positions broader than ours regarding the scope of literature to be considered in a review. Suri [ 10 ] (who, like us, emphasizes how different sampling strategies are suitable for different literature synthesis objectives) has, for example, described a two-stage literature sampling procedure (pp. 96–97). First, reviewers conduct a broad overview of the field; for reviews of methods topics, this would entail an initial review of the research methods literature. This is followed by a second, more focused stage in which practical examples are purposefully selected; for methods reviews, this would involve sampling the empirical literature to illustrate key themes and variations. While this approach is attractive in its capacity to generate more in-depth and interpretive analytic findings, some reviewers may consider the second stage too resource-intensive to include, no matter how selective the purposeful sampling. In the overview on sampling, where we stopped after the first stage [ 18 ], we discussed our selective focus on the methods literature as a limitation that left opportunities for further analysis of the literature. We explicitly recommended, for example, that theoretical sampling was a topic for which a future review of the methods sections of empirical reports was justified to answer specific questions identified in the primary review.

Ultimately, reviewers must make pragmatic decisions that balance resource considerations, combined with informed predictions about the depth and complexity of literature available on their topic, with the stated objectives of their review. The remaining principles and strategies apply primarily to overviews that include the methods literature, although some aspects may be relevant to reviews that include empirical study reports.

Searching beyond standard bibliographic databases

An important reality affecting identification and selection in overviews of the methods literature is the increased likelihood that relevant publications will be located in sources other than journal articles (which is usually not the case for overviews of empirical research, where journal articles generally represent the primary publication type). In the overview on sampling [ 18 ], out of 41 full-text publications retrieved and reviewed, only 4 were journal articles, while 37 were books or book chapters. Since many books and book chapters were not available electronically, their full text had to be physically retrieved in hardcopy, while 11 publications were retrievable only through interlibrary loan or purchase request. The tasks associated with such retrieval are substantially more time-consuming than electronic retrieval. Because a substantial proportion of methods-related guidance may be located in publication types that are less comprehensively indexed in standard bibliographic databases, identification and retrieval become complicated processes.

Principle #2:

Considering that important sources of methods guidance can be located in non-journal publication types (e.g., books, book chapters) that tend to be poorly indexed in standard bibliographic databases, it is important to consider alternative search methods for identifying relevant publications to be further screened for inclusion.

Strategy #2:

To identify books, book chapters, and other non-journal publication types not thoroughly indexed in standard bibliographic databases, reviewers may choose to consult one or more of the following less standard sources: Google Scholar, publisher web sites, or expert opinion.

In the case of the overview on sampling in qualitative research [ 18 ], Google Scholar had two advantages over standard bibliographic databases: it indexes and returns records of books and book chapters likely to contain guidance on qualitative research methods topics, and it has been validated as providing higher citation counts than ISI Web of Science (a producer of numerous bibliographic databases accessible through institutional subscription) for several non-biomedical disciplines, including the social sciences, where qualitative research methods are prominently used [ 19 – 21 ]. While we identified numerous useful publications by consulting experts, the author publication lists generated through Google Scholar searches were uniquely useful for identifying more recent editions of methods books identified by experts.
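Publication lists of this kind can also be assembled programmatically. As a minimal, hedged sketch (not part of the original review's procedures), the following Python snippet uses the third-party scholarly package, an unofficial Google Scholar scraper whose API and rate limits may change; the author name is purely illustrative:

```python
# Sketch: list an author's publications (including books and book chapters)
# via Google Scholar, assuming the third-party "scholarly" package
# (pip install scholarly). Google Scholar may throttle automated queries.
from scholarly import scholarly

# Illustrative author name; any influential methods author could be substituted.
query = scholarly.search_author("John W. Creswell")
author = scholarly.fill(next(query))  # retrieve the full author record

# Scanning such a list can help identify recent editions of methods books.
for pub in author["publications"][:20]:
    bib = pub["bib"]
    print(bib.get("title"), "-", bib.get("pub_year", "n.d."))
```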

Searching without relevant metadata

Determining what publications to select for inclusion in the overview on sampling [ 18 ] could only rarely be accomplished by reviewing the publication’s metadata. This was because for the many books and other non-journal type publications we identified as possibly relevant, the potential content of interest would be located in only a subsection of the publication. In this common scenario for reviews of the methods literature (as opposed to methods overviews that include empirical study reports), reviewers will often be unable to employ standard title, abstract, and keyword database searching or screening as a means for selecting publications.

Principle #3:

Considering that the presence of information about the topic of interest may not be indicated in the metadata for books and similar publication types, it is important to consider other means of identifying potentially useful publications for further screening.

Strategy #3:

One approach to identifying potentially useful books and similar publication types is to consider what classes of such publications (e.g., all methods manuals for a certain research approach) are likely to contain relevant content, then identify, retrieve, and review the full text of corresponding publications to determine whether they contain information on the topic of interest.

In the example of the overview on sampling in qualitative research [ 18 ], the topic of interest (sampling) was one of numerous topics covered in the general qualitative research methods manuals. Consequently, examples from this class of publications first had to be identified for retrieval according to non-keyword-dependent criteria. Thus, all methods manuals within the three research traditions reviewed (grounded theory, phenomenology, and case study) that might contain discussion of sampling were sought through Google Scholar and expert opinion, their full text obtained, and hand-searched for relevant content to determine eligibility. We used tables of contents and index sections of books to aid this hand searching.
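For reviewers who wish to keep an auditable record of this class-based screening, a simple log can capture how each candidate manual was identified, whether its full text was obtained, and what the hand search found. The sketch below is illustrative only; the titles, fields, and decisions are assumptions, not data from the sampling overview:

```python
# Sketch of a screening log for class-based identification (strategy #3):
# candidate methods manuals are tracked from identification through hand search.
candidates = [
    {"title": "Grounded theory manual A", "tradition": "grounded theory",
     "identified_via": "expert opinion", "full_text_obtained": True,
     "hand_search_found_topic_content": True},
    {"title": "Phenomenology manual B", "tradition": "phenomenology",
     "identified_via": "Google Scholar", "full_text_obtained": True,
     "hand_search_found_topic_content": False},
]

# A manual is eligible only if its full text was hand-searched and found
# to contain content on the topic of interest (here, sampling).
eligible = [c["title"] for c in candidates
            if c["full_text_obtained"] and c["hand_search_found_topic_content"]]
print(eligible)  # ['Grounded theory manual A']
```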

Purposefully selecting literature on conceptual grounds

A final consideration in methods overviews relates to the type of analysis used to generate the review findings. Unlike quantitative systematic reviews where reviewers aim for accurate or unbiased quantitative estimates—something that requires identifying and selecting the literature exhaustively to obtain all relevant data available (i.e., a complete sample)—in methods overviews, reviewers must describe and interpret the relevant literature in qualitative terms to achieve review objectives. In other words, the aim in methods overviews is to seek coverage of the qualitative concepts relevant to the methods topic at hand. For example, in the overview of sampling in qualitative research [ 18 ], achieving review objectives entailed providing conceptual coverage of eight sampling-related topics that emerged as key domains. The following principle recognizes that literature sampling should therefore support generating qualitative conceptual data as the input to analysis.

Principle #4:

Since the analytic findings of a systematic methods overview are generated through qualitative description and interpretation of the literature on a specified topic, selection of the literature should be guided by a purposeful strategy designed to achieve adequate conceptual coverage (i.e., representing an appropriate degree of variation in relevant ideas) of the topic according to objectives of the review.

Strategy #4:

One strategy for choosing the purposeful approach to use in selecting the literature according to the review objectives is to consider whether those objectives imply exploring concepts either at a broad overview level, in which case combining maximum variation selection with a strategy that limits yield (e.g., critical case, politically important, or sampling for influence—described below) may be appropriate; or in depth, in which case purposeful approaches aimed at revealing innovative cases will likely be necessary.

In the methods overview on sampling, the implied scope was broad since we set out to review publications on sampling across three divergent qualitative research traditions—grounded theory, phenomenology, and case study—to facilitate making informative conceptual comparisons. Such an approach would be analogous to maximum variation sampling.

At the same time, the purpose of that review was to critically interrogate the clarity, consistency, and comprehensiveness of literature from these traditions that was “most likely to have widely influenced students’ and researchers’ ideas about sampling” (p. 1774) [ 18 ]. In other words, we explicitly set out to review and critique the most established and influential (and therefore dominant) literature, since this represents a common basis of knowledge among students and researchers seeking understanding or practical guidance on sampling in qualitative research. To achieve this objective, we purposefully sampled publications according to the criterion of influence, which we operationalized as how often an author or publication has been referenced in print or informal discourse. This second sampling approach also limited the literature we needed to consider within our broad-scope review to a manageable amount.

To operationalize this strategy of sampling for influence, we sought to identify both the most influential authors within a qualitative research tradition (all of whose citations were subsequently screened) and the most influential publications on the topic of interest by non-influential authors. This involved a flexible approach that combined multiple indicators of influence to avoid the dilemma that any single indicator might provide inadequate coverage. These indicators included bibliometric data (h-index for author influence [ 22 ]; number of cites for publication influence), expert opinion, and cross-references in the literature (i.e., snowball sampling). As a final selection criterion, a publication was included only if it made an original contribution in terms of novel guidance regarding sampling or a related concept; thus, purely secondary sources were excluded. Publish or Perish software (Anne-Wil Harzing; available at http://www.harzing.com/resources/publish-or-perish) was used to generate bibliometric data via the Google Scholar database. Figure 1 illustrates how identification and selection in the methods overview on sampling was a multi-faceted and iterative process. The authors selected as influential, and the publications selected for inclusion or exclusion, are listed in Additional file 1 (Matrices 1, 2a, 2b).
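The bibliometric side of sampling for influence is straightforward to reproduce. The sketch below computes an author's h-index from per-publication citation counts (as tools like Publish or Perish report them) and applies an illustrative threshold; the data and cut-off are assumptions for demonstration, and in the actual review bibliometric indicators were combined with expert opinion and snowball sampling rather than used mechanically:

```python
# Sketch of the bibliometric component of "sampling for influence".
def h_index(citation_counts):
    """h-index: the largest h such that h publications each have >= h citations [22]."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical per-publication citation counts for candidate authors.
authors = {
    "Author A": [120, 85, 60, 33, 12, 4],
    "Author B": [15, 9, 7, 2],
}

H_THRESHOLD = 5  # illustrative cut-off, not a value used in the review
influential = {name: h_index(cites) for name, cites in authors.items()
               if h_index(cites) >= H_THRESHOLD}
print(influential)  # {'Author A': 5}
```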

Fig. 1 Literature identification and selection process used in the methods overview on sampling [ 18 ]

In summary, the strategies of seeking maximum variation and sampling for influence were employed in the sampling overview to meet the specific review objectives described. Reviewers will need to consider the full range of purposeful literature sampling approaches at their disposal in deciding what best matches the specific aims of their own reviews. Suri [ 10 ] has recently retooled Patton’s well-known typology of purposeful sampling strategies (originally intended for primary research) for application to literature synthesis, providing a useful resource in this respect.

Data abstraction

The purpose of data abstraction in rigorous literature reviews is to locate and record all data relevant to the topic of interest from the full text of included publications, making them available for subsequent analysis. Conventionally, a data abstraction form—consisting of numerous distinct conceptually defined fields to which corresponding information from the source publication is recorded—is developed and employed. There are several challenges, however, to the processes of developing the abstraction form and abstracting the data itself when conducting methods overviews, which we address here. Some of these problems and their solutions may be familiar to those who have conducted qualitative literature syntheses, which are similarly conceptual.

Iteratively defining conceptual information to abstract

In the overview on sampling [ 18 ], while we surveyed multiple sources beforehand to develop a list of concepts relevant for abstraction (e.g., purposeful sampling strategies, saturation, sample size), there was no way for us to anticipate some concepts prior to encountering them in the review process. Indeed, in many cases, reviewers are unable to determine the complete set of methods-related concepts that will be the focus of the final review a priori without having systematically reviewed the publications to be included. Thus, defining what information to abstract beforehand may not be feasible.

Principle #5:

Considering the potential impracticality of defining a complete set of relevant methods-related concepts from a body of literature one has not yet systematically read, selecting and defining fields for data abstraction must often be undertaken iteratively. Thus, concepts to be abstracted can be expected to grow and change as data abstraction proceeds.

Strategy #5:

Reviewers can develop an initial form or set of concepts for abstraction purposes according to standard methods (e.g., incorporating expert feedback, pilot testing) and remain attentive to the need to iteratively revise it as concepts are added or modified during the review. Reviewers should document revisions and return to re-abstract data from previously abstracted publications as the new data requirements are determined.

In the sampling overview [ 18 ], we developed and maintained the abstraction form in Microsoft Word. We derived the initial set of abstraction fields from our own knowledge of relevant sampling-related concepts, consultation with local experts, and reviewing a pilot sample of publications. Since the publications in this review included a large proportion of books, the abstraction process often began by flagging the broad sections within a publication containing topic-relevant information for detailed review to identify text to abstract. When reviewing flagged text, the reviewer occasionally encountered an unanticipated concept significant enough to warrant being added as a new field to the abstraction form. For example, a field was added to capture how authors described the timing of sampling decisions, whether before (a priori) or after (ongoing) starting data collection, or whether this was unclear. In these cases, we systematically documented the modification to the form and returned to previously abstracted publications to abstract any information that might be relevant to the new field.
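The bookkeeping implied by strategy #5 (documenting form revisions and queuing completed publications for re-abstraction) can be made explicit in code. This is a minimal sketch under assumed field names and publication labels; the actual review used a Microsoft Word form rather than software like this:

```python
# Sketch of an iteratively extended abstraction form: adding a field mid-review
# flags all previously abstracted publications for re-abstraction.
abstraction_fields = ["purposeful_sampling", "saturation", "sample_size"]
records = {}                 # publication id -> {field: abstracted text}
needs_reabstraction = set()  # publications to revisit after a form revision

def abstract(pub_id, data):
    """Record abstracted text for the current set of fields."""
    records[pub_id] = {f: data.get(f) for f in abstraction_fields}
    needs_reabstraction.discard(pub_id)

def add_field(new_field):
    """Document a form revision and queue completed publications for re-abstraction."""
    abstraction_fields.append(new_field)
    needs_reabstraction.update(records)

abstract("Publication-1", {"saturation": "...abstracted quote..."})
add_field("timing_of_sampling_decisions")  # the a priori vs. ongoing field
print(needs_reabstraction)  # {'Publication-1'}
```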

The logic of this strategy is analogous to the logic used in a form of research synthesis called best fit framework synthesis (BFFS) [ 23 – 25 ]. In that method, reviewers initially code evidence using an a priori framework they have selected. When evidence cannot be accommodated by the selected framework, reviewers then develop new themes or concepts from which they construct a new expanded framework. Both the strategy proposed and the BFFS approach to research synthesis are notable for their rigorous and transparent means to adapt a final set of concepts to the content under review.

Accounting for inconsistent terminology

An important complication affecting the abstraction process in methods overviews is that the language used by authors to describe methods-related concepts can easily vary across publications. For example, authors from different qualitative research traditions often use different terms for similar methods-related concepts. Furthermore, as we found in the sampling overview [ 18 ], there may be cases where no identifiable term, phrase, or label for a methods-related concept is used at all, and a description of it is given instead. This can make searching the text for relevant concepts based on keywords unreliable.

Principle #6:

Since accepted terms may not be used consistently to refer to methods concepts, it is necessary to rely on the definitions for concepts, rather than keywords, to identify relevant information in the publication to abstract.

Strategy #6:

An effective means to systematically identify relevant information is to develop, and iteratively adjust, written definitions for key concepts (corresponding to abstraction fields) that are consistent with, and as inclusive as possible of, the literature reviewed. Reviewers then seek information that matches these definitions (rather than keywords) when scanning a publication for relevant data to abstract.

In the abstraction process for the sampling overview [ 18 ], we noted several concepts of interest to the review for which abstraction by keyword was particularly problematic due to inconsistent terminology across publications: sampling, purposeful sampling, sampling strategy, and saturation (for examples, see Additional file 1, Matrices 3a, 3b, 4). We iteratively developed definitions for these concepts by abstracting text from publications that either provided an explicit definition or from which an implicit definition could be derived, and recorded this text in fields dedicated to the concept’s definition. Using a method of constant comparison, we used text from the definition fields to inform and modify a centrally maintained definition of the corresponding concept, optimizing its fit and inclusiveness with the literature reviewed. Table 1 shows, as an example, the final definition constructed in this way for one of the central concepts of the review, qualitative sampling.
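One way to picture this definition-building process is as a small registry that accumulates source excerpts and tracks revisions to the working definition. The sketch below is an illustrative data structure only; in the review itself, comparison and revision were interpretive human judgments, not automated operations:

```python
# Sketch of a registry supporting constant comparison of concept definitions
# (strategy #6): source excerpts accumulate, and the centrally maintained
# working definition is revised, with prior versions kept for transparency.
from dataclasses import dataclass, field

@dataclass
class ConceptDefinition:
    concept: str
    working_definition: str
    sources: list = field(default_factory=list)   # (publication, abstracted text)
    history: list = field(default_factory=list)   # superseded working definitions

    def compare_and_revise(self, publication, abstracted_text, revised=None):
        """Record a source definition; optionally revise the working definition."""
        self.sources.append((publication, abstracted_text))
        if revised and revised != self.working_definition:
            self.history.append(self.working_definition)
            self.working_definition = revised

# Illustrative use with placeholder text.
sampling = ConceptDefinition("qualitative sampling", "initial working definition")
sampling.compare_and_revise("Publication-2", "...implicit definition text...",
                            revised="working definition broadened for better fit")
print(sampling.working_definition, len(sampling.history))
```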

We applied iteratively developed definitions when making decisions about what specific text to abstract for an existing field, which allowed us to abstract concept-relevant data even if no recognized keyword was used. For example, this was the case for the sampling-related concept saturation, where the relevant text available for abstraction in one publication [ 26 ]—“to continue to collect data until nothing new was being observed or recorded, no matter how long that takes”—was not accompanied by any term or label whatsoever.

This comparative analytic strategy (and our approach to analysis more broadly as described in strategy #7, below) is analogous to the process of reciprocal translation—a technique first introduced for meta-ethnography by Noblit and Hare [ 27 ] that has since been recognized as a common element in a variety of qualitative metasynthesis approaches [ 28 ]. Reciprocal translation, taken broadly, involves making sense of a study’s findings in terms of the findings of the other studies included in the review. In practice, it has been operationalized in different ways. Melendez-Torres and colleagues developed a typology from their review of the metasynthesis literature, describing four overlapping categories of specific operations undertaken in reciprocal translation: visual representation, key paper integration, data reduction and thematic extraction, and line-by-line coding [ 28 ]. The approaches suggested in both strategies #6 and #7, with their emphasis on constant comparison, appear to fall within the line-by-line coding category.

Analysis

Generating credible and verifiable analytic interpretations

The analysis in a systematic methods overview must support its more general objective, which we suggested above is often to offer clarity and enhance collective understanding regarding a chosen methods topic. In our experience, this involves describing and interpreting the relevant literature in qualitative terms. Furthermore, any interpretative analysis required may entail reaching different levels of abstraction, depending on the more specific objectives of the review. For example, in the overview on sampling [ 18 ], we aimed to produce a comparative analysis of how multiple sampling-related topics were treated differently within and among different qualitative research traditions. To promote credibility of the review, however, not only should one seek a qualitative analytic approach that facilitates reaching varying levels of abstraction but that approach must also ensure that abstract interpretations are supported and justified by the source data and not solely the product of the analyst’s speculative thinking.

Principle #7:

Considering the qualitative nature of the analysis required in systematic methods overviews, it is important to select an analytic method whose interpretations can be verified as being consistent with the literature selected, regardless of the level of abstraction reached.

Strategy #7:

We suggest employing the constant comparative method of analysis [ 29 ] because it supports developing and verifying analytic links to the source data throughout progressively interpretive or abstract levels. In applying this method, we advise rigorously documenting how supportive quotes or references to the original texts are carried forward in the successive steps of analysis to allow for easy verification.

The analytic approach used in the methods overview on sampling [ 18 ] comprised four explicit steps, progressing in level of abstraction: data abstraction, matrices, narrative summaries, and final analytic conclusions (Fig. 2). While we have positioned data abstraction as the second stage of the generic review process (prior to analysis) above, we also considered it an initial step of analysis in the sampling overview for several reasons. First, it involved a process of constant comparison and iterative decision-making about the fields to add or define during development and modification of the abstraction form, through which we established the range of concepts to be addressed in the review. At the same time, abstraction involved continuous analytic decisions about what textual quotes (ranging in size from short phrases to numerous paragraphs) to record in the fields thus created. This constant comparative process was analogous to open coding, in which textual data from publications were compared to conceptual fields (equivalent to codes) or to other instances of previously abstracted data when constructing definitions to optimize their fit with the overall literature, as described in strategy #6. Finally, in the data abstraction step, we also recorded our first interpretive thoughts in dedicated fields, providing initial material for the more abstract analytic steps.

Fig. 2 Summary of progressive steps of analysis used in the methods overview on sampling [ 18 ]

In the second step of the analysis, we constructed topic-specific matrices, or tables, by copying relevant quotes from abstraction forms into the appropriate cells (for the complete set of analytic matrices developed in the sampling review, see Additional file 1, Matrices 3 to 10). Each matrix ranged from one to five pages; row headings, nested three deep, identified the methodological tradition, author, and publication, respectively; and column headings identified the concepts, which corresponded to abstraction fields. Matrices thus allowed us to make further comparisons across methodological traditions and between authors within a tradition. In the third step of analysis, we recorded our comparative observations as narrative summaries, in which we used illustrative quotes more sparingly. In the final step, we developed analytic conclusions, based on the narrative summaries, about the sampling-related concepts within each methodological tradition for which clarity, consistency, or comprehensiveness of the available guidance appeared to be lacking. Higher levels of analysis thus built logically on the lower levels, enabling us to verify analytic conclusions by tracing the support for claims back to the original text of the publications reviewed.
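The matrix-construction step lends itself to a tabular sketch. Assuming abstracted quotes are stored as one row per publication-concept pair, a pivot reproduces the row and column structure described above; the data below are placeholders, and pandas is an assumed dependency rather than a tool reported in the review:

```python
# Sketch of building a topic-specific matrix (step 2 of the analysis):
# abstracted quotes are pivoted so rows identify tradition/author/publication
# and columns identify concepts (the abstraction fields).
import pandas as pd

quotes = pd.DataFrame([
    {"tradition": "grounded theory", "author": "Author X", "publication": "Book-1",
     "concept": "saturation", "quote": "...abstracted text..."},
    {"tradition": "case study", "author": "Author Y", "publication": "Book-2",
     "concept": "saturation", "quote": "...abstracted text..."},
])

matrix = quotes.pivot_table(index=["tradition", "author", "publication"],
                            columns="concept", values="quote", aggfunc="first")
print(matrix)
```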

Integrative versus interpretive methods overviews

The analytic product of systematic methods overviews is comparable to that of qualitative evidence syntheses, since both involve describing and interpreting the relevant literature in qualitative terms. Most qualitative synthesis approaches strive to produce new conceptual understandings that vary in level of interpretation. Dixon-Woods and colleagues [ 30 ] elaborate on a useful distinction, originating from Noblit and Hare [ 27 ], between integrative and interpretive reviews. Integrative reviews focus on summarizing available primary data and involve using largely secure and well-defined concepts to do so; definitions are used from an early stage to specify categories for abstraction (or coding) of data, which in turn supports their aggregation; developing or specifying new concepts is not their primary focus, although they may achieve some theoretical or interpretive functions. For interpretive reviews, meanwhile, the main focus is to develop new concepts and theories that integrate them, with the implication that the concepts developed become fully defined only towards the end of the analysis. These two forms are not completely distinct, and “every integrative synthesis will include elements of interpretation, and every interpretive synthesis will include elements of aggregation of data” [ 30 ].

The example methods overview on sampling [ 18 ] could be classified as predominantly integrative because its primary goal was to aggregate influential authors’ ideas on sampling-related concepts; there were also, however, elements of interpretive synthesis since it aimed to develop new ideas about where clarity in guidance on certain sampling-related topics is lacking, and definitions for some concepts were flexible and not fixed until late in the review. We suggest that most systematic methods overviews will be classifiable as predominantly integrative (aggregative). Nevertheless, more highly interpretive methods overviews are also quite possible—for example, when the review objective is to provide a highly critical analysis for the purpose of generating new methodological guidance. In such cases, reviewers may need to sample more deeply (see strategy #4), specifically by selecting empirical research reports (i.e., to go beyond dominant or influential ideas in the methods literature) that are likely to feature innovations or instructive lessons in employing a given method.

Conclusions

In this paper, we have outlined tentative guidance, in the form of seven principles and strategies, on how to conduct systematic methods overviews, a review type in which methods-relevant literature is systematically analyzed with the aim of offering clarity and enhancing collective understanding regarding a specific methods topic. Our proposals include strategies for delimiting the set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology, and generating credible and verifiable analytic interpretations. We hope the suggestions proposed will be useful to others undertaking reviews on methods topics in the future.

As far as we are aware, this is the first published source of concrete guidance for conducting this type of review. It is important to note that our primary objective was to initiate methodological discussion by stimulating reflection on what rigorous methods for this type of review should look like, leaving the development of more complete guidance to future work. While derived from the experience of reviewing a single qualitative methods topic, we believe the principles and strategies provided are generalizable to overviews of both qualitative and quantitative methods topics alike. However, it is expected that additional challenges and insights for conducting such reviews have yet to be defined. Thus, we propose that next steps for developing more definitive guidance should involve an attempt to collect and integrate other reviewers’ perspectives and experiences in conducting systematic methods overviews on a broad range of qualitative and quantitative methods topics. Formalized guidance and standards would improve the quality of future methods overviews, something we believe has important implications for advancing qualitative and quantitative methodology. When undertaken to a high standard, rigorous critical evaluations of the available methods guidance have significant potential to make implicit controversies explicit, and improve the clarity and precision of our understandings of problematic qualitative or quantitative methods issues.

A review process central to most types of rigorous reviews of empirical studies, which we did not explicitly address in a separate review step above, is quality appraisal. The reason we have not treated this as a separate step stems from the different objectives of the primary publications included in overviews of the methods literature (i.e., providing methodological guidance) compared to the primary publications included in the other established review types (i.e., reporting findings from single empirical studies). This is not to say that appraising quality of the methods literature is not an important concern for systematic methods overviews. Rather, appraisal is much more integral to (and difficult to separate from) the analysis step, in which we advocate appraising clarity, consistency, and comprehensiveness—the quality appraisal criteria that we suggest are appropriate for the methods literature. As a second important difference regarding appraisal, we currently advocate appraising the aforementioned aspects at the level of the literature in aggregate rather than at the level of individual publications. One reason for this is that methods guidance from individual publications generally builds on previous literature, and thus we feel that ahistorical judgments about comprehensiveness of single publications lack relevance and utility. Additionally, while different methods authors may express themselves less clearly than others, their guidance can nonetheless be highly influential and useful, and should therefore not be downgraded or ignored based on considerations of clarity—which raises questions about the alternative uses that quality appraisals of individual publications might have. Finally, legitimate variability in the perspectives that methods authors wish to emphasize, and the levels of generality at which they write about methods, makes critiquing individual publications based on the criterion of clarity a complex and potentially problematic endeavor that is beyond the scope of this paper to address. By appraising the current state of the literature at a holistic level, reviewers stand to identify important gaps in understanding that represent valuable opportunities for further methodological development.

To summarize, the principles and strategies provided here may be useful to those seeking to undertake their own systematic methods overview. Additional work is needed, however, to establish guidance that is comprehensive by comparing the experiences from conducting a variety of methods overviews on a range of methods topics. Efforts that further advance standards for systematic methods overviews have the potential to promote high-quality critical evaluations that produce conceptually clear and unified understandings of problematic methods topics, thereby accelerating the advance of research methodology.

References

1. Hutton JL, Ashcroft R. What does “systematic” mean for reviews of methods? In: Black N, Brazier J, Fitzpatrick R, Reeves B, editors. Health services research methods: a guide to best practice. London: BMJ Publishing Group; 1998. p. 249–54.

2. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration; 2011.

3. Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination; 2009.

4. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.

5. Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9(1):59.

6. Kastner M, Tricco AC, Soobiah C, Lillie E, Perrier L, Horsley T, Welch V, Cogo E, Antony J, Straus SE. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol. 2012;12:114.

7. Booth A, Noyes J, Flemming K, Gerhardus A. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessments of complex interventions. Integrate-HTA; 2016.

8. Booth A, Sutton A, Papaioannou D. Systematic approaches to a successful literature review. 2nd ed. London: Sage; 2016.

9. Hannes K, Lockwood C. Synthesizing qualitative research: choosing the right approach. Chichester: Wiley-Blackwell; 2012.

10. Suri H. Towards methodologically inclusive research syntheses: expanding possibilities. New York: Routledge; 2014.

11. Campbell M, Egan M, Lorenc T, Bond L, Popham F, Fenton C, Benzeval M. Considering methodological options for reviews of theory: illustrated by a review of theories linking income and health. Syst Rev. 2014;3(1):1–11.

12. Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.

13. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

14. Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.

15. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.

16. Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005;365(9465):1159–62.

17. Alshurafa M, Briel M, Akl EA, Haines T, Moayyedi P, Gentles SJ, Rios L, Tran C, Bhatnagar N, Lamontagne F, et al. Inconsistent definitions for intention-to-treat in relation to missing outcome data: systematic review of the methods literature. PLoS One. 2012;7(11):e49163.

18. Gentles SJ, Charles C, Ploeg J, McKibbon KA. Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep. 2015;20(11):1772–89.

19. Harzing A-W, Alakangas S. Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison. Scientometrics. 2016;106(2):787–804.

20. Harzing A-WK, van der Wal R. Google Scholar as a new source for citation analysis. Ethics Sci Environ Polit. 2008;8(1):61–73.

21. Kousha K, Thelwall M. Google Scholar citations and Google Web/URL citations: a multi-discipline exploratory analysis. J Assoc Inf Sci Technol. 2007;58(7):1055–65.

22. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72.

23. Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Qual Saf. 2015;24(11):700–8.

24. Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13(1):37.

25. Carroll C, Booth A, Cooper K. A worked example of “best fit” framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11(1):29.

26. Cohen MZ, Kahn DL, Steeves DL. Hermeneutic phenomenological research: a practical guide for nurse researchers. Thousand Oaks: Sage; 2000.

27. Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park: Sage; 1988.

28. Melendez-Torres GJ, Grant S, Bonell C. A systematic review and critical appraisal of qualitative metasynthetic practice in public health to develop a taxonomy of operations of reciprocal translation. Res Synth Methods. 2015;6(4):357–71.

29. Glaser BG, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.

30. Dixon-Woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative approaches to qualitative and quantitative evidence. UK National Health Service; 2004. p. 1–44.

Acknowledgements

Not applicable.

Funding

There was no funding for this work.

Availability of data and materials

The systematic methods overview used as a worked example in this article (Gentles SJ, Charles C, Ploeg J, McKibbon KA: Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep 2015, 20(11):1772–89) is available from http://nsuworks.nova.edu/tqr/vol20/iss11/5.

Authors’ contributions

SJG wrote the first draft of this article, with CC contributing to drafting. All authors contributed to revising the manuscript. All authors except CC (deceased) approved the final draft. SJG, CC, KAM, and JP were involved in developing methods for the systematic methods overview on sampling.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Not applicable.

Authors and Affiliations

Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada

Stephen J. Gentles, Cathy Charles & K. Ann McKibbon

Faculty of Social Work, University of Calgary, Alberta, Canada

David B. Nicholas

School of Nursing, McMaster University, Hamilton, Ontario, Canada

Jenny Ploeg

CanChild Centre for Childhood Disability Research, McMaster University, 1400 Main Street West, IAHS 408, Hamilton, ON, L8S 1C7, Canada

Stephen J. Gentles

Corresponding author

Correspondence to Stephen J. Gentles.

Additional information

Cathy Charles is deceased

Additional file

Additional file 1: Analysis_matrices. (DOC 330 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Gentles, S.J., Charles, C., Nicholas, D.B. et al. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev 5 , 172 (2016). https://doi.org/10.1186/s13643-016-0343-0

Received: 06 June 2016

Accepted: 14 September 2016

Published: 11 October 2016

DOI: https://doi.org/10.1186/s13643-016-0343-0

Keywords

  • Systematic review
  • Literature selection
  • Research methods
  • Research methodology
  • Overview of methods
  • Systematic methods overview
  • Review methods

Criteria for Good Qualitative Research: A Comprehensive Review

  • Regular Article
  • Open access
  • Published: 18 September 2021
  • Volume 31 , pages 679–689, ( 2022 )

Cite this article


  • Drishti Yadav   ORCID: orcid.org/0000-0002-2974-0323 1  

75k Accesses

27 Citations

71 Altmetric


This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then, references of relevant articles were surveyed to find noteworthy, distinct, and well-defined pointers to good qualitative research. This review presents an investigative assessment of the pivotal features in qualitative research that can permit readers to pass judgment on its quality and to judge it as good research when objectively and adequately applied. Overall, this review underlines the crux of qualitative research and accentuates the necessity to evaluate such research by the very tenets of its being. It also offers some prospects and recommendations to improve the quality of qualitative research. Based on the findings of this review, it is concluded that quality criteria are the product of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single and specific set of quality criteria is neither feasible nor anticipated. Since qualitative research is not a cohesive discipline, researchers need to educate and familiarize themselves with applicable norms and decisive factors to evaluate qualitative research from within its theoretical and methodological framework of origin.


Introduction

“… It is important to regularly dialogue about what makes for good qualitative research” (Tracy, 2010 , p. 837)

What represents good qualitative research is highly debated. Qualitative research encompasses numerous methods established on diverse philosophical perspectives. Bryman et al. (2008, p. 262) suggest that “It is widely assumed that whereas quality criteria for quantitative research are well-known and widely agreed, this is not the case for qualitative research.” Hence, the question of how to evaluate the quality of qualitative research has been continuously debated. These debates on the assessment of qualitative research have taken place across many areas of science and technology. Examples include various areas of psychology: general psychology (Madill et al., 2000); counseling psychology (Morrow, 2005); and clinical psychology (Barker & Pistrang, 2005); and other disciplines of the social sciences: social policy (Bryman et al., 2008); health research (Sparkes, 2001); business and management research (Johnson et al., 2006); information systems (Klein & Myers, 1999); and environmental studies (Reid & Gough, 2000). These debates are driven by the view that the blanket application of criteria developed around the positivist paradigm is improper for qualitative research, and they reflect the wide range of philosophical backgrounds within which qualitative research is conducted (e.g., Sandberg, 2000; Schwandt, 1996). This methodological diversity has led to the formulation of different sets of criteria applicable to qualitative research.

Among qualitative researchers, the dilemma of deciding how to assess the quality of research is not a new phenomenon, especially where the traditional triad of objectivity, reliability, and validity (Spencer et al., 2004) is inadequate. Occasionally, the criteria of quantitative research are used to evaluate qualitative research (Cohen & Crabtree, 2008; Lather, 2004); indeed, Howe (2004) claims that the prevailing paradigm in educational research is scientifically based experimental research. Assumptions about the preeminence of quantitative research can weaken the worth and usefulness of qualitative research by neglecting the need to match the research paradigm, the epistemological stance of the researcher, and the choice of methodology to the purpose of the study. Lincoln and Guba (2000) caution researchers about this in “Paradigmatic controversies, contradictions, and emerging confluences.”

In general, qualitative research tends to come from a very different paradigmatic stance and intrinsically demands distinctive criteria for evaluating good research and for recognizing the varieties of research contributions that can be made. This review presents a series of evaluative criteria for qualitative researchers, arguing that the choice of criteria needs to be compatible with the unique nature of the research in question (its methodology, aims, and assumptions). It aims to assist researchers in identifying some of the indispensable features or markers of high-quality qualitative research. In a nutshell, the purpose of this systematic literature review is to analyze the existing knowledge on high-quality qualitative research and to verify the existence of research studies dealing with the critical assessment of qualitative research across diverse paradigmatic stances. In contrast to existing reviews, this review also suggests some critical directions to follow to improve the quality of qualitative research within different epistemological and ontological perspectives. It is also intended to provide guidelines for accelerating future developments and dialogues among qualitative researchers in the context of assessing qualitative research.

The rest of this review article is structured as follows: Sect. Methods describes the method followed for performing this review. Section Criteria for Evaluating Qualitative Studies provides a comprehensive description of the criteria for evaluating qualitative studies, followed by a summary of strategies to improve the quality of qualitative research in Sect. Improving Quality: Strategies. Section How to Assess the Quality of the Research Findings? details how to assess the quality of research findings. Some quality checklists (as tools to evaluate quality) are then discussed in Sect. Quality Checklists: Tools for Assessing the Quality. Finally, the review ends with concluding remarks in Sect. Conclusions, Future Directions and Outlook, which also presents some prospects for enhancing the quality and usefulness of qualitative research in the social and techno-scientific research community.

Methods

For this review, a comprehensive literature search was performed across several databases using generic search terms such as qualitative research and criteria. The following databases were chosen based on the high number of results they returned: IEEE Xplore, ScienceDirect, PubMed, Google Scholar, and Web of Science. The following keywords (and their combinations using the Boolean connectives OR/AND) were adopted for the literature search: qualitative research, criteria, quality, assessment, and validity. The synonyms for these keywords were collected and arranged in a logical structure (see Table 1). All publications in journals and conference proceedings from 1950 to 2021 were considered for the search. Articles identified from the references of papers found in the electronic search were also included. Because a large number of publications on qualitative research were retrieved during the initial screening, an inclusion criterion focusing on criteria for good qualitative research was built into the search string.

From the selected databases, the search retrieved a total of 765 publications. After duplicate records were removed, the remaining 426 publications were screened for relevance on the basis of title and abstract, using the inclusion and exclusion criteria in Table 2. Publications focusing on evaluation criteria for good qualitative research were included, whereas works that presented only theoretical concepts of qualitative research were excluded. After screening and eligibility assessment, 45 research articles that offered explicit criteria for evaluating the quality of qualitative research were found relevant to this review.

Figure 1 illustrates the complete review process as a PRISMA flow diagram. PRISMA (“preferred reporting items for systematic reviews and meta-analyses”) is employed in systematic reviews to improve the quality of reporting.

Figure 1. PRISMA flow diagram illustrating the search and inclusion process. N represents the number of records.

Criteria for Evaluating Qualitative Studies

Fundamental Criteria: General Research Quality

Various researchers have put forward criteria for evaluating qualitative research, which are summarized in Table 3. The criteria outlined in Table 4 capture further approaches to evaluating and assessing the quality of qualitative work; these entries are based on Tracy's “Eight big-tent criteria for excellent qualitative research” (Tracy, 2010). Tracy argues that high-quality qualitative work is marked by the worthiness, relevance, timeliness, significance, morality, and practicality of the research topic, and by the ethical stance of the research itself. Researchers have also suggested a series of questions as guiding principles for assessing the quality of a qualitative study (Mays & Pope, 2020). Nassaji (2020) argues that good qualitative research should be robust, well informed, and thoroughly documented.

Qualitative Research: Interpretive Paradigms

All qualitative researchers follow highly abstract principles that bring together beliefs about ontology, epistemology, and methodology. These beliefs govern how the researcher perceives and acts. The net that encompasses the researcher’s epistemological, ontological, and methodological premises is referred to as a paradigm, or an interpretive structure: a “basic set of beliefs that guides action” (Guba, 1990). Four major interpretive paradigms structure qualitative research: positivist and postpositivist; constructivist-interpretive; critical (Marxist, emancipatory); and feminist-poststructural. The complexity of these four abstract paradigms increases at the level of concrete, specific interpretive communities. Table 5 presents these paradigms and their assumptions, including their criteria for evaluating research and the typical form that an interpretive or theoretical statement assumes in each paradigm. Moreover, for evaluating qualitative research, quantitative conceptualizations of reliability and validity have been shown to be incompatible (Horsburgh, 2003). In addition, a series of questions has been put forward in the literature to assist a reviewer (who is proficient in qualitative methods) in the meticulous assessment and endorsement of qualitative research (Morse, 2003). Hammersley (2007) also suggests that guiding principles for qualitative research are advantageous but that methodological pluralism should not simply be accepted for all qualitative approaches. Seale (1999) likewise points out the significance of methodological awareness in research studies.

Table 5 reflects that criteria for assessing the quality of qualitative research are the product of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single set of quality criteria is neither possible nor desirable. Hence, researchers must be reflexive about the criteria they use in the various roles they play within their research community.

Improving Quality: Strategies

Another critical question is how qualitative researchers can ensure that the abovementioned quality criteria are met. Lincoln and Guba (1986) delineated several strategies to strengthen each criterion of trustworthiness. Other researchers (Merriam & Tisdell, 2016; Shenton, 2004) have presented similar strategies. A brief description of these strategies is shown in Table 6.

It is worth mentioning that generalizability is also an integral part of qualitative research (Hays & McKibben, 2021). In general, generalizability in qualitative research concerns inducing and comprehending knowledge so that the interpretive components of an underlying context can be synthesized. Table 7 summarizes the main metasynthesis steps required to establish generalizability in qualitative research.

Figure 2 reflects the crucial components of a conceptual framework and their contribution to decisions regarding research design, implementation, and application of results to future thinking, study, and practice (Johnson et al., 2020). The synergy and interrelationship of these components signify their contribution at the different stages of a qualitative research study.

Figure 2. Essential elements of a conceptual framework.

In a nutshell, to assess the rationale of a study, its conceptual framework, and its research question(s), quality criteria must take account of the following: a lucid context for the problem statement in the introduction; well-articulated research problems and questions; a precise conceptual framework; a distinct research purpose; and clear presentation and investigation of the paradigms. Attending to these criteria improves the quality of qualitative research.

How to Assess the Quality of the Research Findings?

The inclusion of quotes or similar research data enhances confirmability in the write-up of the findings. Expressions such as “80% of all respondents agreed that” or “only one of the interviewees mentioned that” may also be used to quantify qualitative findings (Stenfors et al., 2020), although persuasive arguments for why such quantification may not strengthen the research have also been made (Monrouxe & Rees, 2020). Further, the Discussion and Conclusion sections of an article also serve as robust markers of high-quality qualitative research, as elucidated in Table 8.

Quality Checklists: Tools for Assessing the Quality

Numerous checklists are available to speed up the assessment of the quality of qualitative research. However, if used uncritically and without regard for the research context, these checklists may be counterproductive. Such lists and guiding principles may assist in pinpointing the markers of high-quality qualitative research; however, considering the enormous variation in authors’ theoretical and philosophical contexts, heavy reliance on checklists may say little about whether the findings can be applied in your setting. A combination of such checklists might be appropriate for novice researchers. Some of these checklists are listed below, followed by a brief sketch of how a checklist can be applied in practice:

The most commonly used framework is Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007 ). This framework is recommended by some journals to be followed by the authors during article submission.

Standards for Reporting Qualitative Research (SRQR) is another checklist that has been created particularly for medical education (O’Brien et al., 2014 ).

Also, Tracy ( 2010 ) and Critical Appraisal Skills Programme (CASP, 2021 ) offer criteria for qualitative research relevant across methods and approaches.

Further, researchers have also outlined different criteria as hallmarks of high-quality qualitative research. For instance, the “Road Trip Checklist” (Epp & Otnes, 2021 ) provides a quick reference to specific questions to address different elements of high-quality qualitative research.

Conclusions, Future Directions, and Outlook

This work presents a broad review of the criteria for good qualitative research, together with an exploratory analysis of the essential elements in qualitative research that can enable readers of qualitative work to judge it as good research when objectively and adequately applied. Some of the essential markers that indicate high-quality qualitative research have been highlighted; I scope them narrowly to achieving rigor in qualitative research and note that they do not completely cover the broader considerations necessary for high-quality research. This review points out that a universal, one-size-fits-all guideline for evaluating the quality of qualitative research does not exist; in other words, there is no single set of common guidelines shared among qualitative researchers. Rather, each qualitative approach should be treated on its own terms, on account of its distinctive features and its epistemological and disciplinary position. Because the worth of qualitative research is sensitive to the specific context and paradigmatic stance, researchers should themselves analyze which approaches can and must be tailored to suit the distinct characteristics of the phenomenon under investigation.

Although this article does not claim to offer a magic bullet or a one-stop solution for dilemmas about how, why, or whether to evaluate the “goodness” of qualitative research, it offers a platform to assist researchers in improving their qualitative studies. It provides an assembly of concerns to reflect on, a series of questions to ask, and multiple sets of criteria to consider when attempting to determine the quality of qualitative research. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being. By bringing together the vital arguments and delineating the requirements that good qualitative research should satisfy, it strives to equip researchers and reviewers alike to make well-informed judgments about the worth and significance of the qualitative research under scrutiny. In a nutshell, a comprehensive portrayal of the research process (from the context of the research to its objectives, questions, and design; from its theoretical foundations to its approaches for collecting data, analyzing results, and deriving inferences) consistently enhances the quality of qualitative research.

Prospects: A Road Ahead for Qualitative Research

Irrefutably, qualitative research is a vibrant and evolving discipline wherein different epistemological and disciplinary positions have their own characteristics and importance. Unsurprisingly, owing to its evolving and varied features, no consensus has been reached to date. Researchers have raised various concerns and proposed several recommendations for editors and reviewers on conducting reviews of critical qualitative research (Levitt et al., 2021; McGinley et al., 2021). The following are some prospects and recommendations put forward towards the maturation of qualitative research and its quality evaluation:

In general, most manuscript and grant reviewers are not qualitative experts; hence, they are likely to prefer a broad set of criteria. However, researchers and reviewers need to keep in mind that it is inappropriate to apply the same approaches and standards to all qualitative research. Therefore, future work needs to focus on educating researchers and reviewers about the criteria for evaluating qualitative research from within the appropriate theoretical and methodological context.

There is an urgent need to refurbish and augment critical assessment of some well-known and widely accepted tools (including checklists such as COREQ, SRQR) to interrogate their applicability on different aspects (along with their epistemological ramifications).

Efforts should be made towards creating more space for creativity, experimentation, and a dialogue between the diverse traditions of qualitative research. This would potentially help to avoid the enforcement of one's own set of quality criteria on the work carried out by others.

Moreover, journal reviewers need to be aware of various methodological practices and philosophical debates.

It is pivotal to highlight the expressions and considerations of qualitative researchers and bring them into a more open and transparent dialogue about assessing qualitative research in techno-scientific, academic, sociocultural, and political rooms.

Frequent debates on the use of evaluative criteria are required to address some as yet unresolved issues (including the applicability of a single set of criteria in multidisciplinary settings). Such debates would not only benefit qualitative researchers themselves but also help augment the well-being and vitality of the entire discipline.

To conclude, I speculate that these criteria, and my perspective, may transfer to other methods, approaches, and contexts. I hope that they spark dialogue and debate, both about criteria for excellent qualitative research and about the underpinnings of the discipline more broadly, and thereby help improve the quality of qualitative studies. Further, I anticipate that this review will help researchers reflect on the quality of their own research and substantiate their research designs, and help reviewers assess qualitative research for journals. On a final note, I pinpoint the need for a framework (encompassing the prerequisites of a qualitative study) formulated through the cohesive efforts of qualitative researchers from different disciplines and theoretic-paradigmatic origins. I believe that tailoring such a framework of guiding principles would pave the way for qualitative researchers to consolidate the status of qualitative research in the wide-ranging open science debate. Dialogue on this issue across different approaches is crucial for the future prospects of socio-techno-educational research.

Amin, M. E. K., Nørgaard, L. S., Cavaco, A. M., Witry, M. J., Hillman, L., Cernasev, A., & Desselle, S. P. (2020). Establishing trustworthiness and authenticity in qualitative pharmacy research. Research in Social and Administrative Pharmacy, 16 (10), 1472–1482.


Barker, C., & Pistrang, N. (2005). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35 (3–4), 201–212.

Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11 (4), 261–276.

Caelli, K., Ray, L., & Mill, J. (2003). ‘Clear as mud’: Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2 (2), 1–13.

CASP (2021). CASP checklists. Retrieved May 2021 from https://casp-uk.net/casp-tools-checklists/

Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. The Annals of Family Medicine, 6 (4), 331–339.

Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The sage handbook of qualitative research (pp. 1–32). Sage Publications Ltd.


Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38 (3), 215–229.

Epp, A. M., & Otnes, C. C. (2021). High-quality qualitative research: Getting into gear. Journal of Service Research . https://doi.org/10.1177/1094670520961445

Guba, E. G. (Ed.). (1990). The paradigm dialog. Sage Publications.

Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research and Method in Education, 30 (3), 287–305.

Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19 , 1609406920976417.

Hays, D. G., & McKibben, W. B. (2021). Promoting rigorous research: Generalizability and qualitative research. Journal of Counseling and Development, 99 (2), 178–188.

Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12 (2), 307–312.

Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10 (1), 42–46.

Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84 (1), 7120.

Johnson, P., Buehring, A., Cassell, C., & Symon, G. (2006). Evaluating qualitative management research: Towards a contingent criteriology. International Journal of Management Reviews, 8 (3), 131–156.

Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23 (1), 67–93.

Lather, P. (2004). This is your father’s paradigm: Government intrusion and the case of qualitative research in education. Qualitative Inquiry, 10 (1), 15–34.

Levitt, H. M., Morrill, Z., Collins, K. M., & Rizo, J. L. (2021). The methodological integrity of critical qualitative research: Principles to support design and research review. Journal of Counseling Psychology, 68 (3), 357.

Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986 (30), 73–84.

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 163–188). Sage Publications.

Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. British Journal of Psychology, 91 (1), 1–20.

Mays, N., & Pope, C. (2020). Quality in qualitative research. Qualitative Research in Health Care . https://doi.org/10.1002/9781119410867.ch15

McGinley, S., Wei, W., Zhang, L., & Zheng, Y. (2021). The state of qualitative research in hospitality: A 5-year review 2014 to 2019. Cornell Hospitality Quarterly, 62 (1), 8–20.

Merriam, S., & Tisdell, E. (2016). Qualitative research: A guide to design and implementation (4th ed.). Jossey-Bass.

Meyer, M., & Dykes, J. (2019). Criteria for rigor in visualization design study. IEEE Transactions on Visualization and Computer Graphics, 26 (1), 87–97.

Monrouxe, L. V., & Rees, C. E. (2020). When I say… quantification in qualitative research. Medical Education, 54 (3), 186–187.

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52 (2), 250.

Morse, J. M. (2003). A review committee’s guide for evaluating qualitative proposals. Qualitative Health Research, 13 (6), 833–851.

Nassaji, H. (2020). Good qualitative research. Language Teaching Research, 24 (4), 427–431.

O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89 (9), 1245–1251.

O’Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19 , 1609406919899220.

Reid, A., & Gough, S. (2000). Guidelines for reporting and evaluating qualitative research: What are the alternatives? Environmental Education Research, 6 (1), 59–91.

Rocco, T. S. (2010). Criteria for evaluating qualitative studies. Human Resource Development International . https://doi.org/10.1080/13678868.2010.501959

Sandberg, J. (2000). Understanding human competence at work: An interpretative approach. Academy of Management Journal, 43 (1), 9–25.

Schwandt, T. A. (1996). Farewell to criteriology. Qualitative Inquiry, 2 (1), 58–72.

Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5 (4), 465–478.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 (2), 63–75.

Sparkes, A. C. (2001). Myth 94: Qualitative health researchers will agree about validity. Qualitative Health Research, 11 (4), 538–552.

Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2004). Quality in qualitative evaluation: A framework for assessing research evidence. London: Cabinet Office.

Stenfors, T., Kajamaa, A., & Bennett, D. (2020). How to assess the quality of qualitative research. The Clinical Teacher, 17 (6), 596–599.

Taylor, E. W., Beck, J., & Ainsworth, E. (2001). Publishing qualitative adult education research: A peer review perspective. Studies in the Education of Adults, 33 (2), 163–179.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19 (6), 349–357.

Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16 (10), 837–851.


Funding

Open access funding provided by TU Wien (TUW).

Author information

Authors and Affiliations

Faculty of Informatics, Technische Universität Wien, 1040, Vienna, Austria

Drishti Yadav


Corresponding author

Correspondence to Drishti Yadav .

Ethics declarations

Conflict of interest

The author declares no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Yadav, D. Criteria for Good Qualitative Research: A Comprehensive Review. Asia-Pacific Edu Res 31 , 679–689 (2022). https://doi.org/10.1007/s40299-021-00619-0

Download citation

Accepted : 28 August 2021

Published : 18 September 2021

Issue Date : December 2022

DOI : https://doi.org/10.1007/s40299-021-00619-0


Keywords

  • Qualitative research
  • Evaluative criteria


Open Methods

Methods describe the processes, procedures and materials used in a research investigation. Methods can take many forms depending on the field and approach, including study designs, protocols, code, materials and reagents, databases, and more.

Why methods matter

Transparency creates trust and deepens understanding.

When readers have the opportunity to examine your approach in detail, they gain a more profound, contextualized understanding of the results, and increased respect for the integrity of the work.

Reproducibility relies on detail. 

A narrative summary in the methods section of a research article is often insufficient to reproduce results or adapt a methodology to another study. Detailed open methods facilitate replication and reuse, and reduce the amount of trial and error along the way.

Methods transcend barriers.

Methods have the potential for adaptation and reuse in different contexts and across a broad range of research questions and disciplines. For that reason, methods articles tend to be highly cited, and to attract readers and citations for a longer period than standard research articles.


A research article is an orderly summation of a complex and circuitous process. It is characterized by detailed planning, iterative trial and error, meticulous execution and thoughtful analysis. As a summary, articles are invaluable, but detailed insight into processes and procedures is required to truly understand and reproduce research.

The methods section was once the most likely part of a paper to be unfairly abbreviated, overly summarized, or even relegated to hard-to-find sections of a publisher’s website. While some journals may responsibly include more detailed elements of methods in supplementary sections, the movement for increased reproducibility and rigor in science has reinstated the importance of the methods section.

Publishing open methods with PLOS

Shared methods can take many forms, including protocols, code, materials and reagents, and more. Whatever your approach, making methods publicly accessible inspires trust, facilitates reproducibility and reuse, and helps to keep your work relevant. Discover your options for communicating methods with PLOS.




Published on 11.4.2024 in Vol 26 (2024)

Evaluating the Digital Health Experience for Patients in Primary Care: Mixed Methods Study


Original Paper

  • Melinda Ada Choy 1, 2, BMed, MMed, DCH, MD
  • Kathleen O'Brien 1, BSc, GDipStats, MBBS, DCH
  • Katelyn Barnes 1, 2, BAPSC, MND, PhD
  • Elizabeth Ann Sturgiss 3, BMed, MPH, MForensMed, PhD
  • Elizabeth Rieger 1, BA, MClinPsych, PhD
  • Kirsty Douglas 1, 2, MBBS, DipRACOG, Grad Cert HE, MD

1 School of Medicine and Psychology, College of Health and Medicine, The Australian National University, Canberra, Australia

2 Academic Unit of General Practice, Office of Professional Leadership and Education, ACT Health Directorate, Canberra, Australia

3 School of Primary and Allied Health Care, Monash University, Melbourne, Australia

Corresponding Author:

Melinda Ada Choy, BMed, MMed, DCH, MD

School of Medicine and Psychology

College of Health and Medicine

The Australian National University

Phone: 61 51244947

Email: [email protected]

Background: The digital health divide for socioeconomic disadvantage describes a pattern in which patients considered socioeconomically disadvantaged, who are already marginalized through reduced access to face-to-face health care, are additionally hindered through less access to patient-initiated digital health. A comprehensive understanding of how patients with socioeconomic disadvantage access and experience digital health is essential for improving the digital health divide. Primary care patients, especially those with chronic disease, have experience of the stages of initial help seeking and self-management of their health, which renders them a key demographic for research on patient-initiated digital health access.

Objective: This study aims to provide comprehensive primary mixed methods data on the patient experience of barriers to digital health access, with a focus on the digital health divide.

Methods: We applied an exploratory mixed methods design to ensure that our survey was primarily shaped by the experiences of our interviewees. First, we qualitatively explored the experience of digital health for 19 patients with socioeconomic disadvantage and chronic disease and second, we quantitatively measured some of these findings by designing and administering a survey to 487 Australian general practice patients from 24 general practices.

Results: In our qualitative first phase, the key barriers found to accessing digital health included (1) strong patient preference for human-based health services; (2) low trust in digital health services; (3) high financial costs of necessary tools, maintenance, and repairs; (4) poor publicly available internet access options; (5) reduced capacity to engage due to increased life pressures; and (6) low self-efficacy and confidence in using digital health. In our quantitative second phase, 31% (151/487) of the survey participants were found to have never used a form of digital health, while 10.7% (52/487) were low- to medium-frequency users and 48.5% (236/487) were high-frequency users. High-frequency users were more likely to be interested in digital health and had higher self-efficacy. Low-frequency users were more likely to report difficulty affording the financial costs needed for digital access.

Conclusions: While general digital interest, financial cost, and digital health literacy and empowerment are clear factors in digital health access in a broad primary care population, the digital health divide is also facilitated in part by a stepped series of complex and cumulative barriers. Genuinely improving digital health access for 1 cohort or even 1 person requires a series of multiple different interventions tailored to specific sequential barriers. Within primary care, patient-centered care that continues to recognize the complex individual needs of, and barriers facing, each patient should be part of addressing the digital health divide.

Introduction

The Promise of eHealth

The rapid growth of digital health, sped up by the COVID-19 pandemic and associated lockdowns, brings the promise of improved health care efficiency, empowerment of consumers, and health care equity [ 1 ]. Digital health is the use of information and communication technology to improve health [ 2 ]. eHealth, which is a type of digital health, refers to the use of internet-based technology for health care and can be used by systems, providers, and patients [ 2 ]. At the time of this study (before COVID-19), examples of eHealth used by patients in Australia included searching for web-based health information, booking appointments on the web, participating in online peer-support health forums, using mobile phone health apps (mobile health), emailing health care providers, and patient portals for electronic health records.

Digital health is expected to improve chronic disease management and has already shown great potential in improving chronic disease health outcomes [ 3 , 4 ]. Just under half of the Australian population (47.3%) has at least 1 chronic disease [ 5 ]. Rates of chronic disease and complications from chronic disease are overrepresented among those with socioeconomic disadvantage [ 6 ]. Therefore, patients with chronic disease and socioeconomic disadvantage have a greater need for the potential benefits of digital health, such as an improvement in their health outcomes. However, there is a risk that those who could benefit most from digital health services are the least likely to receive them, exemplifying Hart's inverse care law [ 7 ] in the digital age.

Our Current Understanding of the Digital Health Divide

While the rapid growth of digital health brings the promise of health care equity, it may also intensify existing inequities [ 8 ]. The digital health divide for socioeconomic disadvantage describes a pattern in which patients considered socioeconomically disadvantaged who are already marginalized through poor access to traditional health care are additionally hindered through poor access to digital health [ 9 ]. In Australia, only 67.4% of households in the lowest household income quintile have home internet access, compared to 86% of the general population and 96.9% of households in the highest household income quintile [ 10 ]. Survey-based studies have also shown that even with internet access, effective eHealth use is lower in populations considered disadvantaged, which speaks to broader barriers to digital health access [ 11 ].

The ongoing COVID-19 global pandemic has sped up digital health transitions with the rapid uptake of telephone and video consultations, e-prescription, and the ongoing rollout of e-mental health in Australia. These have supported the continuation of health care delivery while limiting physical contact and the pandemic spread; however, early evidence shows that the digital health divide remains problematic. A rapid review identified challenges with reduced digital access and digital literacy among older adults and racial and ethnic minority groups, both of which are at greater health risk from COVID-19 infection [ 12 ]. An Australian population study showed that the rapid uptake of telehealth during the peak of the pandemic was not uniform, with older adults, the very young, and those with limited English language proficiency having lower uptake of general practitioner (GP) telehealth services [ 13 ].

To ensure that digital health improves health care outcome gaps, it is essential to better understand the nature and nuance of the digital health divide for socioeconomic disadvantage. The nature of the digital health divide for socioeconomic disadvantage has been explored primarily through quantitative survey data, some qualitative papers, a few mixed methods papers, and systematic reviews [ 11 , 14 - 16 ]. Identified barriers include a lack of physical hardware and adequate internet bandwidth, a reduced inclination to seek out digital health, and a low ability and confidence to use digital health effectively [ 16 ]. The few mixed methods studies that exist on the digital health divide generally triangulate quantitative and qualitative data on a specific disease type or population subgroup to draw a combined conclusion [ 17 , 18 ]. These studies have found digital health access to be associated with education, ethnicity, and gender as well as trust, complementary face-to-face services, and the desire for alternative sources of information [ 17 , 19 ].

What This Work Adds

This project sought to extend previous research by using an exploratory mixed methods design to ensure that the first step and driver of our survey of a larger population was primarily shaped by the experiences of our interviewees within primary care. This differs from the triangulation method, which places the qualitative and quantitative data as equal first contributors to the findings and does not allow one type of data to determine the direction of the other [ 18 ]. We qualitatively explored the experience of digital health for patients with socioeconomic disadvantage and chronic disease and then quantitatively measured some of the qualitative findings via a survey of the Australian general practice patient population. Our key objective was to provide comprehensive primary mixed methods data, describing the experience and extent of barriers to accessing digital health and its benefits, with a focus on the digital health divide. We completed this research in a primary care context to investigate a diverse community-based population with conceivable reasons to seek digital help in managing their health. Findings from this mixed methods study were intended to provide health care providers and policy makers with a more detailed understanding of how specific barriers affect different aspects or steps of accessing digital health. Ultimately, understanding digital health access can influence the future design and implementation of digital health services by more effectively avoiding certain barriers or building in enablers to achieve improved digital health access not only for everyone but also especially for those in need.

Methods

Study Design

We conducted a sequential exploratory mixed methods study to explore a complex phenomenon in depth and then measure its prevalence. We qualitatively explored the experience of digital health for patients with chronic disease and socioeconomic disadvantage in the first phase. Data from the first phase informed a quantitative survey of the phenomenon across a wider population in the second phase [ 18 ]. Both stages of research were conducted before the COVID-19 pandemic in Australia.

Recruitment

Qualitative Phase Participants

The eligibility criteria for the qualitative phase were as follows: English-speaking adults aged ≥18 years with at least 1 self-reported chronic disease and at least 1 marker of socioeconomic disadvantage (holding a Health Care Card, receiving a disability pension, being unemployed, or living in public housing). A chronic disease was defined to potential participants as a diagnosed long-term health condition that had lasted, or was expected to last, at least 6 months (examples are listed in Multimedia Appendix 1 ). The markers of socioeconomic disadvantage used to identify potential participants were based on criteria typically used by local general practices to determine which patients can have lower or no out-of-pocket expenses. Apart from unemployment, the 3 other criteria are means-tested, government-allocated public social services [ 20 ]. Qualitative phase participants were recruited from May to July 2019 through 3 general practices and 1 service organization that serve populations considered socioeconomically disadvantaged across urban, regional, and rural regions in the Australian Capital Territory and South Eastern New South Wales. A total of 2 recruitment methods were used, in consultation with and as per the choice of the participating organizations: potential participants were either given an opportunity to engage with researchers (KB and MAC) in the general practice waiting room or identified by the practice or organization as suitable for an interview. Interested participants were given a detailed verbal and written description of the project in a private space before providing written consent to be interviewed. All interview participants received an Aus $50 (US $32.68) grocery shopping voucher in acknowledgment of their time.

Quantitative Phase Participants

Eligibility for the quantitative phase was English-speaking adults aged ≥18 years. The eligibility criteria for the quantitative phase were deliberately broader than those for the qualitative phase to achieve a larger sample size within the limitations of recruitment and with the intention that the factors of socioeconomic disadvantage and having a chronic disease could be compared to the digital health access of a more general population. The quantitative phase participants were recruited from November 2019 to February 2020. Study information and paper-based surveys were distributed and collected through 24 general practices across the Australian Capital Territory and South Eastern New South Wales regions, with an option for web-based completion.

Ethical Considerations

Qualitative and quantitative phase research protocols, including the participant information sheet, were approved by the Australian Capital Territory Health Human Research Ethics Committee (2019/ETH/00013) and the Australian National University Human Research Ethics Committee (2019/ETH00003). Qualitative phase participants were given a verbal and written explanation of the study, including how and when they could opt out, before providing written consent, and received an Aus $50 (US $32.68) grocery shopping voucher in acknowledgment of their time. Quantitative phase participants were given a written explanation, and their informed consent was implied by the return of a completed survey. Participants in both phases of the study were told that all their data would be deidentified.

Qualitative Data Collection and Analysis

Participants were purposively sampled to represent a range in age, gender, degree of socioeconomic disadvantage, and experience of digital health. The sampling and sample size were reviewed regularly by the research team as the interviews were being completed to identify potential thematic saturation.

The interview guide was developed by the research team based on a review of the literature and the patient dimensions of the framework of access by Levesque et al [ 21 ]. The framework by Levesque et al [ 21 ] conceptualizes health care access through 5 service dimensions of accessibility and 5 corresponding patient abilities. The patient dimensions are as follows: (1) ability to perceive, (2) ability to seek, (3) ability to reach, (4) ability to pay, and (5) ability to engage [ 21 ]. The key interview topics included (1) digital health use and access, including facilitators and barriers; (2) attitudes toward digital health; and (3) self-perception of digital health skills and potential training. The interview guide was reviewed for face and content validity by the whole research team, a patient advocate, a digital inclusion charity representative, and the general practices where recruitment occurred. The questions and guide were iteratively refined by the research team to ensure relevance and to support reaching data saturation. The interview guide is provided in Multimedia Appendix 1 . The interviews, which took 45 minutes on average, were audio recorded and transcribed. An interview summary sheet and reflective journal were completed by the interviewer after each interview to capture nonverbal cues and tone.

Interview transcriptions were coded and analyzed using inductive thematic analysis. Data collection and analysis were completed in parallel to support the identification of data saturation. Data saturation was defined as no significant new information arising from new interviews and was identified by discussion with the research team [ 22 ]. The 2 interviewers (MAC and KB) independently coded the first 5 transcripts and reflected on them with another researcher (EAS) to ensure intercoder validity and reliability. The remaining interviews were coded independently by the 2 interviewers, who met regularly to reflect on emerging themes and thematic saturation. Data saturation was initially indicated after 15 interviews and subsequently confirmed with a total of 19 interviews. Coding disagreements and theme development were discussed with at least 1 other researcher (EAS, ER, or KD). Thematic saturation and the final themes were agreed upon by the entire research team.

Quantitative Survey Development

The final themes derived in the qualitative phase of the project guided the specific quantitative phase research questions. The final themes were a list of ordered cumulative barriers experienced by participants in accessing digital health and its benefits ( Figure 1 ). The quantitative survey was designed to test the association between barriers to access and the frequency of use of digital health as a proxy measure for digital health access.


In the survey, the participants were asked about their demographic details, health and chronic diseases, knowledge, use and experience of digital health tools, internet access, perception of digital resource affordability, trust in digital health and traditional health services, perceived capability, health care empowerment, eHealth literacy, and relationship with their GP.

Existing scales and questions from the literature and standardized Australian-based surveys were used whenever possible. We used selected questions and scales from the Australian Bureau of Statistics standards, the eHealth Literacy Scale (eHEALS), the eHealth Literacy Questionnaire, and the Southgate Institute for Health Society and Equity [ 17 , 23 - 26 ]. We adapted other scales from the ICEpop Capability Measure for Adults, the Health Care Empowerment Inventory (HCEI), the Patient-Doctor Relationship Questionnaire, and the Chao continuity questionnaire [ 23 , 27 - 29 ]. Where no existing scale measured a barrier or theme, the research team designed questions based on the literature. Our questions on the frequency of digital health use were informed by multiple existing Australian-based surveys on general technology use [ 30 , 31 ]. Most of the questions used a Likert scale. Every choice regarding the design, adaptation, or reuse of questions for the survey was informed by the qualitative findings and decided by full agreement between the 2 researchers who completed and coded the interviews. A complete copy of the survey is provided in Multimedia Appendix 2 .

Pilot-testing of the survey was completed with 5 patients, 2 experts on digital inclusion, and 3 local GPs for both the paper surveys and web-based surveys via Qualtrics Core XM (Qualtrics LLC). The resulting feedback on face and content validity, functionality of the survey logic, and feasibility of questionnaire completion was incorporated into the final version of the survey.

The survey was offered on paper together with a participant information sheet that gave patients the option of completing the survey on the web instead. The survey was handed to every patient on paper to avoid the sampling bias that would arise from excluding participants who could not complete a web-based survey [ 32 ].

Quantitative Data Treatment and Analysis

Data were exported from Qualtrics Core XM to an SPSS (version 26; IBM Corp) data set. Data cleaning and screening were undertaken (KB and KO).

Descriptive statistics (number and percentage) were used to summarize participant characteristics, preference measures, and frequency of eHealth use. Significance testing was conducted using chi-square tests, with a threshold of P <.05; effect sizes were measured by the φ coefficient for 2×2 comparisons and the Cramér V statistic for all others. Where cell sizes were too small, categories were collapsed for the purposes of significance testing. Effect sizes were interpreted as per Cohen [ 33 ]. The analysis was conducted in SPSS and SAS (version 9.4; SAS Institute).
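
To make this analytic approach concrete, the following is a minimal sketch, not the authors' code, of a chi-square test with the matching effect size (φ for 2×2 tables, Cramér V otherwise) in Python. The example counts are the cost-barrier results reported later in this paper, with never and low or medium users combined.

```python
# Minimal sketch of the chi-square and effect-size approach described above.
# Assumes a contingency table of counts (rows = barrier group, columns =
# eHealth use group). No Yates correction is applied, so the statistic
# matches the uncorrected chi-square form reported in the paper.
import numpy as np
from scipy.stats import chi2_contingency

def chi_square_with_effect_size(table):
    """Return chi2, P value, df, and phi (2x2) or Cramer's V (larger tables)."""
    table = np.asarray(table, dtype=float)
    chi2, p, dof, _expected = chi2_contingency(table, correction=False)
    n = table.sum()
    if table.shape == (2, 2):
        label, effect = "phi", np.sqrt(chi2 / n)
    else:
        label, effect = "Cramer's V", np.sqrt(chi2 / (n * (min(table.shape) - 1)))
    return chi2, p, dof, label, effect

# Example: cost difficulty (rows) vs eHealth use (never + low/medium, high).
counts = [[31, 173],   # no cost difficulty
          [19, 50]]    # cost difficulty
chi2, p, dof, label, effect = chi_square_with_effect_size(counts)
print(f"chi2({dof}) = {chi2:.2f}, P = {p:.2f}, {label} = {effect:.2f}")
```

Run on these counts, the sketch reproduces the reported statistic (χ²₁=5.25, P=.02) with φ≈0.14, a small effect by Cohen's benchmarks.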

Participant Characteristics

Participants’ self-reported characteristics included gender, indigenous status, income category, highest level of education, marital status, and language spoken at home.

Age was derived from the participant-reported year of birth and the year of survey completion (2019) and stratified into age groups. The state or territory of residence was derived from the participant-reported postcode. The remoteness area was derived from the participant-reported postcode, mapped using a modified concordance from the Australian Bureau of Statistics. Occupation free-text responses were coded using the Australian Bureau of Statistics census level 1 and 2 descriptors. The country of birth was mapped to Australia, other Organisation for Economic Co-operation and Development countries, and non–Organisation for Economic Co-operation and Development countries.
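
As an illustration of these derivations, here is a short pandas sketch. The age-group cut points and the postcode-to-remoteness concordance shown are hypothetical placeholders, not the actual Australian Bureau of Statistics mappings.

```python
# Illustrative sketch of the derived-variable steps described above.
import pandas as pd

df = pd.DataFrame({
    "year_of_birth": [1951, 1990, 1973],
    "postcode": ["2600", "2620", "2550"],
})

# Age as of the 2019 survey year, then stratified into age groups
# (cut points here are assumptions for illustration).
df["age"] = 2019 - df["year_of_birth"]
df["age_group"] = pd.cut(df["age"],
                         bins=[17, 34, 49, 64, 120],
                         labels=["18-34", "35-49", "50-64", "65+"])

# Postcode mapped to a remoteness area via a (hypothetical) concordance table.
remoteness_concordance = {"2600": "Major city", "2620": "Inner regional",
                          "2550": "Outer regional"}
df["remoteness"] = df["postcode"].map(remoteness_concordance)
print(df)
```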

Frequency of eHealth Use

A summary measure of the frequency of eHealth use was derived from the questions on the use of different types of eHealth.

Specifically, respondents were asked if they had ever used any form of web-based health (“eHealth”) and, if so, to rate how often (never, at least once, every now and then, and most days) they used each of 6 types of eHealth (searching for health information online, booking appointments online, emailing health care providers, using health-related mobile phone apps, accessing My Health Record, and accessing online health forums). The frequency of eHealth use was then classified as follows (a rule set also sketched in code after the list):

  • High user: answered “most days” to at least 1 question on eHealth use OR answered “every now and then” to at least 2 questions on eHealth use
  • Never user: answered “no” to having ever used any form of eHealth OR “never” to all 6 questions on eHealth use
  • Low or medium user: all other respondents.
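
The classification rule above can be expressed as a small function. The following is an illustrative sketch assuming each respondent's 6 frequency ratings are stored as strings, with None for unanswered items; the field layout is our assumption, not the survey's actual data format.

```python
# Sketch of the eHealth-use classification rule described above.
def classify_ehealth_use(ever_used: bool, frequencies: list) -> str:
    """Classify a respondent as a "high", "never", or "low or medium" user.

    frequencies: ratings for the 6 eHealth types, each "never",
    "at least once", "every now and then", "most days", or None.
    """
    answered = [f for f in frequencies if f is not None]
    # Never user: answered "no" to ever using eHealth, or "never" throughout.
    if not ever_used or all(f == "never" for f in answered):
        return "never"
    # High user: "most days" at least once, or "every now and then" twice.
    if answered.count("most days") >= 1 or answered.count("every now and then") >= 2:
        return "high"
    return "low or medium"

# Example: searches for health information and books appointments now and then.
print(classify_ehealth_use(True, ["every now and then", "every now and then",
                                  "never", None, "never", "never"]))  # -> high
```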

The frequency of eHealth use was reported as unweighted descriptive statistics (counts and percentages) against demographic characteristics and for the elements of each of the themes identified in phase 1.

Overview of Key Themes

Data were reported against the 6 themes from the phase 1 results: preference, trust, cost, structural access, capacity to engage, and self-efficacy. Where the components of trust, cost, capacity to engage, and self-efficacy had missing data (for less than half of the components only), mean imputation was used to minimize data loss. For each theme, the analysis excluded those for whom the frequency of eHealth use was unknown.

Preference

Preference measures (survey section D1, parts 1 to 3) asked participants to report against a 4-point Likert scale (strongly disagree, disagree, agree, and strongly agree). Chi-square tests were conducted after the categories were condensed into 2 by combining strongly disagree with disagree and combining agree with strongly agree.

Trust

Summary measures for trust were created in 4 domains: trust from the eHealth Literacy Questionnaire (survey section D1, parts 4 to 8); trust in GPs, specialists, or allied health from the Southgate measure (survey section D2, parts 1 to 5); trust in digital health from the Southgate measure (survey section D2, parts 6, 7, 9, and 10); and trust in books or pamphlets from the Southgate measure (survey section D2, part 8). The data were grouped as low, moderate, and high trust based on the scores assigned to the component data. Chi-square tests compared low-to-moderate trust against high trust for GPs, specialists, or allied health, and low trust against moderate-to-high trust for books or pamphlets.

Cost

Summary measures for cost were created from survey item C10. To measure cost, participants were asked whether they considered certain items or services affordable. These were the cost items mentioned in the qualitative phase interviews: a mobile phone that connects to the internet, a mobile phone with enough memory space to download apps, downloads or apps requiring payment, mobile phone repair and maintenance costs, an iPad or tablet with internet connectivity, owning a home computer or laptop, computer repairs and maintenance, home fixed internet access, and an adequate monthly data allowance. These 9 items were scored as 1 for “yes definitely” and 0 otherwise. Chi-square tests were conducted with never and low or medium eHealth users combined.

Structural Access

Structural access included where participants used the internet (survey section C8) and factors relating to internet access (survey section C8, parts 1-3), reported against a 4-point Likert scale (strongly disagree, disagree, agree, and strongly agree). Chi-square tests were conducted with strongly disagree and disagree combined, agree and strongly agree combined, and never and low or medium eHealth use combined.

Capacity to Engage

Summary measures for capacity to engage were created from survey section E1. To measure the capacity to engage, participants were asked about feeling “settled and secure,” “being independent,” and “achievement and progress” as an adaptation of the ICEpop Capability Measure for Adults [ 27 ], reporting against a 4-point Likert-like scale. Responses were scored from 1 (“I am unable to feel settled and secure in any areas of my life”) to 4 (“I am able to feel settled and secure in all areas of my life”).

The summary capacity measure was derived by the summation of responses across the 3 questions, which were classified into 4 groups, A to D, based on these scores. Where fewer than half of the responses were missing, mean imputation was used; otherwise, the record was excluded. Groups A and B were combined for significance testing.

Self-Efficacy

Summary measures for self-efficacy were adapted from the eHEALS (E3) and the HCEI (E2) [ 23 , 24 ].

Survey section E3—eHEALS—comprised 8 questions, with participants reporting against a 5-point Likert scale for each (strongly disagree, disagree, neither, agree, and strongly agree). These responses were assigned 1 to 5 points, respectively. The summary eHEALS measure was derived by the summation of responses across the 8 questions, which were classified into 5 groups, A to E, based on these scores. Where fewer than half of the responses were missing, mean imputation was used; otherwise, the record was excluded. Groups A to C and D to E were combined for significance testing.
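
As an illustration of this scoring, the sketch below sums the 8 eHEALS items with mean imputation for up to 4 missing responses and excludes records with more missing. The A-to-E cut points are hypothetical equal-width bands over the 8-40 score range, since the paper does not report the exact group boundaries.

```python
# Sketch of the eHEALS summary scoring described above (illustrative only).
import numpy as np

LIKERT = {"strongly disagree": 1, "disagree": 2, "neither": 3,
          "agree": 4, "strongly agree": 5}

def eheals_summary(responses):
    """Return an eHEALS group label (A-E), or None if too many items missing.

    responses: a list of exactly 8 Likert answers, with None where missing.
    """
    scores = [LIKERT[r] for r in responses if r is not None]
    n_missing = 8 - len(scores)
    if n_missing > 4:                   # more than 4 missing: exclude record
        return None
    # Mean imputation: missing items take the mean of the answered items.
    total = sum(scores) + n_missing * np.mean(scores)
    # Hypothetical equal-width bands over the possible 8-40 total.
    for label, upper in zip("ABCDE", [14.4, 20.8, 27.2, 33.6, 40.0]):
        if total <= upper:
            return label

print(eheals_summary(["agree", "agree", None, "disagree",
                      "agree", "neither", "agree", None]))  # -> "D"
```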

Survey section E2—HCEI—comprised 5 questions, with participants reporting against a 5-point Likert scale for each (strongly disagree, disagree, neither, agree, and strongly agree). For significance testing, strongly disagree, disagree, and neither were combined, and agree and strongly agree were combined.

Qualitative Results

The demographic characteristics of the patients that we interviewed are presented in Table 1 .

The key barriers found to accessing digital health included (1) strong patient preference for human-based health services; (2) low trust in digital health services; (3) high financial costs of necessary tools, maintenance, and repairs; (4) poor publicly available internet access options; (5) reduced capacity to engage due to increased life pressures; and (6) low self-efficacy and confidence in using digital health.

Rather than being an equal list of factors, our interviewees described these barriers as a stepped series of cumulative hurdles, as illustrated in Figure 1 . Initial issues of preference and trust determined whether a person would even consider digital health as an option, while digital health confidence and literacy were barriers to full engagement with and optimal use of digital health. Conversely, interviewees who did use digital health had been enabled by the same factors that were barriers for others.

a GP: general practitioner.

b Multiple answers per respondent.

Strong Patient Preference for Human-Based Health Services

Some patients expressed a strong preference for human-based health services rather than digital health services. In answer to a question about how digital health services could be improved, a patient said the following:

Well, having an option where you can actually bypass actually having to go through the app and actually talk directly to someone. [Participant #10]

For some patients, this preference for human-based health services appeared to be related to a lack of exposure to eHealth. These patients were not at all interested in or had never thought about digital health options. A participant responded the following to the interviewer’s questions:

Interviewer: So when...something feels not right, how do you find out what’s going on?
Respondent: I talk to Doctor XX.
Interviewer: Do you ever Google your symptoms or look online for information?
Respondent: No, I have never even thought of doing that actually. [Participant #11]

For other patients, their preference for human-based health care stemmed from negative experiences with technology. These patients reported actively disliking computers and technology in general and were generally frustrated with what they saw as the pitfalls of technology. A patient stated the following:

If computers and internet weren’t so frigging slow because everything is on like the slowest speed network ever and there’s ads blocking everything. Ads, (expletive) ads. [Participant #9]

A patient felt that he had been pushed out of the workforce due to his inability to keep up with technology-based changes and thus decided never to own a computer:

But, you know, in those days when I was a lot younger those sorts of things weren’t about and they’re just going ahead in leaps and bounds and that’s one of the reasons why I retired early. I retired at 63 because it was just moving too fast and it’s all computers and all those sorts of things and I just couldn’t keep up. [Participant #17]

Low Trust in Digital Health Services

Several patients described low trust levels for digital and internet-based technology in general. Their low trust was generally based on stories they had heard of other people’s negative experiences. A patient said the following:

I don’t trust the internet to be quite honest. You hear all these stories about people getting ripped off and I’ve worked too hard to get what I’ve got rather than let some clown get it on the internet for me. [Participant #11]

Some of this distrust was specific to eHealth. For example, some patients were highly suspicious of the government’s motives with regard to digital health and were concerned about the privacy of their health information, which made them hesitant about the concept of a universal electronic health record. In response to the interviewer’s question, a participant said the following:

Interviewer: Are there any other ways you think that eHealth might help you?
Respondent: I’m sorry but it just keeps coming back to me, Big Brother. [Participant #7]

Another participant said the following:

I just would run a mile from it because I just wouldn’t trust it. It wouldn’t be used to, as I said, for insurance or job information. [Participant #16]

High Financial Costs of the Necessary Tools, Maintenance, and Repairs

A wide variety of patients described affordability issues across several different aspects of the costs involved in digital health. They expressed difficulty in paying for the following items: a mobile phone that could connect to the internet, a mobile phone with enough memory space to download apps, mobile phone apps requiring extra payment without advertisements, mobile phone repair costs such as a broken screen, a computer or laptop, home internet access, and adequate monthly data allowance and speeds to functionally use the internet. Current popular payment systems, such as plans, were not feasible for some patients. A participant stated the following:

I don’t have a computer...I’m not in the income bracket to own a computer really. Like I could, if I got one on a plan kind of thing or if I saved up for x-amount of time. But then like if I was going on the plan I’d be paying interest for having it on like lay-buy kind of thing, paying it off, and if it ever got lost or stolen I would still have to repay that off, which is always a hassle. And yeah. Yeah, I’m like financially not in the state where I’m able to...own a computer right now as I’m kind of paying off a number of debts. [Participant #9]

Poor Publicly Available Internet Access Options

Some patients described struggling without home internet access. While they noted some cost-free public internet access points, such as libraries, hotel bars, and restaurants, they often found these to be inconvenient, lacking in privacy, and constituting low-quality options for digital health. A patient stated the following:

...it’s incredibly slow at the library. And I know why...a friend I went to school with used to belong to the council and the way they set it up, they just got the raw end of the stick and it is really, really slow. It’s bizarre but you can go to the X Hotel and it’s heaps quicker. [Participant #15]

In response to the interviewer's question, a participant said the following:

Interviewer: And do you feel comfortable doing private stuff on computers at the library...?
Respondent: Not really, no, but I don’t have any other choice, so, yeah. [Participant #9]

Reduced Capacity to Engage Due to Increased Life Pressures

When discussing why they were not using digital health or why they had stopped using digital health, patients often described significant competing priorities and life pressures that affected their capacity to engage. An unemployed patient mentioned that his time and energy on the internet were focused primarily on finding work and that he barely had time to focus on his health in general, let alone engage in digital health.

Other patients reported that they often felt that their ability to learn about and spend time on digital health was taken up by caring for sick family members, paying basic bills, or learning English. Some patients said that the time they would have spent learning digital skills when they were growing up had been lost to adverse life circumstances such as being in jail:

So we didn’t have computers in the house when I was growing up. And I didn’t know I’ve never...I’ve been in and out of jail for 28 odd years so it sort of takes away from learning from this cause it’s a whole different… it’s a whole different way of using a telephone from a prison. [Participant #11]

Low Self-Efficacy and Confidence in Starting the Digital Health Process

Some patients had a pervasive self-perception of being slow learners and being unable to use technology. Their stories of being unconfident learners seemed to stem from the fact that they had been told throughout their lives that they were intellectually behind. A patient said the following:

The computer people...wouldn’t take my calls because I’ve always been dumb with that sort of stuff. Like I only found out this later on in life, but I’m actually severely numerically dyslexic. Like I have to triple-check everything with numbers. [Participant #7]

Another patient stated the following:

I like went to two English classes like a normal English class with all the kids and then another English class with about seven kids in there because I just couldn’t I don’t know maybe because I spoke another language at home and they sort of like know I was a bit backward. [Participant #6]

These patients and others had multiple missing pieces of information that they felt made it harder to engage in digital health compared to “easier” human-based services. A patient said the following:

Yeah I’ve heard of booking online but I just I don’t know I find it easier just to ring up. And I’ll answer an email from a health care provider but I wouldn’t know where to start to look for their email address. [Participant #11]

In contrast, the patients who did connect with digital health described themselves as independent question askers and proactive people. Even when they did not know how to use a specific digital health tool, they were confident in attempting to and asking for help when they needed it. A patient said the following:

I’m a “I will find my way through this, no matter how long it takes me” kind of person. So maybe it’s more my personality...If I have to ask for help from somewhere, wherever it is, I will definitely do that. [Participant #3]

Quantitative Results

A total of 487 valid survey responses were received from participants across 24 general practices. The participant characteristics are presented in detail in Table S1 in Multimedia Appendix 3 .

The mean age of the participants was approximately 50 years (females 48.9, SD 19.4 years; males 52.8, SD 20.0 years), and 68.2% (332/487) of the participants identified as female. Overall, 34.3% (151/439) of respondents reported never using eHealth, and 53.8% (236/439) reported high eHealth use.

There were statistically significant ( P <.05) differences in the frequency of eHealth use by age group, gender, state, remoteness, highest level of education, employment status, occupation group, marital status, and language spoken at home, with effect sizes being small to medium. Specifically, high eHealth use was associated with younger age, being female, living in an urban area, and being employed.

Table 2 presents the frequency of eHealth use against 3 internet preference questions.

Preferences for using the internet and technology in general, and for health needs in particular, were significantly related to the frequency of eHealth use ( P <.05 for each), with effect sizes being small to medium.

a Excludes those for whom frequency of eHealth use is unknown.

b Chi-square tests conducted with strongly disagree and disagree combined, and agree and strongly agree combined.

Table 3 presents the frequency of eHealth use against 4 measures of trust.

The degree of trust did not differ significantly by frequency of eHealth use in any of the 4 domains.

b eHLQ: eHealth Literacy Questionnaire.

c Derived from survey question D1, parts 4 to 8. Mean imputation used where ≤2 responses were missing. If >2 responses were missing, the records were excluded.

d Derived from survey question D2, parts 1 to 5. Mean imputation used where ≤2 responses were missing. If >2 responses were missing, the records were excluded.

e Chi-square test conducted comparing low-to-moderate trust against high trust.

f Derived from survey question D2, parts 6, 7, 9, and 10. Mean imputation used where ≤2 responses were missing. If >2 responses were missing, the records were excluded.

g Derived from survey question D2 part 8.

h Chi-square test conducted comparing low trust against moderate-to-high trust.

Affordability of items and services was reported as no cost difficulty or cost difficulty. eHealth frequency of use responses were available for 273 participants; among those with no cost difficulty, 1% (2/204) were never users, 14.2% (29/204) were low or medium users, and 84.8% (173/204) were high users of eHealth; among those with cost difficulty, 1% (1/69) were never users, 26% (18/69) were low or medium users, and 73% (50/69) were high users. There was a statistically significant difference in the presence of cost as a barrier between never and low or medium eHealth users compared with high users (χ²₁=5.25; P =.02), although the effect size was small.

Table 4 presents the frequency of eHealth use for elements of structural access.

Quality of internet access and feeling limited in access to the internet were significantly associated with frequency of eHealth use ( P <.05), although the effect sizes were small.

b N/A: not applicable (cell sizes insufficient for chi-square test).

c Chi-square tests conducted with strongly disagree and disagree combined, agree and strongly agree combined, and never and low or medium categories combined.

Table 5 presents the frequency of eHealth use against respondents’ capacity to engage.

Capacity to engage was not significantly different for the frequency of eHealth use ( P =.54). 

b Derived from survey item E1. Where 1 response was missing, the mean imputation was used. If >1 response was missing, the record was excluded.

c Chi-square tests conducted with groups A and B combined.

Table 6 presents the frequency of eHealth use for elements of self-efficacy.

Statistically significant results were observed for the relationship between self-efficacy measured by the eHEALS and the frequency of eHealth use (moderate effect size), as well as for some questions from the HCEI (reliance on health professionals or others to access and explain information; small effect size; P <.05).

b eHEALS: eHealth Literacy Scale.

c eHEALS derived from item E3 (8 parts). Where ≤ 4 responses were missing, mean imputation was used. If >4 responses were missing, the records were excluded. Groups A to C as well as groups D to E were combined for the chi-square test.

d Strongly disagree, disagree, and neither combined, and agree and strongly agree combined for significance testing.

Principal Findings

This paper reports on the findings of a sequential exploratory mixed methods study on the barriers to digital health access for a group of patients in Australian family medicine, with a particular focus on chronic disease and socioeconomic disadvantage.

In the qualitative first phase, the patients with socioeconomic disadvantage and chronic disease described 6 cumulative barriers, as illustrated in Figure 1 . Many nonusers of digital health preferred human-based services and were not interested in technology, while others were highly suspicious of technology in general. Some digitally interested patients could not afford quality hardware and internet connectivity, a barrier compounded by the low quality and lack of privacy of publicly available internet access. Furthermore, although some digitally interested patients had internet access, their urgent life circumstances left scarce opportunity to access digital health and to develop digital health skills and confidence.

In our quantitative second phase, 31% (151/487) of the survey participants from Australian general practices were found to have never used a form of digital health. Survey participants were more likely to use digital health tools frequently when they also had a general digital interest and a digital health interest. Those who did not frequently access digital health were more likely to report difficulty affording the financial costs needed for digital access. The survey participants who frequently accessed digital health were more likely to have high eHealth literacy and high levels of patient empowerment.

Comparison With Prior Work

In terms of general digital health access, the finding that 31% (151/487) of the survey participants had never used any of the described forms of eHealth is in keeping with an Australian-based general digital participation study that found that approximately 9% of participants were nonusers of the internet and a further 17% rarely engaged with it at all [ 34 ]. With regard to the digital health divide, another Australian-based study found that increased age, living in a lower socioeconomic area, being Aboriginal or Torres Strait Islander, being male, and having no tertiary education were negatively associated with access to digital health services [ 17 ]. Those findings correspond to ours, in which higher-frequency eHealth use was associated with younger age, being female, living in an urban area, and being employed. Both studies reinforce the evidence of a digital health divide based on gender, age, and socioeconomic disadvantage in Australia.

With regard to digital health barriers, our findings provide expanded details on the range of digital health items and services that present a cost barrier to consumers. Affordability is a known factor in digital access and digital health access and is often measured by general self-report or by expenditure on internet access relative to income [ 30 ]. Our study revealed a comprehensive list of the costs relevant to patients. It also demonstrated factors of affordability beyond the dollar value of an item, as interviewees described the struggle of using slow public internet access without privacy and the risks involved in buying a computer in installments. When the complexity and detail of the cost barrier were reflected in our survey, participants demonstrated a clear association between cost and the frequency of digital health use. This suggests that one way to improve digital health access for some people is to improve the quality, security, and accessibility of public internet access options, as well as to provide free or subsidized hardware, internet connection, and maintenance for those in need, work that is being done by at least 1 digital inclusion charity in the United Kingdom [ 35 ].

Many studies recognize the importance of eHealth literacy and digital confidence for beneficial digital health access [ 36 ]. Our interviews demonstrated that some patients with socioeconomic disadvantage have low digital confidence, but that this is often underpinned by a socially reinforced, lifelong low self-confidence in their intellectual ability. In contrast, active users, regardless of other demographic factors, described themselves as innately proactive question askers. This was reinforced by our finding of a relationship between health care empowerment and the frequency of eHealth use. This suggests that while digital health education and eHealth literacy programs can improve access for some patients, broader and deeper long-term solutions addressing the socioeconomic drivers of digital exclusion are needed to improve digital health access for some patients with socioeconomic disadvantage [ 8 ]. The deep permeation of socially reinforced low self-confidence and lifelong poverty experienced by some interviewees demonstrates that providing free hardware and a class on digital health skills can be, for some, a superficial offering when the key underlying factor is persistent general socioeconomic inequality.

The digital health divide literature tends to identify the digital health divide, the factors and barriers that contribute to it, and the potential for it to widen if not specifically addressed [ 16 ]. Our findings have also identified the divide and the barriers, but what this study adds through our qualitative phase in particular is a description of the complex interaction of those barriers and the stepped nature of some of those barriers as part of the individual’s experience in trying to access digital health.

Strengths and Limitations

A key strength of this study is the use of a sequential exploratory mixed methods design. The initial qualitative phase guided a phenomenological exploration of digital health access experiences for patients with chronic disease and socioeconomic disadvantage. Our results in both study phases stem from the patients’ real-life experiences of digital health access. While some of our results echo the findings of other survey-based studies on general digital and digital health participation, our method revealed a greater depth and detail of some of these barriers, as demonstrated in how our findings compare to prior work.

As mentioned previously, the emphasis of this study on the qualitative first phase is a strength that helped describe the interactions between different barriers. The interviewees described their experiences as cumulative, unequal, stepped barriers rather than an unordered list of equal barriers. These findings expand on the known complexity of the issue of digital exclusion and add weight to the understanding that improving digital health access requires diverse, complex solutions [ 17 ]. There is no panacea for every individual’s digital health access, and thus patient-centered digital health services, often guided by health professionals within the continuity of primary care, are also required to address the digital health divide [ 37 ].

While the sequential exploratory design is a strength of the study, it also created some limitations for the second, quantitative phase. Our commitment to using the qualitative interview findings to inform the survey questions meant that we could not use previously validated scales for every question and that our data were less likely to be normally distributed. This likely affected our ability to demonstrate significant associations for some barriers. Further modeling is required to control for baseline characteristics and to determine barrier patterns for different types of users.

Another strength of this study is that the survey was administered to a broad population of Australian family medicine patients with diverse patterns of health, via both paper-based and web-based options. Many other digital health studies use solely digital surveys, which can bias the sample. However, we cannot draw conclusions from our survey about patients with chronic disease because of the limited sample size for these subgroups.

Another sample-based limitation of this study was that our qualitative population did not include anyone aged 18 to 24 years, despite multiple efforts to recruit from this group. Future research should address this demographic more specifically.

While not strictly a limitation, we recognize that because this research was conducted before COVID-19, it did not include questions about telehealth, which has since become much more mainstream. Patients may also have changed their frequency of eHealth use because of COVID-19 and an increased reliance on digital services in general. Future work in this area, or future versions of this survey, should include telehealth and acknowledge the impact of COVID-19. However, the digital health divide existed before COVID-19 and persists after it; indeed, our widespread increased reliance on digital services makes the divide an even more pressing issue [ 12 ].

Conclusions

The experience of digital health access across Australian primary care is highly variable, and access is more difficult for those with socioeconomic disadvantage. While general digital interest, financial cost, and digital health literacy and empowerment are clear factors in digital health access in a broad primary care population, the digital health divide is also driven in part by a stepped series of complex and cumulative barriers.

Genuinely improving digital health access for 1 cohort or even 1 person requires a series of multiple different interventions tailored to specific sequential barriers. Given the rapid expansion of digital health during the global COVID-19 pandemic, attention to these issues is necessary if we are to avoid entrenching inequities in access to health care. Within primary care, patient-centered care that continues to recognize the complex individual needs of, and barriers facing, each patient should be a part of addressing the digital health divide.

Acknowledgments

The authors are thankful to the patients who shared their experiences via interview and survey completion. The authors are also very grateful to the general practices in the Australian Capital Territory and New South Wales that kindly gave their time and effort to help organize interviews and administer and post surveys amid the stress of day-to-day practice life and the bushfires of 2019-2020. The authors thank and acknowledge the creators of the eHealth Literacy Scale, the eHealth Literacy Questionnaire, the ICEpop Capability Measure for Adults, the Health Care Empowerment Inventory, the Patient-Doctor Relationship Questionnaire, the Chao continuity questionnaire, and the Southgate Institute for Health Society and Equity for their generosity in sharing their work [ 17 , 19 - 25 ]. This study would not have been possible without the support of the administrative team of the Academic Unit of General Practice. This project was funded by the Royal Australian College of General Practitioners (RACGP) through the RACGP Foundation IPN Medical Centres Grant, and the authors gratefully acknowledge their support.

Data Availability

The data sets generated during this study are not publicly available due to the nature of our original ethics approval but are available from the corresponding author on reasonable request.

Authors' Contributions

MAC acquired the funding, conceptualized the project, and organized interview recruitment. MAC and KB conducted the interviews and analyzed the qualitative data. EAS, ER, and KD contributed to project planning, supervision, and qualitative data analysis. MAC, KB, and KO wrote the survey and planned the quantitative data analysis. MAC and KB recruited practices for survey administration. KO and KB conducted the quantitative data analysis. MAC and KO, with KB, drafted the paper. EAS, ER, and KD helped review and edit the paper.

Conflicts of Interest

None declared.

Phase 1 interview guide.

Phase 2 survey: eHealth and digital divide.

Phase 2 participant characteristics by frequency of eHealth use.

  • Eysenbach G. What is e-health? J Med Internet Res. 2001;3(2):E20. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Iyawa GE, Herselman M, Botha A. Digital health innovation ecosystems: from systematic literature review to conceptual framework. Procedia Comput Sci. 2016;100:244-252. [ FREE Full text ] [ CrossRef ]
  • Berrouiguet S, Baca-García E, Brandt S, Walter M, Courtet P. Fundamentals for future mobile-health (mHealth): a systematic review of mobile phone and web-based text messaging in mental health. J Med Internet Res. Jun 10, 2016;18(6):e135. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Shen H, van der Kleij RM, van der Boog PJ, Chang X, Chavannes NH. Electronic health self-management interventions for patients with chronic kidney disease: systematic review of quantitative and qualitative evidence. J Med Internet Res. Nov 05, 2019;21(11):e12384. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Australia's health 2018. Australian Institute of Health and Welfare. 2018. URL: https://www.aihw.gov.au/reports/australias-health/australias-health-2018/contents/table-of-contents [accessed 2024-04-04]
  • Australian Institute of Health and Welfare. Chronic Diseases and Associated Risk Factors in Australia, 2006. Canberra, Australia. Australian Institute of Health and Welfare; 2006.
  • Hart JT. The inverse care law. The Lancet. Feb 27, 1971;297(7696):405-412. [ CrossRef ]
  • Davies AR, Honeyman M, Gann B. Addressing the digital inverse care law in the time of COVID-19: potential for digital technology to exacerbate or mitigate health inequalities. J Med Internet Res. Apr 07, 2021;23(4):e21726. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Choi NG, Dinitto DM. The digital divide among low-income homebound older adults: internet use patterns, eHealth literacy, and attitudes toward computer/internet use. J Med Internet Res. May 02, 2013;15(5):e93. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Household use of information technology. Australian Bureau of Statistics. 2018. URL: https://tinyurl.com/4efm6u92 [accessed 2024-03-24]
  • Kontos E, Blake KD, Chou WY, Prestin A. Predictors of eHealth usage: insights on the digital divide from the health information national trends survey 2012. J Med Internet Res. Jul 16, 2014;16(7):e172. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Litchfield I, Shukla D, Greenfield S. Impact of COVID-19 on the digital divide: a rapid review. BMJ Open. Oct 12, 2021;11(10):e053440. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Butler DC, Joshy G, Douglas KA, Sayeed MS, Welsh J, Douglas A, et al. Changes in general practice use and costs with COVID-19 and telehealth initiatives: analysis of Australian whole-population linked data. Br J Gen Pract. Apr 27, 2023;73(730):e364-e373. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Arsenijevic J, Tummers L, Bosma N. Adherence to electronic health tools among vulnerable groups: systematic literature review and meta-analysis. J Med Internet Res. Feb 06, 2020;22(2):e11613. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kontos EZ, Bennett GG, Viswanath K. Barriers and facilitators to home computer and internet use among urban novice computer users of low socioeconomic position. J Med Internet Res. Oct 22, 2007;9(4):e31. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Latulippe K, Hamel C, Giroux D. Social health inequalities and eHealth: a literature review with qualitative synthesis of theoretical and empirical studies. J Med Internet Res. Apr 27, 2017;19(4):e136. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Foley K, Freeman T, Ward P, Lawler A, Osborne R, Fisher M. Exploring access to, use of and benefits from population-oriented digital health services in Australia. Health Promot Int. Aug 30, 2021;36(4):1105-1115. [ CrossRef ] [ Medline ]
  • Creswell JW, Plano Clark VL. Designing and Conducting Mixed Methods Research. Thousand Oaks, CA. SAGE Publications; 2007.
  • Tappen RM, Cooley ME, Luckmann R, Panday S. Digital health information disparities in older adults: a mixed methods study. J Racial Ethn Health Disparities. Feb 2022;9(1):82-92. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Who can get a card. Services Australia. URL: https://www.servicesaustralia.gov.au/who-can-get-health-care-card?context=21981 [accessed 2023-11-03]
  • Levesque JF, Harris MF, Russell G. Patient-centred access to health care: conceptualising access at the interface of health systems and populations. Int J Equity Health. Mar 11, 2013;12:18. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bryant A, Charmaz K. The SAGE Handbook of Grounded Theory, Paperback Edition. Thousand Oaks, CA. SAGE Publications; 2010.
  • Johnson MO, Rose CD, Dilworth SE, Neilands TB. Advances in the conceptualization and measurement of health care empowerment: development and validation of the health care empowerment inventory. PLoS One. 2012;7(9):e45692. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Norman CD, Skinner HA. eHEALS: the eHealth literacy scale. J Med Internet Res. Nov 14, 2006;8(4):e27. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kayser L, Karnoe A, Furstrand D, Batterham R, Christensen KB, Elsworth G, et al. A multidimensional tool based on the eHealth literacy framework: development and initial validity testing of the eHealth Literacy Questionnaire (eHLQ). J Med Internet Res. Feb 12, 2018;20(2):e36. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Standards. Australian Bureau of Statistics. URL: https://www.abs.gov.au/statistics/standards [accessed 2024-04-04]
  • Al-Janabi H, Flynn TN, Coast J. Development of a self-report measure of capability wellbeing for adults: the ICECAP-A. Qual Life Res. Feb 2012;21(1):167-176. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Van der Feltz-Cornelis CM, Van Oppen P, Van Marwijk HW, De Beurs E, Van Dyck R. A patient-doctor relationship questionnaire (PDRQ-9) in primary care: development and psychometric evaluation. Gen Hosp Psychiatry. 2004;26(2):115-120. [ CrossRef ] [ Medline ]
  • Chao J. Continuity of care: incorporating patient perceptions. Fam Med. 1988;20(5):333-337. [ Medline ]
  • Wilson CK, Thomas J, Barraket J. Measuring digital inequality in Australia: the Australian digital inclusion index. JTDE. Jun 30, 2019;7(2):102-120. [ CrossRef ]
  • Digital participation: a view of Australia's online behaviours. Australia Post. Jul 2017. URL: https://auspost.com.au/content/dam/auspost_corp/media/documents/white-paper-digital-inclusion.pdf [accessed 2024-04-04]
  • Poli A, Kelfve S, Motel-Klingebiel A. A research tool for measuring non-participation of older people in research on digital health. BMC Public Health. Nov 08, 2019;19(1):1487. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd edition. London, UK. Routledge; 1988.
  • Borg K, Smith L. Digital inclusion and online behaviour: five typologies of Australian internet users. Behav Inf Technol. Feb 15, 2018;37(4):367-380. [ CrossRef ]
  • Mathers A, Richardson J, Vincent S, Joseph C, Stone E. Good Things Foundation COVID-19 response report. Good Things Foundation. 2020. URL: https://tinyurl.com/2peu3kak [accessed 2024-04-04]
  • Norman CD, Skinner HA. eHealth literacy: essential skills for consumer health in a networked world. J Med Internet Res. Jun 16, 2006;8(2):e9. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Neves AL, Burgers J. Digital technologies in primary care: implications for patient care and future research. Eur J Gen Pract. Dec 11, 2022;28(1):203-208. [ FREE Full text ] [ CrossRef ] [ Medline ]

Abbreviations

eHEALS: eHealth Literacy Scale
eHLQ: eHealth Literacy Questionnaire
GP: general practitioner
HCEI: Health Care Empowerment Inventory
RACGP: Royal Australian College of General Practitioners

Edited by T Leung; submitted 03.07.23; peer-reviewed by T Freeman, H Shen; comments to author 16.08.23; revised version received 30.11.23; accepted 31.01.24; published 11.04.24.

©Melinda Ada Choy, Kathleen O'Brien, Katelyn Barnes, Elizabeth Ann Sturgiss, Elizabeth Rieger, Kirsty Douglas. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 11.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

J Korean Med Sci. 2022 Apr 25;37(16).

A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; when they are not, they are sometimes framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought out, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and from peer-reviewed scientific articles in the health care field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written at length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused; they integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are grounded in previous reports and fit the research context. They are realistic, in-depth, sufficiently complex, and reproducible. More importantly, they can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7 , 10 , 11 , 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7 , 9 ; 4) based on original ideas 9 ; 5) grounded in evidence-based logical reasoning 10 ; and 6) able to yield predictions. 11 Good hypotheses can carry ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 Hypotheses are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory on which to base the hypotheses, inductive reasoning from specific observations or findings yields more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships that can be predicted include 1) a relationship between a single dependent variable and a single independent variable ( simple hypothesis ) or 2) relationships between two or more independent and dependent variables ( complex hypothesis ). 4 , 11 Hypotheses may also specify the expected direction of the relationship and imply an intellectual commitment to a particular outcome ( directional hypothesis ), 4 or they may leave the direction unspecified, as when no theory is available or when findings contradict previous studies ( non-directional hypothesis ). 4 In addition, hypotheses can 1) define interdependency between variables ( associative hypothesis ), 4 2) propose an effect on the dependent variable from manipulation of the independent variable ( causal hypothesis ), 4 3) state the absence of a relationship between two variables ( null hypothesis ), 4 , 11 , 15 4) replace the null hypothesis when it is rejected ( alternative hypothesis ), 15 5) explain the relationship of phenomena to possibly generate a theory ( working hypothesis ), 11 6) involve quantifiable variables that can be tested statistically ( statistical hypothesis ), 11 or 7) express a relationship whose interlinks can be verified logically ( logical hypothesis ). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research, in Table 3 .
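To make the null, alternative, directional, and statistical hypothesis types concrete, the following minimal sketch tests a hypothetical difference in group means; the data, group labels, and significance threshold are our own illustrative assumptions, not material from any cited study. The two-sided test corresponds to a non-directional alternative hypothesis, while the one-sided variant encodes a directional one.

```python
# Minimal sketch: stating and testing a statistical hypothesis with a
# two-sample t-test. All data and names are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome measurements for a control and a treatment group.
control = rng.normal(loc=50.0, scale=10.0, size=40)
treatment = rng.normal(loc=55.0, scale=10.0, size=40)

# Null hypothesis (H0): the group means do not differ.
# Non-directional alternative (H1): the group means differ.
t_stat, p_two_sided = stats.ttest_ind(treatment, control)

# Directional alternative (H1): the treatment mean exceeds the control mean.
t_stat_dir, p_one_sided = stats.ttest_ind(treatment, control, alternative="greater")

alpha = 0.05  # illustrative significance threshold
print(f"two-sided p = {p_two_sided:.4f}; reject H0: {p_two_sided < alpha}")
print(f"one-sided p = {p_one_sided:.4f}; reject H0: {p_one_sided < alpha}")
```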

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually reviewed and reformulated continuously throughout the study. A central question and associated subquestions are stated more often than hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions ( contextual research questions ); 2) describe a phenomenon ( descriptive research questions ); 3) assess the effectiveness of existing methods, protocols, theories, or procedures ( evaluation research questions ); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena ( explanatory research questions ); or 5) focus on unknown aspects of a particular topic ( exploratory research questions ). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions ( generative research questions ) or advance specific ideologies of a position ( ideological research questions ). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines ( ethnographic research questions ). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions ( phenomenological research questions ), may be directed towards generating a theory of some process ( grounded theory questions ), or may address a description of the case and the emerging themes ( qualitative case study questions ). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4 , and the definition of qualitative hypothesis-generating research in Table 5 .

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research take the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research, where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between the two methods, wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that interests both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Researchers then focus on specific areas to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved when research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability as criteria for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 These frameworks address the following elements. PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study. PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if they meet the “FINERMAPS” criteria: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14
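As a minimal sketch of how the PICOT elements fit together, a research question could be assembled from its components as below; the dataclass, its field names, and the example study details are illustrative assumptions of ours, not part of the cited framework.

```python
# Illustrative sketch: assembling a PICOT-framed research question.
# Field names follow the PICOT elements described above; the example
# values are hypothetical.
from dataclasses import dataclass

@dataclass
class PicotQuestion:
    population: str    # P - population/patients/problem
    intervention: str  # I - intervention or indicator being studied
    comparison: str    # C - comparison group
    outcome: str       # O - outcome of interest
    timeframe: str     # T - timeframe of the study

    def as_question(self) -> str:
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome} "
                f"over {self.timeframe}?")

question = PicotQuestion(
    population="adults with type 2 diabetes",
    intervention="nurse-led telephone coaching",
    comparison="usual care",
    outcome="glycated hemoglobin (HbA1c)",
    timeframe="six months",
)
print(question.as_question())
```

Writing the question this way makes any missing element, such as an unstated comparison group, immediately visible.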

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide examples of ambiguous research questions and hypotheses that produce unclear, weak research objectives in quantitative research ( Table 6 ) 16 and qualitative research ( Table 7 ), 17 and show how to transform them into clear, well-formed statements.

Table footnotes: In Table 6, statements marked “a” were composed for comparison and illustrative purposes only, while statements marked “b” are direct quotes from Higashihara and Horiuchi. 16 In Table 7, the statement marked “a” is a direct quote from Shimoda et al., 17 and the other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem, identify the variables to be assessed from the research questions, 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims. This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .

Fig. 1. General flow for constructing effective research questions and hypotheses prior to conducting research.

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore, or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups, and they are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, research questions are used more frequently in survey projects, whereas hypotheses are more common in experiments that compare variables and their relationships.

Hypotheses are constructed from the variables identified, often as an if-then statement following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some expectations from the research to be conducted must be drawn out. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 Hypotheses must be testable and specific 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypothesis construction involves deducing a testable proposition from theory, with the independent and dependent variables separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12
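As a small illustration of the if-then template above, a hypothesis can be rendered from an action, a study group, and a predicted outcome; the function and all study-specific strings below are hypothetical placeholders, not drawn from the cited sources.

```python
# Sketch of the if-then hypothesis template described above.
# All study-specific strings are hypothetical placeholders.
def if_then_hypothesis(action: str, group: str, outcome: str) -> str:
    """Render: 'If a specific action is taken, then a certain outcome is expected.'"""
    return f"If {action} in {group}, then {outcome} is expected."

print(if_then_hypothesis(
    action="a daily 30-minute supervised walking program is introduced",
    group="sedentary office workers",
    outcome="a reduction in mean resting heart rate after eight weeks",
))
```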

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

Fig. 2. Algorithm for building research questions and hypotheses in quantitative research.

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes?” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness.” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response. The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses.” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas in more depth to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations.” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout).” 26
  • “Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above. If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics ( gender differences in sociodemographic and clinical characteristics of adults with ADHD ). Validity is tested by statistical experiment or analysis ( chi-squared test, Student's t-test, and logistic regression analysis ). A generic sketch of these analyses follows this list.
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men. We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Student's t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27
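The generic sketch below mirrors the analyses named in Example 4 (chi-squared test, Student's t-test, and logistic regression) on synthetic data; it is a hedged illustration of the workflow only, and its variable names and values are hypothetical rather than taken from the cited study.

```python
# Hedged sketch of the analyses named in Example 4, run on synthetic data.
# Variable names (female, symptom_score, full_time) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),        # gender indicator
    "symptom_score": rng.normal(20, 5, n),  # continuous clinical measure
    "full_time": rng.integers(0, 2, n),     # employment status indicator
})

# Chi-squared test for a categorical variable (employment status by gender).
contingency = pd.crosstab(df["female"], df["full_time"])
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)

# Student's t-test for a continuous variable (symptom score by gender).
t_stat, p_t = stats.ttest_ind(df.loc[df["female"] == 1, "symptom_score"],
                              df.loc[df["female"] == 0, "symptom_score"])

# Logistic regression: independent effect of gender on full-time employment.
X = sm.add_constant(df[["female"]].astype(float))
logit = sm.Logit(df["full_time"], X).fit(disp=0)

print(f"chi-squared p = {p_chi2:.3f}; t-test p = {p_t:.3f}")
print(logit.summary2().tables[1])
```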

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses.” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group.” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education.” 30

Research questions and hypotheses are crucial components of any type of research, whether quantitative or qualitative, and they should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research and can often determine the successful conduct of the study. Many studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention it needs. Developing research questions and hypotheses is an iterative process grounded in extensive knowledge of the literature and an insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses, which serve as formal predictions about the research outcomes. Carefully constructed research questions and hypotheses define well-founded objectives, which in turn determine the design, course, and outcome of the study and help guard against ethically problematic studies and poor results.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.
