• Open access
  • Published: 27 May 2020

How to use and assess qualitative research methods

  • Loraine Busetto (ORCID: orcid.org/0000-0002-9228-7875) 1,
  • Wolfgang Wick 1,2 &
  • Christoph Gumbinger 1

Neurological Research and Practice, volume 2, Article number: 14 (2020)


Abstract

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived”, but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [1]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [2].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [3]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in “research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)” [4]. This focus on quantitative research and specifically randomised controlled trials (RCTs) is visible in the idea of a hierarchy of research evidence, which assumes that some research designs are objectively better than others, and that choosing a “lesser” design is only acceptable when the better ones are not practically or ethically feasible [5, 6]. Others, however, argue that an objective hierarchy does not exist and that, instead, the research design and methods should be chosen to fit the specific research question at hand – “questions before methods” [2, 7, 8, 9]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as “a complex, multicomponent intervention – essentially a process of social change” susceptible to a range of different context factors including leadership or organisation history. According to him, “[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect” [8]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for “these specific applications, (...) are not compromises in learning how to improve; they are superior” [8].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works” towards “what works for whom, when, how and why”, and focussing on intervention improvement rather than accreditation [7, 9, 10, 11, 12]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [13, 14]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [15]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [13]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

Figure 1. Iterative research process
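To make the documentation requirement above concrete, the following is a minimal sketch, not from the original paper, of how an audit trail of methodological decisions could be kept; the field names and the example entry are invented for illustration.

```python
# Minimal sketch (hypothetical): an audit trail of methodological
# decisions, kept for transparency. Fields and values are invented.
from dataclasses import dataclass
from datetime import date

@dataclass
class Decision:
    when: date
    step: str        # e.g. "sampling", "data collection", "analysis"
    change: str      # what was adapted or expanded
    reasoning: str   # the rationale that assessors will look for

audit_trail = [
    Decision(date(2020, 3, 2), "data collection",
             "added night-shift observations",
             "day-shift data suggested handover delays occur at night"),
]

for d in audit_trail:
    print(f"{d.when} | {d.step}: {d.change} (reason: {d.reasoning})")
```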

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [2].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [1, 14, 16, 17].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [14]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [13]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [18]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [18].
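As a hypothetical illustration of how two observers' independent field notes might be consolidated into a single, time-ordered observation protocol, consider the following sketch; the timestamps and entries are invented.

```python
# Hypothetical sketch: merging two observers' independent field notes
# into one time-ordered observation protocol. All entries are invented.
notes_a = [("09:02", "observer A", "patient arrives, triage starts"),
           ("09:15", "observer A", "neurologist paged")]
notes_b = [("09:03", "observer B", "relatives ask nurse for directions"),
           ("09:15", "observer B", "neurologist paged")]

protocol = sorted(notes_a + notes_b)  # chronological merge
for time, observer, entry in protocol:
    print(f"{time} [{observer}] {entry}")
# Events both observers noted (here: 09:15) are reconciled in the
# team discussion described above.
```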

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [19]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [13]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [2, 13]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [19]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [20]. Across interviews, the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or out of concern about the total length of the interview) [20]. Qualitative interviews are usually not conducted in written format as this impedes the interactive component of the method [20]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [14, 16, 20].
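A semi-structured interview guide can be pictured as a small set of broad, open-ended topic blocks with optional probes. The sketch below is a hypothetical example for the EVT case discussed later in this paper, not a guide taken from any study; in practice, blocks can be reordered or skipped during the interview.

```python
# Hypothetical interview guide: open-ended topic blocks with optional
# probes. Topics and questions are invented for illustration.
TOPIC_GUIDE = {
    "arrival at the emergency room": [
        "Can you walk me through what happens when a stroke patient arrives?",
        "Who is involved at this stage?",  # probe
    ],
    "delays": [
        "Can you recall a case in which treatment was delayed?",
        "What do you think caused the delay?",  # probe
    ],
    "suggestions for improvement": [
        "If you could change one thing about the process, what would it be?",
    ],
}

for topic, questions in TOPIC_GUIDE.items():
    print(topic.upper())
    for question in questions:
        print("  -", question)
```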

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim of identifying possible causes for delay and/or other causes of sub-optimal treatment outcome.

As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like.

If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed, or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice.

In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet), do they actively disagree with them, or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or know to be in a potentially “dangerous” power relationship to them.

Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care, or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example to make sure that all participants feel safe to disclose sensitive or potentially problematic information and that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig. 2.

Figure 2. Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project
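One way to picture the document-study/observation comparison described above is as a simple set difference between prescribed and observed process steps. This is an illustrative sketch only; the step names are invented, and in practice the comparison is an interpretive rather than a mechanical exercise.

```python
# Illustrative sketch only: comparing prescribed (SOP) and observed
# process steps as a set difference. Step names are invented.
sop_steps = {"CT within 20 min", "tele-consult neurologist",
             "specialised nurse present", "door-to-groin < 90 min"}
observed_steps = {"CT within 20 min", "door-to-groin < 90 min",
                  "informal phone call to radiologist"}

print("in SOP but not observed:", sop_steps - observed_steps)
print("observed but not in SOP:", observed_steps - sop_steps)
# Each discrepancy becomes a candidate topic for the interviews.
```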

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [22, 23].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [2, 15, 23]. Jansen describes coding as “connecting the raw data with ‘theoretical’ terms” [20]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [15, 20]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [20]. The coding process is performed using qualitative data management software, the most common being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [14].

Figure 3. From data collection to data analysis

Attributions for icons: see Fig. 2 , also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project
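In spirit, coding builds an index from codes to data segments, which is what makes raw data “sortable”. The following minimal sketch mimics, at toy scale, what software such as NVivo, MAXQDA or ATLAS.ti supports; the segments and codes are invented examples.

```python
# Minimal sketch of what coding achieves: an index from codes to data
# segments across sources. Segments and codes are invented examples.
from collections import defaultdict

segments = [
    ("patient interview 3", "The video link to the neurologist failed twice.",
     ["tele-neurology", "technical problems"]),
    ("ER observation, day 2", "Nurse restarts the tele-consult after a drop.",
     ["tele-neurology", "workarounds"]),
    ("SOP v4", "Tele-consultation shall be initiated within 10 minutes.",
     ["tele-neurology", "time targets"]),
]

index = defaultdict(list)
for source, text, codes in segments:
    for code in codes:
        index[code].append((source, text))

# Extract all segments tagged with one code, across data sources:
for source, text in index["tele-neurology"]:
    print(f"{source}: {text}")
```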

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [20]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [20, 23]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [21].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods […] within the same study or research program rather than confining the research to one single method” [24]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [1, 17, 24, 25, 26]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed methods designs are the convergent parallel design, the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig. 4.

Figure 4. Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study would be used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [26]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative study as to the topics on which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.
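As a toy illustration of the convergent parallel design, the sketch below compares invented qualitative impressions of care quality with invented registry outcomes on the modified Rankin Scale (mRS), patient by patient; divergent cases would be flagged for closer examination at the interpretation stage.

```python
# Toy illustration of the convergent parallel design: per-patient
# comparison of (invented) qualitative impressions with (invented)
# registry outcomes on the modified Rankin Scale (mRS).
impressions = {"P01": "good care", "P02": "delays reported", "P03": "good care"}
registry_mrs = {"P01": 1, "P02": 4, "P03": 5}

for patient, impression in impressions.items():
    mrs = registry_mrs[patient]
    agrees = (impression == "good care") == (mrs <= 2)
    label = "consistent" if agrees else "divergent -> examine at interpretation"
    print(f"{patient}: '{impression}' vs mRS {mrs} -> {label}")
```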

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [14, 17, 27]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Checklists

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [23, 28]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [15, 17, 23].

Reflexivity

While methodological transparency and complete reporting is relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [17, 27]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [23]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [19].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [1, 16, 19, 20, 27], and to ensure “information-richness” [15]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [1, 15]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [29, 30].
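The saturation logic can be illustrated with a small sketch: after each batch of interviews, count the codes not seen in earlier batches, and stop once a batch contributes nothing new. The batches and codes below are invented; in a real study, "new information" is a substantive judgement by the research team, not a mere count.

```python
# Sketch of monitoring saturation during iterative sampling. Batches
# and codes are invented for illustration.
batches = [
    {"transport", "costs", "waiting times"},   # interviews 1-5
    {"transport", "interpreter", "costs"},     # interviews 6-10
    {"waiting times", "transport"},            # interviews 11-15
]

seen = set()
for n, batch in enumerate(batches, start=1):
    new = batch - seen
    seen |= new
    print(f"batch {n}: {len(new)} new code(s): {sorted(new) if new else '-'}")
    if not new:
        print("no relevant new information -> saturation reached")
        break
```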

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “purposive sampling”, in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [14, 20]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [2]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).
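A purposive sampling frame can be thought of as a coverage matrix over the pre-defined variants. The following sketch uses the EVT example with invented categories and lists which combinations still need to be sampled.

```python
# Hypothetical purposive sampling frame for the EVT example: the
# pre-defined variants form a coverage matrix, and combinations not
# yet sampled are listed. Categories are invented.
from itertools import product

groups = ["neurologist", "radiologist", "nurse"]
shifts = ["day", "night", "weekend"]
required = set(product(groups, shifts))

sampled = {("neurologist", "day"), ("nurse", "night"),
           ("radiologist", "weekend")}

for group, shift in sorted(required - sampled):
    print(f"still to sample: {group}, {shift} shift")
```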

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [14].

Piloting

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [19]. In doing so, the interviewer learns which wording or types of questions work best, or what the best length of an interview is for patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Co-coding

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [23]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.
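A minimal sketch of the consolidation step: two coders' independent code assignments for the same segments are compared, and disagreements are listed for the team discussion. Segment IDs and codes are invented.

```python
# Minimal sketch of consolidation in co-coding: list disagreements
# between two coders' independent assignments. Data are invented.
coder_a = {"seg1": "delays", "seg2": "teamwork", "seg3": "costs"}
coder_b = {"seg1": "delays", "seg2": "communication", "seg3": "costs"}

for seg in coder_a:
    if coder_a[seg] != coder_b[seg]:
        print(f"{seg}: A={coder_a[seg]!r} vs B={coder_b[seg]!r} -> discuss")
```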

Member checking

Member checking, also called respondent validation, refers to the practice of checking back with study respondents to see if the research is in line with their views [14, 27]. This can happen after data collection or analysis or when first results are available [23]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [17]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [27].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [31, 32, 33]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [34, 35]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [32, 36]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [1, 14, 27, 37, 38, 39]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [13, 27]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [17]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding approaches of two co-coders overlap. However, it is not clear what this measure tells us about the quality of the analysis [23]. Such scores can therefore be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but they are not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate the people who recruited the study participants from those who collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [20]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context information for the interpretation of data, e.g. on whether something might have been meant as a joke [18].
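For readers who encounter such scores, the sketch below computes one common measure, Cohen's kappa, for two coders assigning one code per segment; as argued above, reporting it is optional and says little by itself about the quality of the analysis. The data are invented.

```python
# Sketch of one common interrater score, Cohen's kappa, for two coders
# assigning one code per segment. All data are invented examples.
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n   # raw agreement
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)  # undefined if expected == 1

coder_a = ["delays", "teamwork", "costs", "delays", "costs"]
coder_b = ["delays", "communication", "costs", "delays", "teamwork"]
print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # 0.44 here
```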

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is applied irrespective of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In that case, the same criterion should be applied to quantitative studies without a qualitative component.

Conclusions

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Availability of data and materials

Not applicable.

Abbreviations

EVT: Endovascular treatment

RCT: Randomised Controlled Trial

SOP: Standard Operating Procedure

SRQR: Standards for Reporting Qualitative Research

References

1. Philipsen, H., & Vernooij-Dassen, M. (2007). Kwalitatief onderzoek: nuttig, onmisbaar en uitdagend [Qualitative research: useful, indispensable and challenging]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk [Qualitative research: Practical methods for medical practice] (pp. 5–12). Houten: Bohn Stafleu van Loghum.
2. Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches. London: Sage.
3. Kelly, J., Dwyer, J., Willis, E., & Pekarsky, B. (2014). Travelling to the city for hospital care: Access factors in country aboriginal patient journeys. Australian Journal of Rural Health, 22(3), 109–113.
4. Nilsen, P., Ståhl, C., Roback, K., & Cairney, P. (2013). Never the twain shall meet? A comparison of implementation science and policy implementation research. Implementation Science, 8(1), 1–12.
5. Howick, J., Chalmers, I., Glasziou, P., Greenhalgh, T., Heneghan, C., Liberati, A., Moschetti, I., Phillips, B., & Thornton, H. (2011). The 2011 Oxford CEBM levels of evidence (introductory document). Oxford Center for Evidence Based Medicine. https://www.cebm.net/2011/06/2011-oxford-cebm-levels-evidence-introductory-document/.
6. Eakin, J. M. (2016). Educating critical qualitative health researchers in the land of the randomized controlled trial. Qualitative Inquiry, 22(2), 107–118.
7. May, A., & Mathijssen, J. (2015). Alternatieven voor RCT bij de evaluatie van effectiviteit van interventies!? Eindrapportage [Alternatives to RCTs in the evaluation of the effectiveness of interventions!? Final report].
8. Berwick, D. M. (2008). The science of improvement. Journal of the American Medical Association, 299(10), 1182–1184.
9. Christ, T. W. (2014). Scientific-based research and randomized controlled trials, the “gold” standard? Alternative paradigms and mixed methodologies. Qualitative Inquiry, 20(1), 72–80.
10. Lamont, T., Barber, N., Jd, P., Fulop, N., Garfield-Birkbeck, S., Lilford, R., Mear, L., Raine, R., & Fitzpatrick, R. (2016). New approaches to evaluating complex health and care systems. BMJ, 352, i154.
11. Drabble, S. J., & O’Cathain, A. (2015). Moving from randomized controlled trials to mixed methods intervention evaluation. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 406–425). London: Oxford University Press.
12. Chambers, D. A., Glasgow, R. E., & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science, 8, 117.
13. Hak, T. (2007). Waarnemingsmethoden in kwalitatief onderzoek [Observation methods in qualitative research]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 13–25). Houten: Bohn Stafleu van Loghum.
14. Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence Based Nursing, 6(2), 36–40.
15. Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry, 36, 717–732.
16. Yanow, D. (2000). Conducting interpretive policy analysis (Vol. 47). Thousand Oaks: Sage University Papers Series on Qualitative Research Methods.
17. Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22, 63–75.
18. van der Geest, S. (2006). Participeren in ziekte en zorg: meer over kwalitatief onderzoek [Participating in illness and care: more on qualitative research]. Huisarts en Wetenschap, 49(4), 283–287.
19. Hijmans, E., & Kuyper, M. (2007). Het halfopen interview als onderzoeksmethode [The half-open interview as research method]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 43–51). Houten: Bohn Stafleu van Loghum.
20. Jansen, H. (2007). Systematiek en toepassing van de kwalitatieve survey [Systematics and application of the qualitative survey]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 27–41). Houten: Bohn Stafleu van Loghum.
21. Pv, R., & Peremans, L. (2007). Exploreren met focusgroepgesprekken: de ‘stem’ van de groep onder de loep [Exploring with focus group conversations: the “voice” of the group under the magnifying glass]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 53–64). Houten: Bohn Stafleu van Loghum.
22. Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41(5), 545–547.
23. Boeije, H. (2012). Analyseren in kwalitatief onderzoek: Denken en doen [Analysis in qualitative research: Thinking and doing]. Den Haag: Boom Lemma uitgevers.
24. Hunter, A., & Brewer, J. (2015). Designing multimethod research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 185–205). London: Oxford University Press.
25. Archibald, M. M., Radil, A. I., Zhang, X., & Hanson, W. E. (2015). Current mixed methods practices in qualitative research: A content analysis of leading journals. International Journal of Qualitative Methods, 14(2), 5–33.
26. Creswell, J. W., & Plano Clark, V. L. (2011). Choosing a mixed methods design. In Designing and Conducting Mixed Methods Research. Thousand Oaks: SAGE Publications.
27. Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320(7226), 50–52.
28. O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89(9), 1245–1251.
29. Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality and Quantity, 52(4), 1893–1907.
30. Moser, A., & Korstjens, I. (2018). Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. European Journal of General Practice, 24(1), 9–18.
31. Marlett, N., Shklarov, S., Marshall, D., Santana, M. J., & Wasylak, T. (2015). Building new roles and relationships in research: A model of patient engagement research. Quality of Life Research, 24(5), 1057–1067.
32. Demian, M. N., Lam, N. N., Mac-Way, F., Sapir-Pichhadze, R., & Fernandez, N. (2017). Opportunities for engaging patients in kidney research. Canadian Journal of Kidney Health and Disease, 4, 2054358117703070.
33. Noyes, J., McLaughlin, L., Morgan, K., Roberts, A., Stephens, M., Bourne, J., Houlston, M., Houlston, J., Thomas, S., Rhys, R. G., et al. (2019). Designing a co-productive study to overcome known methodological challenges in organ donation research with bereaved family members. Health Expectations, 22(4), 824–835.
34. Piil, K., Jarden, M., & Pii, K. H. (2019). Research agenda for life-threatening cancer. European Journal of Cancer Care (Engl), 28(1), e12935.
35. Hofmann, D., Ibrahim, F., Rose, D., Scott, D. L., Cope, A., Wykes, T., & Lempp, H. (2015). Expectations of new treatment in rheumatoid arthritis: Developing a patient-generated questionnaire. Health Expectations, 18(5), 995–1008.
36. Jun, M., Manns, B., Laupacis, A., Manns, L., Rehal, B., Crowe, S., & Hemmelgarn, B. R. (2015). Assessing the extent to which current clinical research is consistent with patient priorities: A scoping review using a case study in patients on or nearing dialysis. Canadian Journal of Kidney Health and Disease, 2, 35.
37. Elsie Baker, S., & Edwards, R. (2012). How many qualitative interviews is enough? National Centre for Research Methods Review Paper. National Centre for Research Methods. http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf.
38. Sandelowski, M. (1995). Sample size in qualitative research. Research in Nursing & Health, 18(2), 179–183.
39. Sim, J., Saunders, B., Waterfield, J., & Kingstone, T. (2018). Can sample size in qualitative research be determined a priori? International Journal of Social Research Methodology, 21(5), 619–634.


Acknowledgements

No external funding.

Author information

Authors and Affiliations

Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany

Loraine Busetto, Wolfgang Wick & Christoph Gumbinger

Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Wolfgang Wick


Contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final version.

Corresponding author

Correspondence to Loraine Busetto .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Busetto, L., Wick, W. & Gumbinger, C. How to use and assess qualitative research methods. Neurol. Res. Pract. 2 , 14 (2020). https://doi.org/10.1186/s42466-020-00059-z


Received : 30 January 2020

Accepted : 22 April 2020

Published : 27 May 2020



Keywords

  • Qualitative research
  • Mixed methods
  • Quality assessment



Qualitative research methods: when to use them and how to judge them


K. Hammarberg, M. Kirkman, S. de Lacey, Qualitative research methods: when to use them and how to judge them, Human Reproduction, Volume 31, Issue 3, March 2016, Pages 498–501, https://doi.org/10.1093/humrep/dev334


Introduction

In March 2015, an impressive set of guidelines for best practice on how to incorporate psychosocial care in routine infertility care was published by the ESHRE Psychology and Counselling Guideline Development Group (ESHRE Psychology and Counselling Guideline Development Group, 2015). The authors report that the guidelines are based on a comprehensive review of the literature and we congratulate them on their meticulous compilation of evidence into a clinically useful document. However, when we read the methodology section, we were baffled and disappointed to find that evidence from research using qualitative methods was not included in the formulation of the guidelines. Despite stating that ‘qualitative research has significant value to assess the lived experience of infertility and fertility treatment’, the group excluded this body of evidence because qualitative research is ‘not generally hypothesis-driven and not objective/neutral, as the researcher puts him/herself in the position of the participant to understand how the world is from the person's perspective’.

Qualitative and quantitative research methods are often juxtaposed as representing two different world views. In quantitative circles, qualitative research is commonly viewed with suspicion and considered lightweight because it involves small samples which may not be representative of the broader population, it is seen as not objective, and the results are assessed as biased by the researchers' own experiences or opinions. In qualitative circles, quantitative research can be dismissed as over-simplifying individual experience in the cause of generalisation, failing to acknowledge researcher biases and expectations in research design, and requiring guesswork to understand the human meaning of aggregate data.

As social scientists who investigate psychosocial aspects of human reproduction, we use qualitative and quantitative methods, separately or together, depending on the research question. The crucial part is to know when to use what method.

The peer-review process is a pillar of scientific publishing. One of the important roles of reviewers is to assess the scientific rigour of the studies from which authors draw their conclusions. If rigour is lacking, the paper should not be published. As with research using quantitative methods, research using qualitative methods is home to the good, the bad and the ugly. It is essential that reviewers know the difference. Rejection letters are hard to take but more often than not they are based on legitimate critique. However, from time to time it is obvious that the reviewer has little grasp of what constitutes rigour or quality in qualitative research. The first author (K.H.) recently submitted a paper that reported findings from a qualitative study about fertility-related knowledge and information-seeking behaviour among people of reproductive age. In the rejection letter one of the reviewers (not from Human Reproduction) lamented, ‘Even for a qualitative study, I would expect that some form of confidence interval and paired t-tables analysis, etc. be used to analyse the significance of results'. This comment reveals the reviewer's inappropriate application to qualitative research of criteria relevant only to quantitative research.

In this commentary, we give illustrative examples of questions most appropriately answered using qualitative methods and provide general advice about how to appraise the scientific rigour of qualitative studies. We hope this will help the journal's reviewers and readers appreciate the legitimate place of qualitative research and ensure we do not throw the baby out with the bath water by excluding or rejecting papers simply because they report the results of qualitative studies.

When to use qualitative research

In psychosocial research, ‘quantitative’ research methods are appropriate when ‘factual’ data are required to answer the research question; when general or probability information is sought on opinions, attitudes, views, beliefs or preferences; when variables can be isolated and defined; when variables can be linked to form hypotheses before data collection; and when the question or problem is known, clear and unambiguous. Quantitative methods can reveal, for example, what percentage of the population supports assisted conception, their distribution by age, marital status, residential area and so on, as well as changes from one survey to the next (Kovacs et al., 2012); the number of donors and donor siblings located by parents of donor-conceived children (Freeman et al., 2009); and the relationship between the attitude of donor-conceived people to learning of their donor insemination conception and their family ‘type’ (one or two parents, lesbian or heterosexual parents; Beeson et al., 2011).

In contrast, ‘qualitative’ methods are used to answer questions about experience, meaning and perspective, most often from the standpoint of the participant. These data are usually not amenable to counting or measuring. Qualitative research techniques include ‘small-group discussions’ for investigating beliefs, attitudes and concepts of normative behaviour; ‘semi-structured interviews’, to seek views on a focused topic or, with key informants, for background information or an institutional perspective; ‘in-depth interviews’ to understand a condition, experience, or event from a personal perspective; and ‘analysis of texts and documents’, such as government reports, media articles, websites or diaries, to learn about distributed or private knowledge.

Qualitative methods have been used to reveal, for example, potential problems in implementing a proposed trial of elective single embryo transfer, where small-group discussions enabled staff to explain their own resistance, leading to an amended approach (Porter and Bhattacharya, 2005). Small-group discussions among assisted reproductive technology (ART) counsellors were used to investigate how the welfare principle is interpreted and practised by health professionals who must apply it in ART (de Lacey et al., 2015). When legislative change meant that gamete donors could seek identifying details of people conceived from their gametes, parents needed advice on how best to tell their children. Small-group discussions were convened to ask adolescents (not known to be donor-conceived) to reflect on how they would prefer to be told (Kirkman et al., 2007).

When a population cannot be identified, such as anonymous sperm donors from the 1980s, a qualitative approach with wide publicity can reach people who do not usually volunteer for research and reveal (for example) their attitudes to proposed legislation to remove anonymity with retrospective effect (Hammarberg et al., 2014). When researchers invite people to talk about their reflections on experience, they can sometimes learn more than they set out to discover. In describing their responses to proposed legislative change, participants also talked about people conceived as a result of their donations, demonstrating various constructions and expectations of relationships (Kirkman et al., 2014).

Interviews with parents in lesbian-parented families generated insight into the diverse meanings of the sperm donor in the creation and life of the family (Wyverkens et al., 2014). Oral and written interviews also revealed the embarrassment and ambivalence surrounding sperm donors evident in participants in donor-assisted conception (Kirkman, 2004). The way in which parents conceptualise unused embryos and why they discard rather than donate was explored and understood via in-depth interviews, showing how and why the meaning of those embryos changed with parenthood (de Lacey, 2005). In-depth interviews were also used to establish the intricate understanding by embryo donors and recipients of the meaning of embryo donation and the families built as a result (Goedeke et al., 2015).

It is possible to combine quantitative and qualitative methods, although great care should be taken to ensure that the theory behind each method is compatible and that the methods are being used for appropriate reasons. The two methods can be used sequentially (first a quantitative then a qualitative study or vice versa), where the first approach is used to facilitate the design of the second; they can be used in parallel as different approaches to the same question; or a dominant method may be enriched with a small component of an alternative method (such as qualitative interviews ‘nested’ in a large survey). It is important to note that free text in surveys represents qualitative data but does not constitute qualitative research. Qualitative and quantitative methods may be used together for corroboration (hoping for similar outcomes from both methods), elaboration (using qualitative data to explain or interpret quantitative data, or to demonstrate how the quantitative findings apply in particular cases), complementarity (where the qualitative and quantitative results differ but generate complementary insights) or contradiction (where qualitative and quantitative data lead to different conclusions). Each has its advantages and challenges (Brannen, 2005).

How to judge qualitative research

Qualitative research is gaining increased momentum in the clinical setting and carries different criteria for evaluating its rigour or quality. Quantitative studies generally involve the systematic collection of data about a phenomenon, using standardized measures and statistical analysis. In contrast, qualitative studies involve the systematic collection, organization, description and interpretation of textual, verbal or visual data. The particular approach taken determines to a certain extent the criteria used for judging the quality of the report. However, research using qualitative methods can be evaluated (Dixon-Woods et al., 2006; Young et al., 2014) and there are some generic guidelines for assessing qualitative research (Kitto et al., 2008).

Although the terms ‘reliability’ and ‘validity’ are contentious among qualitative researchers (Lincoln and Guba, 1985), with some preferring ‘verification’, research integrity and robustness are as important in qualitative studies as they are in other forms of research. It is widely accepted that qualitative research should be ethical, important and intelligibly described, and should use appropriate and rigorous methods (Cohen and Crabtree, 2008). In research investigating data that can be counted or measured, replicability is essential. When other kinds of data are gathered in order to answer questions of personal or social meaning, we need to be able to capture real-life experiences, which cannot be identical from one person to the next. Furthermore, meaning is culturally determined and subject to evolutionary change. The way of explaining a phenomenon—such as what it means to use donated gametes—will vary, for example, according to the cultural significance of ‘blood’ or genes, interpretations of marital infidelity and religious constructs of sexual relationships and families. Culture may apply to a country, a community, or other actual or virtual group, and a person may be engaged at various levels of culture. In identifying meaning for members of a particular group, consistency may indeed be found from one research project to another. However, individuals within a cultural group may present different experiences and perceptions or transgress cultural expectations. That does not make them ‘wrong’ or invalidate the research. Rather, it offers insight into diversity and adds a piece to the puzzle to which other researchers also contribute.

In qualitative research the objective stance is obsolete, the researcher is the instrument, and ‘subjects’ become ‘participants’ who may contribute to data interpretation and analysis (Denzin and Lincoln, 1998). Qualitative researchers defend the integrity of their work by different means: trustworthiness, credibility, applicability and consistency are the evaluative criteria (Leininger, 1994).

Trustworthiness

A report of a qualitative study should contain the same robust procedural description as any other study. The purpose of the research, how it was conducted, procedural decisions, and details of data generation and management should be transparent and explicit. A reviewer should be able to follow the progression of events and decisions and understand their logic because there is adequate description, explanation and justification of the methodology and methods (Kitto et al., 2008).

Credibility

Credibility is the criterion for evaluating the truth value or internal validity of qualitative research. A qualitative study is credible when its results, presented with adequate descriptions of context, are recognizable to people who share the experience and to those who care for or treat them. As the instrument in qualitative research, the researcher defends the credibility of the work through practices such as reflexivity (reflection on the influence of the researcher on the research), triangulation (where appropriate, answering the research question in several ways, such as through interviews, observation and documentary analysis) and substantial description of the interpretation process; verbatim quotations from the data are supplied to illustrate and support the interpretations (Sandelowski, 1986). Where excerpts of the data and the interpretations are incongruent, the credibility of the study is in doubt.

Applicability

Applicability, or transferability of the research findings, is the criterion for evaluating external validity. A study is considered to meet the criterion of applicability when its findings can fit into contexts outside the study situation and when clinicians and researchers view the findings as meaningful and applicable in their own experiences.

Larger sample sizes do not produce greater applicability. Depth may be sacrificed to breadth or there may be too much data for adequate analysis. Sample sizes in qualitative research are typically small. The term ‘saturation’ is often used in reference to decisions about sample size in research using qualitative methods. Emerging from grounded theory, where filling theoretical categories is considered essential to the robustness of the developing theory, data saturation has been expanded to describe a situation where data tend towards repetition or where data cease to offer new directions and raise new questions (Charmaz, 2005). However, the legitimacy of saturation as a generic marker of sampling adequacy has been questioned (O'Reilly and Parker, 2013). Caution must be exercised to ensure that a commitment to saturation does not assume an ‘essence’ of an experience in which limited diversity is anticipated; each account is likely to be subtly different and each ‘sample’ will contribute to knowledge without telling the whole story. Increasingly, it is expected that researchers will report the kind of saturation they have applied and their criteria for recognising its achievement; an assessor will need to judge whether the choice is appropriate and consistent with the theoretical context within which the research has been conducted.
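
To make a saturation judgement easier to audit, some teams keep a running tally of how many genuinely new codes each successive interview contributes. The following minimal Python sketch illustrates the idea with invented interviews and code labels; a flat tally like this is an aid to, not a substitute for, the theoretically grounded judgement described above.

```python
# Minimal sketch (hypothetical data): tally new codes per successive interview
# to help document a saturation judgement. All code labels are invented.

coded_interviews = [
    {"donor_anonymity", "family_meaning", "regret"},      # interview 1
    {"family_meaning", "disclosure", "donor_anonymity"},  # interview 2
    {"disclosure", "regret", "legal_concerns"},           # interview 3
    {"family_meaning", "disclosure"},                     # interview 4
]

seen = set()
for i, codes in enumerate(coded_interviews, start=1):
    new_codes = codes - seen  # codes not met in any earlier interview
    seen |= codes
    print(f"Interview {i}: {len(new_codes)} new code(s): {sorted(new_codes)}")

# A run of interviews adding no new codes may suggest saturation of the coding
# frame, but it cannot show that further, different accounts would add nothing.
```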

Sampling strategies are usually purposive, convenience, theoretical or snowball. Maximum variation sampling may be used to seek representation of diverse perspectives on the topic. Homogeneous sampling may be used to recruit a group of participants with specified criteria. Bias, as understood in quantitative research, is not the relevant concern; participants are recruited and selected specifically because they can illuminate the phenomenon being studied. Rather than being predetermined by statistical power analysis, qualitative study samples depend on the nature of the data, the availability of participants and where those data take the investigator. Multiple data collections may also take place to obtain maximum insight into sensitive topics. For instance, the question of how decisions are made for embryo disposition may involve sampling within the patient group as well as from scientists, clinicians, counsellors and clinic administrators.

Consistency

Consistency, or dependability of the results, is the criterion for assessing reliability. This does not mean that the same result would necessarily be found in other contexts, but that, given the same data, other researchers would find similar patterns. Researchers often seek maximum variation in the experience of a phenomenon, not only to illuminate it but also to discourage fulfilment of limited researcher expectations (for example, negative cases or instances that do not fit the emerging interpretation or theory should be actively sought and explored). Qualitative researchers sometimes describe the processes by which verification of the theoretical findings by another team member takes place (Morse and Richards, 2002).

Research that uses qualitative methods is not, as it sometimes seems to be represented, the easy option, nor is it a collation of anecdotes. It usually involves a complex theoretical or philosophical framework. Rigorous analysis is conducted without the aid of straightforward mathematical rules. Researchers must demonstrate the validity of their analysis and conclusions, resulting in longer papers and occasional frustration with the word limits of appropriate journals. Nevertheless, we need the different kinds of evidence that are generated by qualitative methods. The experience of health, illness and medical intervention cannot always be counted and measured; researchers need to understand what these experiences mean to individuals and groups. Knowledge gained from qualitative research methods can inform clinical practice, indicate how to support people living with chronic conditions and contribute to community education and awareness about people who are (for example) experiencing infertility or using assisted conception.

Each author drafted a section of the manuscript and the manuscript as a whole was reviewed and revised by all authors in consultation.

No external funding was either sought or obtained for this study.

The authors have no conflicts of interest to declare.

Beeson D, Jennings P, Kramer W. Offspring searching for their sperm donors: how family types shape the process. Hum Reprod 2011;26:2415–2424.

Brannen J. Mixing methods: the entry of qualitative and quantitative approaches into the research process. Int J Soc Res Methodol 2005;8:173–184.

Charmaz K. Grounded theory in the 21st century: applications for advancing social justice studies. In: Denzin NK, Lincoln YS (eds). The Sage Handbook of Qualitative Research. California: Sage Publications Inc., 2005.

Cohen D, Crabtree B. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med 2008;6:331–339.

de Lacey S. Parent identity and ‘virtual’ children: why patients discard rather than donate unused embryos. Hum Reprod 2005;20:1661–1669.

de Lacey SL, Peterson K, McMillan J. Child interests in assisted reproductive technology: how is the welfare principle applied in practice? Hum Reprod 2015;30:616–624.

Denzin N, Lincoln Y. Entering the field of qualitative research. In: Denzin NK, Lincoln YS (eds). The Landscape of Qualitative Research: Theories and Issues. Thousand Oaks: Sage, 1998, 1–34.

Dixon-Woods M, Bonas S, Booth A, Jones DR, Miller T, Shaw RL, Smith JA, Young B. How can systematic reviews incorporate qualitative research? A critical perspective. Qual Res 2006;6:27–44.

ESHRE Psychology and Counselling Guideline Development Group. Routine Psychosocial Care in Infertility and Medically Assisted Reproduction: A Guide for Fertility Staff, 2015. http://www.eshre.eu/Guidelines-and-Legal/Guidelines/Psychosocial-care-guideline.aspx.

Freeman T, Jadva V, Kramer W, Golombok S. Gamete donation: parents' experiences of searching for their child's donor siblings or donor. Hum Reprod 2009;24:505–516.

Goedeke S, Daniels K, Thorpe M, Du Preez E. Building extended families through embryo donation: the experiences of donors and recipients. Hum Reprod 2015;30:2340–2350.

Hammarberg K, Johnson L, Bourne K, Fisher J, Kirkman M. Proposed legislative change mandating retrospective release of identifying information: consultation with donors and Government response. Hum Reprod 2014;29:286–292.

Kirkman M. Saviours and satyrs: ambivalence in narrative meanings of sperm provision. Cult Health Sex 2004;6:319–336.

Kirkman M, Rosenthal D, Johnson L. Families working it out: adolescents' views on communicating about donor-assisted conception. Hum Reprod 2007;22:2318–2324.

Kirkman M, Bourne K, Fisher J, Johnson L, Hammarberg K. Gamete donors' expectations and experiences of contact with their donor offspring. Hum Reprod 2014;29:731–738.

Kitto S, Chesters J, Grbich C. Quality in qualitative research. Med J Aust 2008;188:243–246.

Kovacs GT, Morgan G, Levine M, McCrann J. The Australian community overwhelmingly approves IVF to treat subfertility, with increasing support over three decades. Aust N Z J Obstetr Gynaecol 2012;52:302–304.

Leininger M. Evaluation criteria and critique of qualitative research studies. In: Morse J (ed). Critical Issues in Qualitative Research Methods. Thousand Oaks: Sage, 1994, 95–115.

Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage Publications, 1985.

Morse J, Richards L. Readme First for a Users Guide to Qualitative Methods. Thousand Oaks: Sage, 2002.

O'Reilly M, Parker N. ‘Unsatisfactory saturation’: a critical exploration of the notion of saturated sample sizes in qualitative research. Qual Res 2013;13:190–197.

Porter M, Bhattacharya S. Investigation of staff and patients' opinions of a proposed trial of elective single embryo transfer. Hum Reprod 2005;20:2523–2530.

Sandelowski M. The problem of rigor in qualitative research. Adv Nurs Sci 1986;8:27–37.

Wyverkens E, Provoost V, Ravelingien A, De Sutter P, Pennings G, Buysse A. Beyond sperm cells: a qualitative study on constructed meanings of the sperm donor in lesbian families. Hum Reprod 2014;29:1248–1254.

Young K, Fisher J, Kirkman M. Women's experiences of endometriosis: a systematic review of qualitative research. J Fam Plann Reprod Health Care 2014;41:225–234.


  • Open access
  • Published: 29 June 2021

Pragmatic approaches to analyzing qualitative data for implementation science: an introduction

  • Shoba Ramanadhan   ORCID: orcid.org/0000-0003-0650-9433 1 ,
  • Anna C. Revette 2 ,
  • Rebekka M. Lee 1 &
  • Emma L. Aveling 3  

Implementation Science Communications volume  2 , Article number:  70 ( 2021 ) Cite this article

36k Accesses

66 Citations

46 Altmetric

Metrics details

Qualitative methods are critical for implementation science as they generate opportunities to examine complexity and include a diversity of perspectives. However, it can be a challenge to identify the approach that will provide the best fit for achieving a given set of practice-driven research needs. After all, implementation scientists must find a balance between speed and rigor, reliance on existing frameworks and new discoveries, and inclusion of insider and outsider perspectives. This paper offers guidance on taking a pragmatic approach to analysis, which entails strategically combining and borrowing from established qualitative approaches to meet a study’s needs, typically with guidance from an existing framework and with explicit research and practice change goals.

Section 1 offers a series of practical questions to guide the development of a pragmatic analytic approach. These include examining the balance of inductive and deductive procedures, the extent to which insider or outsider perspectives are privileged, study requirements related to data and products that support scientific advancement and practice change, and strategic resource allocation. This is followed by an introduction to three approaches commonly considered for implementation science projects: grounded theory, framework analysis, and interpretive phenomenological analysis, highlighting core analytic procedures that may be borrowed for a pragmatic approach. Section 2 addresses opportunities to ensure and communicate rigor of pragmatic analytic approaches. Section 3 provides an illustrative example from the team’s work, highlighting how a pragmatic analytic approach was designed and executed and the diversity of research and practice products generated.

As qualitative inquiry gains prominence in implementation science, it is critical to take advantage of qualitative methods’ diversity and flexibility. This paper furthers the conversation regarding how to strategically mix and match components of established qualitative approaches to meet the analytic needs of implementation science projects, thereby supporting high-impact research and improved opportunities to create practice change.


Contributions to the literature

Qualitative methods are increasingly being used in implementation science, yet many researchers are new to these methods or unaware of the flexibility afforded by applied qualitative research.

Implementation scientists can benefit from guidance on creating a pragmatic approach to analysis, which involves strategically combining and borrowing from established approaches to meet a given study’s needs, typically with guidance from an implementation science framework and explicit research and practice change goals.

Through practical questions and examples, we provide guidance for using pragmatic analytic approaches to meet the needs and constraints of implementation science projects while maintaining and communicating the work’s rigor.

Implementation science (IS) is truly pragmatic at its core, answering questions about how existing evidence can be best translated into practice to accelerate impact on population health and health equity. Qualitative methods are critical to support this endeavor as they support the examination of the dynamic context and systems into which evidence-based interventions (EBIs) are integrated — addressing the “hows and whys” of implementation [ 1 ]. Numerous IS frameworks highlight the complexity of the systems in which implementation efforts occur and the uncertainty regarding how various determinants interact to produce multi-level outcomes [ 2 ]. With that lens, it is unsurprising that diverse qualitative methodologies are receiving increasing attention in IS as they allow for an in-depth understanding of complex processes and interactions [ 1 , 3 , 4 ]. Given the wide variety of possible analytic approaches and techniques, an important question is which analytic approach best fits a given set of practice-driven research needs. Thoughtful design is needed to align research questions and objectives, the nature of the subject matter, the overall approach, the methods (specific tools and techniques used to achieve research goals, including data collection procedures), and the analytic strategies (including procedures used for exploring and interpreting data) [ 5 , 6 ]. Achieving this kind of alignment is often described as “fit,” “methodological integrity,” or “internal coherence” [ 3 , 7 , 8 ]. Tailoring research designs to the unique constellation of these considerations in a given study may also require creative adaptation or innovation of analytic procedures [ 7 ]. Yet, for IS researchers newer to qualitative approaches, a lack of understanding of the range of relevant options may limit their ability to effectively connect qualitative approaches and research goals.

For IS studies, several factors further complicate the selection of analytic approaches. First, there is a tension between the speed with which IS must move to be relevant and the need to conduct rigorous research. Second, though qualitative research is often associated with attempts to generate new theories, qualitative IS studies’ goals may also include elaborating conceptual definitions, creating classifications or typologies, and examining mechanisms and associations [ 9 ]. Given the wealth of existing IS frameworks and models, covering determinants, processes, and outcomes [ 10 ], IS studies often focus on extending or applying existing frameworks. Third, as an applied field, IS work usually entails integrating different kinds of “insider” and “outsider” expertise to support implementation or practice change [ 11 ]. Fourth, diverse traditions have contributed to the new field of IS, including agriculture, operations research, public health, medicine, anthropology, sociology, and more [ 12 ]. The diversity of disciplines among IS researchers can bring a wealth of complementary perspectives but may also pose challenges in communicating about research processes.

Pragmatic approaches to qualitative analysis are likely valuable for IS researchers yet have not received enough attention in the IS literature to support researchers in using them confidently. By pragmatic approaches, we mean strategic combining and borrowing from established qualitative approaches to meet the needs of a given IS study, often with guidance from an IS framework and with clear research and practice change goals. Pragmatic approaches are not new, but they receive less attention in qualitative research overall and are not always clearly explicated in the literature [ 9 ]. Part of the challenge in using pragmatic approaches is the lack of guidance on how to mix and match components of established approaches in a coherent, credible manner.

Our motivation in offering this guidance reflects our experiences as researchers, collaborators, and teachers connecting qualitative methods and IS research questions. The author team includes two behavioral scientists who conduct stakeholder-engaged implementation science and regularly utilize qualitative approaches (SR and RL). The team also includes a sociologist and a social psychologist who were trained in qualitative methods and have rich expertise with health services and implementation research (AR and EA). Through conducting qualitative IS studies and supporting students and colleagues new to qualitative approaches, we noticed a regularly occurring set of concerns and queries. Many questions seem to stem from a sense that there is a singular, “right” way to conduct qualitative projects. Such concerns are often amplified by fear that deviation from rigid adherence to established sets of procedures may jeopardize the (perceived or actual) rigor of the work. While the appeal of recipe-like means of ensuring rigor is understandable, fixation on compliance with “established” approaches overlooks the fact that versions of recognizable, named approaches (e.g., grounded theory) often use different procedures [ 7 ]. As Braun and Clarke suggest, this “hallowed quest” for a singular, ideal approach leads many researchers astray and risks limiting appropriate and necessary adaptations and innovations in methods [ 13 ]. IS researchers seeking to broaden the range of approaches they can apply should take comfort that there is “no single right way to do qualitative data analysis […]. Much depends on the purpose of the research, and it is important that the proposed method of analysis is carefully considered in planning the research, and is integrated from the start with other parts of the research, rather than being an afterthought.” [ 14 ]. At the same time, given the wealth of traditions represented in the IS community, it can be difficult for researchers to effectively ensure and convey the quality and rigor of their work. This paper aims to serve as a resource for IS researchers seeking innovative and accessible approaches to qualitative research. We present suggestions for developing and communicating approaches to analysis that are the right “fit” for complex IS research projects and demonstrate rigor and quality.

Accordingly, section 1 offers guidance on identifying an analytic approach that aligns with study goals and allows for practical constraints. We describe three approaches commonly considered for IS projects: grounded theory, framework analysis, and interpretive phenomenological analysis, highlighting core elements that researchers can borrow to create a tailored, pragmatic approach. Section 2 addresses opportunities to ensure and communicate the rigor of pragmatic analytic approaches. Section 3 provides an illustrative example from the team’s work, describing the design and execution of a pragmatic analytic approach and the diversity of research and practice products generated.

Section 1: ensuring fit between research goals, practical constraints, and analytic approaches

Decision-making about all aspects of research design, including analysis, entails judgment about “fit.” Researchers need not identify a single analytic approach and attempt to force its strict application, regardless of fit. Indeed, the flexible, study-specific combination of design elements is a hallmark of applied qualitative research practice [ 9 ]. Relevant considerations for fit include the inquiry’s purpose and nature of the subject matter; the diversity of intended audiences for findings; the criteria used to judge the quality and practical value of the results; and the research context (including characteristics of the setting, participants, and investigators). Other important considerations relate to constraints of available resources (e.g., funding, time, and staff) and access to relevant participants [ 3 ]. We contend that in the applied IS setting, finding an appropriate fit often includes borrowing procedures from different approaches to create a pragmatic, hybrid approach. A pragmatic approach also addresses the IS-specific tensions outlined above, i.e., a need to conduct research that is time-bounded, engages with theories/frameworks/models, supports application in practice, and speaks to a diversity of colleagues. To promote goals of achieving fit and internal coherence in light of IS-specific requirements, we offer the considerations above and additional guiding questions for selecting analytic procedures to create a pragmatic approach, as summarized in Fig. 1 .

Fig. 1. Developing a pragmatic qualitative data analysis approach for IS: key considerations for selection of analytic procedures

Key questions include the following:

What is the appropriate balance of inductive and deductive analytic procedures given the research goals?

A deductive process emphasizes themes and explanations derived from previously established concepts, pre-existing theories, or the relevant literature [ 9 ]. For example, an analysis that leans heavily on a deductive process might use the core components of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework [ 15 ] to inform the coding structure and analysis. This process would support efforts to bound the investigation’s scope or expand an existing framework or model [ 16 ]. On the other hand, rather than trying to fit data with pre-existing concepts or theory, an inductive process generates interpretation and understanding that is primarily grounded in and driven by the data [ 9 ].

A balance of deductive and inductive processes might use an IS framework as a starting point for the deductive portion and then emphasize inductive processes to garner additional insight into topics not anticipated by the team or framework. For example, a selected IS framework may not attend sufficiently to the ways in which implementation context drives inequities [ 17 ]; if the dataset includes valuable information on this topic, including inductive processes would allow a fuller exploration of such patterns.
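
As a concrete illustration of such a balance, the minimal Python sketch below pairs a deductive portion of a codebook, here seeded with the four EPIS phases named above, with an inductive portion that grows as unanticipated patterns emerge. The code names and definitions are hypothetical rather than drawn from any cited study.

```python
# Minimal sketch (hypothetical codebook): deductive codes fixed in advance
# from the EPIS framework, inductive codes added as the data demand.

codebook = {
    "deductive": {  # seeded from EPIS before coding begins
        "exploration": "Awareness of the problem and of candidate EBIs",
        "preparation": "Planning and adaptation before delivery",
        "implementation": "Active use of the EBI in the setting",
        "sustainment": "Continued delivery after initial support ends",
    },
    "inductive": {},  # starts empty; filled as unanticipated themes emerge
}

def register_inductive_code(name, definition):
    """Record an emergent code so earlier transcripts can be recoded with it."""
    codebook["inductive"][name] = definition

# Example of an emergent topic the chosen framework did not anticipate
register_inductive_code(
    "context_driven_inequity",
    "Ways the implementation context produces inequitable outcomes",
)

print(sorted(codebook["deductive"]), sorted(codebook["inductive"]))
```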

To what extent will the analysis emphasize the perspectives of participants vs. researchers?

An important decision relates to where the research team wishes to ground the analysis on the continuum between insider (emic) and outsider (etic) perspectives. The appropriate balance of insider/outsider orientation will reflect the overall research design and questions. Specific decisions about how to execute the desired balance through the analysis include, for example, the types of codes used and the value placed on participant reflections. As described below in section 2, value is often placed on incorporating participants’ feedback on the developing analysis, sometimes called “member checks” or “member reflections” [ 8 ].

An insider (emic) orientation represents findings in the ways that participants experience them, and insider knowledge is valued and privileged [ 9 ]. As an example, MacFarlane and colleagues used Normalization Process Theory and participatory approaches to identify appropriate implementation strategies to support the integration of evidence-based cross-cultural communication in European primary care settings. The participatory nature of the project offered the opportunity to gain “insider” insight rather than imposing and prioritizing the academic researchers’ “outsider” perspective. The insider (emic) orientation was operationalized in the analytic approach by using stakeholder co-analysis, which engages a wider set of stakeholders in the iterative processes of thematically analyzing the data [ 18 ]. By contrast, an outsider (etic) orientation represents the setting and participants in terms that the researcher or external audiences bring to the study and emphasizes the outsider’s perspective [ 9 ]. For instance, Van De Griend and colleagues conducted an analysis of influences on scaling-up group prenatal care. They used outsider (etic) codes that drew on researchers’ concepts and the literature to complement the insider (emic) codes that reflected participants’ concepts and views [ 19 ]. Balancing insider and outsider orientations is useful for pragmatic qualitative IS studies: it increases the potential for the study to highlight practice- and community-based expertise, build the literature, and ultimately support the integration of evidence into practice.

How can the analytic plan be designed to yield the outputs and products needed to support the integration of evidence into research and practice?

The research team can maximize efficiency and impact by intentionally connecting the analytic plan and the kind of products needed to meet scientific and practice goals (e.g., journal articles versus policy briefs). The ultimate use of the research outputs can also impact decisions around the breadth versus depth of the analysis. For example, in a recent implementation evaluation for community-clinical partnerships delivering EBIs in underserved communities, members of this author team (SR and RL) analyzed data to explore how partnership networks impacted implementation outcomes. At the same time, given the broader goal of supporting the establishment of health policies to support partnered EBI delivery, the team was also charged (by the state Department of Public Health) with capturing stories that would resonate with legislators regarding the need for broad, sustained investments [ 20 ]. We created a unique code to identify these stories during analysis and easily incorporate them into products for health department leaders. Given the practice-focused orientation, qualitative IS studies often support products for practitioners, e.g., “playbooks” to guide the process of implementing an intervention or novel care process [ 1 ].

How can analysis resources be used strategically in time-sensitive projects or where there is limited staff or resource availability?

IS research is often conducted by teams, and strategic analytic decisions can promote rigor while capitalizing on the potential for teamwork to speed up analysis. Deterding and Waters’ strategy of flexible coding, for example, offers such benefits [ 21 ]. Through an initial, framework-driven analytic step, large chunks of text can be quickly indexed deductively into predefined categories, such as the five Consolidated Framework for Implementation Research domains of inner setting, outer setting, characteristics of individuals, intervention attributes, and processes [ 22 ]. This is a more straightforward coding task appropriate for research assistants who have been trained in qualitative research and understand the IS framework. Then, during the second analytic coding step, more in-depth coding by research team members with more experience can ensure a deeper exploration of existing and new themes. This two-step process can also enable team members to lead different parts of an IS project with different goals, purposes, or audiences. Other innovations in team-based analyses are becoming increasingly common in IS, such as rapid ethnographic approaches [ 23 ].
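
The minimal Python sketch below illustrates the two-step logic of flexible coding; the excerpts, domain assignments, and analytic codes are all hypothetical, and in practice this work happens in qualitative data analysis software rather than in ad hoc scripts.

```python
# Minimal sketch (hypothetical data) of a two-step, team-based coding workflow.

CFIR_DOMAINS = {
    "inner setting", "outer setting", "characteristics of individuals",
    "intervention attributes", "processes",
}

excerpts = {
    "int01_p3": "Leadership never mentioned the program to our unit...",
    "int02_p7": "The new state mandate pushed us to adopt it quickly...",
}

# Step 1: broad deductive indexing into framework domains -- a task suited to
# trained research assistants (domain assignments here are illustrative).
index = {"int01_p3": "inner setting", "int02_p7": "outer setting"}
assert set(index.values()) <= CFIR_DOMAINS

# Step 2: finer analytic coding within a domain of interest, done by more
# experienced team members.
analytic_codes = {"int01_p3": ["leadership engagement", "communication climate"]}

inner_setting_ids = [eid for eid, dom in index.items() if dom == "inner setting"]
print(inner_setting_ids, [analytic_codes.get(e, []) for e in inner_setting_ids])
```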

Building blocks for pragmatic analysis: examples from pattern-based analytic approaches

In what follows, we offer illustrative examples of established analytic approaches, highlighting their utility for IS and the procedures that a pragmatic approach might usefully borrow and combine. These examples are not exhaustive; instead, they represent selected, pattern-based analytic approaches commonly used in IS. We aim to offer helpful anchor points with the breadth and flexibility to apply to a wide range of IS projects [ 24 ] while also reflecting and speaking to a diversity of home disciplines, including sociology, applied policy, and psychology.

Grounded theory

Grounded theory is one of the most recognizable and influential approaches to qualitative analysis, although many variations have emerged since its introduction. Sociologists developed the approach, and the history and underlying philosophy are richly detailed elsewhere [ 25 , 26 ]. The central goal of this approach is to generate a theoretical explanation grounded in close inspection of the data and without a preconceived starting point. In many instances, the emphasis of grounded theory on a purely inductive orientation may be at odds with the focus in IS on the use of existing theories and frameworks, as highlighted by the QUALRIS group [ 4 ]. Additionally, a “full” grounded theory study, aligned with all its methodological assumptions and prescriptions (e.g., for sampling), is very demanding and time-consuming and may not be appropriate when timely turnaround in the service of research or practice change is required. For these reasons, a full grounded theory approach is rarely seen in the IS literature. Instead, IS researchers who use this approach are likely to use a modified version, sometimes described as “grounded theory lite” [ 6 ].

Core features and procedures characteristic of grounded theory that can be incorporated into a pragmatic approach include inductive coding techniques [ 27 ]. Open, inductive coding allows the researcher to “open up the inquiry” by examining the data to see what concepts best fit the data, without a preconceived explanation or framework [ 28 , 29 , 30 ]. Concepts and categories derived from open coding prompt the researcher to consider aspects of the research topic that were overlooked or unanticipated [ 31 ]. The intermediate stages of coding in grounded theory, referred to as axial or focused coding, build on the open coding and generate a more refined set of key categories and identify relationships between these categories [ 32 ]. Another useful procedure from grounded theory is the constant comparison method, in which data are collected, categorized, and compared to previously collected data. This continuing, iterative process prompts continuous engagement with the analysis process and reshapes and redefines ideas, which is useful for most qualitative studies [ 25 , 29 , 33 ]. Grounded theory also allows for community expertise and broader outsider perspectives to complement one another for a more comprehensive understanding of practices [ 34 ].
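
To show how these procedures relate to one another, the minimal Python sketch below groups invented open codes into focused categories; the grouping itself is an interpretive act, so the data structures here only record its outcome.

```python
# Minimal sketch (hypothetical codes): open codes grouped into focused (axial)
# categories; constant comparison then tests new data against the categories.

open_codes = [
    "asks colleague before trying EBI",
    "distrusts outside trainers",
    "adapts worksheet for local language",
    "waits for supervisor approval",
]

# Focused coding: relate open codes to a smaller set of key categories
focused_categories = {
    "peer_norms": ["asks colleague before trying EBI", "distrusts outside trainers"],
    "local_adaptation": ["adapts worksheet for local language"],
    "hierarchy": ["waits for supervisor approval"],
}

# Constant comparison: each newly coded excerpt is compared with data already
# assigned to each category, and categories are revised when they no longer fit.
assigned = sorted(c for codes in focused_categories.values() for c in codes)
assert assigned == sorted(open_codes)  # every open code is accounted for
```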

An illustration of the utility of grounded theory procedures comes from a study that explored how implementing organizations can influence local context to support the scale-up of mental health interventions in middle-income countries [ 35 ]. Using a multiple case study design, the study team used an analytic approach based on grounded theory to analyze data from 159 semi-structured interviews across five case sites. They utilized line-by-line open coding, constant comparison, and exploration of connections between themes in the process of developing an overarching theoretical framework. To increase rigor, they employed triangulation by data source and type and member reflections. Their team-based plan included multiple coders who negotiated conflicts and refined the thematic framework jointly. The output of the analysis was a model of processes by which entrepreneurial organizations could marshal and create resources to support the delivery of mental health interventions in limited-resource settings. By taking a divergent perspective (grounded in social entrepreneurship, in this case), the study output provided a basis for further inquiry into the design and scale-up of mental health interventions in middle-income countries.

Framework analysis

Framework analysis comes from the policy sphere and tends to have a practical orientation; this applied nature typically includes a more structured and deductive approach. The history, philosophical assumptions, and core processes are richly described by Ritchie and Spencer [ 36 ]. Framework analysis entails several features common to many qualitative analytic approaches, including defining concepts, creating typologies, and identifying patterns and relationships, but does so in a more predefined and structured way [ 37 , 38 ]. For example, the research team can create codes based on a framework selected in advance and can also include open-ended inquiry to capture additional insights. This analytic approach is well-suited to multi-disciplinary teams whose members have varying levels of experience with qualitative research [ 37 ]. It may require fewer staff resources and less time than some other approaches.

The framework analysis process includes five key steps. Step 1 is familiarization: Team members immerse themselves in the data, e.g., reading, taking notes, and listening to audio. Step 2 is identifying a coding framework: The research team develops a coding scheme, typically using an iterative process primarily driven by deductive coding (e.g., based on the IS framework). Step 3 is indexing: The team applies the coding structure to the entire data set. Step 4 is charting: The team rearranges the coded data and compares patterns between and within cases. Step 5 is mapping and interpretation: The team looks at the range and nature of relationships across and between codes [ 36 , 39 , 40 ]. The team can use tables and diagrams to systematically synthesize and display the data based on predetermined concepts, frameworks, or areas of interest. While more structured than other approaches, framework analysis still presents a flexible design that combines well with other analytic approaches to achieve study objectives [ 37 ]. The case example given in section 3 offers a detailed application of a modified framework analytic approach.
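
The charting step in particular lends itself to a simple illustration: a matrix with one row per case and one column per code, each cell holding a summary of the relevant coded data. The Python sketch below, using hypothetical records and the pandas library, shows one way such a chart might be assembled.

```python
# Minimal sketch (hypothetical data): building a framework-analysis chart as a
# case-by-code matrix of summaries.
import pandas as pd

records = [
    {"case": "site_A", "code": "training",   "summary": "staff felt unprepared"},
    {"case": "site_A", "code": "leadership", "summary": "a champion drove uptake"},
    {"case": "site_B", "code": "training",   "summary": "coaching was valued"},
    {"case": "site_B", "code": "leadership", "summary": "no visible sponsor"},
]

df = pd.DataFrame(records)

# One row per case, one column per code; cells concatenate coded summaries.
chart = df.groupby(["case", "code"])["summary"].agg("; ".join).unstack()
print(chart)
```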

Interpretive phenomenological analysis (IPA)

Broadly, the purpose of a phenomenological inquiry is to understand the experiences and perceptions of individuals related to an occurrence of interest [ 41 , 42 ]. For example, a phenomenological inquiry might focus on implementers’ experiences with remote training to support implementing a new EBI, aiming to explore their views, how those changed over time, and why implementers reacted the way they did. Drawing on this tradition, IPA focuses specifically on particular individuals (or cases), understanding both the experience of individuals and the sense they are making of those experiences. With roots in psychology, this approach prioritizes the perspective of the participant, who is understood to be part of a broader system of interest; additional details about the philosophical underpinnings are available elsewhere [ 41 ]. Research questions are open and broad, taking an inductive, exploratory perspective. Samples are typically small and somewhat homogeneous as the emphasis is placed on an in-depth exploration of a small set of cases to identify patterns of interest [ 43 ]. Despite the smaller sample size, the deep, detailed analysis requires thoughtful and time-intensive engagement with the data. The resulting outputs can be useful to develop theories that attend to a particular EBI or IS-related process or to refine existing frameworks and models [ 44 ].

A useful example comes from a study that sought to understand resistance to using evidence-based guidelines from the perspective of physicians focused on providing clinical care [ 45 ]. The analysis drew on data collected from interviews of 11 physicians selected for their expertise and diversity across a set of sociodemographic characteristics. In the first phase of the analysis, the team analyzed the full-length interviews and identified key themes and the relationships between them. Particular attention was paid to implicit and explicit meanings, repeated ideas or phrases, and metaphor choices. Two authors conducted the analyses separately and then compared them to reach a consensus. In the second phase of the analysis, the team considered the group of 11 interviews as a set. Using an inductive perspective, the team identified superordinate (or high-level) themes that addressed the full dataset. The final phase of the analysis was to identify a single superordinate theme that would serve as the core description of clinical practice. The team engaged other colleagues from diverse backgrounds to support reflection and refinement of the analysis. The analysis yielded a theoretical model that focused on a core concept (clinical practice as engagement), broken out into five constituent parts addressing how clinicians experience their practice, separate from following external guidelines.

Section 2: ensuring and communicating rigor of a pragmatic analysis

Building on the discussion of pragmatic combination of approaches for a given study, we turn now to the question of ensuring and communicating rigor so that consumers of the scientific products will feel confident assessing, interpreting, and engaging with the findings [ 46 ]. This is of particular importance for IS given that the field tends to emphasize quantitative methods and there may be perceptions that qualitative research (and particularly research that must be completed more quickly) is less rigorous. To address those field-specific concerns and ensure pragmatic approaches are understood and valued, IS researchers must ensure and communicate the rigor of their approach. Given journal constraints, authors may consider using supplementary files to offer rich details to describe the study context and details of coding and analysis procedures (see for example, Aveling et al. [ 47 ]). We build on the work of Mays and Pope [ 38 ], Tracy [ 8 ], and others [ 48 , 49 , 50 , 51 , 52 ] to offer a shortlist of considerations for IS researchers to ensure pragmatic analysis is conducted with rigor and its quality and credibility are communicated (Table 1 ). We also recommend these articles as valuable resources for further reading.

Reporting checklists can help researchers ensure the details of the pragmatic analytic approach are communicated effectively, and inclusion of such a checklist is often required by journals for manuscript submission. Popular choices include the Standards for Reporting Qualitative Research (SRQR) and the Consolidated Criteria for Reporting Qualitative Research (COREQ) checklists. These were developed based on reviews of other checklists and are intended to capture a breadth of information to increase transparency, rather than being driven by a philosophical underpinning regarding how to design rigorous qualitative research [ 53 , 54 ]. For that reason, researchers should use these checklists with a critical lens, as they do not alone demonstrate rigor. Instead, they can be thought of as a flexible guide and support, without focusing solely on technical components at the expense of the broader qualitative expertise that drives the research effort [ 55 ].

Section 3: case example of a modified framework analysis approach

To illustrate the ideas presented above, we offer a recent example of work conducted by two authors (AR and SR) and colleagues [ 56 ]. The broad motivation for the study was to increase the use of EBIs in community-based organizations (CBOs) and faith-based organizations (FBOs) working with underserved communities. Our past work and the literature highlighted challenges in matching practitioner capacity (i.e., knowledge, motivation, skills, and resources) with the skillset required to use EBIs successfully [ 57 , 58 ]. The study utilized a participatory implementation science perspective, which offered a unique opportunity to integrate insider and outsider perspectives and increase the likelihood that solutions developed would reflect the realities of practice. The work was conducted in partnership with a Community Advisory Board and attempted to balance research and action [ 59 , 60 ].

The qualitative portion of the project had two primary goals. The research goal was to identify improvements to the design and delivery of capacity-building interventions for CBOs and FBOs working with underserved populations. The practice-related goal was to identify local training needs and refine an existing EBI capacity-building curriculum. We drew on the EPIS Framework [ 15 ] to support our exploration of multi-level factors that drive EBI implementation in social service settings. We conducted four focus group discussions with intended capacity-building recipients ( n = 27) and key informant interviews with community leaders ( n = 15). Given (1) the applied nature of the research and practice goals, (2) our reliance on an existing IS framework, (3) limited staff resources, and (4) a need to analyze data rapidly to support intervention refinement, we chose a modified framework analysis approach. Modifications included incorporating aspects of grounded theory, including open coding, to increase the emphasis on inductive perspectives. The team also modified the charting procedures, replacing tabular summaries with narrative summaries of coded data.

Analysis was conducted by three doctoral-level researchers with complementary training (IS, sociology, and nursing). We started by familiarizing ourselves with the data — the three researchers read a subset of the transcripts, with purposeful overlap in reading assignments to facilitate discussion. Then, we created the coding framework and indexed the data. We went back and forth between indexing and charting, starting with deductive codes based on the EPIS framework, and then using a more inductive open coding strategy to identify emergent codes that fell outside the EPIS framework, e.g., the importance of investing in resources that remain in the community. The new coding framework, with both inductive and deductive codes, was applied to all interview transcripts. Each transcript was independently coded by two of the three investigators, followed by coding comparison to address discrepancies. We used NVivo 12 software [ 61 ], which enabled the exploration and reorganization of data to examine patterns within specific codes and across the data set. We utilized narrative summaries to organize our findings. Finally, we revisited the relevant data to identify broad themes of interest. This step was collaborative and iterative, with each team member taking the lead on a subset of codes and themes that aligned with their expertise, and the interpretations were shared with the other research investigators and discussed. This “divide-and-conquer” tactic was similar to the Deterding and Waters example of flexible coding [ 21 ]. We used triangulation to explore perceptions by different groups of participants (e.g., leaders vs. program implementers and individuals representing CBOs vs. FBOs). This type of triangulation is sometimes referred to as “triangulation of data” and stands in contrast to triangulation between different methods [ 62 ].
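
Where a team wants a numerical check to accompany this kind of discussion-based comparison, a simple agreement statistic can be computed over a double-coded subset. The Python sketch below is illustrative only: the excerpt identifiers and code labels are hypothetical, and the study described here resolved discrepancies through discussion rather than through a statistic.

```python
# Minimal sketch (hypothetical data): quantify agreement between two coders on
# a shared subset of excerpts, then list disagreements for discussion.
from sklearn.metrics import cohen_kappa_score

excerpt_ids = ["fg1_010", "fg1_023", "ki3_004", "ki5_017", "fg2_031"]
coder_a = ["outer_context", "innovation_fit", "inner_context",
           "community_resources", "innovation_fit"]
coder_b = ["outer_context", "inner_context", "inner_context",
           "community_resources", "innovation_fit"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa across {len(excerpt_ids)} excerpts: {kappa:.2f}")

# Flag disagreements for consensus discussion rather than mechanical resolution.
for eid, a, b in zip(excerpt_ids, coder_a, coder_b):
    if a != b:
        print(f"Discuss {eid}: coder A={a!r}, coder B={b!r}")
```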

Our analytic plan was informed by the participatory design of the larger project. At multiple points in the analytic process, we presented interpretations to the advisory board and then refined interpretations and subsequent steps of the analysis accordingly. This was critical because our use of an IS framework likely imposed an outsider’s perspective on the use of EBIs in practice and we wanted to ensure the interpretations reflected insider perspectives on the realities of practice. The incorporation of practice-based expertise in our analytic process also reflected the participatory nature of the research project. We note that advisory board members did not wish to analyze the data in-depth and instead preferred this manner of engagement.

To meet our research goals, we produced scientific publications that expanded the literature on capacity-building strategies to promote evidence-based prevention in CBOs and FBOs addressing health equity. The modified framework analysis approach let us build on and extend the EPIS framework by combining framework-driven deductive coding with open, inductive coding. As an example, the EPIS framework highlights relationships between patient/client characteristics (within the “outer context” domain) and EBI fit (within the “innovation” domain). We added an emergent code to capture the wide range of resources CBO- and FBO-based practitioners needed to improve the fit between available EBIs and community needs. This included attention to the limitations of available EBIs in addressing the multi-level barriers to good health experienced by underserved communities. Participants highlighted the importance of solutions to these gaps coming not from external resources (such as those highlighted within the “bridging factors” domain of the framework), but instead from resources built and maintained within the community. Per the journal’s requirements, we presented the SRQR checklist to explain how we ensured a rigorous analysis.

To achieve practice goals, we drew on the rich dataset to refine the capacity-building intervention, from recruitment to the training components and ongoing supports. For example, we were able to create more compelling arguments for organizational leaders to send staff to the training and support the use of EBIs in their organizations, use language during trainings that better resonated with trainees, and include local examples related to barriers and facilitators to EBI use. We also revised programmatic offerings to include co-teaching by community members and created shorter, implementation-focused training opportunities. The balance of framework-driven, deductive processes, and open, inductive processes allowed us to capture patterns in anticipated and unanticipated content areas. This balance also allowed us to develop research briefs that provide high-level summaries that could be useful to other practitioners considering how best to invest limited professional development resources.

Conclusions

We encourage IS researchers to explore the diversity and flexibility of qualitative analytic approaches and combine them pragmatically to best meet their needs. We recognize that some approaches to analysis are tied to particular methodological orientations and others are not, but a pragmatic approach can offer the opportunity to combine analytic strategies and procedures. To do this successfully, it is essential for the research team to ensure fit, preserve quality and rigor, and provide transparent explanations connecting the analytic approach and findings so that others can assess and build on the research. We believe pragmatic approaches offer an important opportunity to make strategic analytic decisions, such as identifying an appropriate balance of insider and outsider perspectives, to extend current IS frameworks and models. Given the urgency to increase the utilization and utility of EBIs in practice settings, we see a natural fit with the pragmatist prompt to judge our research efforts based on whether or not the knowledge obtained serves our purposes [ 63 ]. In that spirit, the use of pragmatic approaches can support high-quality, efficient, practice-focused research, which can broaden the scope and ultimate impact of IS research.

Availability of data and materials

Not applicable

Hamilton AB, Finley EP. Qualitative methods in implementation research: an introduction. Psychiatry Res. 2019;280:112516. https://doi.org/10.1016/j.psychres.2019.112516 .


Tabak RG, Chambers D, Hook M, Brownson RC. The conceptual basis for dissemination and implementation research: lessons from existing models and frameworks. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health: translating science to practice. New York: Oxford University Press; 2018. p. 73–88.


Patton MQ. Qualitative research & evaluation methods: integrating theory and practice. Sage Publications; 2014.

QualRIS (Qualitative Research in Implementation Science). Qualitative methods in implementation science. Division of Cancer Control and Population Sciences, National Cancer Institute; 2019.

Creswell JW, Poth CN. Qualitative inquiry and research design: choosing among five approaches: Sage publications; 2016.

Braun V, Clarke V. Successful qualitative research: a practical guide for beginners. Sage; 2013.

Levitt HM, Motulsky SL, Wertz FJ, Morrow SL, Ponterotto JG. Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual Psychol. 2017;4(1):2–22. https://doi.org/10.1037/qup0000082 .



Qualitative Design Research Methods

  • Michael Domínguez, San Diego State University
  • https://doi.org/10.1093/acrefore/9780190264093.013.170
  • Published online: 19 December 2017

Emerging in the learning sciences field in the early 1990s, qualitative design-based research (DBR) is a relatively new methodological approach to social science and education research. As its name implies, DBR is focused on the design of educational innovations, and the testing of these innovations in the complex and interconnected venue of naturalistic settings. As such, DBR is an explicitly interventionist approach to conducting research, situating the researcher as a part of the complex ecology in which learning and educational innovation take place.

With this in mind, DBR is distinct from more traditional methodologies, including laboratory experiments, ethnographic research, and large-scale implementation. The goal of DBR is not to prove the merits of any particular intervention, nor to reflect passively on a context in which learning occurs, but to examine the practical application of theories of learning themselves in specific, situated contexts. By designing purposeful, naturalistic, and sustainable educational ecologies, researchers can test, extend, or modify their theories and innovations based on their pragmatic viability. This process offers the prospect of generating theory-developing, contextualized knowledge claims that can complement the claims produced by other forms of research.

Because of this interventionist, naturalistic stance, DBR has also been the subject of ongoing debate concerning the rigor of its methodology. In many ways, these debates obscure the varied ways DBR has been practiced, the varied types of questions being asked, and the theoretical breadth of researchers who practice DBR. With this in mind, DBR may involve a diverse range of methods as researchers from a variety of intellectual traditions within the learning sciences and education research design pragmatic innovations based on their theories of learning, and document these complex ecologies using the methodologies and tools most applicable to their questions, focuses, and academic communities.

DBR has gained increasing interest in recent years. While it remains a popular methodology for developmental and cognitive learning scientists seeking to explore theory in naturalistic settings, it has also grown in importance to cultural psychology and cultural studies researchers as a methodological approach that aligns in important ways with the participatory commitments of liberatory research. As such, internal tension within the DBR field has also emerged. Yet, though approaches vary, and have distinct genealogies and commitments, DBR might be seen as the broad methodological genre in which Change Laboratory, design-based implementation research (DBIR), social design-based experiments (SDBE), participatory design research (PDR), and research-practice partnerships can be categorized. These critically oriented iterations of DBR have important implications for educational research and educational innovation in historically marginalized settings and the Global South.

  • design-based research
  • learning sciences
  • social-design experiment
  • qualitative research
  • research methods

Educational research, perhaps more than many other disciplines, is a situated field of study. Learning happens around us every day, at all times, in both formal and informal settings. Our worlds are replete with complex, dynamic, diverse communities, contexts, and institutions, many of which are actively seeking guidance and support in the endless quest for educational innovation. Educational researchers—as a source of potential expertise—are necessarily implicated in this complexity, linked to the communities and institutions through their very presence in spaces of learning, poised to contribute with possible solutions, yet often positioned as separate from the activities they observe, creating dilemmas of responsibility and engagement.

So what are educational scholars and researchers to do? These tensions invite a unique methodological challenge for the contextually invested researcher, begging them to not just produce knowledge about learning, but to participate in the ecology, collaborating on innovations in the complex contexts in which learning is taking place. In short, for many educational researchers, our backgrounds as educators, our connections to community partners, and our sociopolitical commitments to the process of educational innovation push us to ensure that our work is generative, and that our theories and ideas—our expertise—about learning and education are made pragmatic, actionable, and sustainable. We want to test what we know outside of laboratories, designing, supporting, and guiding educational innovation to see if our theories of learning are accurate, and useful to the challenges faced in schools and communities where learning is messy, collaborative, and contested. Through such a process, we learn, and can modify our theories to better serve the real needs of communities. It is from this impulse that qualitative design-based research (DBR) emerged as a new methodological paradigm for education research.

Qualitative design-based research will be examined, documenting its origins, the major tenets of the genre, implementation considerations, and methodological issues, as well as variance within the paradigm. Because DBR is a relatively new methodology, much tension remains over what constitutes DBR, what design should mean, and for whom. These tensions and questions, as well as broad perspectives and emergent iterations of the methodology, will be discussed, along with considerations for researchers looking toward the future of the paradigm.

The Origins of Design-Based Research

Qualitative design-based research (DBR) first emerged in the learning sciences field among a group of scholars in the early 1990s, with the first articulation of DBR as a distinct methodological construct appearing in the work of Ann Brown ( 1992 ) and Allan Collins ( 1992 ). For learning scientists in the 1970s and 1980s, the traditional methodologies of laboratory experiments, ethnographies, and large-scale educational interventions were the only methods available. During these decades, a growing community of learning science and educational researchers (e.g., Bereiter & Scardamalia, 1989 ; Brown, Campione, Webber, & McGilley, 1992 ; Cobb & Steffe, 1983 ; Cole, 1995 ; Scardamalia & Bereiter, 1991 ; Schoenfeld, 1982 , 1985 ; Scribner & Cole, 1978 ) interested in educational innovation and classroom interventions in situated contexts began to find the prevailing methodologies insufficient for the types of learning they wished to document, the roles they wished to play in research, and the kinds of knowledge claims they wished to explore. The laboratory, or laboratory-like settings, where research on learning was happening at the time, were divorced from the complexity of real life, and necessarily limiting. Alternatively, most ethnographic research, while more attuned to capturing these complexities and dynamics, regularly assumed a passive stance and avoided interceding in the learning process, preventing researchers from seeing what possibilities for innovation existed in enacting nascent learning theories. Finally, large-scale interventions could test innovations in practice but lost sight of the nuance of development and implementation in local contexts (Brown, 1992 ; Collins, Joseph, & Bielaczyc, 2004 ).

Dissatisfied with these options, and recognizing that new methods were required to study and understand learning in the messiness of socially, culturally, and historically situated settings, Brown ( 1992 ) proposed an alternative: Why not involve ourselves in the messiness of the process, taking an active, grounded role in disseminating our theories and expertise by becoming designers and implementers of educational innovations? Rather than observing from afar, DBR researchers could trace their own iterative processes of design, implementation, tinkering, redesign, and evaluation, as they unfolded in shared work with teachers, students, learners, and other partners in lived contexts. This premise, initially articulated as “design experiments” (Brown, 1992 ), would be variously discussed over the next decade as “design research” (Edelson, 2002 ), “developmental research” (Gravemeijer, 1994 ), and “design-based research” (Design-Based Research Collective, 2003 ), all of which reflect the original, interventionist, design-oriented concept. The latter term, “design-based research” (DBR), is used here, recognizing this as the prevailing terminology used to refer to this research approach at present.

Regardless of the evolving moniker, the prospects of such a methodology were extremely attractive to researchers. Learning scientists acutely aware of various aspects of situated context, and interested in studying the applied outcomes of learning theories—a task of inquiry into situated learning for which canonical methods were rather insufficient—found DBR a welcome development (Bell, 2004 ). As Barab and Squire ( 2004 ) explain: “learning scientists . . . found that they must develop technological tools, curriculum, and especially theories that help them systematically understand and predict how learning occurs” (p. 2), and DBR methodologies allowed them to do this in proactive, hands-on ways. Thus, rather than emerging as a strict alternative to more traditional methodologies, DBR was proposed to fill a niche that other methodologies were ill-equipped to cover.

Effectively, while its development is indeed linked to an inherent critique of previous research paradigms, neither Brown nor Collins saw DBR in opposition to other forms of research. Rather, by providing a bridge from the laboratory to the real world, where learning theories and proposed innovations could interact and be implemented in the complexity of lived socio-ecological contexts (Hoadley, 2004 ), new possibilities emerged. Learning researchers might “trace the evolution of learning in complex, messy classrooms and schools, test and build theories of teaching and learning, and produce instructional tools that survive the challenges of everyday practice” (Shavelson, Phillips, Towne, & Feuer, 2003 , p. 25). Thus, DBR could complement the findings of laboratory, ethnographic, and large-scale studies, answering important questions about the implementation, sustainability, limitations, and usefulness of theories, interventions, and learning when introduced as innovative designs into situated contexts of learning. Moreover, while studies involving these traditional methodologies often concluded by pointing toward implications—insights subsequent studies would need to take up—DBR allowed researchers to address implications iteratively and directly. No subsequent research was necessary, as emerging implications could be reflexively explored in the context of the initial design, offering considerable insight into how research is translated into theory and practice.

Since its emergence in 1992, DBR as a methodological approach to educational and learning research has quickly grown and evolved, used by researchers from a variety of intellectual traditions in the learning sciences, including developmental and cognitive psychology (e.g., Brown & Campione, 1996 , 1998 ; diSessa & Minstrell, 1998 ), cultural psychology (e.g., Cole, 1996 , 2007 ; Newman, Griffin, & Cole, 1989 ; Gutiérrez, Bien, Selland, & Pierce, 2011 ), cultural anthropology (e.g., Barab, Kinster, Moore, Cunningham, & the ILF Design Team, 2001 ; Polman, 2000 ; Stevens, 2000 ; Suchman, 1995 ), and cultural-historical activity theory (e.g., Engeström, 2011 ; Espinoza, 2009 ; Espinoza & Vossoughi, 2014 ; Gutiérrez, 2008 ; Sannino, 2011 ). Given this plurality of epistemological and theoretical fields that employ DBR, it might best be understood as a broad methodology of educational research, realized in many different, contested, heterogeneous, and distinct iterations, and engaging a variety of qualitative tools and methods (Bell, 2004 ). Despite tensions among these iterations, and substantial and important variances in the ways they employ design-as-research in community settings, there are several common, methodological threads that unite the broad array of research that might be classified as DBR under a shared, though pluralistic, paradigmatic umbrella.

The Tenets of Design-Based Research

Why Design-Based Research?

As we turn to the core tenets of the design-based research (DBR) paradigm, it is worth considering an obvious question: Why use DBR as a methodology for educational research? To answer this, it is helpful to reflect on the original intentions for DBR, in particular, that it is not simply the study of a particular, isolated intervention. Rather, DBR methodologies were conceived of as the complete, iterative process of designing, modifying, and assessing the impact of an educational innovation in a contextual, situated learning environment (Barab & Kirshner, 2001 ; Brown, 1992 ; Cole & Engeström, 2007 ). The design process itself—inclusive of the theory of learning employed, the relationships among participants, contextual factors and constraints, the pedagogical approach, any particular intervention, as well as any changes made to various aspects of this broad design as it proceeds—is what is under study.

Considering this, DBR offers a compelling framework for the researcher interested in having an active and collaborative hand in designing for educational innovation, and interested in creating knowledge about how particular theories of learning, pedagogical or learning practices, or social arrangements function in a context of learning. It is a methodology that can put the researcher in the position of engineer, actively experimenting with aspects of learning and sociopolitical ecologies to arrive at new knowledge and productive outcomes, as Cobb, Confrey, diSessa, Lehrer, and Schauble ( 2003 ) explain:

Prototypically, design experiments entail both “engineering” particular forms of learning and systematically studying those forms of learning within the context defined by the means of supporting them. This designed context is subject to test and revision, and the successive iterations that result play a role similar to that of systematic variation in experiment. (p. 9)

This being said, how directive an engineering role the researcher takes on varies considerably among iterations of DBR. Indeed, recent approaches have argued strongly for researchers to take on more egalitarian positionalities with respect to the community partners with whom they work (e.g., Zavala, 2016 ), acting as collaborative designers, rather than authoritative engineers.

Method and Methodology in Design-Based Research

Having established why we might use DBR, we can turn to a recurring question that has faced the DBR paradigm: whether DBR is a methodology at all. Given the variety of intellectual and ontological traditions that employ it, and thus the pluralism of methods used in DBR to enact the “engineering” role (whatever shape that may take) that the researcher assumes, it has been argued that DBR is not, in actuality, a methodology (Kelly, 2004 ). The proliferation and diversity of approaches, methods, and types of analysis purporting to be DBR have been described as a lack of coherence showing that there is no “argumentative grammar” or methodology present in DBR (Kelly, 2004 ).

Now, the conclusions one will eventually draw in this debate will depend on one’s orientations and commitments, but it is useful to note that these demands for “coherence” emerge from previous paradigms in which methodology was largely marked by a shared, coherent toolkit for data collection and data analysis. These previous paradigmatic rules make for an odd fit when considering DBR. Yet, even if we proceed—within the qualitative tradition from which DBR emerges—defining methodology as an approach to research that is shaped by the ontological and epistemological commitments of the particular researcher, and methods as the tools for research, data collection, and analysis that are chosen by the researcher with respect to said commitments (Gutiérrez, Engeström, & Sannino, 2016 ), then a compelling case for DBR as a methodology can be made (Bell, 2004 ).

Effectively, despite the considerable variation in how DBR has been and is employed, and tensions within the DBR field, we might point to considerable, shared epistemic common ground among DBR researchers, all of whom are invested in an approach to research that involves engaging actively and iteratively in the design and exploration of learning theory in situated, natural contexts. This common epistemic ground, even in the face of pluralistic ideologies and choices of methods, invites in a new type of methodological coherence, marked by “intersubjectivity without agreement” (Matusov, 1996 ), that links DBR from traditional developmental and cognitive psychology models of DBR (e.g., Brown, 1992 ; Brown & Campione, 1998 ; Collins, 1992 ), to more recent critical and sociocultural manifestations (e.g., Bang & Vossoughi, 2016 ; Engeström, 2011 ; Gutiérrez, 2016 ), and everything in between.

Put in other terms, even as DBR researchers may choose heterogeneous methods for data collection, data analysis, and reporting results complementary to the ideological and sociopolitical commitments of the particular researcher and the types of research questions that are under examination (Bell, 2004 ), a shared epistemic commitment gives the methodology shape. Indeed, the common commitment toward design innovation emerges clearly across examples of DBR methodological studies ranging in method from ethnographic analyses (Salvador, Bell, & Anderson, 1999 ) to studies of critical discourse within a design (Kärkkäinen, 1999 ), to focused examinations of metacognition of individual learners (White & Frederiksen, 1998 ), and beyond. Rather than indicating a lack of methodology, or methodological weakness, the use of varying qualitative methods for framing data collection and retrospective analyses within DBR, and the tensions within the epistemic common ground itself, simply reflects the scope of its utility. Learning in context is complex, contested, and messy, and the plurality of methods present across DBR allow researchers to dynamically respond to context as needed, employing the tools that fit best to consider the questions that are present, or may arise.

All this being the case, it is useful to look toward the coherent elements—the “argumentative grammar” of DBR, if you will—that can be identified across the varied iterations of DBR. Understanding these shared features, in the context and terms of the methodology itself, helps us to appreciate what is involved in developing robust and thorough DBR research, and how DBR seeks to make strong, meaningful claims around the types of research questions it takes up.

Coherent Features of Design-Based Research

Several scholars have provided comprehensive overviews and listings of what they see as the cross-cutting features of DBR, both in the context of more traditional models of DBR (e.g., Cobb et al., 2003 ; Design-Based Research Collective, 2003 ), and in regards to newer iterations (e.g., Gutiérrez & Jurow, 2016 ; Bang & Vossoughi, 2016 ). Rather than try to offer an overview of each of these increasingly pluralistic classifications, the intent here is to attend to three broad elements that are shared across articulations of DBR and reflect the essential elements that constitute the methodological approach DBR offers to educational researchers.

Design research is concerned with the development, testing, and evolution of learning theory in situated contexts

This first element is perhaps most central to what DBR of all types is, anchored in what Brown ( 1992 ) was initially most interested in: testing the pragmatic validity of theories of learning by designing interventions that engaged with, or proposed, entire, naturalistic ecologies of learning. Put another way, while DBR studies may have various units of analysis, focuses, and variables, and may organize learning in many different ways, it is the theoretically informed design for educational innovation that is most centrally under evaluation. DBR actively and centrally exists as a paradigm that is engaged in the development of theory, not just the evaluation of aspects of its usage (Bell, 2004 ; Design-Based Research Collective, 2003 ; Lesh & Kelly, 2000 ; van den Akker, 1999 ).

Effectively, where DBR is taking place, theory as a lived possibility is under examination. Specifically, in most DBR, this means a focus on “intermediate-level” theories of learning, rather than “grand” ones. In essence, DBR does not contend directly with “grand” learning theories (such as developmental or sociocultural theory writ large) (diSessa, 1991 ). Rather, DBR seeks to offer constructive insights by directly engaging with particular learning processes that flow from these theories on a “grounded,” “intermediate” level. This is not, however, to say DBR is limited in what knowledge it can produce; rather, tinkering in this “intermediate” realm can produce knowledge that informs the “grand” theory (Gravemeijer, 1994 ). For example, while cognitive and motivational psychology provide “grand” theoretical frames, interest-driven learning (IDL) is an “intermediate” theory that flows from these and can be explored in DBR to both inform the development of IDL designs in practice and inform cognitive and motivational psychology more broadly (Joseph, 2004 ).

Crucially, however, DBR entails putting the theory in question under intense scrutiny, or “into harm’s way” (Cobb et al., 2003 ). This is a core element of DBR, one that distinguishes it from the proliferation of educational-reform or educational-entrepreneurship efforts that similarly take up the discourse of “design” and “innovation.” Not only is the reflexive, often participatory element of DBR absent from such efforts—that is, questioning and modifying the design to suit the learning needs of the context and partners—but the theory driving these efforts is never in question, and in many cases, may be actively obscured. Indeed, it is more common to see educational-entrepreneur design innovations seek to modify a context—such as the way charter schools engage in selective pupil recruitment and intensive disciplinary practices (e.g., Carnoy et al., 2005 ; Ravitch, 2010 ; Saltman, 2007 )—rather than modify their design itself, and thus allow for humility in their theory. Such “innovations” and “design” efforts are distinct from DBR, which must, in the spirit of scientific inquiry, be willing to see the learning theory flail and struggle, be modified, and evolve.

This growth and evolution of theory and knowledge is of course central to DBR as a rigorous research paradigm, moving it beyond simply the design of local educational programs, interventions, or innovations. As Barab and Squire ( 2004 ) explain:

Design-based research requires more than simply showing a particular design works but demands that the researcher [move beyond a particular design exemplar to] generate evidence-based claims about learning that address contemporary theoretical issues and further the theoretical knowledge of the field. (pp. 5–6)

DBR as a research paradigm offers a design process through which theories of learning can be tested; they can be modified, and by allowing them to operate with humility in situated conditions, new insights and knowledge, even new theories, may emerge that might inform the field, as well as the efforts and directions of other types of research inquiry. These productive, theory-developing outcomes, or “ontological innovations” (diSessa & Cobb, 2004 ), represent the culmination of an effective program of DBR—the production of new ways to understand, conceptualize, and enact learning as a lived, contextual process.

Design research works to understand learning processes, and the design that supports them in situated contexts

As a research methodology that operates by tinkering with “grounded” learning theories, DBR is itself grounded, and seeks to develop its knowledge claims and designs in naturalistic, situated contexts (Brown, 1992 ). This is, again, a distinguishing element of DBR—setting it apart from laboratory research efforts involving design and interventions in closed, controlled environments. Rather than attempting to focus on singular variables, and isolate these from others, DBR is concerned with the multitude of variables that naturally occur across entire learning ecologies, and present themselves in distinct ways across multiple planes of possible examination (Rogoff, 1995 ; Collins, Joseph, & Bielaczyc, 2004 ). Certainly, specific variables may be identified as dependent, focal units of analysis, but identifying (while not controlling for) the variables beyond these, and analyzing their impact on the design and learning outcomes, is an equally important process in DBR (Collins et al., 2004 ; Barab & Kirshner, 2001 ). In practice, this of course varies across iterations in its depth and breadth. Traditional models of developmental or cognitive DBR may look to account for the complexity and nuance of a setting’s social, developmental, institutional, and intellectual characteristics (e.g., Brown, 1992 ; Cobb et al., 2003 ), while more recent, critical iterations will give increased attention to how historicity, power, intersubjectivity, and culture, among other things, influence and shape a setting, and the learning that occurs within it (e.g., Gutiérrez, 2016 ; Vakil, de Royston, Nasir, & Kirshner, 2016 ).

Beyond these variations, what counts as “design” in DBR varies widely, and so too does what counts as a naturalistic setting. It has been well documented that learning occurs all the time, every day, and in every space imaginable, both formal and informal (Leander, Phillips, & Taylor, 2010 ), and in ways that span strictly defined setting boundaries (Engeström, Engeström, & Kärkkäinen, 1995 ). DBR may take place in any number of contexts, based on the types of questions asked, and the learning theories and processes that a researcher may be interested in exploring. DBR may involve one-to-one tutoring and learning settings, single classrooms, community spaces, entire institutions, or even holistically designed ecologies (Design-Based Research Collective, 2003 ; Engeström, 2008 ; Virkkunen & Newnham, 2013 ). In all these cases, even the most completely designed experimental ecology, the setting remains naturalistic and situated because DBR actively embraces the uncontrollable variables that participants bring with them to the learning process for and from their situated worlds, lives, and experiences—no effort is made to control for these complicated influences of life, simply to understand how they operate in a given ecology as innovation is attempted. Thus, the extent of the design reflects a broader range of qualitative and theoretical study, rather than an attempt to control or isolate some particular learning process from outside influence.

While there is much variety in what design may entail, where DBR takes place, what types of learning ecologies are under examination, and what methods are used, situated ecologies are always the setting of this work. In this way, conscious of naturalistic variables, and the influences that culture, historicity, participation, and context have on learning, researchers can use DBR to build on prior research, and extend knowledge around the learning that occurs in the complexity of situated contexts and lived practices (Collins et al., 2004 ).

Design-based research is iterative; it changes, grows, and evolves to meet the needs and emergent questions of the context, and this tinkering process is part of the research

The final shared element undergirding models of DBR is that it is an iterative, active, and interventionist process, interested in and focused on producing educational innovation by actually and actively putting design innovations into practice (Brown, 1992 ; Collins, 1992 ; Gutiérrez, 2008 ). Given this interventionist, active stance, tinkering with the design and the theory of learning informing the design is as much a part of the research process as the outcome of the intervention or innovation itself—we learn what impacts learning as much as, if not more than, we learn what was learned. In this sense, DBR involves a focus on analyzing the theory-driven design itself, and its implementation as an object of study (Edelson, 2002 ; Penuel, Fishman, Cheng, & Sabelli, 2011 ), and is ultimately interested in the improvement of the design—of how it unfolds, how it shifts, how it is modified, and made to function productively for participants in their contexts and given their needs (Kirshner & Polman, 2013 ).

While DBR is iterative and contextual as a foundational methodological principle, what this means varies across conceptions of DBR. For instance, in more traditional models, Brown and Campione ( 1996 ) pointed out the dangers of “lethal mutation” in which a design, introduced into a context, may become so warped by the influence, pressures, incomplete implementation, or misunderstanding of participants in the local context, that it no longer reflects or tests the theory under study. In short, a theory-driven intervention may be put in place, and then subsumed to such a degree by participants based on their understanding and needs, that it remains the original innovative design in name alone. The assertion here is that in these cases, the research ceases to be DBR in the sense that the design is no longer central, actively shaping learning. We cannot, they argue, analyze a design—and the theory it was meant to reflect—as an object of study when it has been “mutated,” and it is merely a banner under which participants are enacting their idiosyncratic, pragmatic needs.

While the ways in which settings and individuals might disrupt designs intended to produce robust learning are certainly a tension to be cautious of in DBR, it is also worth noting that in many critical approaches to DBR, such mutations—whether “lethal” to the original design or not—are seen as compelling and important moments. Here, where collaboration and community input are more central to the design process, iterative is understood differently. Thus, a “mutation” becomes a point where reflexivity, tension, and contradiction might open the door for change, for new designs, for reconsiderations of researcher and collaborative partner positionalities, or for ethnographic exploration into how a context takes up, shapes, and ultimately engages innovations in a particular sociocultural setting. In short, accounting for and documenting changes in design is a vital part of the DBR process, allowing researchers to respond to context in a variety of ways, always striving for their theories and designs to act with humility, and in the interest of usefulness.

With this in mind, the iterative nature of DBR means that the relationships researchers have with other design partners (educators and learners) in the ecology are incredibly important, and vital to consider (Bang et al., 2016 ; Engeström, 2007 ; Engeström, Sannino, & Virkkunen, 2014 ). Different iterations of DBR might occur in ways in which the researcher is more or less intimately involved in the design and implementation process, both in terms of actual presence and intellectual ownership of the design. Regarding the former, in some cases, a researcher may hand a design off to others to implement, periodically studying and modifying it, while in other contexts or designs, the researcher may be actively involved, tinkering in every detail of the implementation and enactment of the design. With regard to the latter, DBR might similarly range from a somewhat prescribed model, in which the researcher is responsible for the original design, and any modifications that may occur based on their analyses, without significant input from participants (e.g., Collins et al., 2004 ), to incredibly participatory models, in which all parties (researchers, educators, learners) are part of each step of the design-creation, modification, and research process (e.g., Bang, Faber, Gurneau, Marin, & Soto, 2016 ; Kirshner, 2015 ).

Considering the wide range of ideological approaches and models for DBR, we might acknowledge that DBR can be gainfully conducted through many iterations of “openness” to the design process. However, the strength of the research—focused on analyzing the design itself as a unit of study reflective of learning theory—will be bolstered by thoughtfully accounting for how involved the researcher will be, and how open to participation the modification process is. These answers should match the types of questions, and conceptual or ideological framing, with which researchers approach DBR, allowing them to tinker with the process of learning as they build on prior research to extend knowledge and test theory (Barab & Kirshner, 2001 ), while thoughtfully documenting these changes in the design as they go.

Implementation and Research Design

As with the overarching principles of design-based research (DBR), even amid the pluralism of conceptual frameworks of DBR researchers, it is possible, and useful, to trace the shared contours in how DBR research design is implemented. Though texts provide particular road maps for undertaking various iterations of DBR consistent with the specific goals, types of questions, and ideological orientations of these scholarly communities (e.g., Cole & Engeström, 2007 ; Collins, Joseph, & Bielaczyc, 2004 ; Fishman, Penuel, Allen, Cheng, & Sabelli, 2013 ; Gutiérrez & Jurow, 2016 ; Virkkunen & Newnham, 2013 ), certain elements, realized differently, can be found across all of these models, and may be encapsulated in five broad methodological phases.

Considering the Design Focus

DBR begins by considering what the focus of the design, the situated context, and the units of analysis for research will be. Prospective DBR researchers will need to consider broader research on the “grand” theory of learning with which they work, determine what theoretical questions they have or which “intermediate” aspects of the theory might be studied and strengthened by a design process in situated contexts, and decide what planes of analysis (Rogoff, 1995 ) will be most suitable for examination. This process allows for the identification of the critical theoretical elements of a design, and articulation of initial research questions.

Given the conceptual framework, theoretical and research questions, and sociopolitical interests at play, researchers may undertake this, and subsequent steps in the process, on their own, or in close collaboration with the communities and individuals in the situated contexts in which the design will unfold. As such, across iterations of DBR, and with respect to the ways DBR researchers choose to engage with communities, the origin of the design will vary, and might begin in some cases with theoretical questions, or arise in others as a problem of practice (Coburn & Penuel, 2016 ), though as has been noted, in either case, theory and practice are necessarily linked in the research.

Creating and Implementing a Designed Innovation

From the consideration and identification of the critical elements, planned units of analysis, and research questions that will drive a design, researchers can then actively create (either on their own or in conjunction with potential design partners) a designed intervention reflecting these critical elements, and the overarching theory.

Here, the DBR researcher should consider what partners exist in the process and what ownership exists around these partnerships, determine exactly what the pragmatic features of the intervention/design will be and who will be responsible for them, and consider when checkpoints for modification and evaluation will be undertaken, and by whom. Additionally, researchers should at this stage consider questions of timeline and of recruiting participants, as well as what research materials will be needed to adequately document the design, its implementation, and its outcomes, and how and where collected data will be stored.

Once a design (the planned, theory-informed innovative intervention) has been produced, the DBR researcher and partners can begin the implementation process, putting the design into place and beginning data collection and documentation.
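
Documentation practices like these are often ad hoc, but they can be made systematic with very simple tooling. The following Python sketch is a minimal, purely hypothetical illustration of one way a team might log design iterations for later retrospective analysis; every field name, file name, and example entry is an invented assumption, not a standard DBR schema or any published instrument.

```python
# Hypothetical sketch: a structured log of design iterations in a DBR project.
# All fields and entries are illustrative assumptions, not a standard schema.
import json
from dataclasses import dataclass, field, asdict
from datetime import date
from typing import List

@dataclass
class DesignIteration:
    """One assess-modify cycle in a design-based research project."""
    cycle: int                     # which iteration of the design this is
    started: str                   # ISO date the cycle began
    modification: str              # what was changed in the design
    rationale: str                 # the theory- or context-driven reason
    data_sources: List[str] = field(default_factory=list)  # e.g., field notes

log: List[DesignIteration] = []

# Record the initial implementation and one subsequent modification.
log.append(DesignIteration(
    cycle=1,
    started=date(2017, 9, 5).isoformat(),
    modification="Initial design: collaborative inquiry curriculum introduced",
    rationale="Tests an intermediate-level theory of interest-driven learning",
    data_sources=["field notes", "student artifacts"],
))
log.append(DesignIteration(
    cycle=2,
    started=date(2017, 11, 2).isoformat(),
    modification="Added a shared digital workspace as a mediating artifact",
    rationale="Observation showed collaboration stalled without a common tool",
    data_sources=["field notes", "video", "interviews"],
))

# Persist the log so design decisions remain auditable later on.
with open("design_iterations.json", "w") as f:
    json.dump([asdict(it) for it in log], f, indent=2)
```

A plain JSON or spreadsheet log of this kind is no substitute for rich field notes, but it keeps the chronology of modifications and their rationales auditable when the time comes for retrospective analysis and reporting.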

Assessing the Impact of the Design on the Learning Ecology

Chronologically, the next two methodological steps happen recursively in the iterative process of DBR. The researcher must assess the impact of the design, and then, make modifications as necessary, before continuing to assess the impact of these modifications. In short, these next two steps are a cycle that continues across the life and length of the research design.

Once a design has been created and implemented, the researcher begins to observe and document the learning, the ecology, and the design itself. Guided by and in conversation with the theory and critical elements, the researcher should periodically engage in ongoing data analysis, assessing the success of the design, and of learning, paying equal attention to the design itself, and how its implementation is working in the situated ecology.

Within the realm of qualitative research, measuring or assessing variables of learning and assessing the design may look vastly different, require vastly different data-collection and data-analysis tools, and involve vastly different research methods among different researchers.
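
To make that point concrete, here is one deliberately small, hypothetical sketch of such a tool: a few lines of Python that tally the analytic codes a team has applied to field-note excerpts between iterations. The code labels, excerpts, and data structure are all invented for illustration; real projects would typically rely on dedicated qualitative data analysis software.

```python
# Hypothetical sketch: tallying analytic codes applied to field-note excerpts
# between design iterations. Labels and excerpts are invented for illustration.
from collections import Counter

# Each entry pairs an excerpt with the codes applied to it during coding.
coded_excerpts = [
    {"excerpt": "Students reorganized the task without prompting.",
     "codes": ["learner agency", "design uptake"]},
    {"excerpt": "The shared workspace went unused in period two.",
     "codes": ["design friction"]},
    {"excerpt": "Teacher adapted the protocol to fit the bell schedule.",
     "codes": ["local adaptation", "design friction"]},
]

code_counts = Counter(code for entry in coded_excerpts for code in entry["codes"])

# Frequent codes can flag where the design is stalling or thriving,
# prompting the next round of modification.
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```

Even a crude tally like this can prompt a team to ask why a code such as "design friction" keeps recurring, which is exactly the kind of signal that feeds the modification step described next.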

Modifying the Design

Modification, based on ongoing assessment of the design, is what makes DBR iterative, helping the researcher extend the field’s knowledge about the theory, design, learning, and the context under examination.

Modification of the design can take many forms, from complete changes in approach or curriculum, to introducing an additional tool or mediating artifact into a learning ecology. Moreover, how modification unfolds involves careful reflection from the researcher and any co-designing participants, deciding whether modification will be an ongoing, reflexive, tinkering process, or if it will occur only at predefined checkpoints, after formal evaluation and assessment. Questions of ownership, issues of resource availability, technical support, feasibility, and communication are all central to the work of design modification, and answers will vary given the research questions, design parameters, and researchers’ epistemic commitments.

Each moment of modification indicates a new phase in a DBR project, and a new round of assessing—through data analysis—the impact of the design on the learning ecology, either to guide continued or further modification, report the results of the design, or in some cases, both.

Reporting the Results of the Design

The final step in DBR methodology is to report on the results of the designed intervention, how it contributed to understandings of theory, and how it impacted the local learning ecology or context. The format, genre, and final data analysis methods used in reporting data and research results will vary across iterations of DBR. However, it is largely understood that to avoid methodological confusion, DBR researchers should clearly situate themselves in the DBR paradigm by describing and detailing the design itself; articulating the theory, central elements, and units of analysis under scrutiny, what modifications occurred and what precipitated these changes, and what local effects were observed; and exploring any potential contributions to learning theory, while accounting for the context and their interventionist role and positionality in the design. As such, careful documentation of pragmatic and design decisions for retrospective data analysis, as well as research findings, should be done at each stage of this implementation process.

Methodological Issues in the Design-Based Research Paradigm

Because of its pluralistic nature, its interventionist, nontraditional stance, and the fact that it remains in its conceptual infancy, design-based research (DBR) is replete with ongoing methodological questions and challenges, both from external and internal sources. While many more may exist, several of the most pressing are addressed here: those the prospective DBR researcher may encounter, or may want to consider in understanding the paradigm and beginning a research design.

Challenges to Rigor and Validity

Perhaps the place to begin this reflection on tensions in the DBR paradigm is the recurrent and ongoing challenge to the rigor and validity of DBR, which has asked: Is DBR research at all? Given the interventionist and activist way in which DBR invites the researcher to participate, and the shift in orientation from long-accepted research paradigms, such critiques are hardly surprising, and fall in line with broader challenges to the rigor and objectivity of qualitative social science research in general. Historically, such complaints about DBR are linked to decades of critique of any research that does not adhere to the post-positivist approach set out as the U.S. Department of Education began to prioritize laboratory and large-scale randomized controlled-trial experimentation as the “gold standard” of research design (e.g., Mosteller & Boruch, 2002 ).

From the outset, DBR, as an interventionist, local, situated, non-laboratory methodology, was bound to run afoul of such conservative trends. While some researchers involved in (particularly traditional developmental and cognitive) DBR have found broader acceptance within these constraints, the rigor of DBR remains contested. It has been suggested that DBR is under-theorized and over-methodologized, a haphazard way for researchers to do activist work without engaging in the development of robust knowledge claims about learning (Dede, 2004 ), and an approach lacking in coherence that sheltered interventionist projects with little impact on developing learning theory and allowed researchers to make subjective, pet claims through selective analysis of large bodies of collected data (Kelly, 2003 , 2004 ).

These critiques, however, impose an external set of criteria on DBR, desiring it to fit into the molds of rigor and coherence as defined by canonical methodologies. Bell ( 2004 ) and Bang and Vossoughi ( 2016 ) have made compelling cases for the wide variety of methods and approaches present in DBR not as a fracturing, but as a generative proliferation of different iterations that can offer powerful insights around the different types of questions that exist about learning in the infinitely diverse settings in which it occurs. Essentially, researchers have argued that within the DBR paradigm, and indeed within educational research more generally, the practical impact of research on learning, context, and practices should be a necessary component of rigor (Gutiérrez & Penuel, 2014 ), and the pluralism of methods and approaches available in DBR ensures that the practical impacts and needs of the varied contexts in which the research takes place will always drive the design and research tools.

These moves are emblematic of the way in which DBR is innovating and pushing on paradigms of rigor in educational research altogether, reflecting how DBR fills a complementary niche with respect to other methodologies and attends to elements and challenges of learning in lived, real environments that other types of research have consistently and historically missed. Beyond this, Brown ( 1992 ) was conscious of the concerns around data collection, validity, rigor, and objectivity from the outset, identifying this dilemma—the likelihood of having an incredible amount of data collected in a design, only a small fraction of which can be reported and shared, thus potentially leading to selective data analysis and use—as the Bartlett Effect (Brown, 1992 ). Since that time, DBR researchers have been aware of this challenge, actively seeking ways to mitigate this threat to validity by making data sets broadly available, documenting their design, tinkering, and modification processes, clearly situating and describing disconfirming evidence and their own position in the research, and otherwise presenting the broad scope of human and learning activity that occurs within designs in large learning ecologies as comprehensively as possible.

Ultimately, however, these responses are likely always to be insufficient as evidence of rigor to some, for the root dilemma is around what “counts” as education science. While researchers interested and engaged in DBR ought rightly to continue to push themselves to ensure the methodological rigor of their work and chosen methods, it is also worth noting that DBR should seek to hold itself to its own criteria of assessment. This reflects broader trends in qualitative educational research that push back on narrow constructions of what “counts” as science, recognizing the ways in which new methodologies and approaches to research can help us examine aspects of learning, culture, and equity that have continued to be blind spots for traditional education research; invite new voices and perspectives into the process of achieving rigor and validity (Erickson & Gutiérrez, 2002); bolster objectivity by bringing it into conversation with the positionality of the researcher (Harding, 1993); and perhaps most important, engage in axiological innovation (Bang, Faber, Gurneau, Marin, & Soto, 2016), or the exploration of and design for what is “good, right, true, and beautiful . . . in cultural ecologies” (p. 2).

Questions of Generalizability and Usefulness

The generalizability of research results in DBR has been an ongoing and contentious issue in the development of the paradigm. Indeed, by the standards of canonical methods (e.g., laboratory experimentation, ethnography), these local, situated interventions should lack generalizability. While there is reason to discuss and question the merit of generalizability as a goal of qualitative research at all, researchers in the DBR paradigm have long been conscious of this issue. Understanding the question of generalizability around DBR, and how the paradigm has responded to it, can be done in two ways.

First, one can distinguish questions specific to a particular design from questions about the generalizability of the underlying theory. Cole’s 5th Dimension work (Cole & Underwood, 2013), with its nationwide network of linked, theoretically similar sites operating with vastly different designs, is a powerful example of this approach to generalizability. Rather than focus on a single, unitary, potentially generalizable design, the project is more interested in the variability and sustainability of designs across local contexts (e.g., Cole, 1995; Gutiérrez, Bien, Selland, & Pierce, 2011; Jurow, Tracy, Hotchkiss, & Kirshner, 2012). Through attention to sustainable, locally effective innovations, conscious of the wide variation in culture and context that accompanies any and all learning processes, 5th Dimension sites each derive their idiosyncratic structures from sociocultural theory, sharing some elements but varying others, while seeking their own “ontological innovations” based on the affordances of their contexts. This pattern reflects a key element of much of the DBR paradigm: questions of generalizability in DBR may be about the generalizability of the theory of learning, and the variability of learning and design in distinct contexts, rather than the particular design itself.

A second means of addressing generalizability in DBR has been to embrace the pragmatic impacts of designing innovations. This response stems from Messick’s (1992) and Schoenfeld’s (1992) arguments, early in the development of DBR, that the consequences and validity of DBR efforts as potentially generalizable research depend on the “usefulness” of the theories and designs that emerge. Effectively, because DBR is the examination of situated theory, a design must be able to show pragmatic impact; it must succeed at showing the theory to be useful. If there is evidence of usefulness both to the context in which it takes place and to the field of educational research more broadly, then the DBR researcher can stake some broader knowledge claims that might be generalizable. As a result, the DBR paradigm tends to “treat changes in [local] contexts as necessary evidence for the viability of a theory” (Barab & Squire, 2004, p. 6). This of course does not mean that DBR is only interested in successful efforts; a design that fails or struggles can provide important information and knowledge to the field. Ultimately, though, DBR tends to privilege work that proves the usefulness of designs, whose pragmatic or theoretical findings can then be generalized within the learning sciences and education research fields.

With this said, the question of usefulness is not always straightforward, and it is hardly unitary. While many DBR efforts—particularly those situated in developmental and cognitive learning science traditions—are interested in the generalizability of their useful educational designs (Barab & Squire, 2004; Cobb, Confrey, diSessa, Lehrer, & Schauble, 2003; Joseph, 2004; Steffe & Thompson, 2000), not all are. Critical DBR researchers have noted that if usefulness remains situated in extant sociopolitical and sociocultural power structures—dominant conceptual and popular definitions of what useful educational outcomes are—the result will be a bar for research merit that inexorably bends toward the positivist spectrum (Booker & Goldman, 2016; Dominguez, 2015; Zavala, 2016). This would likely exclude the non-normative interventions and innovations that are vital for historically marginalized communities, whose outcomes might look vastly different and yet be useful in the sociopolitical contexts in which they occur. Alternative framings push on and extend this idea of usefulness, seeking to involve the perspectives and agency of situated community partners and their practices in what “counts” as generative and rigorous research outcomes (Gutiérrez & Penuel, 2014). An example in this regard is the idea of consequential knowledge (Hall & Jurow, 2015; Jurow & Shea, 2015), which suggests that consequential outcomes will be taken up by participants in and across their networks, and over time; a goal of consequential knowledge thus certainly meets the standard of being useful, but it also implicates the needs and agency of communities in determining the success and merit of a design or research endeavor in important ways that strict usefulness may miss.

Thus, the bar of usefulness that characterizes the DBR paradigm should not be approached without critical reflection. Designs that accomplish little for local contexts should certainly be subject to intense questioning and critique, but the sociopolitical and systemic factors that influence what “counts” as useful, in local contexts and in education science more generally, should be kept firmly in mind when designing, choosing methods, and evaluating impacts (Zavala, 2016). Researchers should think deeply about their goals, about whether they are reaching for generalizability at all, and about the ways in which they are constructing contextual definitions of success, and they should be clear about these ideologically influenced answers in their work, such that the generalizability and usefulness of designs can be adjudicated in conversation with the intentions and conceptual framework of the research and researcher.

Ethical Concerns of Sustainability, Participation, and Telos

While there are many external challenges to the rigor and validity of DBR, another set of tensions comes from within the DBR paradigm itself. These internal critiques concern research ethics more than rigor or validity; they are not unrelated to the earlier question of the contested definition of usefulness, and they grow from ideological concerns with how an intentional, interventionist stance is taken up in research as it interacts with situated communities.

Given that the nature of DBR is to design and implement some form of educational innovation, the DBR researcher will in some way be engaging with an individual or community, becoming part of a situated learning ecology complete with a sociopolitical and cultural history. As with any research that involves providing an intervention or support, the question of what happens when the research ends is as much an ethical as a methodological one. Concerns arise because traditional models of DBR seem intensely focused on creating and implementing a “complete” cycle of design while giving little attention to what happens to the community and context afterward (Engeström, 2011). In contrast to this privileging of “completeness,” sociocultural and critical approaches to DBR have suggested that if research is actually happening in naturalistic, situated contexts that authentically recognize and allow social and cultural dimensions to function (i.e., avoid laboratory-type controls to mitigate independent variables), there can never be such a thing as “complete,” for the design will, and should, live on as part of the ecology of the space (Cole, 2007; Engeström, 2000). Essentially, these internal critiques push DBR to consider sustainability, and sustainable scale, as concerns equal in importance to the completeness of an innovation. Not only are ethical questions involved, but accounting for the unbounded and ongoing nature of learning as a social and cultural activity can also strengthen the viability of the knowledge claims made and clarify what degree of generalizability is reasonably justified.

Related to this question of sustainability are internal concerns regarding the nature and ethics of participation in DBR: whether partners in a design are being adequately invited to engage in the design and modification processes that will unfold in their situated contexts and lived communities (Bang et al., 2016; Engeström, 2011). DBR has actively sought to examine the multiple planes of analysis at work in a learning ecology, but it has rarely attended to the subject-subject dynamics (Bang et al., 2016), or “relational equity” (DiGiacomo & Gutiérrez, 2015), that exist between researchers and participants. Participatory design research (PDR) models (Bang & Vossoughi, 2016) have recently emerged as a way to better attend to these important dimensions of collective participation (Engeström, 2007), power (Vakil et al., 2016), positionality (Kirshner, 2015), and relational agency (Edwards, 2007, 2009; Sannino & Engeström, 2016) as they unfold in DBR.

Both of these ethical questions—around sustainability and participation—reflect challenges to what we might call the telos, or direction, that DBR takes to innovation and research. These are questions about whose voices are privileged, in what ways, for what purposes, and toward what ends. While DBR, like many other forms of educational research, has involved work with historically marginalized communities, it has not always done so in humanizing ways. Put another way, there are ethical and political questions surrounding whether the designs, goals, and standards of usefulness we apply to DBR efforts should be purposefully activist and have explicitly liberatory ends. To this point, critical and decolonial perspectives have pushed on the DBR paradigm, suggesting that DBR should situate itself as a space of liberatory innovation and potential, in which communities and participants can become designers and innovators of their own futures (Gutiérrez, 2005). This perspective is reflected in the social design experiment (SDE) approach to DBR (Gutiérrez, 2005, 2008; Gutiérrez & Vossoughi, 2010; Gutiérrez, 2016; Gutiérrez & Jurow, 2016), which begins in participatory fashion, engaging a community in identifying its own challenges and desires and reflecting on the historicity of learning practices, before proleptic design efforts are undertaken, ensuring that research is done with, not on, communities of color (Arzubiaga, Artiles, King, & Harris-Murri, 2008) and is intentionally focused on liberatory goals.

Global Perspectives and Unique Iterations

Design-based research (DBR) has been principally associated with educational research in the United States, but its development is hardly limited to the U.S. context. While DBR emerged in U.S. settings, similar methods of situated, interventionist research focused on design and innovation were emerging in parallel in European contexts (e.g., Gravemeijer, 1994), most significantly in the work of Vygotskian scholars in both Europe and the United States (Cole, 1995; Cole & Engeström, 1993, 2007; Engeström, 1987).

In particular, where DBR began in the epistemic and ontological terrain of developmental and cognitive psychology, this vein of design-based research was deeply grounded in cultural-historical activity theory (CHAT). This grounding meant that the approach to design was more intensively conscious of context, historicity, hybridity, and relational factors, and was framed around understanding learning as a complex, collective activity system that, through design, could be modified and transformed (Cole & Engeström, 2007). Two models of DBR emerged abroad in this context. The formative intervention (Engeström, 2011; Engeström, Sannino, & Virkkunen, 2014) relies heavily on Vygotskian double stimulation to approach learning in nonlinear, unbounded ways, accounting for the roles of learner, educator, and researcher in a collective process, and shifting, evolving, and tinkering with the design as the context needs and demands. The Change Laboratory (Engeström, 2008; Virkkunen & Newnham, 2013) similarly relies on the principle of double stimulation while presenting a holistic way to transform entire learning activity systems in fundamental ways, through designs that encourage collective “expansive learning” (Engeström, 2001), by which participants can produce wholly new activity systems as the object of learning itself.

Elsewhere in the United States, parallel to the developmental- and cognitive-oriented DBR work that was occurring, American researchers employing CHAT began to leverage the tools and aims of expansive learning in conversation with the tensions and complexity of the U.S. context (Cole, 1995; Gutiérrez, 2005; Gutiérrez & Rogoff, 2003). Like the CHAT design research of the European context, this work focused on activity systems, historicity, nonlinear and unbounded learning, and collective learning processes and outcomes. Rather than simply replicating those models, however, these researchers put further attention on questions of equity, diversity, and justice, as Gutiérrez, Engeström, and Sannino (2016) note:

The American contribution to a cultural historical activity theoretic perspective has been its attention to diversity, including how we theorize, examine, and represent individuals and their communities. (p. 276)

Effectively, CHAT scholars in parts of the United States brought critical and decolonial perspectives to bear on their design-focused research, focusing explicitly on the complex cultural, racial, and ethnic terrain in which they worked, and ensuring that diversity, equity, justice, and non-dominant perspectives became central principles of the types of design research conducted. The result was the emergence of the aforementioned social design experiments (e.g., Gutiérrez, 2005, 2016) and participatory design research (Bang & Vossoughi, 2016) models, which attend intentionally to historicity and relational equity, tailor their methods to the liberation of historically marginalized communities, aim for liberatory outcomes as key elements of their design processes, and seek to produce outcomes in which communities of learners become designers of new community futures (Gutiérrez, 2016). While these approaches emerged in the United States, their origins reflect ontological and ideological perspectives quite distinct from more traditional learning science models of DBR, and from dominant U.S. ontologies in general. Indeed, these iterations of DBR are linked genealogically to the ontologies, ideologies, and concerns of peoples in the Global South, offering some promise for the method in those regions, though DBR has yet to take broad hold among researchers beyond the United States and Europe.

There is, of course, much more nuance to these models, and each of them (formative interventions, Change Laboratories, social design experiments, and participatory design research) might merit independent exploration and review well beyond the scope of this article. Indeed, there is some question as to whether all adherents of these CHAT design-based methodologies, with their unique genealogies and histories, would even consider themselves under the umbrella of DBR. Yet, despite significant ontological divergences, these iterations share many of the foundational tenets of the traditional models (though realized differently), and it is reasonable to argue that they share the same broad methodological paradigm (DBR), or at the very least are so intimately related that any discussion of DBR, particularly one with a global view, should consider the contributions CHAT iterations have made to the DBR methodology in the course of their somewhat distinct, but parallel, development.

Possibilities and Potentials for Design-Based Research

Since its emergence in 1992, the DBR methodology for educational research has continued to grow in popularity, ubiquity, and significance. Its use has begun to expand beyond the confines of the learning sciences, taken up by researchers in a variety of disciplines and across a breadth of theoretical and intellectual traditions. While still not as widely recognized as more traditional and well-established research methodologies, DBR as a methodology for rigorous research is unquestionably here to stay.

With this in mind, the field ought to still be cautious of the ways in which the discourse of design is used. Not all design is DBR, and preserving the integrity, rigor, and research ethics of the paradigm (on its own terms) will continue to require thoughtful reflection as its pluralistic parameters come into clearer focus. Yet the proliferation of methods in the DBR paradigm should be seen as a positive. There are far too many theories of learning and ideological perspectives that have meaningful contributions to make to our knowledge of the world, communities, and learning to limit ourselves to a unitary approach to DBR, or set of methods. The paradigm has shown itself to have some core methodological principles, but there is no reason not to expect these to grow, expand, and evolve over time.

In an increasingly globalized, culturally diverse, and dynamic world, there is tremendous potential for innovation couched in this proliferation of DBR. Particularly in historically marginalized communities and across the Global South, we will need to know how learning theories can be lived out in productive ways in communities that have been understudied and under-engaged. The DBR paradigm generally, and its critical and CHAT iterations particularly, can fill an important need for participatory, theory-developing research in these contexts that simultaneously creates lived impacts. Participatory design research (PDR), social design experiments (SDE), and Change Laboratory models of DBR should be of particular interest moving forward, as current trends toward culturally sustaining pedagogies and learning will need to be explored in depth and in close collaboration with communities, as participatory design partners, in the press toward liberatory educational innovations.

Bibliography

The following special issues of journals and edited volumes are encouraged starting points for engaging more deeply with current and past trends in design-based research; a full list of references follows.

  • Bang, M. , & Vossoughi, S. (Eds.). (2016). Participatory design research and educational justice: Studying learning and relations within social change making [Special issue]. Cognition and Instruction , 34 (3).
  • Barab, S. (Ed.). (2004). Design-based research [Special issue]. Journal of the Learning Sciences , 13 (1).
  • Cole, M. , & The Distributed Literacy Consortium. (2006). The Fifth Dimension: An after-school program built on diversity . New York, NY: Russell Sage Foundation.
  • Kelly, A. E. (Ed.). (2003). Special issue on the role of design in educational research [Special issue]. Educational Researcher , 32 (1).
  • Arzubiaga, A. , Artiles, A. , King, K. , & Harris-Murri, N. (2008). Beyond research on cultural minorities: Challenges and implications of research as situated cultural practice. Exceptional Children , 74 (3), 309–327.
  • Bang, M. , Faber, L. , Gurneau, J. , Marin, A. , & Soto, C. (2016). Community-based design research: Learning across generations and strategic transformations of institutional relations toward axiological innovations. Mind, Culture, and Activity , 23 (1), 28–41.
  • Bang, M. , & Vossoughi, S. (2016). Participatory design research and educational justice: Studying learning and relations within social change making. Cognition and Instruction , 34 (3), 173–193.
  • Barab, S. , Kinster, J. G. , Moore, J. , Cunningham, D. , & The ILF Design Team. (2001). Designing and building an online community: The struggle to support sociability in the Inquiry Learning Forum. Educational Technology Research and Development , 49 (4), 71–96.
  • Barab, S. , & Squire, K. (2004). Design-based research: Putting a stake in the ground. Journal of the Learning Sciences , 13 (1), 1–14.
  • Barab, S. A. , & Kirshner, D. (2001). Methodologies for capturing learner practices occurring as part of dynamic learning environments. Journal of the Learning Sciences , 10 (1–2), 5–15.
  • Bell, P. (2004). On the theoretical breadth of design-based research in education. Educational Psychologist , 39 (4), 243–253.
  • Bereiter, C. , & Scardamalia, M. (1989). Intentional learning as a goal of instruction. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 361–392). Hillsdale, NJ: Lawrence Erlbaum.
  • Booker, A. , & Goldman, S. (2016). Participatory design research as a practice for systemic repair: Doing hand-in-hand math research with families. Cognition and Instruction , 34 (3), 222–235.
  • Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. Journal of the Learning Sciences , 2 (2), 141–178.
  • Brown, A. , & Campione, J. C. (1996). Psychological theory and the design of innovative learning environments: On procedures, principles, and systems. In L. Schauble & R. Glaser (Eds.), Innovations in learning: New environments for education (pp. 289–325). Mahwah, NJ: Lawrence Erlbaum.
  • Brown, A. L. , & Campione, J. C. (1998). Designing a community of young learners: Theoretical and practical lessons. In N. M. Lambert & B. L. McCombs (Eds.), How students learn: Reforming schools through learner-centered education (pp. 153–186). Washington, DC: American Psychological Association.
  • Brown, A. , Campione, J. , Webber, L. , & McGilley, K. (1992). Interactive learning environments—A new look at learning and assessment. In B. R. Gifford & M. C. O’Connor (Eds.), Future assessment: Changing views of aptitude, achievement, and instruction (pp. 121–211). Boston, MA: Academic Press.
  • Carnoy, M. , Jacobsen, R. , Mishel, L. , & Rothstein, R. (2005). The charter school dust-up: Examining the evidence on enrollment and achievement . Washington, DC: Economic Policy Institute.
  • Carspecken, P. (1996). Critical ethnography in educational research . New York, NY: Routledge.
  • Cobb, P. , Confrey, J. , diSessa, A. , Lehrer, R. , & Schauble, L. (2003). Design experiments in educational research. Educational Researcher , 32 (1), 9–13.
  • Cobb, P. , & Steffe, L. P. (1983). The constructivist researcher as teacher and model builder. Journal for Research in Mathematics Education , 14 , 83–94.
  • Coburn, C. , & Penuel, W. (2016). Research-practice partnerships in education: Outcomes, dynamics, and open questions. Educational Researcher , 45 (1), 48–54.
  • Cole, M. (1995). From Moscow to the Fifth Dimension: An exploration in romantic science. In M. Cole & J. Wertsch (Eds.), Contemporary implications of Vygotsky and Luria (pp. 1–38). Worcester, MA: Clark University Press.
  • Cole, M. (1996). Cultural psychology: A once and future discipline . Cambridge, MA: Harvard University Press.
  • Cole, M. (2007). Sustaining model systems of educational activity: Designing for the long haul. In J. Campione , K. Metz , & A. S. Palinscar (Eds.), Children’s learning in and out of school: Essays in honor of Ann Brown (pp. 71–89). New York, NY: Routledge.
  • Cole, M. , & Engeström, Y. (1993). A cultural historical approach to distributed cognition. In G. Saloman (Ed.), Distributed cognitions: Psychological and educational considerations (pp. 1–46). Cambridge, U.K.: Cambridge University Press.
  • Cole, M. , & Engeström, Y. (2007). Cultural-historical approaches to designing for development. In J. Valsiner & A. Rosa (Eds.), The Cambridge handbook of sociocultural psychology , Cambridge, U.K.: Cambridge University Press.
  • Cole, M. , & Underwood, C. (2013). The evolution of the 5th Dimension. In The Story of the Laboratory of Comparative Human Cognition: A polyphonic autobiography . https://lchcautobio.ucsd.edu/polyphonic-autobiography/section-5/chapter-12-the-later-life-of-the-5th-dimension-and-its-direct-progeny/ .
  • Collins, A. (1992). Toward a design science of education. In E. Scanlon & T. O’Shea (Eds.), New directions in educational technology (pp. 15–22). New York, NY: Springer-Verlag.
  • Collins, A. , Joseph, D. , & Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. Journal of the Learning Sciences , 13 (1), 15–42.
  • Dede, C. (2004). If design-based research is the answer, what is the question? A commentary on Collins, Joseph, and Bielaczyc; DiSessa and Cobb; and Fishman, Marx, Blumenthal, Krajcik, and Soloway in the JLS special issue on design-based research. Journal of the Learning Sciences , 13 (1), 105–114.
  • Design-Based Research Collective . (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher , 32 (1), 5–8.
  • DiGiacomo, D. , & Gutiérrez, K. D. (2015). Relational equity as a design tool within making and tinkering activities. Mind, Culture, and Activity , 22 (3), 1–15.
  • diSessa, A. A. (1991). Local sciences: Viewing the design of human-computer systems as cognitive science. In J. M. Carroll (Ed.), Designing interaction: Psychology at the human-computer interface (pp. 162–202). Cambridge, U.K.: Cambridge University Press.
  • diSessa, A. A. , & Cobb, P. (2004). Ontological innovation and the role of theory in design experiments. Journal of the Learning Sciences , 13 (1), 77–103.
  • diSessa, A. A. , & Minstrell, J. (1998). Cultivating conceptual change with benchmark lessons. In J. G. Greeno & S. Goldman (Eds.), Thinking practices (pp. 155–187). Mahwah, NJ: Lawrence Erlbaum.
  • Dominguez, M. (2015). Decolonizing teacher education: Explorations of expansive learning and culturally sustaining pedagogy in a social design experiment (Doctoral dissertation). University of Colorado, Boulder.
  • Edelson, D. (2002). Design research: What we learn when we engage in design. Journal of the Learning Sciences , 11 (1), 105–121.
  • Edwards, A. (2007). Relational agency in professional practice: A CHAT analysis. Actio: An International Journal of Human Activity Theory , 1 , 1–17.
  • Edwards, A. (2009). Agency and activity theory: From the systemic to the relational. In A. Sannino , H. Daniels , & K. Gutiérrez (Eds.), Learning and expanding with activity theory (pp. 197–211). Cambridge, U.K.: Cambridge University Press.
  • Engeström, Y. (1987). Learning by expanding . Helsinki, Finland: University of Helsinki, Department of Education.
  • Engeström, Y. (2000). Can people learn to master their future? Journal of the Learning Sciences , 9 , 525–534.
  • Engeström, Y. (2001). Expansive learning at work: Toward an activity theoretical reconceptualization. Journal of Education and Work , 14 (1), 133–156.
  • Engeström, Y. (2007). Enriching the theory of expansive learning: Lessons from journeys toward co-configuration. Mind, Culture, and Activity , 14 (1–2), 23–39.
  • Engeström, Y. (2008). Putting Vygotksy to work: The Change Laboratory as an application of double stimulation. In H. Daniels , M. Cole , & J. Wertsch (Eds.), Cambridge companion to Vygotsky (pp. 363–382). New York, NY: Cambridge University Press.
  • Engeström, Y. (2011). From design experiments to formative interventions. Theory & Psychology , 21 (5), 598–628.
  • Engeström, Y. , Engeström, R. , & Kärkkäinen, M. (1995). Polycontextuality and boundary crossing in expert cognition: Learning and problem solving in complex work activities. Learning and Instruction , 5 (4), 319–336.
  • Engeström, Y. , & Sannino, A. (2010). Studies of expansive learning: Foundations, findings and future challenges. Educational Research Review , 5 (1), 1–24.
  • Engeström, Y. , & Sannino, A. (2011). Discursive manifestations of contradictions in organizational change efforts: A methodological framework. Journal of Organizational Change Management , 24 (3), 368–387.
  • Engeström, Y. , Sannino, A. , & Virkkunen, J. (2014). On the methodological demands of formative interventions. Mind, Culture, and Activity , 21 (2), 118–128.
  • Erickson, F. , & Gutiérrez, K. (2002). Culture, rigor, and science in educational research. Educational Researcher , 31 (8), 21–24.
  • Espinoza, M. (2009). A case study of the production of educational sanctuary in one migrant classroom. Pedagogies: An International Journal , 4 (1), 44–62.
  • Espinoza, M. L. , & Vossoughi, S. (2014). Perceiving learning anew: Social interaction, dignity, and educational rights. Harvard Educational Review , 84 (3), 285–313.
  • Fine, M. (1994). Dis-tance and other stances: Negotiations of power inside feminist research. In A. Gitlin (Ed.), Power and method (pp. 13–25). New York, NY: Routledge.
  • Fishman, B. , Penuel, W. , Allen, A. , Cheng, B. , & Sabelli, N. (2013). Design-based implementation research: An emerging model for transforming the relationship of research and practice. National Society for the Study of Education , 112 (2), 136–156.
  • Gravemeijer, K. (1994). Educational development and developmental research in mathematics education. Journal for Research in Mathematics Education , 25 (5), 443–471.
  • Gutiérrez, K. (2005). Intersubjectivity and grammar in the third space . Scribner Award Lecture.
  • Gutiérrez, K. (2008). Developing a sociocritical literacy in the third space. Reading Research Quarterly , 43 (2), 148–164.
  • Gutiérrez, K. (2016). Designing resilient ecologies: Social design experiments and a new social imagination. Educational Researcher , 45 (3), 187–196.
  • Gutiérrez, K. , Bien, A. , Selland, M. , & Pierce, D. M. (2011). Polylingual and polycultural learning ecologies: Mediating emergent academic literacies for dual language learners. Journal of Early Childhood Literacy , 11 (2), 232–261.
  • Gutiérrez, K. , Engeström, Y. , & Sannino, A. (2016). Expanding educational research and interventionist methodologies. Cognition and Instruction , 34 (2), 275–284.
  • Gutiérrez, K. , & Jurow, A. S. (2016). Social design experiments: Toward equity by design. Journal of Learning Sciences , 25 (4), 565–598.
  • Gutiérrez, K. , & Penuel, W. R. (2014). Relevance to practice as a criterion for rigor. Educational Researcher , 43 (1), 19–23.
  • Gutiérrez, K. , & Rogoff, B. (2003). Cultural ways of learning: Individual traits or repertoires of practice. Educational Researcher , 32 (5), 19–25.
  • Gutiérrez, K. , & Vossoughi, S. (2010). Lifting off the ground to return anew: Mediated praxis, transformative learning, and social design experiments. Journal of Teacher Education , 61 (1–2), 100–117.
  • Hall, R. , & Jurow, A. S. (2015). Changing concepts in activity: Descriptive and design studies of consequential learning in conceptual practices. Educational Psychologist , 50 (3), 173–189.
  • Harding, S. (1993). Rethinking standpoint epistemology: What is “strong objectivity”? In L. Alcoff & E. Potter (Eds.), Feminist epistemologies (pp. 49–82). New York, NY: Routledge.
  • Hoadley, C. (2002). Creating context: Design-based research in creating and understanding CSCL. In G. Stahl (Ed.), Computer support for collaborative learning 2002 (pp. 453–462). Mahwah, NJ: Lawrence Erlbaum.
  • Hoadley, C. (2004). Methodological alignment in design-based research. Educational Psychologist , 39 (4), 203–212.
  • Joseph, D. (2004). The practice of design-based research: Uncovering the interplay between design, research, and the real-world context. Educational Psychologist , 39 (4), 235–242.
  • Jurow, A. S. , & Shea, M. V. (2015). Learning in equity-oriented scale-making projects. Journal of the Learning Sciences , 24 (2), 286–307.
  • Jurow, S. , Tracy, R. , Hotchkiss, J. , & Kirshner, B. (2012). Designing for the future: How the learning sciences can inform the trajectories of preservice teachers. Journal of Teacher Education , 63 (2), 147–160.
  • Kärkkäinen, M. (1999). Teams as breakers of traditional work practices: A longitudinal study of planning and implementing curriculum units in elementary school teacher teams . Helsinki, Finland: University of Helsinki, Department of Education.
  • Kelly, A. (2004). Design research in education: Yes, but is it methodological? Journal of the Learning Sciences , 13 (1), 115–128.
  • Kelly, A. E. , & Sloane, F. C. (2003). Educational research and the problems of practice. Irish Educational Studies , 22 , 29–40.
  • Kirshner, B. (2015). Youth activism in an era of education inequality . New York: New York University Press.
  • Kirshner, B. , & Polman, J. L. (2013). Adaptation by design: A context-sensitive, dialogic approach to interventions. National Society for the Study of Education Yearbook , 112 (2), 215–236.
  • Leander, K. M. , Phillips, N. C. , & Taylor, K. H. (2010). The changing social spaces of learning: Mapping new mobilities. Review of Research in Education , 34 , 329–394.
  • Lesh, R. A. , & Kelly, A. E. (2000). Multi-tiered teaching experiments. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 197–230). Mahwah, NJ: Lawrence Erlbaum.
  • Matusov, E. (1996). Intersubjectivity without agreement. Mind, Culture, and Activity , 3 (1), 29–45.
  • Messick, S. (1992). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher , 23 (2), 13–23.
  • Mosteller, F. , & Boruch, R. F. (Eds.). (2002). Evidence matters: Randomized trials in education research . Washington, DC: Brookings Institution Press.
  • Newman, D. , Griffin, P. , & Cole, M. (1989). The construction zone: Working for cognitive change in school . London, U.K.: Cambridge University Press.
  • Penuel, W. R. , Fishman, B. J. , Cheng, B. H. , & Sabelli, N. (2011). Organizing research and development at the intersection of learning, implementation, and design. Educational Researcher , 40 (7), 331–337.
  • Polman, J. L. (2000). Designing project-based science: Connecting learners through guided inquiry . New York, NY: Teachers College Press.
  • Ravitch, D. (2010). The death and life of the great American school system: How testing and choice are undermining education . New York, NY: Basic Books.
  • Rogoff, B. (1990). Apprenticeship in thinking: Cognitive development in social context . New York, NY: Oxford University Press.
  • Rogoff, B. (1995). Observing sociocultural activity on three planes: Participatory appropriation, guided participation, and apprenticeship. In J. V. Wertsch , P. D. Rio , & A. Alvarez (Eds.), Sociocultural studies of mind (pp. 139–164). Cambridge U.K.: Cambridge University Press.
  • Saltman, K. J. (2007). Capitalizing on disaster: Taking and breaking public schools . Boulder, CO: Paradigm.
  • Salvador, T. , Bell, G. , & Anderson, K. (1999). Design ethnography. Design Management Journal , 10 (4), 35–41.
  • Sannino, A. (2011). Activity theory as an activist and interventionist theory. Theory & Psychology , 21 (5), 571–597.
  • Sannino, A. , & Engeström, Y. (2016). Relational agency, double stimulation and the object of activity: An intervention study in a primary school. In A. Edwards (Ed.), Working relationally in and across practices: Cultural-historical approaches to collaboration (pp. 58–77). Cambridge, U.K.: Cambridge University Press.
  • Scardamalia, M. , & Bereiter, C. (1991). Higher levels of agency for children in knowledge building: A challenge for the design of new knowledge media. Journal of the Learning Sciences , 1 , 37–68.
  • Schoenfeld, A. H. (1982). Measures of problem solving performance and of problem solving instruction. Journal for Research in Mathematics Education , 13 , 31–49.
  • Schoenfeld, A. H. (1985). Mathematical problem solving . Orlando, FL: Academic Press.
  • Schoenfeld, A. H. (1992). On paradigms and methods: What do you do when the ones you know don’t do what you want them to? Issues in the analysis of data in the form of videotapes. Journal of the Learning Sciences , 2 (2), 179–214.
  • Scribner, S. , & Cole, M. (1978). Literacy without schooling: Testing for intellectual effects. Harvard Educational Review , 48 (4), 448–461.
  • Shavelson, R. J. , Phillips, D. C. , Towne, L. , & Feuer, M. J. (2003). On the science of education design studies. Educational Researcher , 32 (1), 25–28.
  • Steffe, L. P. , & Thompson, P. W. (2000). Teaching experiment methodology: Underlying principles and essential elements. In A. Kelly & R. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 267–307). Mahwah, NJ: Erlbaum.
  • Stevens, R. (2000). Divisions of labor in school and in the workplace: Comparing computer and paper-supported activities across settings. Journal of the Learning Sciences , 9 (4), 373–401.
  • Suchman, L. (1995). Making work visible. Communications of the ACM , 38 (9), 57–64.
  • Vakil, S. , de Royston, M. M. , Nasir, N. , & Kirshner, B. (2016). Rethinking race and power in design-based research: Reflections from the field. Cognition and Instruction , 34 (3), 194–209.
  • van den Akker, J. (1999). Principles and methods of development research. In J. van den Akker , R. M. Branch , K. Gustafson , N. Nieveen , & T. Plomp (Eds.), Design approaches and tools in education and training (pp. 1–14). Boston, MA: Kluwer Academic.
  • Virkkunen, J. , & Newnham, D. (2013). The Change Laboratory: A tool for collaborative development of work and education . Rotterdam, The Netherlands: Sense.
  • White, B. Y. , & Frederiksen, J. R. (1998). Inquiry, modeling, and metacognition: Making science accessible to all students. Cognition and Instruction , 16 , 3–118.
  • Zavala, M. (2016). Design, participation, and social change: What design in grassroots spaces can teach learning scientists. Cognition and Instruction , 34 (3), 236–249.

1. The reader should note the emergence of critical ethnography (e.g., Carspecken, 1996; Fine, 1994) and other more participatory models of ethnography that deviated from this traditional paradigm during this same time period. These new forms of ethnography comprised part of the genealogy of the more critical approaches to DBR, described later in this article.

2. The reader will also note that the adjective “qualitative” largely drops away from the acronym “DBR.” This is largely because, as described, DBR, as an exploration of naturalistic ecologies with multitudes of variables and complex social and learning dynamics, necessarily demands a move beyond what can be captured by quantitative measurement alone. The qualitative nature of the research is thus implied and embedded as part of what makes DBR a unique and distinct methodology.



J Korean Med Sci, 37(16), 2022 Apr 25

A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses.1,2 The hypotheses provide directions to guide the study, and they point toward solutions, explanations, and expected results.3,4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas.5,6

It is crucial to have knowledge of both quantitative and qualitative research,2 as both types of research involve writing research questions and hypotheses.7 However, these crucial elements of research are sometimes overlooked; when they are not overlooked, they are often framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses.4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as a need for journal articles that carefully review these elements.1 When research questions and hypotheses are not carefully thought out, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article therefore aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and from peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written at length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed.1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study.5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge.8,9 The research hypothesis makes a specific prediction about a new phenomenon10 or a formal statement on the expected relationship between an independent variable and a dependent variable.3,11 It provides a tentative answer to the research question to be tested or explored.4

Hypotheses employ reasoning to predict a theory-based outcome.10 These can also be developed from theories by focusing on components of theories that have not yet been observed.10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment.8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study.4,11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulation of relevant research questions and verifiable hypotheses is crucial when beginning research.12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested.13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable7,10,11,13; 2) backed by preliminary evidence9; 3) testable by ethical research7,9; 4) based on original ideas9; 5) grounded in evidence-based logical reasoning10; and 6) predictive of an outcome.11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme.7,11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory on which to base the hypotheses, inductive reasoning from specific observations or findings forms more general hypotheses.10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under the quantitative and qualitative research categories in Table 1.

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among the variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design.1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured (descriptive research questions).1,5,14 These questions may also aim to discover differences between groups within the context of an outcome variable (comparative research questions),1,5,14 or elucidate trends and interactions among variables (relationship research questions).1,5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2.
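Each of these question types implies a different analytic move. The short sketch below is a rough, non-authoritative illustration using invented data and variable names (not drawn from the article's tables): a descriptive question maps to summary statistics, a comparative question to a between-group test, and a relationship question to a measure of association.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
# Invented example data: anxiety scores in two hypothetical patient groups,
# plus a second variable (treatment adherence) measured in the first group.
group_a = rng.normal(loc=50, scale=10, size=40)
group_b = rng.normal(loc=55, scale=10, size=40)
adherence = 100 - 0.5 * group_a + rng.normal(loc=0, scale=5, size=40)

# Descriptive question: "What is the mean anxiety score in group A?"
print(f"mean = {group_a.mean():.1f}, SD = {group_a.std(ddof=1):.1f}")

# Comparative question: "Do groups A and B differ in anxiety?"
t_stat, p_comp = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_comp:.4f}")

# Relationship question: "Is anxiety associated with adherence?"
r, p_rel = stats.pearsonr(group_a, adherence)
print(f"r = {r:.2f}, p = {p_rel:.4f}")
```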

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables.15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable (simple hypothesis) or 2) between two or more independent and dependent variables (complex hypothesis).4,11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome (directional hypothesis).4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies (non-directional hypothesis).4 In addition, hypotheses can 1) define interdependency between variables (associative hypothesis),4 2) propose an effect on the dependent variable from manipulation of the independent variable (causal hypothesis),4 3) state the absence of a relationship between two variables (null hypothesis),4,11,15 4) replace the working hypothesis if rejected (alternative hypothesis),15 5) explain the relationship of phenomena to possibly generate a theory (working hypothesis),11 6) involve quantifiable variables that can be tested statistically (statistical hypothesis),11 or 7) express a relationship whose interlinks can be verified logically (logical hypothesis).11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research, in Table 3.
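Several of these hypothesis types have direct statistical counterparts. As a minimal sketch with invented data (the group labels, means, and sample sizes are our own assumptions), the null hypothesis of no difference can be tested against a non-directional alternative with a two-sided test, and against a directional alternative with a one-sided test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
# Invented data: outcome scores under a new intervention vs. usual care.
intervention = rng.normal(loc=62, scale=8, size=50)
usual_care = rng.normal(loc=58, scale=8, size=50)

# H0 (null hypothesis): the group means do not differ.
# H1, non-directional: the means differ in either direction (two-sided).
two_sided = stats.ttest_ind(intervention, usual_care, alternative="two-sided")

# H1, directional: the intervention mean is greater (one-sided).
one_sided = stats.ttest_ind(intervention, usual_care, alternative="greater")

print(f"two-sided p = {two_sided.pvalue:.4f}")
print(f"one-sided p = {one_sided.pvalue:.4f}")
```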

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. A central question and associated subquestions are stated rather than hypotheses.15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants.15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions (contextual research questions); 2) describe a phenomenon (descriptive research questions); 3) assess the effectiveness of existing methods, protocols, theories, or procedures (evaluation research questions); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena (explanatory research questions); or 5) focus on unknown aspects of a particular topic (exploratory research questions).5 In addition, some qualitative research questions provide new ideas for the development of theories and actions (generative research questions) or advance specific ideologies of a position (ideological research questions).1 Other qualitative research questions may build on a body of existing literature and become working guidelines (ethnographic research questions). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions (phenomenological research questions), may be directed towards generating a theory of some process (grounded theory questions), or may address a description of the case and the emerging themes (qualitative case study questions).15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4, and the definition of qualitative hypothesis-generating research in Table 5.

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What. These research questions use exploratory verbs such as explore or describe. These also focus on one central phenomenon of interest, and may mention the participants and research site.15

Hypotheses in qualitative research

Hypotheses in qualitative research take the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research, where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes.2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods, wherein a single mixed-methods research question can be developed.1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study.1,12,14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study.1,5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant).1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions.14

The PICOT and PEO frameworks are also used when developing research questions.1 The following elements are addressed in these frameworks. PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study. PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest.1 Research questions are also considered good if they meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic.14
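Because PICOT is a fixed template and FINER a fixed checklist, both lend themselves to simple structured tooling. The sketch below is a hypothetical illustration only; the class, the function names, and the rendered question format are our own inventions, not part of the frameworks themselves.

```python
from dataclasses import dataclass, fields

@dataclass
class PicotQuestion:
    """One text slot per PICOT element; blank slots flag missing pieces."""
    population: str = ""
    intervention: str = ""
    comparison: str = ""
    outcome: str = ""
    timeframe: str = ""

    def missing_elements(self):
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

    def render(self):
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome} "
                f"within {self.timeframe}?")

# FINER is a set of yes/no judgments rather than text slots.
FINER_CRITERIA = ("feasible", "interesting", "novel", "ethical", "relevant")

def unmet_finer(judgments):
    """Return the FINER criteria the researcher has not yet affirmed."""
    return [c for c in FINER_CRITERIA if not judgments.get(c, False)]

question = PicotQuestion(
    population="adults with type 2 diabetes",        # P
    intervention="a nurse-led telehealth program",   # I
    comparison="standard clinic follow-up",          # C
    outcome="HbA1c levels",                          # O
    timeframe="six months",                          # T
)
print(question.render())
print("missing PICOT elements:", question.missing_elements())
print("unmet FINER criteria:", unmet_finer({"feasible": True, "ethical": True}))
```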

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research (Table 6)16 and qualitative research (Table 7),17 and we show how to transform these ambiguous research questions and hypotheses into clear and good statements.

(Tables 6 and 7 are not reproduced here. Per their footnotes, the statements in these tables were composed for comparison and illustrative purposes only, except for direct quotes from Higashihara and Horiuchi16 and from Shimoda et al.17)

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem: identify the variables to be assessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims. This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1.

[Fig. 1 (jkms-37-e121-g001.jpg): general flow for constructing effective research questions and hypotheses prior to conducting research]

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, research questions are used more frequently in survey projects, whereas hypotheses are used more in experiments to compare variables and their relationships.

Hypotheses are constructed based on the variables identified and are framed as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding the expected results of the research must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypothesis construction involves a testable proposition to be deduced from theory, with independent and dependent variables to be separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12
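As a minimal illustration of the if-then template and the separation of independent and dependent variables described above, the following Python sketch assembles a hypothesis statement from its parts. The field names and the example are hypothetical and serve only to make the template explicit.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """Illustrative sketch of the if-then hypothesis template."""
    independent_variable: str  # the variable to be manipulated
    dependent_variable: str    # the variable expected to be influenced
    group: str                 # the specific group being studied
    predicted_outcome: str     # the predicted direction of the effect

    def statement(self) -> str:
        # 'If a specific action is taken, then a certain outcome is expected.'
        return (f"If {self.independent_variable} is increased in "
                f"{self.group}, then {self.dependent_variable} will "
                f"{self.predicted_outcome}.")

h = Hypothesis(
    independent_variable="the dose of daily exercise",
    dependent_variable="resting heart rate",
    group="sedentary adults",
    predicted_outcome="decrease",
)
print(h.statement())
```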

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

[Fig. 2 (jkms-37-e121-g002.jpg): algorithm for building research questions and hypotheses in quantitative research]

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control ?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies ?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness .” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response . The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses .” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations .” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout). ” 26
  • Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above . If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics ( gender differences in sociodemographic and clinical characteristics of adults with ADHD ). Validity is tested by statistical experiment or analysis ( chi-square test, Students t-test, and logistic regression analysis)
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men . We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Students t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “ This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses .” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “ We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group .” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “ The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education .” 30

Research questions and hypotheses are crucial components of any type of research, whether quantitative or qualitative, and should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of the research and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and an insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses, which serve as formal predictions about the research outcomes. Research questions and hypotheses should therefore be carefully thought out and constructed when planning research. This avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.

Qualitative research: extending the range with flexible pattern matching

  • Original Paper
  • Open access
  • Published: 16 February 2021
  • Volume 15 , pages 251–273, ( 2021 )

Cite this article

  • Ricarda B. Bouncken   ORCID: orcid.org/0000-0003-0510-7491 1 ,
  • Yixin Qiu   ORCID: orcid.org/0000-0002-9866-5283 1 ,
  • Noemi Sinkovics   ORCID: orcid.org/0000-0002-5143-6870 2 &
  • Wolfgang Kürsten 3  

22k Accesses

85 Citations

12 Altmetric

The flexible pattern matching approach has witnessed increasing popularity. By combining deduction with induction in its logic, flexible pattern matching is well suited for exploration and theory development. This paper discusses the logic, advantages, and process of the approach and reviews research that has adopted it. We also compare and contrast it with another popular qualitative data analysis technique, the grounded theory approach, to ground the method in established knowledge and to elaborate its strengths and the contexts it fits. The paper advances the flexible pattern matching approach by suggesting a five-step roadmap for conducting qualitative research with it.


1 Introduction

Qualitative techniques are widely applied in business and management research (Cassell and Bishop 2019; Gioia et al. 2013; Strauss and Corbin 1998). The principal reason is that qualitative studies can provide rich insights that explain underlying mechanisms and processes (Grund and Walter 2015; Muhic and Bengtsson 2019). Table 1 lists some of the recent qualitative studies published in Review of Managerial Science. However, there has been criticism of the replicability and rigor of qualitative studies. To address this criticism, a range of criteria and techniques has been developed over the past decades (Cassell et al. 2018; Langley and Abdallah 2011). Whereas some researchers welcome these more structured guiding principles (Eisenhardt 1989; Gioia et al. 2013; Yin 1994b), others voice concerns about a potential loss of creativity (Langley and Abdallah 2011).

The pattern matching framework put forward by Sinkovics (2018) creates a balance between the extremes of rigid standardization and complete anarchy. The framework organizes the diverse qualitative techniques into different categories based on the degree of pattern matching. Pattern matching is both a logic and a technique. As a logic, the aim is to identify and describe patterns with as much accuracy as possible. However, while some patterns can be described and interpreted with a high level of accuracy, others can only be interpreted with a certain level of probability (Hammond 1966). The reason lies in the observers’ interpretation: they interpret through a path-dependent lens developed through their past experiences and based on the amount of information available at the time of the observation. As a technique, pattern matching applies this logic and provides a set of guidelines for making the researchers’ internal mental models as explicit as possible. In this way, the approach provides readers with a roadmap for reproducing the researchers’ thought processes and understanding how they arrived at the conclusions presented in their manuscript.

This pattern matching framework allows the organization of the diverse qualitative techniques into three overarching categories, namely partial pattern matching, full pattern matching, and flexible pattern matching. Partial pattern matching aims to engage the investigator’s mental models in the theorizing process. Grounded theory, as put forward by Glaser and Strauss (1968), and the Gioia (2004) method of structural coding represent examples of bottom-up partial pattern matching; the purpose is to start with the data and identify the patterns that emerge from it. Top-down partial pattern matching applies visualization techniques to specific bodies of literature to identify innovative research questions (Sinkovics 2016). Full pattern matching generally includes all statistical methods (Trochim 1985; Trochim and Donnelly 2008). In the qualitative domain, full pattern matching entails the operationalization of multiple theoretical explanations for a specific phenomenon to determine which explanation is the most accurate (Yin 2009; Yin and Moore 1988).

In contrast, flexible pattern matching spans the space between partial and full pattern matching. It allows the interaction of deductive and inductive components, thus combining rigor with a high level of flexibility. Eisenhardt’s (1989) approach to theory building from cases and King’s (2004) template analysis are examples of the flexible pattern matching logic. Recently, an emerging body of work in this category has built on a systematic or semi-systematic review of the literature to define the initial theoretical patterns that are then matched to the data (Bouncken and Barwinski 2020; Sinkovics et al. 2014, 2019). Further theorizing and/or theory development is then triggered by mismatches between theoretical patterns and observed patterns, or by the emergence of unexpected observed patterns (Bouncken and Barwinski 2020; Bouncken et al. 2021). However, before Sinkovics (2018) systematically categorized pattern matching into the three overarching categories, scholars did not differentiate between these distinct approaches, referring to all three as pattern matching. In this paper, to ensure the consistent use of terms and avoid confusion, we refer to the approach that involves the flexible pattern matching logic as “flexible pattern matching”.

Despite, or rather because of, the growing number of studies applying the principles of this type of flexible pattern matching (Bouncken and Barwinski 2020; Gatignon and Capron 2020; Sinkovics et al. 2014, 2019), there is a need to gain more insight into its foundation, its relation to established qualitative methods, and how it best informs theory building. Moreover, growing questions about the well-established grounded theory approach also call for avenues with more rigorous analysis processes and stronger relationships with the existing literature (Cuervo-Cazurra et al. 2016).

This paper aims to provide a comprehensive understanding of, and guidelines for, conducting qualitative research with a flexible-pattern-matching design. We first present and explain the general logic of the flexible pattern matching approach and its advantages, illustrated with prior studies adopting the approach. Then, we analyze its overlaps and differences with grounded theory, an established but increasingly questioned qualitative method, to further elaborate the flexible pattern matching approach and its strengths, and to extend our discussion to the broader context of qualitative studies. Subsequently, we develop a roadmap for conducting qualitative studies with the method. Finally, we discuss future research directions and how the flexible pattern matching approach can help generate more insights for future theory development.

2 Explaining the flexible pattern matching approach

Flexible pattern matching involves the iterative matching between theoretical patterns derived from the literature and observed patterns emerging from empirical data (Sinkovics 2018). Its particular focus on the interplay between theoretical and empirical patterns distinguishes it from the “Gioia Method”, which focuses on capturing informant meaning, and the “Eisenhardt Method”, which focuses on sharpening distinctions across particular settings (Langley and Abdallah 2011). The aim of flexible-pattern-matching studies is to explore how inconsistencies and breakdowns derived from the pattern-matching process can help to problematize and develop theoretical ideas (Alvesson and Kärreman 2007; Shane 2000) or how consistencies can help to test or expand the contextual boundaries of existing theories (Ross and Staw 1993).

Theoretical patterns and observed empirical patterns are two key components in flexible pattern matching. A pattern indicates the arrangement of objects or entities that are non-random and describable (Trochim 1989 ). Following this logic, all theories are built on patterns, but they are also different from patterns. Completed theories contain structural relationships that enable the generation of predictions as predicted patterns. For example, based on prior work about behavioral difficulties of mergers and hostility of employees, Greenwood et al. ( 1994 ) developed two hypotheses on how organizational (in)compatibility will escalate from the courtship stage to the consummation stage in the merging process.

Similarly, observed patterns are distinct from collected data; they are “constellations” of observations or social logics whose independence and ordering compose items that can be identified as a pattern (Jancsary et al. 2017 ). For example, in Shane’s ( 2000 ) study on how entrepreneurs discover opportunities, three patterns are generated from extensive and multiple-source data on eight firms, explaining the effect from the dissemination of information, situation-specific superiority, and prior knowledge.

However, theoretical and observed patterns are developed from distinct processes. Theoretical patterns are deduced from related theories, while observed empirical patterns emerge through continuous iteration and comparison between theoretical patterns and collected data (Bitektine 2008; Bouncken et al. 2021; Sinkovics et al. 2019). This matching technique infuses the analysis and theorizing process with both imagination and discipline. That is, the empirical materials create room for the emergence of new, unexpected dimensions, as well as for revision of the theoretical patterns. Simultaneously, the relevant theories create boundaries that anchor the theorizing process in specific perspectives and areas, preventing spurious ideas (Alvesson and Kärreman 2007; Sinkovics et al. 2019). The process of flexible pattern matching, therefore, consists of the deduction of a set of theoretical patterns from prior studies, the formation of actual observed patterns through the lens of these theoretical patterns, and the comparison between them that enables the emergence of new patterns. The resulting theoretical model or framework may represent a revision and/or extension of the initial theoretical patterns (Gatignon and Capron 2020; Greenwood et al. 1994; Sinkovics et al. 2014, 2019). A more detailed roadmap for this process will be discussed later.
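At its core, the comparison step can be caricatured as set operations over pattern labels. The Python sketch below is purely illustrative (the labels are invented, and real pattern matching is interpretive rather than mechanical), but it shows the three outcomes that drive theorizing: matched patterns, theoretical patterns without empirical support, and emergent patterns.

```python
# Theoretical patterns are deduced from the literature before data
# collection; observed patterns emerge from coding the empirical data.
theoretical_patterns = {
    "transport barriers",
    "leadership support",
    "resource constraints",
}
observed_patterns = {
    "transport barriers",       # matches a theoretical pattern
    "resource constraints",
    "shared digital identity",  # unexpected: emerged only from the data
}

matches = theoretical_patterns & observed_patterns
unsupported = theoretical_patterns - observed_patterns  # candidates for revision
emergent = observed_patterns - theoretical_patterns     # triggers for new theorizing

print("matched:", sorted(matches))
print("not observed:", sorted(unsupported))
print("emergent:", sorted(emergent))
```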

The adoption of flexible pattern matching can be found in studies on organizational culture (Ainsworth and Cox 2003 ; Canato et al. 2013 ; Montealegre 2002 ), entrepreneurship (Chiles et al. 2007 ; Shane 2000 ), institutional logic (Jancsary et al. 2017 ; Reay and Jones 2016 ; Van de Ven and Huber 1990 ), and organizational pathways (Bouncken and Barwinski 2020 ; Gatignon and Capron 2020 ; Rindova et al. 2007 ). The approach has five key advantages that advocate its more frequent application, especially in research areas where there are suitable theories and prior studies to deduce initial theoretical patterns.

First, the idea of relating several pieces of information from the collected data to a theoretical proposition can strengthen the internal validity of case studies (Eisenhardt 1989; Yin 1994a). To compare a predicted theoretical pattern with an observed empirical pattern, scholars need to design a rich theoretical framework, based on existing studies, that guides the research design. The initial framework and the comparison legitimate the establishment of a causal relationship and guide the generation of observed patterns from the collected data (Massaro et al. 2017). This advantage represents the foundation for the other four advantages. For example, in their study, Massaro et al. (2017) developed a matrix with dependent and independent variables that served as a collection of theoretical patterns. The subsequent comparison between the matrix and the collected data increased the reliability of the findings on the joint effect of trust and control mechanisms on knowledge transfer in networks of small and medium-sized enterprises.

Second, the interplay between theories and observations in flexible pattern matching allows readers to follow the researcher’s thought processes from the conceptualization stage to the data interpretation stage. A sound conceptual background at the front end of the manuscript provides a roadmap for the data analysis as well as for the articulation of the findings, because it helps the researcher go beyond a mere description of divergent empirical findings (Brooks and King 2014; Brooks et al. 2015; Thornton et al. 2012). Readers are therefore better able to make theoretical sense of ‘complex phenomena’ while comprehending and evaluating the findings in relation to prior work (Sinkovics et al. 2014, 2019). For instance, Gatignon and Capron (2020) adapted the eight principles of polycentric governance from Ostrom (2005) as their analytical framework to examine how firms address voids in emerging markets. The eight principles unraveled the differences and relationships between the two theoretical patterns and the observed pattern, and therefore offer a structured process for readers to apprehend the articulated theoretical implications and to comprehend the findings in relation to prior work.

Third, the matches and mismatches between theoretical and observed patterns, as well as the emergence of unexpected patterns, provide the researcher with a structure for theorizing about the findings. While a mismatch between the two sets of patterns may initially appear to be a “defect”, it in fact represents an “opportunity” to revise or extend existing theory, or to initiate the theorizing process towards a new theory. For example, in Bouncken and Barwinski’s (2020) study on knowledge ties in global digital business, the authors first detected potential modifications of the theoretical patterns through the collected data; they then conducted a second round of pattern matching to analyze the cases with mismatches and the causes behind them. This two-stage flexible pattern matching facilitated the identification of strategies to grow a global business in the 3D printing industry and led to the introduction of the concept of shared digital identity.

Fourth, the approach is effective in capturing categories for analysis. The systematic or semi-systematic review of the relevant literature helps researchers engage in an a priori theorizing process. In other words, researchers can engage in thought experiments, based on the existing literature, about what they expect the observed patterns to be. This a priori framework also provides a “grammar” for data analysis that sets the direction of the analysis while allowing for emergent themes from cases and data. For instance, to reveal the complex constitution of organizational culture and the dynamics of control, Ainsworth and Cox (2003) developed initial patterns comprising three interpretive divisions from the literature and used the concept of shared understandings of interpretative divisions from Parker (1995) to further develop and refine the typology. The comparison of the initial patterns with the collected data showed that the meaning and implications of these divisions varied depending on the local context, vis-à-vis the complex dynamics of control. Ainsworth and Cox (2003: p. 1469) described the initial patterns as “a generalizable ‘grammar’” that is “flexible enough to explore how similar dynamics could yield local and particular meanings across the cases”.

Fifth, the flexible pattern matching approach has substantial promise in longitudinal and/or single-case studies. The a priori patterns, as theoretical interpretations, provide material for replication and comparison, and thus represent a powerful way to gain insights from a single case. For example, Ross and Staw’s (1993) goal was to further ground escalation theory, addressing the theoretical void at the organizational level, by testing two propositions (as theoretical patterns) derived from the literature, and also to explore the organizational escalation and exit process. They therefore conducted a longitudinal single-case study with a pattern-matching approach for both theory testing and development. By examining the archival and interview data of the case, Ross and Staw not only discerned fit with the a priori propositions, supporting the tested theories, but also developed an understanding of organizational escalation and exit and how it is resolved. Analyzing the single case enabled Ross and Staw (1993) to “comprehend as fully as we could the events of Shoreham and to match those events with potential models of escalation and exit” through “moving back and forth between the empirical data and possible theoretical conceptualization”.

3 Flexible pattern matching and grounded theory: comparison and bridging

Grounded theory is one of the most cited qualitative research methods in business and organization studies (Gioia et al. 2013; Glaser and Strauss 1968; Langley 1999; Strauss and Corbin 1998). It is a technique for surfacing and constructing relevant categories that provide deep and rich theoretical descriptions of the context to explain the phenomenon of interest and clarify all data-to-category connections (Chiles et al. 2010; Gioia et al. 2013; Soto-Simeone and Kautonen 2020). Flexible pattern matching similarly involves a process of “coding” data in which researchers distill insights from qualitative data into more manageable “categories” or “patterns”. Although flexible pattern matching starts with the a priori development of codes/patterns/constructs from the literature, it also allows the a posteriori development of codes that are completely data-driven (Sinkovics 2018). In contrast, grounded theory starts with an inductive exploration of the data (Langley 1999; Sang and Sitko 2015); the literature is consulted during and after the coding process (Kenealy 2012). Although partly intertwined, flexible pattern matching and grounded theory differ in multiple aspects. Grounded theory has achieved wide legitimacy through its development and adoption, while also receiving increasing critique in recent years. In this section, we discuss the connections and differences between flexible pattern matching and grounded theory (as summarized in Table 2), to further introduce the approach by drawing on the established method and to relate its features to the recent call for more rigorous qualitative methods (Cuervo-Cazurra et al. 2016; Langley and Abdallah 2011).

The shared features of the flexible pattern matching and grounded theory approaches mostly originate in the process of exploring the meaning of social phenomena or activities that are entangled with their context. Studies adopting either flexible pattern matching or the grounded theory approach commonly investigate revelatory research sites, instead of critical or extreme ones (Langley and Abdallah 2011; Yin 1994a), where they believe they may find interesting answers to the research question with rich and trustworthy data. After selecting relevant case(s), researchers gather empirical data, ranging from interviews and archival data to ethnographic observations, to gain an understanding of settings and events from the experience, opinions, and explanations of insiders and perhaps also other stakeholders. In data analysis, both methods involve organizing nuanced descriptions or statements into categories or patterns that reveal a particular logic. In the final presentation of the findings, both methods favor a visualized representation with templates, figures, or tables. A widely accepted example of structuring and visualizing data connected to grounded theory is the Gioia method. It introduces a highly disciplined coding and analysis process, presenting the output as a three-order hierarchical data structure in a tree-shaped fashion (Gioia et al. 2010, 2013; Langley and Abdallah 2011).

Although there are similar components in these two methods, they have different epistemological foundations. Flexible pattern matching builds on the assumption that the way we understand the social world is constructed through iterations between prior theories and empirical observations, so the way of theory development is to compare and contrast prior knowledge with empirical observations (Burchell and Kolb 2003 ; Reay and Jones 2016 ; Sinkovics 2018 ). In contrast, grounded theory is driven by an interpretive philosophy. It is based on the assumption that “reality is a constructed and shifting entity and social processes can be created and changed by interactions among people” (Grbich 2013 : 80). Symbols, signs, and language are used to achieve meaning. Prior theoretical assumptions are limited to a minimum to leave room for the construction of meaning from raw data (Gioia et al. 2013 ; McCutcheon and Meredith 1993 ; Strauss and Corbin 1998 ). Grounded theory is best suited under three conditions: (1) When there is very little or no prior knowledge of the phenomenon and thus a new theory is needed to explain it, (2) when the focus of the investigation is on a microcosm of interaction in a specific setting where all related aspects need to be examined, or (3) when the purpose of the research is to build new theoretical explanations to explain changes in a field (Grbich 2013 ).

However, outside of these three conditions, as Eisenhardt states, a “clean theoretical slate” approach is not convincing, since the whole research process, including site selection, data collection, and analysis, is guided by a specific rationale and research questions, indicating at least some theoretical basis and voids to address (Eisenhardt 1989; McCutcheon and Meredith 1993). Further, if grounded theory is used in settings other than the ones described above, it is challenging to draw generalizations and a “substantive” theory beyond the specific research context (Brooks et al. 2015; Langley 1999). Therefore, outside of the three categories where the use of grounded theory is needed, flexible pattern matching is the better-suited method.

A particular feature of the flexible pattern matching approach is its use of a priori patterns, which build on existing theories, develop ideas from relevant studies, and ensure focus on the investigated area (Baxter and Berente 2010; Burton and Khammash 2010; Cuervo-Cazurra et al. 2016). This feature is an essential difference between flexible pattern matching and grounded theory, derived from their distinct epistemological foundations. The difference is also represented in Sinkovics’ (2018) pattern-matching framework, where grounded theory is classified as a partial pattern matching technique.

The use of a priori patterns or constructs in flexible pattern matching can be especially advantageous in qualitative studies, as they provide a clear rationale and logic for data collection and analysis (Brooks et al. 2015; Waring and Wainwright 2008). On the other hand, the a priori patterns are themselves subject to revision, redefinition, changing scopes, or other types of modification, which leaves space for exploration and the emergence of new themes (Burchell and Kolb 2003; McCutcheon and Meredith 1993; Sinkovics 2018). Multiple iterations between theories and empirical evidence are another feature of flexible pattern matching, linked to the previous one. The data analysis, namely the matching of theoretical and observed patterns, requires particular attention to the interplay between theories and data, focusing on how insights from the empirical data break down or challenge the theoretical constructs (Alvesson and Kärreman 2007; Chiles et al. 2010). This strong connection between prior studies and collected data also enhances the generalizability of the generated findings and the developed theory, since the theoretical base functions as a counterpart for comparison and as external verification. Further, flexible pattern matching also allows the emergence of empirical patterns from the data. This inductive element adds to the already flexible method and allows the extension of theories by complementing them with new concepts. Flexible pattern matching can therefore be adapted to a broad range of studies (Corsaro and Snehota 2010; Waring and Wainwright 2008).

It is important to note, however, that grounded theory and flexible pattern matching are not mutually exclusive. The inductive element within the flexible pattern matching approach makes it suitable for creating a reverse flexible pattern matching design, which represents a bridge between grounded theory and flexible pattern matching. A good example of this is Chiles et al.’s (2010) study on organizational emergence. The authors started with grounded theory, aiming to develop new theoretical insights on the emergence of organizational collectives. However, after realizing the substantial fit between their data-driven framework and the literature on complexity theory, they shifted to a flexible pattern matching approach, aiming to compare the empirical framework with complexity theory. The switch enabled the study to complement complexity theory with a collective-level perspective by matching the field data against, and adding to, the patterns derived from complexity theory (Chiles et al. 2010).

4 Research with flexible pattern matching: stages and roadmap

While the flexible pattern matching approach enjoys increasing popularity in qualitative business and management studies, there is a need for further guidelines on performing it. To this end, we provide a synthesis of the steps identified across prior studies employing this technique. We suggest five critical steps, outlined in Fig. 1, and present five ways of presenting pattern-matching results.

Fig. 1: Iteration between the theoretical and observational realms in conducting flexible-pattern-matching research

4.1 Formulating the research question(s)

An initial definition of the research question(s) is a critical start for studies with a flexible-pattern-matching design, as for explorative studies in general (Cassell and Bishop 2019; Cassell et al. 2006; Sinkovics 2018). If the researcher is struggling to identify a meaningful research question, a top-down partial pattern matching technique is recommended, as outlined by Sinkovics (2016). The researcher can then build on the results of this preliminary literature analysis to theoretically ground the research question for the flexible pattern matching study. Theory-driven research questions are generally derived from identified voids or conflicts in the extant literature. The contribution of the study then rests on the rich and in-depth qualitative data that offers insights to answer the questions (Eisenhardt 1989; Eisenhardt and Graebner 2007). For example, Sinkovics et al. (2014) first constructed a research gap in the organizational literature concerning a deep understanding of how base-of-the-pyramid business models and social value creation are related. They then applied a flexible-pattern-matching logic to conduct field research by linking it to extant studies. Phenomenon-driven research questions highlight the importance of a specific phenomenon and the deficiency of plausible extant theory to explain it, contributing by complementing or challenging existing theories (Eisenhardt 1989; Eisenhardt and Graebner 2007). For example, Gatignon and Capron (2020) motivated their study with the observation that a fast-expanding company in Brazil overcame institutional challenges in a way distinct from the two patterns suggested by the literature. Whether theory-driven or phenomenon-driven, this initial grounding is of great importance in flexible pattern matching to convince readers of the study’s contribution to business and to theory building or testing, in addition to demonstrating that the current literature lacks sufficient evidence to address the question. Moreover, it is equally important to note that the research questions are tentative and can be modified as more insights are derived from the study (Sinkovics 2018).

4.2 Generating theoretical patterns

In flexible pattern matching, patterns/constructs/templates deduced from theory are constructed and documented before data collection (Bitektine 2007; Kauppila 2010; Sinkovics et al. 2019). The generation of a priori patterns from pre-existing theoretical knowledge, or the construction of a conceptual framework, differs from widely adopted qualitative study methods, including grounded theory and the Gioia method. This feature enables studies to have a firm grounding in the related literature, shapes the initial theoretical design of the research, and informs an interview guide that seamlessly feeds into the data analysis. Researchers should also consider the scope and extent of the initial patterns: too many predefined patterns might hamper the analysis and push exploration out of bounds, while at the other extreme, too few patterns may neglect insights from relevant studies (King 2004; King et al. 2018). We suggest following a well-defined research focus and conducting an exhaustive review of relevant studies to form an appropriate number of patterns that fit the study’s objective, as in Sinkovics et al. (2019) and Bouncken and Barwinski (2020). To develop an initial conceptual model, Bouncken and Barwinski (2020) built on the knowledge exchange, digitalization, and global business literature and identified five forms of global knowledge exchange, four mechanisms of exchange, and two possible outcomes. The initial conceptual model generates an overall picture of the extant patterns on the investigated topic.

4.3 Theoretical sampling and data collection

The previous procedure provides the theoretical basis for the subsequent field research and establishes a connection between theories and practical social activities. Case or object selection is therefore determined by the theoretical base and by the nature and accessibility of the social activities. With the generated theoretical patterns as a foundation, researchers adopt theoretical sampling to select revelatory cases that are particularly suitable for examining, testing, or extending the logics indicated by the established patterns. The sample of flexible pattern matching studies can be a single case or multiple cases. The choice of a single case is mostly based on the uniqueness and revelatory nature of the given case and its potential for developing theories. In contrast, multiple cases are selected for theoretical relevance, aiming to achieve replication, diverse profiles, alternative explanations, or the extension of theory (Eisenhardt 1989; Eisenhardt and Graebner 2007; Yin 1994a). For example, Ross and Staw (1993) chose the Shoreham case to achieve both theory testing and development because of its fit with the research focus on organizational escalation and exit, as well as the uniqueness of the case in terms of scale, amount of investment, and the escalated result. In contrast, Saka (2004) selected three cases (one large and two medium-sized Japanese companies in the UK) to study the process of cross-national diffusion of work systems; the aim was to conduct a rigorous comparison of the diffusion process through flexible pattern matching. When determining the appropriate time, sites, and informants for data collection, researchers should follow the guidance of the theoretical focus and information from industry experts. Multiple data sources can be collected and combined to meet the needs of specific research aims. In-depth interviews, observation, documents, and artifacts are four primary sources of qualitative data; Table 3 provides an overview of key types and their corresponding functions.

4.4 Analyzing and matching data

The appearance of data analysis and matching after theoretical sampling and data collection does not necessarily mean that it should be conducted only after the completion of the previous step. When possible, a constant comparison approach is recommended. This entails a process of simultaneous data collection and data analysis (Eisenhardt and Graebner 2007), enabling researchers to glean predominant and characteristic patterns from the data and to take advantage of flexible data collection (Muhic and Bengtsson 2019). The analysis process in flexible pattern matching is iterative and involves moving back and forth between the theoretical patterns and the empirical data to discern the observed patterns. Inconsistencies with observed patterns or the emergence of unexpected patterns may trigger a further literature analysis, which in turn can drive further explorations of the data (Montealegre 2002; Muhic and Bengtsson 2019; Shane 2000). For example, Sinkovics, Hoque, and Sinkovics (2016) operationalized two theoretical concepts, social upgrading and social value creation, as the initial theoretical template to identify the overlaps and discrepancies with the observed patterns. The matching and comparison results enabled the authors to demonstrate that social upgrading can destroy pre-existing social value. In the flexible pattern matching approach, analytical saturation is reached when the matching process yields adequate information to answer the research question and new insights stop appearing (Glaser and Strauss 1968; Montealegre 2002; Strauss and Corbin 1998). It is also possible that at a certain stage of the data analysis, the data collection method undergoes alteration, addition, or modification. Such a change might be needed, especially in explaining future developments and in research aiming at theory building, because the iteration between theory and empirical observations might induce a new line of thinking, or new data collection opportunities might arise. However, this flexibility should be channeled into controlled alterations aimed at better grounding the theory or providing additional, relevant, and valuable theoretical insights; researchers should avoid arbitrary or opportunistic modifications.
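The iterative loop and the saturation criterion can also be sketched in code. The following Python fragment is a deliberately simplified illustration: coding is represented by pre-labelled data fragments, and saturation is reduced to “a new batch adds no new patterns”, whereas in practice saturation is a researcher judgement.

```python
def code_batch(batch):
    """Stand-in for interpretive coding: each data fragment here is
    already labelled with the pattern it evidences."""
    return set(batch)

# Invented example batches of coded data fragments.
data_batches = [
    ["transport barriers", "resource constraints"],
    ["transport barriers", "shared digital identity"],
    ["resource constraints", "shared digital identity"],  # nothing new
]

observed = set()
for batch in data_batches:
    new_patterns = code_batch(batch) - observed
    if not new_patterns:
        print("saturation reached: no new patterns in this batch")
        break
    observed |= new_patterns
    print("new patterns:", sorted(new_patterns))
```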

4.5 Interpreting and theorizing

In studies adopting a flexible pattern matching approach, findings and space for theory development derive from the mismatch between theoretical patterns and observed empirical patterns (Bouncken and Barwinski 2020; Sinkovics et al. 2019), the match between a modified theoretical framework and empirical observation (Jobber and Lucas 2000), or the emergence of new, unexpected patterns from the data (Sinkovics et al. 2014). In line with qualitative studies in general, the presentation of findings derived from a flexible pattern matching approach also involves story narration interspersed with quotations from informants and other supporting materials. However, the interpretation of insights from this approach is neither constrained to relating the narrative of each case, which risks losing the theory in ballooning text, nor trapped in generating spurious insights. Rather, the articulation of findings from flexible pattern matching is structured by theoretical patterns or a theoretical framework that enables the comparison between theoretical and empirical patterns. This comparison involves asking what is consistent, what is inconsistent, in which way, and why. Researchers have adopted multiple approaches to presenting the comparison and findings from flexible pattern matching, including template analysis, tables, matrices, figures, and narratives. In the following paragraphs, we briefly introduce each of them along with studies adopting each form of presentation.

Template analysis is a well-developed category of the flexible pattern matching logic (Crabtree and Miller 1999; King 2004; King et al. 2018). Template analysis deploys hierarchical coding to “produce a list of codes (‘template’) representing themes identified in their text data” (King 2004). The initial themes and sub-themes are derived from the literature review and are revised in light of the ongoing analysis of empirical data when mismatches between theoretical and empirical patterns, in the form of templates, appear. For example, in their study on service delivery work in complex case management, Spurrell et al. (2019) began by developing a template from the literature, including three themes that explain the network context: the patient network perspective, the commissioner network perspective, and the clinical team network perspective. Their empirical data analysis suggested adding network interconnection as an additional perspective for understanding and researching how the context functions.
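A hierarchical coding template of this kind can be represented as a simple nested structure. In the sketch below, the top-level theme and its three initial sub-themes paraphrase the Spurrell et al. (2019) example above; the dictionary representation and the revision step are our own illustration.

```python
# Initial template derived from the literature: one theme with
# three sub-themes (each mapped to a list of supporting data excerpts).
template = {
    "network context": {
        "patient network perspective": [],
        "commissioner network perspective": [],
        "clinical team network perspective": [],
    },
}

# A mismatch with the empirical data prompts a revision of the
# template: a new sub-theme is added.
template["network context"]["network interconnection"] = []

for theme, sub_themes in template.items():
    print(theme)
    for sub_theme in sub_themes:
        print("  -", sub_theme)
```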

Making use of tables is an efficient way to compare sets of patterns and present insights from the findings. In most cases, a framework derived from prior studies is used as the measure or underlying analytical structure for the comparison. The advantage of tables is that they systematically present the contrast between patterns, attribute by attribute. For instance, in light of Ostrom’s eight design principles, Gatignon and Capron (2020) evaluated two patterns from the extant literature and one pattern from the case regarding how they address each of the principles and the degree of their consistency with them. This distinctly presents how the new pattern from the case differs from the others and how it contributes to addressing voids in market-based institutions.

A matrix is a variation of a table, with numbers or other measurement scales used to operationalize the distance or closeness between the initial patterns and the observed ones. For example, Sinkovics et al. (2019) provide an overview of the operationalization corresponding to theoretical patterns, expected observed patterns, and empirically observed patterns, with an evaluation of low, medium, or high, aiming to “identify potential ‘breakdowns’ in our understanding based on current frameworks and/or empirical findings”.
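A matrix of this kind can be sketched as follows. The patterns and scores are invented for illustration; the point is that scoring expected against observed support makes potential “breakdowns” immediately visible.

```python
# Rows are theoretical patterns; each row records the expected and the
# empirically observed degree of support on a low/medium/high scale.
matrix = {
    "transport barriers":   {"expected": "high",   "observed": "high"},
    "leadership support":   {"expected": "medium", "observed": "low"},
    "resource constraints": {"expected": "low",    "observed": "medium"},
}

for pattern, scores in matrix.items():
    flag = "match" if scores["expected"] == scores["observed"] else "breakdown"
    print(f"{pattern:22s} expected={scores['expected']:6s} "
          f"observed={scores['observed']:6s} -> {flag}")
```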

Using figures enables the presentation and visualization of more complex relationships and interactions between constructs. For example, to examine an integrating view of the technical, political, and cultural aspects of firms’ marketing performance, Jobber and Lucas (2000) first developed a modified version of Tichy’s (1983) framework based on a review of the literature. Subsequently, they mapped out the relationships among the elements in a graphical representation. They then created another figure with the same structure to indicate the causes and linkages from the empirical cases, and used the comparison between these two figures to support their theorizing.

Narration is the most fundamental and informative way of presenting the findings from flexible pattern matching that can complement and interlink with all the aforementioned approaches. There are a number of prior studies that provide instructive insights on presenting verbal narration as the “art of words” (cf. Bansal and Corley 2012 ; Graebner et al. 2012 ; Pratt 2009 ).

There is no one-size-fits-all approach to craft high-quality write-ups of flexible-pattern-matching studies. Nevertheless, by relying on the general logic of flexible pattern matching (Sinkovics 2018 ), researchers can tailor this roadmap to fit the requirements of their particular study.

5 Challenges

Flexible pattern matching is a qualitative research approach based on the linkage and iteration between theory and observation. Although this design allows for improved rigor in qualitative studies through a strong grounding in extant theories without sacrificing the rich insights and exploration afforded by qualitative data, these advantages come at a cost in the form of several challenges for the researcher(s). First, the researcher might encounter difficulties in deducing convincing, inclusive, and focused theoretical patterns from various theoretical perspectives. Deriving such patterns initially requires a carefully considered research aim, followed by a focused yet comprehensive literature search within this sphere (Brooks and King 2014). To further aid controlled deduction (Jancsary et al. 2017; Trochim 1989), researchers can draw on the expertise of co-authors, as indicated by the roadmap in this study. A main issue is that using words and sentences as the medium of logical deduction must not weaken the logic of the reasoning process of theoretical pattern deduction (Lee 1989). As Lee (1989) stated, “deductions with verbal propositions (i.e., qualitative analysis) therefore only deprive itself of the convenience of the rules of algebra; it does not deprive itself of the rules of formal logic”.
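Lee's point can be made concrete with a schematic example of our own (not taken from Lee 1989): a theoretical pattern stated entirely in words still supports formally valid inference. The LaTeX sketch below (assuming the amsmath package) renders one such deduction.

```latex
% Schematic illustration (ours, not from Lee 1989); assumes amsmath.
% Read P as "the practice fits the organisation's culture" and
% Q as "the implementation of the practice persists".
\[
  \underbrace{P \rightarrow Q}_{\text{theoretical pattern}}, \qquad
  \underbrace{\neg Q}_{\text{observation}}
  \;\vdash\; \neg P \qquad \text{(modus tollens)}
\]
```

Observing a case in which implementation did not persist thus licenses the conclusion that cultural fit was absent; if cultural fit was demonstrably present, the theoretical pattern itself needs revision. Either way, formal logic rather than algebra carries the argument.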

Second, applying flexible pattern matching involves an exceptionally large commitment of time and effort to generate the sets of patterns and to move back and forth between them repeatedly (Reay and Jones 2016). Because the matching process is based on the development of both theoretical and observed patterns, it necessitates an exhaustive literature review on the research topic, the generation of insights therefrom, and iteration between pieces of empirical data and the theoretical patterns. When longitudinal investigation and data are included, the research further involves a substantial time span and the tracing of the focal events that matter for the study. A team-based approach to conducting such arduous work can help ensure the research project’s continuity and mitigate bias.

Third, some researchers might be concerned about the generalizability of insights from studies adopting a flexible pattern matching approach. This concern is, in fact, commonly raised about other qualitative methods as well. One can address it by articulating the object of interest within a context that includes a range of related elements for which the theory is adopted (Langley 1999; Trochim 1989). As Allison and Zelikow (1971) argued in their book, “refining partial paradigms, and specifying the classes of actions for which they are relevant, may be a more fruitful path to limited theory and propositions than the route of instant generalization”. Further, when adopting the flexible pattern matching approach, theoretical patterns derived from the literature and from multiple perspectives can increase the external validity of the findings and insights. Ultimately, the aim of flexible pattern matching is to pave the way for meaningful large-scale studies.

6 Suggestions and future developments

Business administration, as an application-oriented discipline within the humanities and social sciences, has come to recognize the need for methodological openness. Qualitative research and methodological pluralism have considerably expanded the discipline’s practical relevance in recent times. As flexible pattern matching rests on an iterative linkage between theoretical patterns, heterogeneous qualitative techniques, and empirical observation, the approach may serve as a promising qualitative research design for improving rigor, especially in those fields of business administration that, unlike humanities-related fields such as organizational theory or general management, show a strong resemblance to the natural sciences. For example, equilibrium models of asset pricing or firm financing in perfect capital markets typically rely on mathematics (e.g., arbitrage theory; Harrison and Kreps 1979; Ross 1978) or physics (e.g., stochastic processes such as geometric Brownian motion; Black and Scholes 1973). In practice, however, the state of equilibrium will often not be attained due to limits of arbitrage (Shleifer and Vishny 1997). In fact, empirical evidence casts doubt on the basic assumption that investors behave rationally, suggesting instead that they are misguided by noisy external signals, path-dependent losses of confidence during the arbitrage process, and cognitive dissonances induced by framing effects and mental accounting (Barberis and Thaler 2002; De Long et al. 1990). These inconsistencies between theoretical and observed patterns result in a bidirectional relationship between single facts and empirical observations on the one side and contextual framing effects on the other.
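To make this theoretical pattern tangible, the following Python sketch simulates a geometric Brownian motion price path of the kind assumed in Black-Scholes-style models; all parameter values are arbitrary illustrations, not calibrated ones.

```python
# Minimal simulation of a geometric Brownian motion price path, the kind of
# theoretical pattern underlying Black-Scholes (1973) style models.
# All parameter values are arbitrary illustrations.
import math
import random

def gbm_path(s0=100.0, mu=0.05, sigma=0.20, dt=1 / 252, steps=252, seed=7):
    """Exact discretisation: S_{t+dt} = S_t * exp((mu - sigma^2/2) * dt
    + sigma * sqrt(dt) * Z), with Z a standard normal draw."""
    random.seed(seed)
    path = [s0]
    for _ in range(steps):
        z = random.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

prices = gbm_path()
print(f"start: {prices[0]:.2f}, after one simulated year: {prices[-1]:.2f}")
```

Systematic deviations of real return series from such simulated paths (e.g., fat tails or volatility clustering) are precisely the kind of theory-observation mismatch that flexible pattern matching is designed to surface.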

The flexible pattern matching approach, although becoming increasingly popular, is still at a developing stage. This paper therefore contributes by synthesizing insights from existing studies with respect to its logic, its application, its advantages, and its comparison and combination with grounded theory-based approaches. We also provide a set of guidelines on the process of theory development. Researchers from multiple domains have recognized the promise of this approach for theorizing and theory building. Specifically, the continuous iteration between the theoretical and observational realms (as presented in Fig. 1) leverages and balances both the in-depth investigation of qualitative data and the underlying rationale derived from the literature (Zardini et al. 2020). Thus, the approach paves the way to identifying, problematizing, revising, and complementing existing theories. Such theoretical advancement lays the foundation for more convincing and valid strategies in business and management studies. Furthermore, when combined with a longitudinal research design, the flexible pattern matching approach facilitates the capture of changes and processes in the investigated issues and factors (Grund and Walter 2015; Muhic and Bengtsson 2019).

To unlock the full potential of the flexible pattern matching approach, additional work is needed to further refine and extend the guidelines for constructing theoretical patterns and for integrating data from various sources to generate patterns and support the matching process. Additionally, more refined criteria or measures are needed to assess the degree of pattern match or mismatch and to validate the results.

A growing set of modern, promising concepts in economic research can help scholars extract hidden relationships. For example, supported by the enormous recent progress in data storage capabilities and smart data analytics (Gomber et al. 2017; McAfee and Brynjolfsson 2012), digital finance features such as crowdfunding, social trading, and friendship networks (Belleflamme et al. 2014; Lin et al. 2013; Pan et al. 2012) may be used to achieve a better pattern match between empirical observation and existing theories. In a similar vein, the modern concept of axiomatic risk measures (Artzner et al. 1999) provides an appealing and widely recognized theory of how application-oriented classes of (e.g., coherent, spectral, or convex) risk measures can be defined (Acerbi and Tasche 2002; Artzner et al. 1999; Föllmer and Schied 2002) and which sets of “reasonable” properties risk measures within the respective classes should fulfill. However, practical decisions resulting from the use of these axiomatic risk measures may contradict economic intuition or well-established empirical patterns. For example, risk-averse individuals utilizing the most prominent axiomatic risk measure, Conditional Value-at-Risk (Acerbi and Tasche 2002), may violate the empirical paradigm of diversification in portfolio selection (Brandtner 2013; Koumou 2018; Markowitz 1952), increase their risk exposure even though their risk aversion is expected to rise (Brandtner and Kürsten 2015), or expand risky investment even though the threat of additional background risk looms (Brandtner 2018). For each of these behavioral inconsistencies between theoretical and observed patterns, ongoing data collection and adaptation of the framework are needed in order to see how the contextual boundaries of both the theoretical risk measure concepts and the observed empirical behavior in practical decision making under risk could be adjusted accordingly. We hope that the extensive discussion of the method in this article will stimulate its adoption.
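For readers unfamiliar with the risk measure discussed above, the Python sketch below estimates Conditional Value-at-Risk (also known as expected shortfall) from a sample of returns. It is a minimal historical estimator on made-up data, offered as an illustration rather than the formal definition given by Acerbi and Tasche (2002).

```python
# Hypothetical sketch of a historical Conditional Value-at-Risk (expected
# shortfall) estimator: the average of the worst alpha-fraction of returns.
# The return sample is made up for illustration.
import math

def historical_cvar(returns, alpha=0.05):
    """Mean of the worst ceil(alpha * n) returns (reported as a return,
    so a more negative value means a larger tail loss)."""
    n_tail = max(1, math.ceil(alpha * len(returns)))
    worst = sorted(returns)[:n_tail]
    return sum(worst) / len(worst)

sample = [0.010, -0.020, 0.015, -0.080, 0.005, 0.020, -0.010, -0.050, 0.030, 0.000]
print(f"5% CVaR of the sample: {historical_cvar(sample):.3f}")
```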

References

Acerbi C, Tasche D (2002) On the coherence of expected shortfall. J Bank Finance 26(7):1487–1503

Ainsworth S, Cox JW (2003) Families divided: culture and control in small family business. Organ Stud 24(9):1463–1485

Allison GT, Zelikow P (1971) Essence of decision: explaining the Cuban missile crisis. Little, Brown, Boston

Alvesson M, Kärreman D (2007) Constructing mystery: empirical matters in theory development. Acad Manag Rev 32(4):1265–1281

Artzner P, Delbaen F, Eber J-M, Heath D (1999) Coherent measures of risk. Math Finance 9(3):203–228

Bansal P, Corley K (2012) Publishing in AMJ - part 7: what’s different about qualitative research? Acad Manag J 55(3):509–513

Barberis N, Thaler R (2002) A survey of behavioral finance. In Constantinides GM, Harris M, Stulz RM (eds) Handbook of the economics of finance, vol 1. Elsevier, pp 1053–1128

Baxter RJ, Berente N (2010) The process of embedding new information technology artifacts into innovative design practices. Inf Organ 20(3–4):133–155

Belleflamme P, Lambert T, Schwienbacher A (2014) Crowdfunding: tapping the right crowd. J Bus Ventur 29(5):585–609

Bhatti MW, Ahsan A (2016) Global software development: an exploratory study of challenges of globalization, HRM practices and process improvement. RMS 10:649–682

Bitektine A (2008) Prospective case study design: qualitative method for deductive theory testing. Organ Res Methods 11(1):160–180. https://doi.org/10.1177/1094428106292900

Black F, Scholes M (1973) The pricing of options and corporate liabilities. J Polit Econ 81(3):637–654

Bouncken R, Barwinski R (2020) Shared digital identity and rich knowledge ties in global 3D printing—a drizzle in the clouds? Glob Strategy J. https://doi.org/10.1002/gsj.1370

Bouncken RB, Laudien SM, Fredrich V, Görmar L (2018) Coopetition in coworking-spaces: value creation and appropriation tensions in an entrepreneurial space. RMS 12(2):385–410

Bouncken R, Qiu Y, García FJS (2021) Flexible pattern matching approach: suggestions for augmenting theory evolvement. Technol Forecast Soc Change (accepted)

Brandtner M (2013) Conditional value-at-risk, spectral risk measures and (non-)diversification in portfolio selection problems—a comparison with mean–variance analysis. J Bank Finance 37(12):5526–5537

Brandtner M (2018) Expected shortfall, spectral risk measures, and the aggravating effect of background risk, or: risk vulnerability and the problem of subadditivity. J Bank Finance 89:138–149

Brandtner M, Kürsten W (2015) Decision making with expected shortfall and spectral risk measures: the problem of comparative risk aversion. J Bank Finance 58:268–280

Brooks J, King N (2014) Doing template analysis: evaluating an end-of-life care service. SAGE Publications, Thousand Oaks

Brooks J, McCluskey S, Turley E, King N (2015) The utility of template analysis in qualitative psychology research. Qual Res Psychol 12(2):202–222

Burchell N, Kolb D (2003) Pattern matching organisational cultures. J Manag Organ 9(3):50–61

Burton J, Khammash M (2010) Why do people read reviews posted on consumer-opinion portals? J Mark Manag 26(3–4):230–255

Canato A, Ravasi D, Phillips N (2013) Coerced practice implementation in cases of low cultural fit: cultural change and practice adaptation during the implementation of Six Sigma at 3M. Acad Manag J 56(6):1724–1753

Cassell C, Bishop V (2019) Qualitative data analysis: exploring themes, metaphors and stories. Eur Manag Rev 16(1):195–207

Cassell C, Symon G, Buehring A, Johnson P (2006) The role and status of qualitative methods in management research: an empirical account. Manag Decis 44(2):290–303

Cassell C, Cunliffe AL, Grandy G (2018) Introduction: qualitative research in business and management. In: Cassell C, Cunliffe AL, Grandy G (eds) The SAGE handbook of qualitative business and management research methods. SAGE Publications, Thousand Oaks, pp 1–13

Chiles TH, Bluedorn AC, Gupta VK (2007) Beyond creative destruction and entrepreneurial discovery: a radical Austrian approach to entrepreneurship. Organ Stud 28(4):467–493

Chiles TH, Meyer AD, Hench TJ (2010) Organizational emergence: the origins and transformation of Branson, Missouri’s musical theaters. Organ Sci 19(6):907–918

Corsaro D, Snehota I (2010) Searching for relationship value in business markets: are we missing something? Ind Mark Manag 39(6):986–995

Crabtree BF, Miller WL (1999) Using codes and code manuals: a template organizing style of interpretation. In: Crabtree BF, Miller WL (eds) Doing qualitative research. SAGE Publications, Thousand Oaks, pp 163–177

Cuervo-Cazurra A, Andersson U, Brannen MY, Nielsen BB, Reuber AR (2016) From the editors: can I trust your findings? Ruling out alternative explanations in international business research. J Int Bus Stud 47:881–897

De Long JB, Shleifer A, Summers LH, Waldmann RJ (1990) Noise trader risk in financial markets. J Polit Econ 98(4):703–738

Eisenhardt KM (1989) Building theories from case study research. Acad Manag Rev 14(4):532–550

Eisenhardt KM, Graebner ME (2007) Theory building from cases: opportunities and challenges. Acad Manag J 50(1):25–32

Föllmer H, Schied A (2002) Convex measures of risk and trading constraints. Finance Stochast 6(4):429–447

Gatignon A, Capron L (2020) The firm as an architect of polycentric governance: building open institutional infrastructure in emerging markets. Strateg Manag J. https://doi.org/10.1002/smj.3124

Gioia DA (2004) A renaissance self: prompting personal and professional revitalization. In: Stablein RE, Frost PJ (eds) Renewing research practice. Stanford University Press, Stanford, pp 97–114

Gioia DA, Price KN, Hamilton AL, Thomas JB (2010) Forging an identity: an insider-outsider study of processes involved in the formation of organizational identity. Adm Sci Q 55(1):1–46

Gioia DA, Corley KG, Hamilton AL (2013) Seeking qualitative rigor in inductive research: notes on the gioia methodology. Organ Res Methods 16(1):15–31

Gittins T, Lang R, Sass M (2015) The effect of return migration driven social capital on SME internationalisation: a comparative case study of IT sector entrepreneurs in Central and Eastern Europe. RMS 9:385–409

Glaser BG, Strauss AL (1968) The discovery of grounded theory: strategies for qualitative research. Nurs Res 17(4):364–366

Gomber P, Koch J-A, Siering M (2017) Digital finance and fintech: current research and future research directions. J Bus Econ 87(5):537–580

Graebner ME, Martin JA, Roundy PT (2012) Qualitative data: cooking without a recipe. Strateg Organ 10(3):276–284

Grbich C (2013) Qualitative data analysis: an introduction. SAGE Publications, Los Angeles

Greenwood R, Hinings CR, Brown J (1994) Merging professional service firms. Organ Sci 5(2):239–257

Grund C, Walter T (2015) Management compensation and the economic crisis: longitudinal evidence from the German chemical sector. RMS 9:751–777

Hammond KR (1966) Probabilistic functionalism: Egon Brunswik’s integration of the history, theory, and method of psychology. In: Hammond KR (ed) The psychology of Egon Brunswik. Holt, Rinehart and Winston, Inc, New York, pp 15–80

Harrison JM, Kreps DM (1979) Martingales and arbitrage in multiperiod securities markets. J Econ Theory 20(3):381–408

Hiebl MRW, Mayrleitner B (2019) Professionalization of management accounting in family firms: the impact of family members. RMS 13:1037–1068

Idemen E, Elmadag AB, Okan M (2020) A qualitative approach to designer as a product cue: proposed conceptual model of consumers’ perceptions and attitudes. Rev Manag Sci. https://doi.org/10.1007/s11846-020-00381-5

Jancsary D, Meyer RE, Höllerer MA, Barberio V (2017) Toward a structural model of organizational-level institutional pluralism and logic interconnectedness. Organ Sci 28(6):1150–1167

Jobber D, Lucas GJ (2000) The modified Tichy TPC framework for pattern matching and hypothesis development in historical case study research. Strateg Manag J 21(8):865–874

Kauppila OP (2010) Creating ambidexterity by integrating and balancing structurally separate interorganizational partnerships. Strateg Organ 8(4):283–312

Kenealy GJJ (2012) Grounded theory: a theory building approach. In: Symon G, Cassell C (eds) Qualitative organizational research: core methods and current challenges. SAGE, London, pp 408–425

King N (2004) Using templates in the thematic analysis of text. In: Cassell C, Symon G (eds) Essential guide to qualitative methods in organizational research. SAGE Publications, London

King N, Brooks J, Tabari S (2018) Template analysis in business and management research. In: Ciesielska M, Jemielniak D (eds) Qualitative methodologies in organization studies, vol 2. Palgrave Macmillan, Cham, pp 179–206

Koumou GB (2018) Diversification and portfolio theory: a review. Financ Mark Portf Manag 34:267–312

Langley A (1999) Strategies for theorizing from process data. Acad Manag Rev 24(4):691–710

Langley A, Abdallah C (2011) Templates and turns in qualitative studies of strategy and management. In: Bergh DD, Ketchen DJ (eds) Building methodological bridges. Emerald Group Publishing Limited, pp 201–235

Lee AS (1989) A scientific methodology for MIS case studies. MIS Q Manag Inf Syst 13(1):33–50

Lin M, Prabhala NR, Viswanathan S (2013) Judging borrowers by the company they keep: friendship networks and information asymmetry in online peer-to-peer lending. Manag Sci 59(1):17–35

Markowitz H (1952) Portfolio selection. J Finance 7:77–91

Massaro M, Moro A, Aschauer E, Fink M (2017) Trust, control and knowledge transfer in small business networks. RMS 13(2):267–301

McAfee A, Brynjolfsson E (2012) Big data: the management revolution. Harv Bus Rev 90(10):51–68

McCutcheon DM, Meredith JR (1993) Conducting case study research in operations management. J Oper Manag 11(3):239–256

Montealegre R (2002) A process model of capability development: lessons from the electronic commerce strategy at Bolsa de Valores de Guayaquil. Organ Sci 13(5):514–531

Muhic M, Bengtsson L (2019) Dynamic capabilities triggered by cloud sourcing: a stage-based model of business model innovation. Rev Manag Sci. https://doi.org/10.5465/AMBPP.2019.15991abstract

Ostrom E (2005) Understanding institutional diversity. Princeton University Press, Princeton

Pan W, Altshuler Y, Pentland A (2012) Decoding social influence and the wisdom of the crowd in financial trading network. Paper presented at the International Conference on Privacy, Security, Risk and Trust, Amsterdam

Parker M (1995) Working together, working apart: management culture in a manufacturing firm. Sociol Rev 3:519–547

Pratt MG (2009) From the editors: for the lack of a boilerplate: tips on writing up (and reviewing) qualitative research. Acad Manag J 52(5):856–862

Reay T, Jones C (2016) Qualitatively capturing institutional logics. Strateg Organ 14(4):441–454

Rindova VP, Petkova AP, Kotha S (2007) Standing out: how new firms in emerging markets build reputation. Strateg Organ 5(1):31–70

Ross SA (1978) A simple approach to the valuation of risky streams. J Bus 51(3):453–475

Ross J, Staw BM (1993) Organizational escalation and exit: lessons from the Shoreham nuclear power plant. Acad Manag J 36(4):701–732

Saka A (2004) The cross-national diffusion of work systems: translation of Japanese operations in the UK. Organ Stud 25(2):209–228

Sang KJC, Sitko R (2015) Qualitative data analysis approaches. In: O’Gorman K, MacIntosh R (eds) Research methods for business and management: a guide to writing your dissertation, 2nd edn. Goodfellow Publishers, London, pp 140–154

Shane S (2000) Prior knowledge and the discovery of entrepreneurial opportunities. Organ Sci 11(4):448–469

Shleifer A, Vishny RW (1997) The limits of arbitrage. J Finance 52(1):35–55

Sinkovics N (2016) Enhancing the foundations for theorising through bibliometric mapping. Int Mark Rev 33(3):327–350

Sinkovics N (2018) Pattern matching in qualitative analysis. In: Cassell C, Cunliffe AL, Grandy G (eds) The SAGE handbook of qualitative business and management research methods. SAGE Publications, Thousand Oaks, pp 468–485

Sinkovics N, Sinkovics RR, Yamin M (2014) The role of social value creation in business model formulation at the bottom of the pyramid–implications for MNEs? Int Bus Rev 23(4):692–707

Sinkovics N, Hoque SF, Sinkovics RR (2016) Rana Plaza collapse aftermath: are CSR compliance and auditing pressures effective? Acc Audit Account J 29(4):617–649

Sinkovics N, Choksy US, Sinkovics RR, Mudambi R (2019) Knowledge connectivity in an adverse context: global value chains and Pakistani offshore service providers. Manag Int Rev 59(1):131–170

Soto-Simeone A, Kautonen T (2020) Senior entrepreneurship following unemployment: a social identity theory perspective. Rev Manag Sci. https://doi.org/10.1007/s11846-020-00395-z

Spurrell M, Araujo L, Proudlove N (2019) Capturing context: an exploration of service delivery networks in complex case management. Ind Mark Manag 76:1–11

Strauss A, Corbin J (1998) Basics of qualitative research: techniques and procedures for developing grounded theory, 2nd edn. SAGE Publications, Thousand Oaks

Thornton PH, Ocasio W, Lounsbury M (2012) The institutional logics perspective: a new approach to culture, structure, and process. Oxford University Press, Oxford

Tichy NM (1983) Managing strategic change: technical, political, and cultural dynamics. Wiley, New York

Trochim WMK (1985) Pattern-matching, validity, and conceptualization in program evaluation. Eval Rev 9(5):575–604

Trochim WMK (1989) Outcome pattern matching and program theory. Eval Program Plan 12(4):355–366

Trochim WMK, Donnelly JP (2008) Research methods knowledge base. Atomic Dog/Cengage Learning, Mason

Van de Ven AH, Huber GP (1990) Longitudinal field research methods for studying processes of organizational change. Organ Sci 1(3):213–219

Waring T, Wainwright D (2008) Issues and challenges in the use of template analysis: two comparative case studies from the field. Electron J Bus Res Methods 6(1):85–94

Yin RK (1994a) Case study research and applications: design and methods. SAGE Publications, Thousand Oaks

Yin RK (1994b) Discovering the future of the case study method in evaluation research. Eval Pract 15(3):283–290

Yin RK (2009) Case study research: design and methods. SAGE Publications, Los Angeles

Yin RK, Moore GB (1988) Lessons on the utilization of research from nine case experiences in the natural hazards field. Knowl Soc 1(3):25–44

Zalewska-Kurek K, Harms R (2020) Managing autonomy in university–industry research: a case of collaborative PhD projects in the Netherlands. RMS 14:393–416

Zardini A, Ricciardi F, Bullini Orlandi L, Rossignoli C (2020) Business networks as breeding grounds for entrepreneurial options: organizational implications. RMS 14(5):1029–1046

Author information

Authors and Affiliations

Chair for Strategic Management and Organization, University of Bayreuth, Prieserstraße 2, 95444, Bayreuth, Germany

Ricarda B. Bouncken & Yixin Qiu

Management and International Business, University of Auckland, 12 Grafton Rd, Auckland Central, 1010, New Zealand

Noemi Sinkovics

Chair for Finance, Banking and Risk Management, Friedrich Schiller University of Jena, Carl-Zeiß-Straße 3, 07743, Jena, Germany

Wolfgang Kürsten

Corresponding author

Correspondence to Yixin Qiu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Bouncken, R.B., Qiu, Y., Sinkovics, N. et al. Qualitative research: extending the range with flexible pattern matching. Rev Manag Sci 15, 251–273 (2021). https://doi.org/10.1007/s11846-021-00451-2

Received: 16 November 2020

Accepted: 29 January 2021

Published: 16 February 2021

Issue Date: February 2021

DOI: https://doi.org/10.1007/s11846-021-00451-2

Keywords

  • Flexible pattern matching
  • Qualitative methods
  • Grounded theory
  • Connecting theories and data
  • Qualitative rigor

About the Journal

American Journal of Qualitative Research (AJQR) is a quarterly peer-reviewed academic journal that publishes qualitative research articles from a number of social science disciplines, such as psychology, health science, sociology, criminology, education, political science, and administrative studies. The journal has an international and interdisciplinary focus and greatly welcomes papers from all countries. It offers an intellectual platform for researchers, practitioners, administrators, and policymakers to contribute to and promote qualitative research and analysis.

ISSN: 2576-2141

Call for Papers- American Journal of Qualitative Research

American Journal of Qualitative Research (AJQR) welcomes original research articles and book reviews for its next issue. AJQR is a quarterly, peer-reviewed journal published in February, May, August, and November.

We are seeking submissions for a forthcoming issue to be published in February 2024. Papers should be written in professional English; a length of 6,000–10,000 words is preferred. All manuscripts should be prepared in MS Word format and submitted online: https://www.editorialpark.com/ajqr

For any further information about the journal, please visit its website: https://www.ajqr.org

Submission Deadline: November 15, 2023

Announcement 

Dear AJQR Readers, 

Due to the high volume of submissions to the American Journal of Qualitative Research, the editorial board has decided to publish quarterly as of 2023.

Volume 8, Issue 1

Current issue

Social distancing requirements resulted in many people working from home in the United Kingdom during the COVID-19 pandemic. The topic of working from home was often discussed in the media and online during the pandemic, but little was known about how quality of life (QOL) and remote working interfaced. The purpose of this study was to describe QOL while working from home during the COVID-19 pandemic. The novel topic, the unique methodological approach of the General Online Qualitative Study (D’Abundo & Franco, 2022a), and the strategic Social Distancing Sampling (D’Abundo & Franco, 2022c) resulted in significant participation throughout the world (n = 709). The United Kingdom subset of participants (n = 234) is the focus of this article. This “big qual” study (a large qualitative study, n > 100) included the principal investigator-developed, open-ended, online questionnaire entitled the “Quality of Life Home Workplace Questionnaire (QOLHWQ)” and demographic questions. Data were collected at the peak of the pandemic, from July to September 2020. Most participants cited increased QOL due to having more time with family/kids/partners/pets, a more comfortable work environment while being at home, and less commuting to work. The most cited issue associated with negative QOL was social isolation. As restrictions have been lifted and public health emergency declarations have been terminated during the post-peak era of the COVID-19 pandemic, the potential for future public health emergencies requiring social distancing still exists. To promote QOL and work-life balance for employees working remotely in the United Kingdom, stakeholders could develop social support networks and create effective planning initiatives to prevent social isolation and maximize the benefits of remote working experiences for both employees and organizations.

Keywords: qualitative research, quality of life, remote work, telework, United Kingdom, work from home.

This essay reviews classic works on the philosophy of science and contemporary pedagogical guides to scientific inquiry in order to present a discussion of the three logics that underlie qualitative research in political science. The first logic, epistemology, relates to the essence of research as a scientific endeavor and is framed as a debate between positivist and interpretivist orientations within the discipline of political science. The second logic, ontology, relates to the approach that research takes to investigating the empirical world and is framed as a debate between positivist qualitative and quantitative orientations, which together constitute the vast majority of mainstream researchers within the discipline. The third logic, methodology, relates to the means by which research aspires to reach its scientific ends and is framed as a debate among positivist qualitative orientations. Additionally, the essay discusses the present state of qualitative research in the discipline of political science, reviews the various ways in which qualitative research is defined in the relevant literature, addresses the limitations and trade-offs that are inherently associated with the aforementioned logics of qualitative research, explores multimethod approaches to remedying these issues, and proposes avenues for acquiring further information on the topics discussed.

Keywords: qualitative research, epistemology, ontology, methodology

This paper examines the phenomenology of diagnostic crossover in eating disorders, the movement within or between feeding and eating disorder subtypes or diagnoses over time, in two young women who experienced multiple changes in eating disorder diagnosis over 5 years. Using interpretative phenomenological analysis, this study found that transitioning between different diagnostic labels, specifically between bulimia nervosa and anorexia nervosa binge/purge subtype, was experienced as disempowering, stigmatizing, and unhelpful. The findings in this study offer novel evidence that, from the perspective of individuals diagnosed with EDs, using BMI as an indicator of the presence, severity, or change of an ED may have adverse consequences for well-being and recovery and may lead to mischaracterization or misclassification of health status. The narratives discussed in this paper highlight the need for more person-centered practices in the context of diagnostic crossover. Including the perspectives of those with lived experience can help care providers working with individuals with eating disorders gain an in-depth understanding of the potential personal impact of diagnosis changing and inform discussions around developing person-focused diagnostic practices.

Keywords: feeding and eating disorders, bulimia nervosa, diagnostic labels, diagnostic crossover, illness narrative

Often among the first witnesses to child trauma, educators and therapists are on the frontline of an unfolding and multi-pronged occupational crisis. For educators, lack of support and secondary traumatic stress (STS) appear to be contributing to an epidemic in professional attrition. Similarly, therapists who do not prioritize self-care can feel depleted of energy and optimism. The purpose of this phenomenological study was to examine how bearing witness to the traumatic narratives of children impacts these helping professionals. The study also sought to extrapolate the similarities and differences between compassion fatigue and secondary trauma across the two disciplines. Exploring the common factors and subjective individual experiences related to occupational stress across these two fields may foster a more complete picture of the delicate nature of working with traumatized children and the importance of successful self-care strategies. Utilizing Constructivist Self-Development Theory (CSDT) and focus group interviews, the study explores the significant risk of STS facing both educators and therapists.

Keywords: qualitative, secondary traumatic stress, self-care, child trauma, educators, therapists.

This study explored the lived experiences of residents of the Gulf Coast in the USA during Hurricane Katrina, which made landfall in August 2005 and caused immense destruction throughout the area. A heuristic process and thematic analysis were employed to draw observations and conclusions about the lived experiences of each participant and make meaning through similar thoughts, feelings, and themes that emerged in the analysis of the data. Six themes emerged: (1) fear, (2) loss, (3) anger, (4) support, (5) spirituality, and (6) resilience. The results of this study point to the possible psychological outcomes of experiencing a traumatic event and provide an outline of what the psychological experience of trauma might entail. The current research suggests that preparedness and expectation are key to resilience and that people who feel that they have power over their situation fare better than those who do not.

Keywords: mass trauma, resilience, loss, natural disaster, mental health.

Women from rural, low-income backgrounds holding positions within the academy are the exception and not the rule. Most women faculty in the academy are from urban/suburban areas and middle- and upper-income family backgrounds. As women faculty who do not represent this norm, our primary goal with this article is to focus on the unique barriers we experienced as girls from rural, low-income areas in K-12 schools that influenced the possibilities for successfully transitioning to and engaging with higher education. We employed a qualitative duoethnographic and narrative research design to respond to the research questions, and we generated our data through semi-structured, critical, ethnographic dialogic conversations. Our duoethnographic-narrative analyses revealed six major themes: (1) independence and other benefits of having a working-class mom; (2) crashing into middle-class norms and expectations; (3) lucking and falling into college; (4) fish out of water; (5) overcompensating, playing middle class, walking on eggshells, and pushing back; and (6) transitioning from a working-class kid to a working-class academic, which we discuss in relation to our own educational attainment.

Keywords: rurality, working-class, educational attainment, duoethnography, higher education, women.

This article draws on the findings of a qualitative study that focused on the perspectives of four Indian American mothers of youth with developmental disabilities on the process of transitioning from school to post-school environments. Data were collected through in-depth ethnographic interviews. The findings indicate that in their efforts to support their youth with developmental disabilities, the mothers themselves navigate multiple transitions across countries, constructs, dreams, systems of schooling, and services. The mothers’ perspectives have to be understood against the larger context of their experiences as citizens of this country as well as members of the South Asian diaspora. The mothers’ views on services, their journey, their dreams for their youth, and their interpretation of the ideas anchored in current conversations on transition are continually evolving. Their attempts to maintain their resilience and their indigenous understandings while simultaneously negotiating their experiences in the United States with supporting their youth are discussed.  

Keywords: Indian-American mothers, transitioning, diaspora, disability, dreams.

This study explored the influence of yoga on practitioners’ lives ‘off the mat’ through a phenomenological lens. Central to the study was the lived experience of yoga in a purposive sample of self-identified New Zealand practitioners (n=38; 89.5% female; aged 18 to 65 years; 60.5% aged 36 to 55 years). The study’s aim was to explore whether habitual yoga practitioners experience any pro-health downstream effects of their practice ‘off the mat’ via their lived experience of yoga. A qualitative mixed methodology was applied via a phenomenological lens that explicitly acknowledged the researcher’s own experience of the research topic. Qualitative methods comprised an open-ended online survey for all participants (n=38), followed by in-depth semi-structured interviews (n=8) on a randomized subset. Quantitative methods included online outcome measures (health habits, self-efficacy, interoceptive awareness, and physical activity), practice component data (tenure, dose, yoga styles, yoga teacher status, meditation frequency), and socio-demographics. This paper highlights the qualitative findings emerging from participant narratives. Reported benefits of practice included the provision of a filter through which to engage with life and the experience of self-regulation and mindfulness ‘off the mat’. Practitioners experienced yoga as a self-sustaining positive resource via self-regulation guided by an embodied awareness. The key narrative to emerge was an attunement to embodiment through movement. Embodied movement can elicit self-regulatory pathways that support health behavior.

Keywords: embodiment, habit, interoception, mindfulness, movement practice, qualitative, self-regulation, yoga.

Historically and in the present day, Black women’s positionality in the U.S. has paradoxically situated them in a society where they are both intrinsically essential and treated as expendable. This positionality, known as gendered racism, manifests commonly in professional environments and results in myriad harms. In response, Black women have developed, honed, and practiced a range of coping styles to mitigate the insidious effects of gendered racism. While often effective in the short-term, these techniques frequently complicate Black women’s well-being. For Black female clinicians who experience gendered racism and work on the frontlines of community mental health, myriad bio-psycho-social-spiritual harms compound. This project provided an opportunity for Black female clinicians from across the U.S. to share their experiences during the dual pandemics of COVID-19 and anti-Black violence. I conducted in-depth interviews with clinicians (n=14) between the ages of 30 and 58. Using the Listening Guide voice-centered approach to data generation and analysis, I identified four voices to help answer this project’s central question: How do you experience being a Black female clinician in the U.S.? The voices of self, pride, vigilance, and mediating narrated the complex ways participants experienced their workplaces. This complexity seemed to be context-specific, depending on whether the clinicians worked in predominantly White workplaces (PWW), a mix of PWW and private practice, or private practice exclusively. Participants who worked only in PWW experienced the greatest stress, oppression, and burnout risk, while participants who worked exclusively in private practice reported more joy, more authenticity, and more job satisfaction. These findings have implications for mentoring, supporting, and retaining Black female clinicians.

Keywords: Black female clinicians, professional experiences, gendered racism, Listening Guide voice-centered approach.

The purpose of this article is to speak directly to the paucity of research regarding Dominican American women and identity narratives. To do so, this article uses the Listening Guide Method of Qualitative Inquiry (Gilligan et al., 2006) to explore how 1.5 and second-generation Dominican American women narrated their experiences of individual identity within American cultural contexts and constructs. The results draw from the emergence of themes across six participant interviews and show two distinct voices: the Voice of Cultural Explanation and the Tides of Dominican American Female Identity. Narrative examples from five participants are offered to illustrate where 1.5 and second-generation Dominican American women negotiate their identity narratives at the intersection of their Dominican and American selves. The article offers two conclusions. One, that participant women use the Voice of Cultural Explanation in order to discuss their identity as reflected within the broad cultural tensions of their daily lives. Two, that the Tides of Dominican American Female Identity are used to express strong emotions that manifest within their personal narratives as the unwanted distance from either the Dominican or American parts of their person.

Keywords: Dominican American, women, identity, the Listening Guide, narratives
