

QUALITATIVE METHODS

13 Case Studies

Valéry Ridde, Abdourahmane Coulibaly, and Lara Gautier

Case studies consist of an in-depth analysis of one or more cases, using a variety of methods and theoretical approaches. The choice of cases (single or multiple) studied is crucial. Case studies are particularly suitable for studying the emergence and processes involved in policy implementation and for contributing to theory-based evaluations.

Keywords: Qualitative methods, quantitative methods, mixed methods, case study, theoretical approaches, single/multiple cases, empirical triangulation, analytical generalisation

I. What does this method consist of?

Long established in anthropology, the case study approach has also long been used in evaluation, where it is considered not a method but a research strategy (Yin 2018). By studying a policy in context and using multiple lines of evidence, the case study (single or multiple) seeks to answer ‘how’ and ‘why’ questions from a systems perspective and with the support of theoretical approaches. Conducting a case study for a public policy evaluation follows a standard evaluation process: planning, drafting the protocol, preparing the field, collecting and analysing data, sharing results and making recommendations for policy improvement (Gagnon 2012). As with all evaluations, the choice of methods should follow from the objectives and the evaluation question, not the other way around. A case study may thus mobilise qualitative, quantitative and various mixed-methods designs.

The case study strategy is therefore appropriate when organising an evaluation of policy emergence, process, relevance or adaptation. It is often mobilised when evaluation teams have little or no control over the events and context that influence policy actions. This is often the case outside of experimental situations, which are rare in the field of public policy. It is therefore mostly recommended for understanding a contemporary, often complex, phenomenon organised in a real context.

The case study approach can be used to explain a public policy, describe it in depth or illustrate a specific situation, which can sometimes be original and enlightening for decision-making. The advantage of case studies is that they can be adapted to different situations where there are multiple variables of interest around a policy. They can also draw on multiple sources of data, both quantitative and qualitative, allowing for empirical triangulation. The case study strategy allows theoretical propositions and the state of scientific knowledge to guide data collection and analysis. It fits perfectly with, but is not limited to, theory-based evaluation approaches (see separate chapter on theory-based evaluation).

There are a myriad of proposals for the types of case studies that are possible. Firstly, it is possible to use single case studies (involving one policy) or multiple case studies (several policies in the same organisational context, or one policy in different contexts). Secondly, these cases can be studied holistically (the policy as a whole) or at different levels of analysis (the dimensions of the policy that the intervention theory will have specified, or particular regional contexts). The choice of case studies should be heuristic (to learn from the study) and strategic (to have data available within the available budget, to answer useful questions). A key criterion for case selection is having sufficiently relevant information to understand the policy in its depth and complexity. Case sampling should therefore be explicit, rigorous and transparent. The selection of case studies can thus be critical, unique, typical, revealing, instrumental, etc. This selection can also be carried out in collaboration between the research and policy teams to ensure that the choices are relevant and feasible. The selection can also be based on prior quantitative analyses to establish the starting situation of the cases and, for example, choose cases that are very contrasting or very similar in their performance with regard to the policy being analysed.

Sometimes it can also be useful to have a diachronic approach in order to produce longitudinal case studies. For example, analysing a policy over time can reveal the influences of changes in the context or in the strategies of those implementing it, or of those benefiting from it. Starting with cases with similar initial conditions and then studying their evolution is referred to as ‘racing cases’ by Eisenhardt (Gehman et al. 2018).

When analysing the data, the case study approach requires, in addition to the usual analyses specific to each method (content analysis, thematic analysis, descriptive or inferential statistics, etc.), mobilising a replication logic. The idea is to compare, in a systematic and rigorous way, the empirical data and the theory, be it the theory of the policy intervention or a theoretical or conceptual framework used to understand the policy. This process is referred to by Yin as analytical generalisation. When several cases support the same theory, it is possible to suggest the presence of a replication logic (Yin 2010).
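The replication logic described above can be illustrated with a minimal sketch: tabulate, for each theoretical proposition, which cases support it and which challenge it. The proposition texts and case names below are hypothetical, not taken from the chapter.

```python
# Replication logic (Yin): compare each case's empirical findings with the
# theoretical propositions. Agreement across cases supports analytical
# generalisation; disagreement flags rival explanations to examine.

# Hypothetical propositions derived from an intervention theory.
PROPOSITIONS = {
    "P1": "local actors adapt payment rules to their context",
    "P2": "verification procedures increase reporting reliability",
}

# Hypothetical per-case findings: does the evidence support each proposition?
cases = {
    "district_A": {"P1": True, "P2": True},
    "district_B": {"P1": True, "P2": False},
    "district_C": {"P1": True, "P2": True},
}

def replication_summary(cases):
    """For each proposition, list supporting and non-supporting cases."""
    summary = {}
    for pid in PROPOSITIONS:
        summary[pid] = {
            "supported_by": [c for c, f in cases.items() if f.get(pid)],
            "challenged_by": [c for c, f in cases.items() if not f.get(pid)],
        }
    return summary

result = replication_summary(cases)
# Here P1 is replicated across all three cases, while P2 is challenged in
# district_B, which calls for examining rival hypotheses before generalising.
```

The tabulation is deliberately simple: the analytical work lies in deciding, case by case, whether the evidence supports a proposition, not in the bookkeeping.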

Configurations can be heuristic tools for this analysis, whether they are organisational or rooted in critical realism (see separate chapter on realist evaluation). Furthermore, finding similar patterns, or situations, in different contexts strengthens the ability to generalise the results of case studies. Yin argues that analytical generalisation requires the construction of a very strong case that can withstand the challenges of logical analysis. First, it is essential to specify this theoretical rationale at the outset of the case study, either by mobilising a theory or by drawing on the state of the art, without it being entirely specific to the public policy being analysed. At the beginning of a case study, it is therefore necessary to remain at a relatively high conceptual level, at least higher than the policy under study. Second, the empirical results of the case study must show how they align (or not) with the initial theoretical argument. Third, it is necessary to discuss how this theoretical thinking, developed from this particular policy, can also be applied to other situations and policies beyond the particular case study. Formulating counter-arguments (rival hypotheses) at the outset of the case study, and seeking empirical evidence during data collection that could refute them, reinforces the validity of this process of analytical generalisation. Finally, the power of multiple case studies is that analytical generalisation is strengthened when the results of one case are similar to those of other cases.

Some research teams even propose that case studies can lead to theory-building, especially when analysing complex objects such as public policies.

II. How is this method useful for policy evaluation?

Before deciding to embark on a case study approach, two preliminary questions should be asked which will determine the appropriateness of the approach:

Does the phenomenon I am interested in need the case(s) to be understandable? (e.g., theory-building case studies)

Does the case (or cases) represent an empirical window that informs the analysis of the wider phenomenon?

Once one or the other has been answered positively, the evaluative questions can be defined:

Under what real-life conditions can public policy X, piloted in context A, be scaled up in contexts B, C, and D?

How did the controversy about public policy Y in context B emerge?

What are the success factors for the implementation of public policy X in context A?

How were public policies Y and Z implemented in context B?

Why did public policy X in context A and B fail, while it had positive effects in context C?

Why did public policy X implemented in context A fail, while public policy Y implemented in the same context A succeeded?

What is it about the characteristics of public policy Z, implemented in contexts A, B, and C, that informs theory-building case studies?

The case study can be used at any point in the evaluation process, ex ante (at the time of policy design), in itinere (during implementation), or ex post (e.g. to better understand the results produced).

III. An example of the use of this method in Burkina Faso

Single and multiple longitudinal case studies were mobilised to study a public health financing policy in Burkina Faso (Ridde 2021).

The World Bank encouraged the government to test, in a dozen districts, a modality for financing health centres in addition to the state budget. The idea was to organise a performance-based payment system in which health centres and health professionals received additional funds based on the achievement of activity results. For example, for each delivery performed in the centre with a partograph, the centre received 3.2 euros, to be shared between the structure and the staff according to complex procedures and indicators. Verification and control processes were organised to ensure the reliability of payment claims.

To study the emergence of this new policy, we conducted a single case study (focusing on the policy) to better understand its origin, ideas, proposed solutions, people who proposed it, power issues, etc. We employed a literature review and 14 qualitative in-depth interviews with policy makers, funding agencies and experts on the subject. Using an analytical generalisation approach, we compared this emergence to understand whether what happened in Burkina Faso was also happening in Benin.

To study the implementation of the policy in Burkina Faso, we then used multiple longitudinal case studies. For reasons of time and budget, we selected three districts representing the diversity of situations in which the policy was implemented. Then, within each of these districts, we selected six cases from among the primary health centres (about 30 per district) and one case that was the referral hospital (only one per district). The six cases were selected according to the three types of financing strategies that the policy wished to test, i.e. two cases per type. We decided to select the two cases with the greatest possible contrast within each of the three types: one high-performing health centre and one low-performing one. Performance was calculated using a quantitative method (time series) on the basis of indicators of health centre attendance in the years preceding the policy. This etic analysis (from the external perspective) ranked all the health centres by performance to support case selection. The selection also benefited from the emic opinion (from the internal point of view) of local health system managers, in order to take into account their own perception of the centres’ performance, beyond the quantitative approach, which gives only a partial view of performance. Thus, for each of the seven cases selected per district (7 × 3 = 21), we used multiple sources of data to understand the challenges of policy implementation: analysis of documentation, formal qualitative interviews (between 114 and 215 per district), informal interviews (between 26 and 168 per district), and observations of situations. A data collection grid was also used to measure the fidelity of policy implementation. In order to better understand the evolution of policy implementation, and in particular adaptations over time, three rounds of data collection were carried out over a 24-month period, following the longitudinal multiple case study approach.
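The etic ranking step described above can be sketched as follows. The attendance figures, centre names and helper function are illustrative assumptions, not the study's actual data:

```python
# Contrast-based case selection: within each financing strategy, rank health
# centres by a pre-policy attendance indicator and keep the highest- and
# lowest-performing centre (the 'etic' step; the 'emic' opinion of local
# managers would then refine this shortlist).

from statistics import mean

# Hypothetical monthly attendance counts per centre, grouped by the
# financing strategies the policy tested.
centres = {
    "strategy_1": {"centre_a": [120, 130, 125],
                   "centre_b": [60, 55, 70],
                   "centre_c": [90, 95, 88]},
    "strategy_2": {"centre_d": [200, 210, 190],
                   "centre_e": [80, 85, 78]},
}

def select_contrasting_cases(centres):
    """Return (best, worst) centre per strategy by mean attendance."""
    selection = {}
    for strategy, data in centres.items():
        ranked = sorted(data, key=lambda c: mean(data[c]), reverse=True)
        selection[strategy] = (ranked[0], ranked[-1])
    return selection

selection = select_contrasting_cases(centres)
# -> {'strategy_1': ('centre_a', 'centre_b'),
#     'strategy_2': ('centre_d', 'centre_e')}
```

In the study itself, the indicator came from time-series analyses rather than simple means, and the quantitative ranking was deliberately confronted with managers' perceptions before final selection.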

Finally, these case studies have also been fruitful in studying, with a qualitative approach and a long immersion in the field, the unexpected consequences (positive or negative) of this policy. Although this dimension of evaluation is still too little studied, its application in Burkina Faso has shown the relevance of this approach (Turcotte-Tremblay et al. 2017). Limiting oneself to the expected effects, as an exclusive focus on the intervention theory developed by the teams that define the policy often implies, reduces the heuristic scope of the evaluation. While documenting successes is essential, understanding challenges is also necessary to improve public policies with the help of case studies.

For all these approaches, the analysis was carried out in a hybrid manner, both deductive (with respect to the intervention theory or a conceptual framework) and inductive (original empirical data). The comparison between cases, between districts and between countries allowed for an increase in abstraction in an analytical generalisation process.

IV. What are the criteria for judging the quality of the mobilisation of this method?

Judging the quality of a complex approach such as case studies requires a global vision, going beyond the specific but essential reflections of the usual methods (quantitative and qualitative). To this end, Yin (2018) proposes to study the quality of case studies in terms of four dimensions:

Construct validity (studying the expected policy and not something else): using multiple sources of evidence, describing and establishing a causal chain, involving stakeholders in the validation of the protocol and reports;

Internal validity (confidence in results): compare empirical data with each other and with theory, construct explanatory logics, account for competing and alternative hypotheses, use logical frameworks/theories of intervention;

External validity (ability to generalise results): use theories, use the logic of analytical replication;

Reliability (for the same case study, the same findings): use a policy study protocol, develop a case database.

V. What are the strengths and limitations of this method compared to others?

The main strength of the case study is its ability to ‘incorporate the unique characteristics of each case and to examine complex phenomena in their context’, i.e. in real-life conditions (Stiles 2013, 30).

The case study strategy, due to the abundance and variety of the corpus of data mobilised and the research methods employed (qualitative, quantitative or mixed), most often allows for a rich description of the public policy(ies) being evaluated and the contexts of implementation. This is particularly true of single case studies, which allow for in-depth analysis. The main advantage of multiple case studies is that they allow for more potential variation, which increases the robustness of the explanation. The downside is that these strategies require a significant time commitment. Thus, the sheer volume of work can be problematic, especially if the deadlines set by the sponsors are short. In addition, if there are several evaluative questions, or a question that invites linking implementation issues to outcomes, then it may be necessary to consider combining the case study (which may focus on process analysis, for example) with another complementary research strategy, such as quasi-experimental approaches (Yin and Ridde 2012). Finally, several biases may arise, such as the biased choice of case(s) and low statistical power when conducting quantitative analyses. These biases may erode comparability across cases or contexts. The rich justification of the choice of cases (public policies) (Stake 1995) and the description of the context(s), as well as the process of analytical generalisation described above, help to reduce the impact of these biases.

With regard to theory-building case studies, both advantages and disadvantages of the case study are identified (Stiles 2013). The case study strategy here consists of comparing different statements from theory with one or more observations. This can be done by describing the few cases in theoretical terms. Thus, although each detail can only be observed once, they can be very numerous and therefore useful for theory building. However, the same biases mentioned above are likely to occur (biased case selection, low statistical power). Confidence in individual statements may be eroded by these biases. On the other hand, as many statements are examined – reflecting a variety of contexts and therefore possible variations – the overall strengthening of confidence in the theory may be just as important as in a hypothesis testing study.

Some bibliographical references to go further

Gagnon, Yves-Chantal. 2012. L’étude de cas comme méthode de recherche. 2nd ed. Québec: Presses de l’Université du Québec.

Gehman, Joel, Vern L. Glaser, Kathleen M. Eisenhardt, Denny Gioia, Ann Langley, and Kevin G. Corley. 2018. “Finding Theory–Method Fit: A Comparison of Three Qualitative Approaches to Theory Building.” Journal of Management Inquiry 27(3): 284–300. https://doi.org/10.1177/1056492617706029.

Ridde, Valéry, ed. 2021. Vers une couverture sanitaire universelle en 2030? Québec, Canada: Éditions science et bien commun. https://doi.org/10.5281/ZENODO.5166925.

Stake, Robert E. 1995. The Art of Case Study Research. Thousand Oaks, CA: SAGE Publications.

Stiles, William B. 2013. “Using Case Studies to Build Psychotherapeutic Theories.” Psychothérapies 33(1): 29–35. https://doi.org/10.3917/psys.131.0029.

Turcotte-Tremblay, Anne-Marie, Idriss Ali Gali-Gali, Manuela De Allegri, and Valéry Ridde. 2017. “The Unintended Consequences of Community Verifications for Performance-Based Financing in Burkina Faso.” Social Science & Medicine 191: 226–36. https://doi.org/10.1016/j.socscimed.2017.09.007.

Yin, Robert K. 2010. “Analytic Generalization.” In Encyclopedia of Case Study Research, edited by Albert Mills, Gabrielle Durepos, and Elden Wiebe. Thousand Oaks, CA: SAGE Publications. https://doi.org/10.4135/9781412957397.n8.

Yin, Robert K. 2018. Case Study Research and Applications: Design and Methods. 6th ed. Los Angeles: SAGE.

Policy Evaluation: Methods and Approaches Copyright © by Valéry Ridde, Abdourahmane Coulibaly, and Lara Gautier is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.


Open access · Published: 14 January 2019

Novel methods of qualitative analysis for health policy research

Mireya Martínez-García, Maite Vallejo, Enrique Hernández-Lemus and Jorge Alberto Álvarez-Díaz

Health Research Policy and Systems, volume 17, Article number: 6 (2019)


Currently, thanks to the growing number of public database resources, most evidence on planning and management, healthcare institutions, policies and practices is becoming available to everyone. However, one limitation for the advancement of data- and literature-driven research has been the lack of flexibility of the methodological resources used in qualitative research. There is a need to incorporate user-friendly, cheaper and faster tools for the systematic, unbiased analysis of large data corpora, in particular regarding the often-overlooked qualitative aspects of the information.

This article proposes a series of novel techniques, exemplified by the case of the role of Institutional Committees of Bioethics, to (1) massively identify the documents relevant to a given issue, (2) extract the fundamental content, focusing on qualitative analysis, (3) synthesise the findings in the published literature, (4) categorise and visualise the evidence, and (5) analyse and report the results.

A critical study of the institutional role of public health policies and practices in Institutional Committees of Bioethics was used as an example application of the method. Interactive strategies were helpful to define and conceptualise variables, propose research questions and refine research interpretation. These methods are additional aids to systematic reviews, pre-coding schemes and construction of a priori diagrams to survey and analyse social science literature.

Conclusions

These novel methods have proven to facilitate the formulation and testing of hypotheses on the subjects under study. For pragmatic reasons of time and cost, such tools may allow important advances, moving from descriptive approaches to decision-making and even institutional assessment and policy redesign.


Complexities of policy analysis

Healthcare institutions are complex organisations whose procedures, activities and, ultimately, outcomes should be assessed constantly in order to optimise their functionality in ever-changing environments [ 1 ]. There are several ways to perform qualitative analysis of Health Care Institutions Policies and Practices (HCIPP), including ethnography, ethnomethodology, phenomenology, action research, grounded theory, critical discourse analysis, and evidence-based science, among others [ 2 ]. The Qualitative Research Methodology (QRM), for instance, uses data collected to discover or refine research questions because, usually, performance variables are not fully conceptualised or completely defined [ 3 , 4 ].

As with all research strategies, choosing the best QRM is vital to obtaining the desired results in HCIPP analysis. Computerized Qualitative Analysis of Discourse (CQAD), for instance, is used to extract and synthesise descriptions of search, selection, quality appraisal, analysis and synthesis methods. Additionally, evidence-based non-systematic literature reviews (NSLR), rapid reviews, scoping studies and research syntheses have gained wide acceptance in the QRM [ 5 ].

CQAD and NSLR both have some degree of empirical support and evidence classifying their epistemological strength; both converge in the analytic phase, sharing methodologies for decontextualising and recontextualising data, coding, sorting, identifying themes and relationships, and drawing conclusions [2]. At this stage, it is useful to assess the strengths and limitations of current approaches to policy analysis and to address how improvement can be achieved in this regard.

Advantages and disadvantages of traditional approaches

A literature search is a key step in carrying out good, reliable HCIPP research. It helps in formulating or refining a research question and planning the strategies of study [6]. Access to the most relevant articles, with maximum evidence, in a shorter time and at lower cost is essential for HCIPP analysis.

Bioethics literature is vast. Researchers use CQAD and NSLR to examine patterns in documents in a replicable and systematic manner. On the one hand, CQAD is used to automate the classification and coding of categories in texts, or to prepare sets of texts for drawing inferences. It also saves time and cost compared with manual analysis limited to counting words and lines. The main disadvantage of CQAD is its dependence on the subjective impressions of a reader [7, 8].

On the other hand, two types of non-systematic review have been discussed in relation to bioethics literature, namely Introductory Reviews of Bioethics Literature and Critical Interpretive Reviews of Bioethics Literature. These approaches have been quite popular recently since they are faster and easier to implement than systematic reviews of the literature [9, 10]. However, some of them have raised scientific and methodological controversies about transparency, rigour, comprehensiveness and reproducibility. Further, these approaches have disadvantages related to insufficiently focused review scopes, the diversity of terminology needed to identify all relevant publications, and quality assessment [5, 11].

Introducing novel methods

This article on novel methods of qualitative analysis is aimed at policy-makers, bioethics health professionals and researchers. The model is proposed for pragmatic reasons of time and cost. Many of the processes underlying institutional policies and practices have not been properly investigated; thus, there is a need to incorporate QRM frameworks for such research. Consequently, to address the phenomenon of institutional analysis and to account for its relationship to public health, a systematic model of critical analysis is proposed and exemplified by the case of the role of Institutional Committees of Bioethics (ICB).

Aims of this work and case study outline

As a case study to introduce our methodological proposal, we will analyse the case of the HCIPP of ICB. In this subsection, we provide some information regarding the choice of this case study and the foundations for its analysis.

The bioethical discourse in public policy establishes an important part of practice in public healthcare institutions, as is the case for ICB, since science and technology have recently been increasingly reassessed in ethical terms [12–14]. Ethics has become the decisive semantic form in which government discourses of greater political relevance are carried out, and has since become the dominant discourse [15]. As a branch of applied ethics, bioethics has become the political medium for the creation of a moral economy in which value commitments are made capable of legitimising the regulatory policies necessary to maintain public confidence in biomedical science and healthcare [16].

A growing number of studies have explored the role of ICB in various fields, from the academic and biotechnological to the medical and legal [13, 14]. The strategies for studying the literary forms of governance of ICB have been separated by opinions, reports, guidelines and consensus statements, focusing on who does things and how and why they do them, yet in isolated form, even when CQAD and NSLR have been implemented [17]. With this scenario in mind, laying out a comprehensive method is necessary to improve ICB policy and practice analyses.

A three-stage research design

To address the methodological approach already sketched, we proceed along the following lines. First, a preliminary corpus was constructed with texts extracted from the Medical Literature Analysis and Retrieval System Online (MEDLINE) PubMed database (Stage I). At this stage, no biased decision as to the content of the corpus was made, except, of course, regarding pertinence to the problem under study. Second, two textual exploration techniques were performed simultaneously, namely tracing the corpus of Medical Subject Headings (MeSH) terms for the construction of a semantic network (Stage II) and an inspection of the MEDLINE corpus to identify a priori codes and categories, both manually and automated by CQAD (Stage III). Finally, the main findings were discussed in order to contribute to a systematic and unbiased methodological proposal to address this social phenomenon. The proposed research strategy is set out below (see Fig. 1).

Figure 1. Flow diagram showing the steps followed in this work.

First, a preliminary corpus was built with the documents extracted from the MEDLINE PubMed database. Second, two text exploration processes were carried out in parallel, namely (1) an exploration of the corpus of MeSH terms for the construction of a semantic network and (2) an exploration of the content obtained from the PubMed corpus to identify preliminary codes and categories. Both manual and automated processes (Cytoscape and Atlas.ti software) were used in both cases. Third, two types of visualisations were obtained from the previous processes (a semantic network and alluvial diagrams).

Stage I: Documental corpus identification

With the rapid expansion of scientific research, effective searching and the massive integration of new knowledge have become difficult. The development of methods and tools available to researchers has been one of the main lines of research in computer science. Some massive document search tools have become more precise, and many integrated graphic visualisation tools showing the relationships between authors, topics and citations, among others, are now available. These innovative search and mass visualisation systems not only facilitate the systematisation of information; they can also help the social sciences researcher to develop a conceptual mapping to identify categories of analysis, as well as emerging categories. With this approach, the conceptualisation of the research problem can be improved, and the abstractions and representations of the phenomena in question enriched.

Given that researchers in the social sciences have a great deal to read, it is essential not to spend too much time searching for potential information. It is therefore recommended that these systems of massive document retrieval be used increasingly, as they allow simple and rapid systematisation, categorisation and codification of the required knowledge. Several tools, such as the mapping technique, have been developed to graphically represent relationships between document knowledge through networks of concepts, also called semantic networks. These networks consist of nodes and links, wherein nodes represent concepts and links represent connections between documents [18–20].

In this work, a MEDLINE (PubMed) search was first performed to massively identify documents that may contain terms related to the case study used as an example for the development of this methodology. The search terms entered were:

Institutionalization of bioethics and public health policy

Institutionalization of bioethics and public health

Institutionalization of bioethics

Bioethics committees and public health policy

Bioethics committees and public policy
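A minimal sketch of how such searches can be assembled programmatically against NCBI's public E-utilities endpoint. The helper function name is ours, and the request is only composed, not sent, here:

```python
# Build the five PubMed (MEDLINE) queries used in Stage I and the
# corresponding NCBI E-utilities esearch request URLs.
from urllib.parse import urlencode

SEARCH_TERMS = [
    "Institutionalization of bioethics and public health policy",
    "Institutionalization of bioethics and public health",
    "Institutionalization of bioethics",
    "Bioethics committees and public health policy",
    "Bioethics committees and public policy",
]

# NCBI's public esearch endpoint.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term, retmax=200):
    """Compose an esearch URL for one query string."""
    return ESEARCH + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmax": retmax})

urls = [esearch_url(t) for t in SEARCH_TERMS]
# Each URL can then be fetched (e.g. with urllib.request) and the returned
# XML parsed for PMIDs, which together form the preliminary corpus.
```

Deduplicating the PMIDs returned by the five queries yields the corpus without any content-based filtering, matching the stage's stated aim of avoiding biased corpus decisions.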

Stage II: Semantic network curation

Once this corpus of information was obtained, the two aforementioned processes of text exploration were performed. The first used the ontology retrieved from the MeSH terms. The importance of this exploration process lies in the compartmentalisation of information, a property that allows the implementation of algorithmic approaches for its analysis, an extremely valuable resource for massive literature mining such as that implemented in this work.

This stage consisted in the qualitative analysis of data extracted from PubMed MEDLINE. In recent times, network-based approaches (semantic or ontology-based networks) to understand complex social, political, biological and technological issues have been developed [ 21 ]. Such approaches are useful since they allow the researcher to have an unbiased, integrated view to discover associations and interactions between the relevant instances involved.

Connectivity maps are built so that source and target nodes are the core concepts in a given corpus, and links between them correspond to the co-existence of concepts in a given database; the more instances of repeated co-occurrence, the stronger the link and hence the closer the connection between the concepts (given the underlying corpus, of course).

A previously validated Python code was used to design a reference structure and make way for network analysis with Cytoscape, an open source software platform, to analyse and visualise complex networks of interactions, in this case, semantic. All the source code for general text processing can be found at https://github.com/CSB-IG/literature/tree/master/text_processing . The calculation of the underlying literature-based measures can be found at https://github.com/CSB-IG/bibliometrics .

Once the structured file was processed into a network with the NetworkX Python library, connectivity maps were built so that the nodes represented MeSH terms and the links between them were the documents (the PMID of each publication) sharing MeSH terms.
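The core of such a co-occurrence structure can be sketched with the standard library alone (the study itself used NetworkX and Cytoscape; the PMIDs and MeSH term lists below are hypothetical):

```python
# Build a weighted MeSH co-occurrence edge list: nodes are MeSH terms, and an
# edge's weight is the number of documents (PMIDs) in which both terms appear.
from collections import Counter
from itertools import combinations

# Hypothetical mapping of PMIDs to their MeSH terms.
docs = {
    "pmid_1": ["Bioethics", "Public Health", "Ethics Committees"],
    "pmid_2": ["Bioethics", "Ethics Committees"],
    "pmid_3": ["Public Health", "Health Policy"],
}

def cooccurrence_edges(docs):
    """Count, for every pair of MeSH terms, how many documents share them."""
    edges = Counter()
    for terms in docs.values():
        # Sorted pairs so (a, b) and (b, a) count as the same edge.
        for a, b in combinations(sorted(set(terms)), 2):
            edges[(a, b)] += 1
    return edges

edges = cooccurrence_edges(docs)
# ('Bioethics', 'Ethics Committees') gets weight 2: the pair co-occurs in two
# documents, making it the strongest link in this toy corpus.
```

An edge list in this form can be loaded directly into NetworkX (`Graph.add_edge(a, b, weight=w)`) or exported for visualisation in Cytoscape.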

Stage III: Content exploration

The second analysis was performed through manual selection of text content, an approach to computerised qualitative analysis of discourse (CQAD) implemented in the Atlas.ti software. CQAD is a systematic coding and categorisation approach used to explore large amounts of textual and discursive information in order to determine trends and patterns in the words used, their frequency, and their relationships and structures [ 22 – 24 ]. Atlas.ti is a computer programme for the analysis of qualitative data that allows textual data to be imported and encoded under categories designating broadly semiotic elements [ 25 – 27 ].

All the data in Atlas.ti were organised into a hermeneutic unit (a repository of PDF documents), from which citations, codes and groups of codes were built. Citations were the places where ideas were stored (a physical location), and codes were the spaces for storing categories (a way of labelling certain aspects of the data and classifying the information). The Atlas.ti query functions were used to search for coding patterns in the project database.

We used a combined deductive and inductive strategy to construct codes and categories: the deductive approach defines categories a priori, based on a theory or framework, while the inductive approach builds codes and categories a posteriori from the data [ 8 ].

Operationalisation involves giving concrete form to the discourses. In our example, the analysis of HCIPP and their relation to the sociopolitical discourses of the ICB, variables can materialise from a priori categories drawn from the instruments of bio-power, such as patronage, clientelism, simulation or authoritarianism. The next step was to adapt these conceptual categories and compare them with the corpus of texts and discourses; as this empirical process was carried out, other pertinent categories emerged that completed the analysis.
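The combined strategy can be sketched in code. This is only an illustration of the logic, not the Atlas.ti workflow itself (which is manual): a priori categories are applied as a keyword-matching first pass, and segments that match nothing are set aside for inductive coding by the analyst. The category keywords below are hypothetical:

```python
# Hypothetical keyword lists for the a priori (deductive) categories
A_PRIORI_CODES = {
    "patronage": ["patronage", "patron"],
    "clientelism": ["clientelism", "clientelist"],
    "simulation": ["simulation", "simulate", "appearance"],
    "authoritarianism": ["authoritarian"],
}

def deductive_pass(segments, codebook=A_PRIORI_CODES):
    """Tag each text segment with every a priori code whose keywords it
    contains; untagged segments go to a pool for inductive (a posteriori)
    coding by the analyst."""
    coded, inductive_pool = {}, []
    for seg in segments:
        tags = [code for code, kws in codebook.items()
                if any(kw in seg.lower() for kw in kws)]
        if tags:
            coded[seg] = tags
        else:
            inductive_pool.append(seg)
    return coded, inductive_pool

segments = [
    "The committee's rulings amount to a simulation of oversight.",
    "Appointments follow clientelist networks.",
    "Members debated data-sharing standards.",
]
coded, pool = deductive_pass(segments)
print(coded)
print(pool)  # segments left for inductive coding
```

In the real analysis the "inductive pool" is where the emergent categories mentioned above (e.g. legitimacy, resistance) would come from.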

Once the information sources of the database were selected into the Atlas.ti hermeneutic unit, an initial scan of the collected information was carried out and the codes and categories were constructed for the evidentiary stage; the result was called the protocol of codification (a table of codes and categories systematised by the Atlas.ti software). The code table and a category map were generated, the latter visualised by means of an alluvial diagram (RAW Graphs) [ 28 ].

These strategies were based on the notion of discovering a possible covert bio-political structure behind an institutionalised symbolic order [ 29 ]. Symbolic struggles and power can determine these hidden orders, which exist in all social reality, are inherent in different fields of knowledge, and are capable of revealing the role (or roles) of a given organisation while remaining veiled by a system of politically correct discourses endorsed by the scientific bodies in the corpus of related literature.

Stage I Results: documental corpus identification

The following search terms were entered in the search engine (the search was performed on January 3, 2018):

Institutionalization of bioethics and public health policy (4 results)

Institutionalization of bioethics and public health (18 results)

Institutionalization of bioethics (42 results)

Bioethics committees and public health policy (433 results)

Bioethics committees and public policy (666 results)

The documents were retrieved in plain-text (txt) format. After removing duplicates (n = 393), a corpus of 770 records was formed; each record contained, among other elements, a MEDLINE (PubMed) database identifier (PMID), title, abstract, publication date, author names and affiliations, and the country where the research took place.
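The counts are consistent: the five queries above returned 1163 hits in total, and removing 393 duplicates leaves the 770-record corpus. A minimal sketch of PMID-based deduplication (our illustration, with hypothetical record dictionaries):

```python
def deduplicate_by_pmid(records):
    """Keep the first record seen for each PMID; later duplicates
    (the same paper returned by several queries) are dropped."""
    seen, unique = set(), []
    for rec in records:
        if rec["pmid"] not in seen:
            seen.add(rec["pmid"])
            unique.append(rec)
    return unique

# Hypothetical hits from overlapping queries
hits = [{"pmid": "1"}, {"pmid": "2"}, {"pmid": "1"}, {"pmid": "3"}, {"pmid": "2"}]
unique = deduplicate_by_pmid(hits)
print(len(hits) - len(unique))  # 2 duplicates removed
```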

Stage II Results: semantic network analysis

To understand the structure of the network and the interrelationships between its elements, a brief analysis of connectivity patterns, such as the number of nodes and the number of connections, was carried out.

Figure  2 shows a complex network composed of a single connected component with a large number of nodes: 1996 interrelated concepts connected by 63,488 links, responsible for the overall conceptual richness of the underlying discourse. Since this is a highly clustered graph (the average clustering coefficient is 0.808, meaning that more than 80% of all possible triplets of related concepts are actually present in the network), most of the concepts behind the published literature on ICB are tightly connected.

figure 2

In this global network a rather complex graph can be seen, composed of a single connected component with a large number of nodes (1996 MeSH terms) as well as a large number of links (63,488)

Moreover, the network centralisation statistic equals 0.917, indicating that a relatively small number of concepts are responsible for the highly connected structure of the network. Looking at the actual data behind Fig.  2 (for the network's topological structure and node/link statistics, see Additional file  1 ), we can identify concepts such as Humans, United States, Advisory Committees and Public Policy (with two instances) that have more than a thousand direct conceptual associations with other terms in this context.
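Both statistics are easy to reproduce on toy graphs. The sketch below uses Freeman's degree centralization, one common formulation in which a perfect star scores 1.0 and a regular graph 0.0; the paper's figures come from Cytoscape, whose NetworkAnalyzer uses a closely related normalisation, so treat this as illustrative rather than as the exact formula behind the reported 0.917:

```python
import networkx as nx

def degree_centralization(G):
    """Freeman degree centralization: 1.0 for a perfect star (one hub
    connected to everything), 0.0 for a fully regular graph."""
    n = G.number_of_nodes()
    degrees = [d for _, d in G.degree()]
    max_d = max(degrees)
    return sum(max_d - d for d in degrees) / ((n - 1) * (n - 2))

star = nx.star_graph(5)   # one hub connected to five leaves
ring = nx.cycle_graph(6)  # every node has degree 2
print(degree_centralization(star))                   # 1.0: maximally hub-dominated
print(degree_centralization(ring))                   # 0.0: no hubs at all
print(nx.average_clustering(nx.complete_graph(4)))   # 1.0: every triplet is closed
```

Values near 1 for both statistics, as reported here, mean a densely clustered network whose connectivity nonetheless hangs on a few hub concepts.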

That Humans has 1891 connections is unsurprising, since ethics is a human construct. The term United States (1288 links) reflects the fact that an important part of the research corpus has been published by United States-based researchers and, more importantly, within the geopolitical context of American public health and policy. This has to be taken seriously into account when analysing public policy in different national contexts: the published research corpora will be heavily biased towards United States-like schemes.

That Advisory Committees (1077 links) and Public Policy (907 links as a central concept and 792 as a secondary category) show so many links is also not surprising. What may be more surprising is that important social and philosophical concepts such as Institutionalization/ethics, Communities/health services and Ethics, Medical/education sit at the low end of the distribution, all of them with at most 5 links.

If we consider that, on average, each concept in this semantic network is associated with 63.315 other concepts (in the world corpora of published literature in the field, as represented in this analysis), the important issues of the institutionalisation of ethics, community outreach and medical education are severely disconnected from the main discussions in the current literature.

As the main objective of our work is to present the example of ICB and give an account of their relationship with public policies, we analysed the entire network in a context identified by the a priori MeSH terms with the largest number of connections. We therefore decided to build two subnets based on the following MeSH terms and their first neighbours: Government Regulation (GR, 683 connections) and Social Control, Formal (SCF, 451 connections); see Figs.  3 and 4 . Aside from the already discussed, and somewhat obvious, cases of the Humans, United States, Advisory Committees and Public Policy terms, Government Regulation and Social Control, Formal are highly connected concepts central to understanding the role played by the ICB.
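"A term and its first neighbours" is exactly what NetworkX calls an ego graph. A sketch on a hypothetical toy graph (the edge list below is invented for illustration):

```python
import networkx as nx

def first_neighbour_subnet(G, term):
    """Subnetwork induced by a MeSH term and its first neighbours, the
    construction used for the GR and SCF subnets."""
    return nx.ego_graph(G, term, radius=1)

# Hypothetical fragment of the global MeSH co-occurrence network
G = nx.Graph([
    ("Government Regulation", "Federal Government"),
    ("Government Regulation", "Informed Consent"),
    ("Federal Government", "Informed Consent"),
    ("Informed Consent", "Risk Assessment"),  # two hops away from GR
])
sub = first_neighbour_subnet(G, "Government Regulation")
print(sorted(sub.nodes()))
```

Note that the ego graph keeps the edges among the neighbours themselves, which is what makes the clustering coefficients of the subnets meaningful.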

figure 3

This figure shows the subnet based on the MeSH term Government Regulation and its first neighbours

figure 4

This figure shows the subnet based on the MeSH term Social Control, Formal and its first neighbours

Turning to the Government Regulation sub-network, it too is a large (952 concepts/nodes and 44,091 associations/links) and quite clustered network (clustering coefficient 0.741), with a high centralisation (0.883), indicating that a few very important concepts (hubs) link most other concepts together.

Among such highly central terms we can mention, aside from the global hubs already noted, Federal Government with 587 links, Bioethics (as a secondary subject) with 478 links, Advisory Committees (453 connections), Risk Assessment (441 connections) and Informed Consent (439 links). Interestingly, concepts such as *Government Agencies, Policy and Budget are all under-represented, with 10 or fewer connections in this sub-network, compared with the average number of neighbours, which is 92.628.

The Social Control, Formal sub-network consists of 1133 concepts and 51,142 relations. With a clustering coefficient of 0.741 and a centralisation of 0.903, this network presented connectivity features similar to those of the networks already discussed, namely a densely interconnected graph with a small number of highly central concepts. Here the emerging concepts are Social Values with 567 connections, Government Regulation with 555 and *Bioethical Issues with 507 connections, whereas interesting under-represented terms are Health Services/*standards, Safety/*standards and Organisational Culture, with 6 links each (versus the network's much higher average connectivity).

The analysis described above represents just a glimpse of the vast amount of contextual information that can be derived from semantic network studies, and it serves as a systematic method for educated hypothesis generation. Such hypotheses can be pursued further by following the tenets of the social analysis of discourse and of policy assessment, as well as other methods of analysis in the social sciences.

Stage III Results: content analysis codification protocol

One way to discern the symbolic order was to establish dimensions of analysis through categories and codes; see the first column of Fig.  5 . At least six categories were defined, each with multiple codes, corresponding to the role of the ICB as an organisational model of a public policy.

figure 5

Model for dimensions of analysis. Columns: (1) Forms of institutionalisation, (2) Forms of governance, (3) Institutional structure, (4) Political discourse, (5) Power mechanism and (6) Symbolic role

The categories were reconstructed from the information collected and analysed a priori, using a content analysis technique following the procedures recommended by Fairclough [ 30 ]. This analysis, also called text (discourse) analysis, focused on identifying the frequency with which certain data appeared, for subsequent synthesis and interpretation. Following the methodological structure of category-wise qualitative analysis, the data were deconstructed and later gathered into an analytical unit structure that allowed its elements to be identified (synthesis) [ 31 ].
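The frequency-counting step can be sketched directly (our illustration, on hypothetical coded segments; in practice these counts come out of the Atlas.ti code table):

```python
from collections import Counter

def code_frequencies(coded_segments):
    """Frequency of each code across all coded segments: the 'frequency
    with which certain data appeared' step of the content analysis,
    prior to synthesis and interpretation."""
    return Counter(code for tags in coded_segments.values() for code in tags)

# Hypothetical segment -> codes mapping
coded = {
    "seg1": ["simulation"],
    "seg2": ["simulation", "clientelism"],
    "seg3": ["patronage"],
}
freqs = code_frequencies(coded)
print(freqs["simulation"])  # 2: the most frequent code in this toy example
```

Sorting such a table by frequency is what surfaces the dominant codes (here, for instance, simulation) for interpretation.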

The a priori categorisation (protocol of codification) was defined as follows:

Forms of institutionalisation (column 1): (1) normative or advisory committees; (2) committees of professional medical associations; (3) hospital and care ethics committees; and (4) research ethics committees.

Forms of governance (column 2): ICB governance mechanisms and their elements, as recognised worldwide in the Universal Declaration on Bioethics and Human Rights (2005).

Institutional structure (column 3): the conditions that give legitimacy and consolidation to each ICB, as well as the form it takes in daily practice as a consultative or regulatory entity, whether in collective, substantive or political decision processes.

Political discourse (column 4): this content refers to the common object and the series of specific goals presented in the medical, scientific, technological, public, social and political fields.

Power mechanism (column 5): as a security device, it can be understood as the combination of knowledge-power-truth that reveals how legal, medical and political discourses can be translated into approved regulatory practices to exercise power not only over bodies but over populations.

Symbolic role (column 6): this symbolic role may be cloaked in a veiled power, or in a discursive power: a discourse capable of controlling some minds and, in turn, some actions. At the beginning of the exploratory phase, the role of the ICB could be characterised as follows: government elites, power elite, control mechanism, intellectual and moral authority, discussion forums and passive actors [ 32 ].

Based on this model, the aim was to describe the process that would give rise to the theoretical structures of this work, implicit in the material compiled, and to integrate it into a logical whole. A model capable of schematising the content of the information had already been constructed, essentially grouped into the following steps: categorisation, structuring, testing and theorising.

The final step of the content analysis was the examination of the text/discourse of the ICB, which must be integrated into the relational framework to determine whether they really act as government advisors that generate social value. The preceding analysis revealed several details that gave consistency to the present exploratory work, for example, the role of public institutions as the expression of the political forces through which societies propose to solve some of their collective problems. In this case, their role seems necessarily influenced by the rules and practices of the political system; however, in the vast majority of political systems, democratic imperfection itself hinders the representativeness of institutions [ 33 ].

What do we learn?: Insights on the case study

Institutional text/discourse makes an important contribution to social reality. Its analysis was used to approach this reality through a linguistic process that looks beyond organisational practices. The central question of this work is: what is the role of the ICB in public health policies? To answer it, it was necessary to dig deep into the text/discourse in order to produce and reproduce the response. In fact, other categories emerged, as did other socially constructed behaviours such as the legitimacy and resistance of institutional actors.

Through this analysis, the identification of texts and discourses revealed social constructions associated with pre-established texts and discourses that predetermine institutional policies and practices with the intention of strengthening their legitimacy [ 23 ].

The mapping of the a priori categories, for its part, has been used for several purposes, such as displaying a complex structure, communicating complicated ideas and demonstrating connections between ideas [ 34 ]. The role that ICB have in health policies, and through which they could modify social, political and health dynamics, could be systematised using the Atlas.ti software. Once a code table was available, a map of preliminary categories could be given a visualisation format using an alluvial diagram, as seen in Fig.  5 .

When these a priori categories are applied to a critical text/discourse analysis, it may be necessary to recode the information and re-run the analysis with the new codes and categories in order to redefine the research hypothesis and its interpretation. This reorganisation forced the modification of the code table generated a priori, so the category map was also remodelled and again visualised by means of an alluvial diagram (RAW Graphs), as seen in Fig.  6 .

figure 6

Alluvial diagram showing the theoretical proposal and the emergent categories on legitimisation and institutional resistance. Columns: (1) Legitimisation, (2) Reduction of uncertainty, (3) Symbolic role, (4) Resistance. In the column of resistance codes, two stand out, namely political support and political will

The main difference between Figs.  5 and 6 was the replacement of some columns with emerging categories aimed at reducing uncertainty, which resulted in disaggregation into various codes that we interpreted as resting on the concept of simulation, itself a product of the applied methodology. A question arose as to how this theoretical concept could be seen or applied in reality. The domination of capitalism demands a bioethical reflection grounded in social analysis. Identifying the uncertainty in social, biological and political systems became essential to understanding the central role of institutions, both in overcoming this uncertainty and in proposing strategies of struggle.

The institutionalisation of bioethics has been described as a response to a mixture of demands arising from emerging public concerns (including those about advances in technology and about unethical practices) and from changing political contexts in which questions about mass data or the value of life were debated and translated into principled rules to guide public life [ 35 ]. Formalism and attachment to legality have been part of a political text/discourse, but not of the constant practice of the system [ 36 ]. Hence, it amounts to a sort of simulation.

Consequently, a question arose regarding the reconsideration of the case study exemplified in this article: is progress being made in this regard, or is there simply the appearance of attending to these uncertainties, masking an irritating problem with a promising text/discourse while confining it to being the object of generalised political-social control? This question is closely related to the case study questions: what is the role of the ICB in public health policies? How does the existing institutional arrangement work? What faculties, scope and limits of power, and of its exercise, have been granted to these institutions? And what political and social tendency do they show when exercising their authority over matters that can directly affect health and life?

Based on this approach, it could be argued that institutions cannot directly affect policy outcomes except through their impact on the policy-making processes by which policies are designed, approved and implemented by stakeholders. Through the decision process, institutions influence the adopted policies, in particular their capacity to maintain inter-temporal commitments, the quality of their implementation, and their stability and credibility [ 37 ].

The challenge for this type of analysis was then to link citations of the text/discourse into larger conglomerates at a higher level of abstraction while disaggregating the different dimensions or categories. This section of the discussion has shown how to connect institutional theory with forms of text/discourse analysis from a linguistic perspective, i.e. how to differentiate the discourse from the actions of political players. The question remained, however, of how best to prove that this is true. One strategy employed was to ask whether the role of ICB showed features of clientelism, patrimonialism, patronage, simulation or authoritarianism, rather than only analysing how the dynamics between the institution and the government structure affect the results of public policies. Additionally, we intended to assess whether the issues that generated uncertainty within the framework of bio-politics were priorities for the existing institutional arrangement, or whether attention merely seemed to be paid to them as a means of maintaining social control: again, a form of political simulation.

Hence, in order to examine the role of the ICB in reducing uncertainty, the concept of simulation was taken as an example to speculate on how these bodies might respond to the issues of concern to both society and national policy, based on the hypothesis that, if these are not attended to, a social, scientific and even bio-political loss of control may occur on the international scene. The concept of institutional simulation can be used to characterise an authoritarian system as well as a liberal democracy.

Therefore, on the basis of a deeper, more rigorous and critical reading of the information obtained, we integrated the material and proposed adding a column to the theoretical map originally considered, regarding the tentative form in which the ICB address the issues of uncertainty that could affect health and life. At the same time, some other columns were deemed no longer necessary at this stage. Thus, the first map of categories (Fig.  5 ) was modified in this second stage as a result of the textual and discursive citations that emerged inductively from the information collected (Fig.  6 ). In the column of resistance codes, two stances stand out, namely political support and political will.

Finally, by integrating all the information and the way in which it was reconstructed in the category maps, it was possible to identify, in the text/discourse of the ICB representatives analysed, a tendency to fight against a system of appearance, or simulation.

From this analysis, a map of categories was reconstructed containing some of the concepts proposed to demonstrate the traits of resistance, as well as their possible connections with the other codes and categories that emerged in the text/discourse analysis.

How do we learn?: Some advantages and limitations of the proposed approach

Finally, we briefly discuss some advantages and limitations of the methodology just outlined and exemplified in the study of the role of ICB in public health policy, namely the use of computational literature retrieval and classification, the introduction of ontologies (in this case based on PubMed's MeSH classifier keywords) to build semantic networks, and the use of hybrid manual/automated methods for the critical analysis of discourse.

As already mentioned, the use of contemporary data science techniques, such as computer-aided semi-automated literature retrieval and classification, is helpful because it eliminates (or, better, reduces) the sampling biases that result from researchers' tendency to look for information mostly in their preferred sources, some of them ideologically skewed.

There is also the advantage of increased focus, coming from the ontological classification of concepts, which diminishes conceptual gaps. For instance, there may be concepts that are closely related but enunciated or named differently in different cultural circles. Ontologies such as PubMed's MeSH terms smooth over these differences by creating a common language.

Another advantage of the MeSH ontology is, of course, that it allows semantic networks to be built in a global, unbiased way, since terms are linked not by personal opinion but by a kind of scholarly agreement arising from a large body of peer-reviewed work. Network analysis gives rise to emergent features coming from somewhat unexpected conceptual connections that, as with the simulation hypothesis on the role of ICB, are not evident from the study of single instances.

Combining these advantages with hybrid approaches to the critical analysis of discourse allows for unbiased, yet still individualised (i.e. human), critiques of the literature, in a way that makes evident how the objective and subjective elements of discourse analysis are carried out. This is highly desirable in the analysis of public policy, and particularly useful in decision-making scenarios.

Having enumerated the advantages of the present approach, we should mention that no study is free of limitations. One particular limitation here is the use of pre-determined ontologies, namely the MeSH system of classifiers. MeSH terms constitute a detailed and structured ontology that is useful for automated text classification, having been designed with this in mind; however, specific concepts relevant to healthcare policy issues may not map neatly onto a unique MeSH term. As a consequence, the specificity of our description may be partially compromised.

The documentary corpus belongs entirely to literature published and indexed in the PubMed/MEDLINE database, a fact which by itself introduces a number of publication biases. One particularly relevant bias is the over-representation of papers from the top publishing countries on the subject, many of which are developed countries with their own characteristic healthcare policy issues, which may not reflect the different facets of policy-making and implementation worldwide.

In this article, based on an exhaustive analysis of the literature, following the conceptual tenets of collective health, we developed a novel methodological approach to the problem of critical content analysis. This alternative, which combines novel methodologies of computational data and literature mining and semantic network analysis as well as hybrid manual/automated analysis of discourse, was proposed to study the role of the ICB, as well as some of its expressions in policies, as already discussed.

The challenge of analysing the role of committees in particular, as public bodies, is mainly due to the fact that they are highly dynamic entities. Although their actions can activate transcendental political processes for society, the vast majority of these are intangible and difficult to determine.

This novel approach has allowed us to identify 'simulation' as one possible rationale behind the formation of ICB, i.e. one of the reasons behind the creation of ICB may be to give the impression of attending to an ethical necessity (to oversee and protect life, society and nature) for political purposes.

We can conclude then that in some cases ICB are formed to attend some bioethical issues to prevent disturbances of the social and institutional order, i.e. to preserve the status quo.

In this work, we introduced a novel, pragmatic approach to the progressive, systematic analysis and exploration of large information corpora. These tools are useful for studying qualitative data in order to improve institutional assessment and public policy redesign. Interactive strategies are also helpful for performing systematic revisions of the literature, for pattern generation and codification schemes, and for diagrammatic approaches that build models evidencing interactions among concepts and categories not defined a priori.

Abbreviations

CQAD: Computerized qualitative analysis of discourse

HCIPP: Health care institutions policies and practices

ICB: Institutional committees of bioethics

MEDLINE: Medical literature analysis and retrieval system online

MeSH: Medical subject headings

NSLR: Non-systematic literature reviews

QRM: Qualitative research methodology

Martínez-García M, Hernández-Lemus E. Health systems as complex systems. Am J Oper Res. 2013; 3(01):113.

Starks H, Brown Trinidad S. Choose your method: A comparison of phenomenology, discourse analysis, and grounded theory. Qual Health Res. 2007; 17(10):1372–80.

Grinnell Jr RM, Unrau Y. Social Work Research and Evaluation: Quantitative and Qualitative Approaches. New York: Cengage Learning; 2005.

Sampieri RH, Collado CF, Lucio PB. Metodología de la investigación. Mexico City: McGraw-Hill; 2012.

Daigneault PM, Jacob S, Ouimet M. Using systematic review methods within a ph. d. dissertation in political science: challenges and lessons learned from practice. Int J Soc Res Methodol. 2014; 17(3):267–83.

Grewal A, Kataria H, Dhawan I. Literature search for research planning and identification of research problem. Indian J Anaesth. 2016; 60(9):635.

Bryman A. Social Research Methods. Oxford: Oxford university press; 2016.

Mertz M, Strech D, Kahrass H. What methods do reviews of normative ethics literature use for search, selection, analysis, and synthesis? in-depth results from a systematic review of reviews. Syst Rev. 2017; 6(1):261.

Mcdougall R. Systematic reviews in bioethics: types, challenges, and value. J Med Philos. 2013; 39(1):89–97.

McDougall R. Reviewing literature in bioethics research: Increasing rigour in non-systematic reviews. Bioethics. 2015; 29(7):523–8.

Hansen HF, Rieper O. The evidence movement: the development and consequences of methodologies in review practices. Evaluation. 2009; 15(2):141–63.

Kelly SE. Public bioethics and publics: consensus, boundaries, and participation in biomedical science policy. Sci Technol Hum Values. 2003; 28(3):339–64.

Johnson S. The Impact of Presidential Bioethics Commissions: An Assessment of Outcomes in Public Bioethics. Bioethics, vol. 6. Baltimore: Johns Hopkins University Press; 2006.

Johnson S. Multiple roles and successes in public bioethics: a response to the public forum critique of bioethics commissions. Kennedy Inst Ethics J. 2006; 16(2):173–88.

Bogner A, Menz W. How politics deals with expert dissent: The case of ethics councils. Sci Technol Hum Values. 2010; 35(6):888–914.

Salter B, Salter C. Bioethics and the global moral economy: the cultural politics of human embryonic stem cell science. Sci Technol Hum Values. 2007; 32(5):554–81.

Montgomery J. Bioethics as a governance practice. Health Care Anal. 2016; 24:1–21.

Gershenson C, Niazi MA. Multidisciplinary applications of complex networks modeling, simulation, visualization, and analysis. London: Complex Adaptive Systems Journal, Springer Nature; 2013.




Case Study – Methods, Examples and Guide


Case Study Research

A case study is a research method that involves an in-depth examination and analysis of a particular phenomenon or case, such as an individual, organization, community, event, or situation.

It is a qualitative research approach that aims to provide a detailed and comprehensive understanding of the case being studied. Case studies typically involve multiple sources of data, including interviews, observations, documents, and artifacts, which are analyzed using various techniques, such as content analysis, thematic analysis, and grounded theory. The findings of a case study are often used to develop theories, inform policy or practice, or generate new research questions.

Types of Case Study

The main types of case study are as follows:

Single-Case Study

A single-case study is an in-depth analysis of a single case. This type of case study is useful when the researcher wants to understand a specific phenomenon in detail.

For example, a researcher might conduct a single-case study of a particular individual to understand their experience of a specific health condition, or of a single organization to explore its management practices. The researcher collects data from multiple sources, such as interviews, observations, and documents, and analyzes them using techniques such as content analysis or thematic analysis. The findings of a single-case study are often used to generate new research questions, develop theories, or inform policy or practice.

Multiple-Case Study

A multiple-case study involves the analysis of several cases that are similar in nature. This type of case study is useful when the researcher wants to identify similarities and differences between the cases.

For example, a researcher might conduct a multiple-case study of several companies to explore the factors that contribute to their success or failure. The researcher collects data from each case, compares and contrasts the findings, and analyzes them using techniques such as comparative analysis or pattern-matching. The findings of a multiple-case study can be used to develop theories, inform policy or practice, or generate new research questions.
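The cross-case comparison described above can be sketched programmatically. The companies and success factors below are hypothetical illustrations, and pattern-matching is reduced to its simplest mechanical form: coding each case as a set of candidate factors and checking which ones recur across cases.

```python
# A minimal cross-case pattern-matching sketch. The cases and the
# coded success factors are hypothetical illustrations only.
from collections import Counter

cases = {
    "Company A": {"strong leadership", "early market entry", "staff training"},
    "Company B": {"strong leadership", "staff training"},
    "Company C": {"strong leadership", "early market entry"},
}

def recurring_factors(cases, min_cases=2):
    """Return factors coded in at least `min_cases` of the cases."""
    counts = Counter(f for factors in cases.values() for f in factors)
    return {f for f, n in counts.items() if n >= min_cases}

shared = recurring_factors(cases, min_cases=3)
print(sorted(shared))  # → ['strong leadership']
```

In real multiple-case research the coding itself is the hard, interpretive step; the mechanical comparison shown here only becomes meaningful once the codes are well grounded in the data.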

Exploratory Case Study

An exploratory case study is used to explore a new or understudied phenomenon. This type of case study is useful when the researcher wants to generate hypotheses or theories about the phenomenon.

For example, a researcher might conduct an exploratory case study of a new technology to understand its potential impact on society. The researcher collects data from multiple sources, such as interviews, observations, and documents, and analyzes them using techniques such as grounded theory or content analysis. The findings of an exploratory case study can be used to generate new research questions, develop theories, or inform policy or practice.

Descriptive Case Study

A descriptive case study is used to describe a particular phenomenon in detail. This type of case study is useful when the researcher wants to provide a comprehensive account of the phenomenon.

For example, a researcher might conduct a descriptive case study of a particular community to understand its social and economic characteristics. The researcher collects data from multiple sources, such as interviews, observations, and documents, and analyzes them using techniques such as content analysis or thematic analysis. The findings of a descriptive case study can be used to inform policy or practice or to generate new research questions.

Instrumental Case Study

An instrumental case study is used to understand a particular phenomenon that is instrumental in achieving a particular goal. This type of case study is useful when the researcher wants to understand the role of the phenomenon in achieving the goal.

For example, a researcher might conduct an instrumental case study of a particular policy to understand its impact on achieving a specific goal, such as reducing poverty. The researcher collects data from multiple sources, such as interviews, observations, and documents, and analyzes them using techniques such as content analysis or thematic analysis. The findings of an instrumental case study can be used to inform policy or practice or to generate new research questions.

Case Study Data Collection Methods

Here are some common data collection methods for case studies:

Interviews

Interviews involve asking questions of individuals who have knowledge or experience relevant to the case study. Interviews can be structured (where the same questions are asked of all participants) or unstructured (where the interviewer follows up on responses with further questions). Interviews can be conducted in person, over the phone, or through video conferencing.

Observations

Observations involve watching and recording the behavior and activities of individuals or groups relevant to the case study. Observations can be participant (where the researcher actively participates in the activities) or non-participant (where the researcher observes from a distance). Observations can be recorded using notes, audio or video recordings, or photographs.

Documents

Documents can be used as a source of information for case studies. Documents can include reports, memos, emails, letters, and other written materials related to the case study. Documents can be collected from the case study participants or from public sources.

Surveys

Surveys involve asking a set of questions to a sample of individuals relevant to the case study. Surveys can be administered in person, over the phone, through mail or email, or online. Surveys can be used to gather information on attitudes, opinions, or behaviors related to the case study.

Artifacts

Artifacts are physical objects relevant to the case study. Artifacts can include tools, equipment, products, or other objects that provide insights into the case study phenomenon.
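When several of these sources are combined, one practical concern is keeping the evidence organised so that each finding can be triangulated across source types. The sketch below is one possible way to structure such records in Python; the field names, identifiers, and the two-source-type threshold are illustrative assumptions, not a standard.

```python
# A minimal sketch for organising multi-source case study evidence.
# All names, IDs, and excerpts here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source_type: str   # e.g. "interview", "observation", "document", "artifact"
    source_id: str
    excerpt: str

@dataclass
class Finding:
    claim: str
    evidence: list = field(default_factory=list)

    def is_triangulated(self, min_source_types=2):
        """Count a finding as triangulated when it is supported by
        at least `min_source_types` distinct kinds of evidence."""
        return len({e.source_type for e in self.evidence}) >= min_source_types

f = Finding("Staff feel excluded from budget decisions")
f.evidence.append(Evidence("interview", "INT-03", "We never see the figures."))
f.evidence.append(Evidence("document", "DOC-11", "Budget circulated to managers only."))
print(f.is_triangulated())  # True: two distinct source types support the claim
```

Keeping an explicit evidence trail like this also makes the later validation step easier, since each claim can be traced back to the raw material it rests on.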

How to Conduct Case Study Research

Conducting a case study research involves several steps that need to be followed to ensure the quality and rigor of the study. Here are the steps to conduct case study research:

  • Define the research questions: The first step in conducting a case study research is to define the research questions. The research questions should be specific, measurable, and relevant to the case study phenomenon under investigation.
  • Select the case: The next step is to select the case or cases to be studied. The case should be relevant to the research questions and should provide rich and diverse data that can be used to answer the research questions.
  • Collect data: Data can be collected using various methods, such as interviews, observations, documents, surveys, and artifacts. The data collection method should be selected based on the research questions and the nature of the case study phenomenon.
  • Analyze the data: The data collected from the case study should be analyzed using various techniques, such as content analysis, thematic analysis, or grounded theory. The analysis should be guided by the research questions and should aim to provide insights and conclusions relevant to the research questions.
  • Draw conclusions: The conclusions drawn from the case study should be based on the data analysis and should be relevant to the research questions. The conclusions should be supported by evidence and should be clearly stated.
  • Validate the findings: The findings of the case study should be validated by reviewing the data and the analysis with participants or other experts in the field. This helps to ensure the validity and reliability of the findings.
  • Write the report: The final step is to write the report of the case study research. The report should provide a clear description of the case study phenomenon, the research questions, the data collection methods, the data analysis, the findings, and the conclusions. The report should be written in a clear and concise manner and should follow the guidelines for academic writing.
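The data-analysis step above can be illustrated with a deliberately simple content-analysis sketch: counting occurrences of codebook keywords in interview transcripts. The codebook and transcripts are hypothetical, and real qualitative coding is interpretive rather than purely lexical; this only shows the counting mechanics.

```python
# A very simple content-analysis sketch: tallying how often each code
# from a hypothetical codebook appears across interview transcripts.
from collections import Counter

codebook = {
    "access": ["waiting list", "distance", "cost"],
    "trust": ["confidence", "trust", "reliable"],
}

transcripts = [
    "The waiting list was long and the cost was too high for us.",
    "I have confidence in the clinic; the staff are reliable.",
]

def code_frequencies(transcripts, codebook):
    """Count keyword hits per code across all transcripts."""
    counts = Counter()
    for text in transcripts:
        lowered = text.lower()
        for code, keywords in codebook.items():
            counts[code] += sum(lowered.count(k) for k in keywords)
    return counts

print(code_frequencies(transcripts, codebook))
```

In practice such counts would at most be a first pass; thematic analysis or grounded theory requires reading passages in context rather than matching strings.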

Examples of Case Study

Here are some examples of case study research:

  • The Hawthorne Studies: Conducted between 1924 and 1932, the Hawthorne Studies were a series of case studies conducted by Elton Mayo and his colleagues to examine the impact of the work environment on employee productivity. The studies were conducted at the Hawthorne Works plant of the Western Electric Company near Chicago and included interviews, observations, and experiments.
  • The Stanford Prison Experiment: Conducted in 1971, the Stanford Prison Experiment was a case study conducted by Philip Zimbardo to examine the psychological effects of power and authority. The study involved simulating a prison environment and assigning participants to the role of guards or prisoners. The study was controversial due to the ethical issues it raised.
  • The Challenger Disaster: The Challenger Disaster was a case study conducted to examine the causes of the Space Shuttle Challenger explosion in 1986. The study included interviews, observations, and analysis of data to identify the technical, organizational, and cultural factors that contributed to the disaster.
  • The Enron Scandal: The Enron Scandal was a case study conducted to examine the causes of the Enron Corporation’s bankruptcy in 2001. The study included interviews, analysis of financial data, and review of documents to identify the accounting practices, corporate culture, and ethical issues that led to the company’s downfall.
  • The Fukushima Nuclear Disaster: The Fukushima Nuclear Disaster was a case study conducted to examine the causes of the nuclear accident that occurred at the Fukushima Daiichi Nuclear Power Plant in Japan in 2011. The study included interviews, analysis of data, and review of documents to identify the technical, organizational, and cultural factors that contributed to the disaster.

Application of Case Study

Case studies have a wide range of applications across various fields and industries. Here are some examples:

Business and Management

Case studies are widely used in business and management to examine real-life situations and develop problem-solving skills. Case studies can help students and professionals to develop a deep understanding of business concepts, theories, and best practices.

Healthcare

Case studies are used in healthcare to examine patient care, treatment options, and outcomes. Case studies can help healthcare professionals to develop critical thinking skills, diagnose complex medical conditions, and develop effective treatment plans.

Education

Case studies are used in education to examine teaching and learning practices. Case studies can help educators to develop effective teaching strategies, evaluate student progress, and identify areas for improvement.

Social Sciences

Case studies are widely used in social sciences to examine human behavior, social phenomena, and cultural practices. Case studies can help researchers to develop theories, test hypotheses, and gain insights into complex social issues.

Law and Ethics

Case studies are used in law and ethics to examine legal and ethical dilemmas. Case studies can help lawyers, policymakers, and ethical professionals to develop critical thinking skills, analyze complex cases, and make informed decisions.

Purpose of Case Study

The purpose of a case study is to provide a detailed analysis of a specific phenomenon, issue, or problem in its real-life context. A case study is a qualitative research method that involves the in-depth exploration and analysis of a particular case, which can be an individual, group, organization, event, or community.

The primary purpose of a case study is to generate a comprehensive and nuanced understanding of the case, including its history, context, and dynamics. Case studies can help researchers to identify and examine the underlying factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and detailed understanding of the case, which can inform future research, practice, or policy.

Case studies can also serve other purposes, including:

  • Illustrating a theory or concept: Case studies can be used to illustrate and explain theoretical concepts and frameworks, providing concrete examples of how they can be applied in real-life situations.
  • Developing hypotheses: Case studies can help to generate hypotheses about the causal relationships between different factors and outcomes, which can be tested through further research.
  • Providing insight into complex issues: Case studies can provide insights into complex and multifaceted issues, which may be difficult to understand through other research methods.
  • Informing practice or policy: Case studies can be used to inform practice or policy by identifying best practices, lessons learned, or areas for improvement.

Advantages of Case Study Research

There are several advantages of case study research, including:

  • In-depth exploration: Case study research allows for a detailed exploration and analysis of a specific phenomenon, issue, or problem in its real-life context. This can provide a comprehensive understanding of the case and its dynamics, which may not be possible through other research methods.
  • Rich data: Case study research can generate rich and detailed data, including qualitative data such as interviews, observations, and documents. This can provide a nuanced understanding of the case and its complexity.
  • Holistic perspective: Case study research allows for a holistic perspective of the case, taking into account the various factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and comprehensive understanding of the case.
  • Theory development: Case study research can help to develop and refine theories and concepts by providing empirical evidence and concrete examples of how they can be applied in real-life situations.
  • Practical application: Case study research can inform practice or policy by identifying best practices, lessons learned, or areas for improvement.
  • Contextualization: Case study research takes into account the specific context in which the case is situated, which can help to understand how the case is influenced by the social, cultural, and historical factors of its environment.

Limitations of Case Study Research

There are several limitations of case study research, including:

  • Limited generalizability: Case studies are typically focused on a single case or a small number of cases, which limits the generalizability of the findings. The unique characteristics of the case may not be applicable to other contexts or populations, which may limit the external validity of the research.
  • Biased sampling: Case studies may rely on purposive or convenience sampling, which can introduce bias into the sample selection process. This may limit the representativeness of the sample and the generalizability of the findings.
  • Subjectivity: Case studies rely on the interpretation of the researcher, which can introduce subjectivity into the analysis. The researcher’s own biases, assumptions, and perspectives may influence the findings, which may limit the objectivity of the research.
  • Limited control: Case studies are typically conducted in naturalistic settings, which limits the control that the researcher has over the environment and the variables being studied. This may limit the ability to establish causal relationships between variables.
  • Time-consuming: Case studies can be time-consuming to conduct, as they typically involve a detailed exploration and analysis of a specific case. This may limit the feasibility of conducting multiple case studies or conducting case studies in a timely manner.
  • Resource-intensive: Case studies may require significant resources, including time, funding, and expertise. This may limit the ability of researchers to conduct case studies in resource-constrained settings.



Agricultural Policy Analysis

Concepts and Tools for Emerging Economies

  • Jeevika Weerahewa
  • Andrew Jacque

Professor, Department of Agricultural Economics and Business Management, University of Peradeniya, Peradeniya, Sri Lanka


Policy Analyst, Technical Assistance to the Modernisation of Agriculture Programme Sri Lanka (TAMAP), Colombo, Sri Lanka

  • Presents standard tools of analysis commonly used by agricultural economists and planners
  • Includes a series of exercises and hence can be used as a key resource in agriculture and related programmes
  • Provides the theoretical underpinnings required and the context within which policies are implemented in developing countries


Table of contents (20 chapters)

Front Matter

Policy Analysis and the Policy Environment

Overview of Agricultural Policy

Jeevika Weerahewa

Public Policy: An Overview

  • H. M. Gunatilake

Agriculture and Economic Development

  • D. V. Pahan Prasada

Concepts, Approaches, and Measures for Policy Analysis

The International Trading System

  • Emalene Marcus-Burnett

Economic Concepts for Agricultural Policy Analysis

Andrew Jacque

Measuring Competitiveness of Agricultural Markets

  • Erandathie Pathiraja, Chatura Sewwandi Wijetunga, Sooriyakumar Krishnapillai

Qualitative Methods for Policy Analysis: Case Study Research Strategy

  • Sarath S. Kodithuwakku

Tools to Analyse Sectoral and Global Regulations

General Equilibrium Analysis of Regional Trade Agreements

  • Sumali Dissanayake

Analysing Trade Facilitation Using Gravity Models

  • Senal A. Weerasooriya

Analysing Marketing Policies Using Market Integration Models

  • Pradeepa Korale-Gedara

Partial Equilibrium Analysis of Agricultural Price Policies

Tools to Analyse Rural Development Programmes

Cost-Benefit Analysis of Irrigation Projects

  • Sunil Thrikawala, Christof Batzlen, Pradeepa Korale-Gedara

Choice Experiment Analysis of Non-market Values of Ecosystem Services

  • Sahan T. M. Dissanayake, Shamen P. Vidanage

Analysing Agriculture Extension Programmes Using Randomised Control Experiments

  • Wasantha Athukorala

Using Agricultural Production Functions to Analyse Land Tenure Reforms

  • Dilini Hemachandra

About this book

This book is centred on various interwoven topics that are fundamental to policy analysis in agriculture. Key concepts and tools for the analysis of agricultural policies and programmes are presented, including the role of the state in a market economy (with examples from the Sri Lankan and other developing economies), the international trade environment, and conceptual frameworks for analysing important domestic and international trade policies. The book also highlights the interconnections among agriculture, development, and policy, and illustrates the extent to which the agricultural sector contributes to economic growth objectives, equity and equality objectives, and environmental objectives. It takes readers through the nature of agricultural markets in developing countries, with special emphasis on Sri Lanka, and shows how the degree of competitiveness is measured at various market levels using multiple indices and methods. Several tools for the analysis of policies and programmes, with accompanying case studies, are detailed, including the GTAP model, gravity models, extended benefit-cost analysis, and linear programming. These tools and models are applied to the analysis of trade policies and agreements, marketing policies, environmental services, extension programmes, land tenure reforms, and climate change adaptations. Case studies of the agri-food policy and strategy response to the COVID-19 pandemic are also covered.

This book is of interest to public officials working in agricultural planning and agricultural policy, teachers, researchers, agro-economists, capacity builders, and policymakers. It also serves as additional reading material for undergraduate and graduate students of agriculture, development studies, and environmental sciences. National and international agricultural scientists and policymakers will likewise find it a useful read.

  • Agricultural policies
  • Economic Theory and Applications
  • Econometric and Simulation Models
  • Policy environment
  • Practitioners’ book

About the editors

Jeevika Weerahewa is Senior Professor of Agricultural Economics in the Department of Agricultural Economics and Business Management, Faculty of Agriculture, University of Peradeniya, Sri Lanka. She obtained her BSc and MPhil from the University of Peradeniya and her PhD from the University of Guelph, Canada. She has served as Head of the Department of Agricultural Economics and Business Management and as Chairperson of the Board of Study in Agricultural Economics at the Postgraduate Institute of Agriculture, University of Peradeniya. Weerahewa serves as an Honorary Fellow at the Faculty of Veterinary and Agricultural Sciences, University of Melbourne, Australia, and is the current chair of the Sri Lanka Forum of University Economists. She is a Collaborator of the International Food Policy Research Institute, a Hewlett Fellow of the International Agricultural Trade Research Consortium, a Fellow of the Canadian Agricultural Trade Policy Research Network, and a recipient of an Endeavour Fellowship awarded by the Government of Australia.

Bibliographic information

Book Title: Agricultural Policy Analysis

Book Subtitle: Concepts and Tools for Emerging Economies

Editors: Jeevika Weerahewa, Andrew Jacque

DOI: https://doi.org/10.1007/978-981-16-3284-6

Publisher: Springer Singapore

eBook Packages: Biomedical and Life Sciences, Biomedical and Life Sciences (R0)

Copyright Information: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022

Hardcover ISBN: 978-981-16-3283-9 (published 10 April 2022)

Softcover ISBN: 978-981-16-3286-0 (published 11 April 2023)

eBook ISBN: 978-981-16-3284-6 (published 09 April 2022)

Edition Number: 1

Number of Pages: XXXIII, 478

Number of Illustrations: 1 b/w illustration

Topics: Agriculture, Agricultural Economics, Plant Sciences, Sustainable Development


Case Study Design and Analysis as a Complementary Empirical Strategy to Econometric Analysis in the Study of Public Agencies: Deploying Mutually Supportive Mixed Methods


Dan Honig, Case Study Design and Analysis as a Complementary Empirical Strategy to Econometric Analysis in the Study of Public Agencies: Deploying Mutually Supportive Mixed Methods, Journal of Public Administration Research and Theory, Volume 29, Issue 2, April 2019, Pages 299–317, https://doi.org/10.1093/jopart/muy049


There is little methodological guidance regarding how to best integrate qualitative observational case study data and quantitative large-N observational data in the study of public agencies in a mutually supportive way. There is a broad range of potential applications of mutually supportive mixed methods, which can be of help whenever one tool of inquiry (e.g., econometric analysis) suffers from weaknesses (e.g., omitted variables, measurement techniques which may not be unbiased, the inability to estimate important quantities of interest) to which another tool of inquiry (e.g., process tracing of case studies) does not. To demonstrate the broad relevance of mutually supportive mixed methods in public management scholarship, this article focuses on qualitative case studies as a way of addressing an econometric challenge of particular relevance to the field: accounting for the fixed features of units (e.g., agencies or departments) in multiunit studies. The article’s central points are illustrated using mixed method data on foreign aid agency management practice and agency performance outcomes.

The literature on case selection and methods is increasingly complex, as befits a maturing methodological subfield. As Pavone (2017) notes, a recent synthesis of case study selection methods derived “no less than five distinct ‘types’ (representative, anomalous, most-similar, crucial, and most-different) and eighteen ‘subtypes’ of cases, each with its own logic of case selection” (Pavone 2017, 2, describing Gerring and Cojocaru 2016). This article is not meant to add to this complexity, but rather to focus on a specific application of qualitative case studies: as complementary to quantitative methods in making a causal argument.

Qualitative methodologists (e.g., Brady and Collier 2004; George and Bennett 2005; Levy 2008) have argued that qualitative scholarship has moved beyond the “single logic of inference” made famous by King, Keohane, and Verba (1994). That said, mixed methods scholars who wish to appeal to a broad range of scholars, including those less familiar with qualitative methods, are often called upon to wrestle with the logic of positivist quantitative analysis. Scholars steeped in quantitative epistemology may also wish to incorporate qualitative methods into their work, and struggle with how to do so effectively.

This article is thus oriented toward scholars whose epistemic compasses, as Mahoney (2010, 140) puts it in describing the appeal of seminal pieces by Lieberman (2005) and Gerring (2007), “share with KKV [King, Keohane, and Verba 1994] a statistically oriented approach to social science.” While not all public management scholars focus their research on causal claims, a great many do. This seems appropriate for a field that has often been described as a “design science” (e.g., Barzelay and Thompson 2010; Meier and O’Toole 2013; Shangraw, Crow, and Overman 1989). As Barzelay and Thompson (2010, 296) put it, “Designing practical interventions is largely a matter of combining known features in new ways.” For findings to be of future practical use, a user needs to be able to predict the likely outcomes to result from—that is, be caused by—design choices. This article focuses on how to employ case studies in ways complementary to the causal logic of quantitative work, with particular attention to bolstering causal inference in contexts where collinearity between terms bounds the usefulness of econometric analysis.

Case study empirics in a mixed methods context are often conceived as providing a more fine-grained interrogation of claims made in quantitative work. In his book Delegation in the Regulatory State, Gilardi (2008) conducts a primarily quantitative inquiry, but concludes by examining the establishment of the German energy regulator as an example of an “interesting case.” Drawing on Lieberman (2005), Gilardi argues that qualitative cases can be chosen based on the conclusions of the quantitative analysis. For Gilardi, it appears, qualitative and quantitative analyses are conceived of as complementary, but in a particular sense: they are layered upon one another (with appropriate linkages between the qualitative and quantitative “layers,” of course) to present a more complete picture than would otherwise be available. The qualitative data does not, in this illustrative case, actually do any of the primary hypothesis testing.

There are, however, a variety of situations in which econometric analysis is possible, but is incomplete. In these situations, qualitative exploration is not merely about “adding layers.” It is also about “filling holes,” in what I term a mutually supportive mixed methods strategy. I define a mutually supportive mixed methods approach as one where the design and/or analysis techniques employ both qualitative and quantitative empirics through a single logic of inference. A mutually supportive mixed methods approach has much in common with Seawright’s (2016) “integrative multi-method research,” which focuses on research design, with special attention to optimal case selection. Seawright describes his book as “the first systematic guide to designing multi-method research,” suggesting a relatively uncharted terrain of methodological integration.

I share with Nowell and Albrecht and Mele and Belardinelli, both in this symposium, the view that qualitative approaches can be conceived of as deductive complements to econometric analysis. As Nowell and Albrecht (2018) put it, “qualitative and quantitative methods are complementary buckets of tools.” Public management as a field is increasingly open to the use of mixed methods, as Mele and Belardinelli (2018) note. Mele and Belardinelli highlight the importance of the integration or “interwovenness” of mixed methods (e.g., by citing Biesenbender and Heritier 2014; Johnson, Onwuegbuzie, and Turner 2007; Morse 2010; Tashakkori and Creswell 2007; Tashakkori and Teddlie 2003), and conclude in part by calling for “a tighter combination of the findings obtained through separate research processes.” This article can be read as a deeper dive into why and when a particular kind of combination of findings—mutually supportive mixed methods—may be appropriate, and how a scholar interested in deploying such methods might begin to do so.

This article primarily develops the integration of quantitative and qualitative data via mutually supportive mixed methods in the context of a specific inferential challenge of potentially broad applicability to the study of public agencies: collinearity between slow-moving or time invariant features of agencies, contexts, and/or measurement strategies and unit-level fixed effects. This is but one of many potential instances in which mutually supportive mixed methods can be useful over and above any use of mixed methods—for example, the use of case studies to explore causal mechanisms. The careful integration of quantitative and qualitative empirics into a single logic of inquiry has methodological benefits whenever one tool of inquiry—econometric analysis, for example—has flaws that another tool of inquiry does not. This includes, but is not limited to, issues around omitted variables, measurement strategies, the ability to measure important quantities of interest, and exploring heterogeneous mechanisms. The discussion in Discussion and Conclusion section will further explore the range of potential uses of mutually supportive mixed methods.

This article first introduces a general context in which mutually supportive mixed methods will prove useful to scholars of public agencies: when trying to account for fixed or slow-moving features of analytic units in multi-unit studies via a general model (Time-Invariant or Slow-Moving Features of Agencies and Contexts: A Source of Holes in the Public Management Landscape section). The article then instantiates the challenge and discusses solutions in the context of aid agencies and the success of foreign assistance efforts, drawing on my previously published (2018, 2019) work on this topic (Aid Agencies’ Management Practices, Project Performance, and Fixed Effects section). In this work, the quantitative analysis fails not only to elucidate causal mechanisms but also to estimate a critical substantive relationship of interest. The article then draws broader methodological lessons (The Connective Tissue of Mutually Supportive Mixed Methods: Using Parallel Quantitative and Qualitative Approaches section) from that case. I then attempt to illustrate the potentially broad applicability of mutually supportive mixed methods, and how they may differ from conventional uses of mixed methods in the field, using highly cited published work from the Journal of Public Administration Research and Theory (Mutually Supportive Mixed Methods in the Study of Public Agencies: An Examination of JPART Articles section) before turning to a broader Discussion and Conclusion section.

Time-Invariant or Slow-Moving Features of Agencies and Contexts: A Source of Holes in the Public Management Landscape

One central concern of public management scholarship is the relationship between management practices and outcomes such as employee motivation or organizational performance. These relationships are typically contingent. An identical feature—for example, a performance management system—may have very different associations with performance in different organizations. O’Toole and Meier (2014) introduce a simple general model of context’s interaction with agency management on performance of the form

O = β0 + β1M + β2C + β3(M × C) + βXX + ε

This article adapts the form of the model slightly, taking O to be organizational performance, M the management practice of a given agency, C specific features of the broader context in which these actions take place or in which a given agency sits, and X a vector of control variables. 2

The logic of this model is simple and general: actions and features of agencies have an impact on performance. So, too, do features of the context in which agencies operate. However, context and agency features interact. As a result, there is no single strictly dominant agency design or management practice, with the right “tools for the job” of delivering public value a function of the nature of the job itself and where the job is located.

A quantitative, positivist scholar will at this juncture likely be thinking that variation within M and C will be necessary to properly estimate the full model in a context where the researcher wishes to give the analysis a causal flavor. This variation might be cross-sectional or time-series. Better yet, we could examine both cross-sectional and time-series variation, which would strengthen causal claims regarding how a change in M, or a change in C, affects O. Using panel data to estimate the model for agency i in context j would yield a model of the general form

O_ijt = β0 + β1M_it + β2C_jt + β3(M_it × C_jt) + βXX_ijt + ε_ijt

Using panel (cross-sectional time series) data introduces the question of fixed effects. It is highly unlikely that the vector of controls X will fully capture all of the ways features of agency i or context j might impact performance O. To ensure time-invariant features of i and j are not introducing omitted variable bias, fixed effects at the i and j levels are appropriate; to ensure common temporal shocks (e.g., a global recession) are not biasing estimates, time fixed effects may also be useful. This jointly yields:

O_ijt = β0 + β1M_it + β2C_jt + β3(M_it × C_jt) + βXX_ijt + δ_i + γ_j + τ_t + ε_ijt

where δ_i, γ_j, and τ_t are agency, context, and time fixed effects, respectively.

Critical to straightforward estimation of this model is that M and C are time variant; if M_it or C_jt has no temporal variation, it will be collinear with the fixed effects at the i or j level. This will make either β1 or β2 (or both, where neither M nor C has temporal variation) unestimable. 3

The simplest solution is to ensure that the features of M and C that are estimated have intertemporal variation. This is often the case: if M is a performance management system introduced in the middle of the time period the data cover, then fixed effects at the agency level (i) will simply (and appropriately) allow a straightforward intra-organizational comparison of what occurred before and after the performance management system’s introduction. Similarly, if the theoretically interesting feature of C is the party leading the national government, fixed effects at the country level (j) will allow comparison of the time under each party’s leadership. If the researcher’s interest is simply in controlling for time-invariant features, these features will be absorbed by fixed effects.
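This within-unit logic can be made concrete with a short sketch. The code below applies the “within” (demeaning) transformation that unit fixed effects implement, using entirely made-up panel data (the names `M_var` and `M_fix` are invented for this illustration): a practice introduced mid-sample in one agency retains within-agency variation after demeaning, while a time-invariant feature is reduced to zeros, i.e., it is collinear with the agency fixed effects.

```python
import numpy as np

# Hypothetical panel: 3 agencies observed over 4 periods (all values illustrative).
n_agencies, n_periods = 3, 4
agency = np.repeat(np.arange(n_agencies), n_periods)

# M_var: a practice introduced mid-sample in agency 1 only (time-varying).
M_var = np.array([0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0], dtype=float)

# M_fix: a time-invariant agency feature (e.g., independent-agency status).
M_fix = np.array([1.0] * n_periods + [0.0] * n_periods + [1.0] * n_periods)

def within_demean(x, groups):
    """Subtract each group's mean (the fixed-effects 'within' transformation)."""
    out = x.copy()
    for g in np.unique(groups):
        out[groups == g] -= x[groups == g].mean()
    return out

# The time-varying regressor survives demeaning...
print(np.allclose(within_demean(M_var, agency), 0))  # False: variation remains
# ...while the time-invariant regressor is wiped out entirely,
# i.e., it is collinear with the agency fixed effects.
print(np.allclose(within_demean(M_fix, agency), 0))  # True: no variation left
```

The surviving variation in `M_var` is exactly the before/after intra-organizational comparison described above; the zeroed-out `M_fix` is the source of the unestimable coefficients discussed next.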

However, there are many features of agencies and context that may be important to model but are unlikely to be time variant. On the agency level, such features might include the formal structure of the agency; its year of founding; whether it is an independent agency; whether its head is a cabinet member; the existence of a particular feature (e.g., an internal audit function); the agency’s level of centralization; and so on. On the country level (as a particular instantiation of context), features might include whether the state is unitary or powers are shared; the degree of federalism in the country; the legal tradition of the state; the state’s status as a developed or developing country; and so on. There are also the myriad contexts in which a given feature of agencies (e.g., their level of professionalism) or contexts (e.g., societal social capital) is time variant but collected by survey. Creating a panel using surveys will require multiple administrations, and may not be feasible due to financial, logistical, or other constraints.

To estimate a full model when M is time varying and C is time invariant for a given agency (i) requires estimating not just the coefficient β3 on the interaction term MC but also β1 and β2. The same is true where M is time invariant (either in practice or in estimation) and C is time variant. Although β can still be estimated for the time variant term, and for any interaction terms that include a time variant term, β cannot be estimated for the time invariant term when fixed effects at the same level (C or M) are used. Thus, an econometric analysis can estimate marginal effects, but is not conventionally seen as being able to estimate aggregate substantive effects. 4
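A brief simulation illustrates which coefficients survive. All numbers below are hypothetical: the sketch builds a panel in which M is time invariant per agency, includes agency dummies, and shows that β2 and β3 are recoverable while β1 is absorbed into the dummies.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative simulated panel: 6 agencies, 30 periods each.
n_i, n_t = 6, 30
i = np.repeat(np.arange(n_i), n_t)
M = np.repeat(rng.normal(size=n_i), n_t)      # time-invariant per agency
C = rng.normal(size=n_i * n_t)                # time-varying context
b1, b2, b3 = 2.0, 0.5, 1.5                    # "true" coefficients (hypothetical)
alpha = np.repeat(rng.normal(size=n_i), n_t)  # agency fixed effects
y = b1 * M + b2 * C + b3 * M * C + alpha + 0.1 * rng.normal(size=n_i * n_t)

# Design with agency dummies: M itself must be dropped (collinear with the
# dummies), but C and the interaction M*C remain estimable.
D = np.eye(n_i)[i]                            # agency dummy matrix
X = np.column_stack([D, C, M * C])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Approximately 0.5 and 1.5 (the true b2 and b3);
# b1 is not separately recoverable: it is folded into the dummy coefficients.
print(round(beta[n_i], 2), round(beta[n_i + 1], 2))
```

The agency dummy coefficients each absorb alpha plus b1 times that agency's M, which is precisely why the aggregate substantive effect of M cannot be read off the regression.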

The following section discusses this problem and solutions in the context of a particular application: aid agencies’ management practices and their performance.

Aid Agencies’ Management Practices, Project Performance, and Fixed Effects

In a recent book (Honig 2018) and a related article (Honig 2019), I investigate the causal effect of greater or lesser field agent control (M) on performance (O). These studies examine the practices of foreign aid agencies (e.g., the US Agency for International Development and the World Bank) that give field agents greater or lesser control over the design, revision, and day-to-day management of foreign aid interventions.

Tension between field agent and management control is common to international development projects, as agents in the field report to an organizational headquarters often many thousands of miles away. Consider, for example, an aid agency project that aims to improve health systems in Burkina Faso. Field staff might merely be implementing components of a project, with higher-level decisions taken in the agency’s headquarters in a developed-world capital. Alternatively, field agents might guide the design of activities, or revise projects in light of changing local circumstances.

An Econometric Hole: Unestimable Quantities of Interest

The argument in Honig (2018) and (2019) explicitly conceives of greater or lesser field agent control as an agency management practice (M) that interacts with features of the broader context (C). I hypothesize that giving foreign aid’s street-level bureaucrats (Lipsky 1980) greater control will have increasing returns to performance when recipient country environments are more unpredictable. Uncodifiable information about context [tacit knowledge in the sense of Polanyi (1966), or soft information in the sense of Stein (2002)] to which only field agents have access will be in higher demand in more unpredictable environments, and is more likely to be gathered and used when field agents have greater control (Aghion and Tirole 1997).

To test this theory, I assembled a database of over 14,000 discrete development projects from nine aid agencies. 5 These projects have Likert-type outcome scores assigned by the aid agencies themselves, allowing multivariate (ordinary least squares and ordered logit) regression models to be fit to the data. Collectively, the projects span over 40 years and 178 recipient countries. I was thus able to exploit the intertemporal nature of the data to control for time, agency, and recipient country fixed effects. This ensures that fixed features of agencies, common temporal shocks, and fixed features of recipient countries are not biasing the results. In the terms of the econometric model outlined in Time-Invariant or Slow-Moving Features of Agencies and Contexts: A Source of Holes in the Public Management Landscape section, my model can be described as:

O_ijt = β0 + β1M_i + β2C_jt + β3(M_i × C_jt) + βXX_ijt + δ_i + γ_j + τ_t + ε_ijt

where field agent control M_i is time invariant for each agency i.

The measure of C, country unpredictability, draws from the country-level panel data in the Polity IV State Fragility Index (Center for Systemic Peace 2014). The measure of field agent control M, however, is a time-invariant survey measure. 6

The quantity of primary interest—management practice—is collinear with agency fixed effects. But agency fixed effects are critical for this analysis. As Honig (2018) notes, while project success is a Likert-type measure of holistic project performance, there is no reason to believe that each agency assesses project success using a parallel scale. Although these agencies’ measures of success all incorporate a common OECD Development Assistance Committee conceptual standard, they vary along a wide number of dimensions; there is no reason to believe that a given agency’s rating of 4 on a six-point scale is equivalent to another agency’s 4. 7 An agency fixed effect absorbs any fixed agency-level evaluation bias; econometric models can thus compare the performance of a given project relative to the same agency’s other projects. Even without this dependent variable measurement challenge, the use of agency fixed effects is likely appropriate. The nine agencies examined differ in the formal status of the agency (e.g., line ministry versus independent agency), the governance structures of the agencies themselves and their countries of origin, and indeed whether they are multilateral agencies or government units, among a host of other fixed differences. Use of agency fixed effects also absorbs these heterogeneous time-invariant agency-level characteristics, ensuring these features are not sources of omitted variable bias.

Use of agency fixed effects, however, puts us squarely in front of the challenge described in Time-Invariant or Slow-Moving Features of Agencies and Contexts: A Source of Holes in the Public Management Landscape section: the agency fixed effect also makes it very difficult to directly assess the effect of M, which also varies at the agency level and is, as measured in these studies, time invariant. I thus can, and do, fit models that generate estimates of β2 and β3. However, these models cannot estimate β1, given the collinearity of the measure of M with agency fixed effects.

The analysis finds that β3 is positive and statistically significant, suggesting that there are increasing returns to field agent control as a given aid recipient country becomes more unpredictable. Although this marginal effect may be of interest, it leaves open the question of whether greater field agent control is actually better for any given project. Figures 1–3 illustrate this problem graphically. 8

The marginal effects of differential levels of field agent control can be estimated using the sum of β2 and β3. We can observe the differential slopes of project performance by level of C, recipient country environmental unpredictability. But I cannot estimate the relative levels of the marginal effects plot lines, as indicated by the vertical black arrow superimposed onto the marginal effects plot in figure 1: the relative vertical positions of the lines are not estimated, so the line that appears to be on the bottom may in fact lie above the line that appears above it.

Figure 1. Graphical Estimate of the Interaction of Environmental Unpredictability and Field Agent Control in the Absence of β1

The information in figure 1 is re-expressed in figures 2a and 2b, which show the same two lines as figure 1, but using a normalized agency-specific z-score as the outcome variable. The slope of the two lines (for high levels of field agent control in figure 2a, and low levels of field agent control in figure 2b) differs. But as they are both plotted against normalized values of each agency’s own predicted success, there is no way to know whether, for example, a z-score of .5 in figure 2a is, in actual real-world terms of project success, higher or lower than, for example, a z-score of .2 in figure 2b.

Figure 2. Figure 1 Disaggregated for High Field Agent Control (Left, Figure 2a) and Low Field Agent Control (Right, Figure 2b)
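The non-comparability of agency-specific z-scores is easy to see with toy numbers (the ratings below are invented for this illustration). Two agencies that use very different portions of a 6-point scale produce z-scores on the same footing internally, while nothing in the normalized values reveals which agency's projects fared better in real-world terms.

```python
import numpy as np

# Invented ratings: Agency A clusters near the top of a 6-point scale,
# Agency B near the bottom.
ratings = {
    "Agency A": np.array([4.0, 5.0, 3.0, 4.0, 4.0]),
    "Agency B": np.array([2.0, 3.0, 1.0, 2.0, 2.0]),
}

def zscores(x):
    # Normalize within agency: "how did this project do relative to the
    # same agency's other projects?"
    return (x - x.mean()) / x.std(ddof=0)

for name, raw in ratings.items():
    print(name, np.round(zscores(raw), 2))
# Both agencies yield identical z-score patterns, so a given z-score for A
# cannot be compared with a z-score for B in raw-success terms.
```

This is the same information loss the agency fixed effect imposes: within-agency comparisons are clean, but cross-agency levels are gone.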

Figure 3. Greater or Less Field Agent Control May Strictly Dominate, or There May Be a Conditional Net Relationship (We Cannot Know Based on the Quantitative Empirics)

As a result of these econometric limitations, this analysis does little to clarify the substantive effect of field agent control. Figure 3 demonstrates the stylized possibilities by holding the slope of the estimates for both low and high field agent control constant, but arbitrarily varying the intercept of the high field agent control line in the plot.

It is possible that higher levels of field agent control strictly dominate lower levels (possibility 1 of figure 3): that greater levels of field agent control are associated with better aid project performance in all environments, with increasing returns in more unpredictable environments. It is also possible that lower levels of field agent control strictly dominate higher levels (possibility 3 of figure 3): that even as lower levels of control are associated with declining project effectiveness in contexts of greater environmental unpredictability, this decline still leaves less field agent control the more effective strategy in even the most unpredictable contexts. It is also possible that the lines cross (possibility 2 of figure 3): that lower levels of field agent control are superior in more predictable environments, and higher levels of field agent control are superior in less predictable environments. The three stylized possibilities depicted in figure 3 are identical in the slopes they give the “high level of field agent control” and “low level of field agent control” organizations, implying the same β3 in the econometric model. But this leaves open the central question: what is the right management strategy M, and does the “right-ness” of this strategy depend on context C? Which strategy is better? If the answer is conditional, where exactly the optimal strategy “flips” cannot be estimated econometrically.
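The three possibilities can also be stated compactly. Holding the controls fixed, the expected performance gap between a high-control and a low-control configuration at context level C follows from the model's coefficients (with ΔM the difference in field agent control):

```latex
% Gap in expected performance between high and low field agent control,
% where \Delta M = M_{\text{high}} - M_{\text{low}} > 0:
\Delta O(C) = \beta_1 \,\Delta M + \beta_3 \,\Delta M \, C
% The gap changes sign (the lines in figure 3 cross) at
% C^{*} = -\beta_1 / \beta_3
```

Possibility 1 corresponds to ΔO(C) > 0 throughout the observed range of C, possibility 3 to ΔO(C) < 0 throughout, and possibility 2 to a crossing point C* inside the observed range; because β1 is unestimable, C* cannot be located econometrically.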

Mutually Supportive Mixed Methods Part 1: Choosing Case Studies to Complement Econometric Weaknesses

I turn to case studies to complement the econometric analysis. Although I describe the case studies as exploring mechanisms in a way consistent with process tracing (Bennett and Checkel 2015; Blatter and Blume 2008; Hall 2013), Honig (2018, 2019) make clear that a critical role of the case studies is to allow a direct comparison of the relative project success of different levels of field agent control (M). The case studies are used to directly observe the slow-moving variable (agencies’ level of field agent control M) that was collinear with fixed effects in the large-N analysis, thus precluding an estimate of β1.

As such, the case studies were chosen along what might best be described as a “similar enough” case selection strategy (Nielsen 2016). This is a cousin of the “most similar” selection strategy (e.g., Seawright and Gerring 2008, though the concept dates at least to Mill 1843), which attempts to hold constant all factors other than the level of field agent control, but recognizes that the information needed to choose the truly maximally similar cases is never fully knowable. I examine pairs of cases from two organizations where the quantitative measure used for M varies substantially in level of field agent control—the United States Agency for International Development (USAID) and the UK Department for International Development (DFID). I choose cases where USAID and DFID attempt to accomplish similar goals in the same country over the same time period.

As described previously, however, a critical question remains: whether one management strategy M (more/less field agent control) clearly dominates the other. To investigate further, I examine pairs of USAID and DFID cases in countries with different levels of environmental unpredictability C. I examine four pairs of cases, or eight case studies, in Honig (2018). Two pairs of cases occur in Liberia in the 2000s, a relatively high unpredictability environment; two pairs occur in South Africa in the 2000s, a relatively low unpredictability environment.

I also vary one additional feature of context C in case selection: the degree to which the task domain of the aid project is tractable to external measurement, or the project’s external verifiability. This is a feature of context that cannot be accurately measured econometrically. Thus, while project external verifiability features in the theory—I argue that the less tractable a given project is to performance measures, the greater the returns to field agent control—project verifiability plays only a limited role in the 2019 quantitative analysis. 9 A full schematic of the case selection strategy is presented in figure 4. 10

Figure 4. Schematic of Selected Cases

These case selection methods have something in common with mainstream nested analysis (Lieberman 2005). Case selection is informed by quantitative parameters. However, unlike in most applications of nested analysis, it is not the results of econometric analysis that inform case selection. Case selection is done to maximize variation in the independent variables M and C so as to allow the qualitative analysis to complement econometric analysis, given the limitations of the econometric model. It is the level and variation of the independent variables, rather than the results of econometric regressions, that inform case selection.

Mutually Supportive Mixed Methods Part 2: Conducting Case Analysis to Complement Econometric Limitations

Ensuring case studies complement econometric analysis does not end at the case selection stage. In any qualitative case study, researchers must choose to focus on specific elements of the case rather than others. When case studies are used to complement econometric analysis, the choice of how to construct cases can, and should, be informed by the limitations of the econometric analysis they are meant to complement.

Each case study in Honig (2018) explores the design, implementation, and revision of the development projects it examines. In each of the four case study pairs, the success of the USAID and DFID projects is also directly compared. Process tracing links a given project’s success to M and C. I also take pains to establish in each case pair the differing levels of field agent control M in the projects. I trace how M affects project success in each case, focusing on critical junctures in the design, implementation, and revision of projects.

The case studies thus provide qualitative estimation of the net effect of M and C in interaction—that is, the qualitative equivalent of the sum of β1, β2, and β3. The cases can provide suggestive, small-N evidence on whether high or low levels of M strictly dominate one another, or whether more field agent control M is associated with greater project success in some contexts C and lower levels of project success in other contexts C.

By way of brief illustration, one case pair examines USAID and DFID efforts to strengthen South African municipal fiscal management—the ability of selected South African local governments to manage their budgets and expenditures. The two foreign aid agencies managed their parallel projects very differently, and had very different results.

USAID’s project aimed to help municipalities deliver public services more effectively by transferring knowledge to municipal staff. A training plan was centrally developed with modules including municipal accounting, billing systems, and debt management. On a designated day, a trainer would arrive and hold a session on a given topic. The trainings were easily monitored and measured. Whether the trainings were actually effective for the people in the seats, however, was less clear. A leader of the USAID project suggested indicators were chosen “because [they were] easier to count … but the numbers didn’t tell about the impact.” Another project leader suggested the USAID project “might have not made the most dent or impact.” One of the trainers reported he didn’t “think [the trainings] contributed much.”

By contrast, DFID’s project strategy centered on embedding advisers in local municipalities. The advisers resided in the communities for extended periods of time, building skills and systems on an ongoing basis, and set the specific goals against which they reported. Where USAID’s project focused on quantifiable metrics like “all staff trained,” DFID’s asked advisers to “implement their work plans and report on progress.” 11 Effectively, advisers’ judgments led the project. DFID not only condoned this strategy; it explicitly designed the project to incorporate it. As full-time residents for the long term (2–3 years), DFID project advisers were often—though not always—able to find a way to positively influence municipal systems. In interviews, both beneficiaries and project staff reported that advisers achieved some shifts in municipal practices. As one implementer put it, DFID’s reporting was “more content-rich; it was not a numbers game.”

The research directly compares project success in each case pair and the relationship between management practice M and project success in each case. I determine that DFID’s project was by no means an overwhelming success. But it was substantially more successful than USAID’s. USAID and DFID implemented programs with similar goals, through similar contracting structures. USAID’s project had very little field agent control; DFID’s had a great deal. This difference in field agent control is linked at length to project success (Honig 2018).

Each individual case pair can say little about the overall context C. However, by comparing across case pairs, the qualitative empirics can provide leverage on environmental unpredictability C’s role in mediating the relationship between field agent control M and success O. The case analysis ultimately concludes that in three of the four case pairs, greater field agent control is a factor in the relatively greater success of projects. In the case pair where theory predicted the returns to field agent control to be lowest—in the most stable environment (South Africa) with the most externally verifiable projects (delivery of drugs for HIV/AIDS)—greater field agent control is a factor in the relatively lower success of DFID’s project as compared with USAID’s. The case pairs thus collectively suggest that the relationship between M and O is indeed conditioned on environmental unpredictability C.

The case studies explore mechanisms, but with attention to the weaknesses of the econometric analysis. As the quantitative analysis does not allow direct estimation of comparative project success, given the agency-specific nature of the measurement regime, the qualitative cases establish these levels of comparative project success directly. In finding that in all four case pairs USAID and DFID differ on M as predicted (with DFID having much greater levels of field agent control than USAID), the cases also serve to reduce concerns that the agency-level measurement of management practice M in the quantitative analysis introduces bias by being insufficiently sensitive to intra-organizational variation in field agent control M. The clear links between management practice M and project success suggest that it is indeed field agent control M, and not some other agency-level feature that co-varies with agent control, that plays a causal role in project success and failure.

The Connective Tissue of Mutually Supportive Mixed Methods: Using Parallel Quantitative and Qualitative Approaches

The discussion thus far has framed qualitative analysis as filling inferential holes in the quantitative strategy. The inverse is also true: the quantitative analysis shores up weaknesses in the qualitative case design. Indeed, Aid Agencies’ Management Practices, Project Performance, and Fixed Effects section could have proceeded in precisely the inverse way—by beginning with the qualitative logic of causal inference, suggesting its weaknesses, and then finding econometric complements. As I acknowledge, the qualitative cases are intentionally chosen to maximize variation. It is possible that the functional form of the relationship between field agent control and outcomes is not linear; a purely qualitative examination might erroneously ignore, for example, a parabolic relationship because it does not sample the middle of the distribution. This is but the tip of the iceberg of potential problems with causal inference that rightly concern qualitative methodologists. The cases could be outliers in a variety of unintended ways, and thus not provide accurate systematic data. The large-N analysis helps to “fill the holes” left by the qualitative analysis.

Inferential challenges are not method-specific; the limitations above are not quantitative limitations to be addressed with qualitative data, nor are they qualitative limitations to be addressed with quantitative data. A primarily quantitative scholar might view the case studies in the Aid Agencies’ Management Practices, Project Performance, and Fixed Effects section as estimates of net marginal effects that allow simultaneous estimation of β1, β2, and β3. The case study pairs suggest that it is possibility 2 of figure 3, not possibilities 1 or 3, that depicts the correct stylized relationship; that the best strategy M depends on C. This quantitative scholar might conclude that the case studies provide an important complementary source of evidence to the “primary” quantitative empirics. A primarily qualitative scholar might begin by conceiving of the case studies as providing strong suggestive evidence that field agent control is an important component of good development project outcomes in the case study projects, with the impacts of this management practice conditioned by the unpredictability of recipient country environments and the verifiability of tasks. They might see the quantitative analysis as suggesting that case selection and analysis methods and features of the agencies chosen are not driving the findings, strengthening the claim of the qualitative cases to broader generalizability. This qualitative scholar might conclude that the large-N econometrics provide a complementary source of evidence to the “primary” qualitative empirics. Both the primarily quantitative and primarily qualitative scholars might agree that the existence of both sets of empirics allows us to update (in the Bayesian sense) our priors more confidently, even as they disagree on which component of the empirical strategy provides more useful information.
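The conditional relationship both readings describe, where the best strategy M depends on context C, corresponds to an interaction model of the form O = β0 + β1·M + β2·C + β3·(M·C). A small simulation sketches the logic; the data-generating numbers below are invented for illustration and do not come from the article's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
M = rng.normal(size=n)          # management practice (field agent control)
C = rng.normal(size=n)          # context (e.g., country unpredictability)

# stylized "possibility 2": the effect of M flips sign as C changes
O = 1.0 * M - 0.5 * C - 1.5 * M * C + rng.normal(scale=0.5, size=n)

# O = b0 + b1*M + b2*C + b3*(M*C); b3 captures how context conditions M
X = np.column_stack([np.ones(n), M, C, M * C])
b0, b1, b2, b3 = np.linalg.lstsq(X, O, rcond=None)[0]

# marginal effect of M at a given context: dO/dM = b1 + b3*C
for c in (-1.0, 0.0, 1.0):
    print(f"C = {c:+.1f}: marginal effect of M = {b1 + b3 * c:+.2f}")
```

Under this invented data-generating process, the estimated marginal effect of M is positive in low-C contexts and negative in high-C contexts, which is exactly the sign-flipping pattern a single "net effect" of M would obscure.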

In the example in the Aid Agencies’ Management Practices, Project Performance, and Fixed Effects section, the quantitative data could not tell us whether field agent control was substantially driving differences in project outcomes, and so the qualitative case studies were designed to directly examine this link. But we could imagine a scenario where qualitative data illustrated one step in a causal chain, but not the outcome of ultimate interest. Perhaps qualitative case studies, drawing on interviews conducted both before and after the introduction of performance evaluations in a given agency, convincingly demonstrate that introducing new evaluation methods improves employee attitudes about their work. The researcher, however, cannot determine whether improvements in employee engagement lead to better client outcomes. In this case, large-N data linking changes in employee engagement survey scores induced by the new evaluation system to client outcomes might “fill the hole” by estimating the size of the substantive effect as well as the proportion of the variance (the R²) explained by the econometric model. Both qualitative and quantitative empirical strategies may yield unsatisfying answers as to substantive significance; both may be useful in complementing these weaknesses via carefully co-designed, mutually supportive mixed methods.
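Once linked large-N data exist, the two quantities named here, the substantive effect size and the variance explained, are straightforward to estimate. A minimal sketch with simulated data follows; the variable names and effect magnitude are hypothetical, not drawn from any real evaluation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
engagement_change = rng.normal(size=n)     # hypothetical survey-score changes
# hypothetical true effect of 0.3, plus substantial unexplained variation
client_outcome = 0.3 * engagement_change + rng.normal(scale=1.0, size=n)

# simple OLS of client outcomes on engagement-score changes
X = np.column_stack([np.ones(n), engagement_change])
coef, *_ = np.linalg.lstsq(X, client_outcome, rcond=None)
resid = client_outcome - X @ coef
r2 = 1 - resid.var() / client_outcome.var()

print(f"effect size = {coef[1]:.2f}, R^2 = {r2:.2f}")
```

The point of reporting both numbers is the one the paragraph makes: a statistically detectable effect size says little by itself about substantive significance unless paired with how much of the outcome variance the model actually accounts for.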

Whether one begins by examining the case studies or the large-N analysis, at the heart of the common inferential challenge is the need to leverage both within-and between-unit variation in making strong empirical claims. One way of describing the fixed-effects strategy in the quantitative analysis is as shifting the inquiry to within-unit variation.
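The equivalence between including unit fixed effects and analyzing only within-unit variation can be seen in a small simulation (a generic sketch, not the article's data): regressing on unit dummies and regressing on unit-demeaned variables recover the same slope, so the fixed-effects estimate is identified purely by within-unit variation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods = 5, 40
unit = np.repeat(np.arange(n_units), n_periods)

alpha = rng.normal(0, 3, n_units)[unit]   # fixed, unobserved unit effects
x = rng.normal(size=unit.size)            # time-varying regressor
y = alpha + 2.0 * x + rng.normal(scale=0.1, size=unit.size)

# (1) dummy-variable ("fixed effects") regression: one intercept per unit
D = np.eye(n_units)[unit]
beta_dummy = np.linalg.lstsq(np.column_stack([D, x]), y, rcond=None)[0][-1]

# (2) within transformation: demean y and x by unit, then regress
def demean_by_unit(v):
    means = np.bincount(unit, weights=v) / np.bincount(unit)
    return v - means[unit]

beta_within = np.linalg.lstsq(
    demean_by_unit(x)[:, None], demean_by_unit(y), rcond=None
)[0][0]

print(beta_dummy, beta_within)
```

The two estimates agree to numerical precision, illustrating that between-unit differences (the unit intercepts) are absorbed entirely and contribute nothing to the slope.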

The nature of the dependent variable in the econometric analysis presented in the Aid Agencies’ Management Practices, Project Performance, and Fixed Effects section—a holistic measure of project performance generated by the agency itself—makes the need for agency-level fixed effects particularly clear, to remove any bias in the measure. But this bias is in some sense just a clearer version of what is nearly always true when scholars of public management and public administration look at multiple units: multiple administrative units differ in a wide variety of fixed, unobservable ways. A study of Ministries of Finance from OECD countries must cope with differences in administrative tradition, governance systems, and workplace culture, to name but a few of many (econometrically) often unobservable features. But the problem does not substantially abate when looking within a country; multiple agencies in the same country may also differ in internal structure, workplace culture, worker motivation, and a host of other variables. Indeed, even a study of the effects of principal public service motivation on performance in a single school district must contend with different student populations, parent–teacher associations, informal rules of governance, and much more.

The challenge scholars of public agencies set themselves yields many, many instances where the diversity of the units under study calls out for some way to absorb fixed differences between the units—be they schools, teams, divisions, hospitals, larger agencies, or countries. Agency fixed effects allow the researcher to ensure that the unobserved (fixed) features of the units are not driving results. 12

When I employ fixed effects at the agency level, I shift the analysis to within-agency differences in performance. But this is not the only margin on which fixed effects may be useful. By using fixed effects at the recipient country j (context C) level, I shift the analysis from one that examines, for example, the differences between country A and country B, to focusing only on changes within country A, and within country B. Taken together, the analysis thus looks at within-agency differences in project performance as within-recipient country unpredictability rises or falls. I use within-agency, within-recipient variation to make between-agency claims; see, for example, β3 in the model in the Aid Agencies’ Management Practices, Project Performance, and Fixed Effects section, or the differing slopes of figures 1–3. 13

This empirical strategy has clear parallels in qualitative research design. Leveraging within-case differences in case pairs, I make causal claims about the relationship of the variables examined in each case to the between-case differences (or, in Mahoney's (2007) framing, cross-case comparisons) in project outcomes. Figure 4's case design schematic, then, has intuitive parallels with the fixed effects models that underlie figures 1–3's marginal effects plots. Using within-agency variation in success to illustrate between-agency differences in the relationship between management practices and outcomes, the case study pairs and large-N analysis jointly provide the basis for a mutually supportive mixed methods conclusion: both top-down management control and greater field control have their appropriate environments, with unpredictability conditioning which is the superior strategy. This mutually supportive mixed methods conclusion is facilitated by the common inferential logic on which the quantitative and qualitative empirics rely: leveraging differential within-unit variation to make between-unit comparisons.

Designing qualitative and quantitative strategies so they rely on the same logic of inference but are mutually supportive requires co-design of the two. My choice of agencies that vary widely in level of M (field agent control) in countries that vary widely in level of C (country unpredictability) is informed both by a logic of inference parallel to that of the quantitative empirics and by a recognition of the limitations of those empirics. The choice to examine pairs of cases from different agencies using a “most similar” case strategy is driven by the inferential logic of leveraging “within” variation to make “between” comparisons.

The parallel strategies of causal inference in the qualitative and quantitative analyses are the analytic connective tissue that allows the limitations of the qualitative analysis to be shored up by the quantitative analysis, and the limitations of the quantitative analysis to be shored up by the qualitative analysis. Building this type of connective tissue requires researchers to abstract away from the particular context of their qualitative and quantitative strategies, to interrogate the logic of inference in each case, and to ensure the quantitative and qualitative empirical strategies are mutually supportive of one another.

Mutually supportive mixed methods do not allow any given method to transcend its limitations. The case studies are a small-N sample, subject to their standard limitations; a scholar who does not believe qualitative cases can provide useful information is unlikely to be convinced to additionally update their priors in response to the case data. The quantitative data is observational, subject to the standard limitations of quantitative analysis and observational data. A scholar who believes we can learn little from large-N observational data is unlikely to be swayed by the analysis. But for, I believe, the great majority of scholars who would agree that large-N econometric analysis, process tracing ( Bennett and Checkel 2015 ; Blatter and Blume 2008 ; Hall 2013 ), and controlled case comparisons ( Mahoney 2007 ; Slater and Ziblatt 2013 ) can all provide useful evidence for updating one’s priors, mutually supportive mixed methods can help develop a stronger combined picture than any one method can provide in isolation. The strength of mutually supportive mixed methods lies in the diversity of each method’s weaknesses. A reason for being dubious about one part of the mutually supportive mixed methods analysis often does not apply to other components of the analysis. It is just this heterogeneity in methods’ relative weaknesses that allows mixed methods, properly designed, to be mutually supportive.

Mutually Supportive Mixed Methods in the Study of Public Agencies: An Examination of JPART Articles

This article has focused on only one of the many possible inferential challenges researchers face. There are a variety of situations in which mutually supportive mixed methods might be helpful in addressing challenges, and Seawright (2016) attempts to provide some structure to the variety of general problems researchers may face. In the O’Toole and Meier (2014) article from which the general model introduced in the Time-Invariant or Slow-Moving Features of Agencies and Contexts: A Source of Holes in the Public Management Landscape section is adopted, the authors provide a “Public Management Context Matrix” (table 1). This matrix lists important causal variables that could be classified as part of political, environmental, and/or internal context. In the first, political, category of variables are the slow-moving at best and time-invariant at worst “Separation of Powers” and “Federalism.” The same can be said of all four environmental variables (complexity, turbulence, munificence, and social capital) and all three internal agency variables (goals, centralization, and professionalization). This suggests the centrality of slow-moving or time-invariant variables to the study of public agencies.

The Potential for Case Studies to Complement Existing Econometric Analyses in JPART’s Public “Highly Cited Articles” Archive

This section aims to briefly explore how mixed methods have already been used in the study of public agencies, and how mutually supportive mixed methods might be more generally used in the face of these challenges. I first examine highly cited Journal of Public Administration Research and Theory (JPART) articles that do not use mixed methods to discuss how the broader use of mutually supportive mixed methods might further strengthen the inquiry at the core of these articles. I then turn to existing mixed methods work in JPART’s pages to explore current methodological practice, and suggest what mutually supportive mixed methods framings might add to the current state-of-the-art on the use of mixed methods in the discipline.

To suggest the broad potential applicability of case studies as a mutually supportive method in the field, I examined all the articles included in the “highly cited articles” section of JPART’s website. 14 Described on the journal’s website as “a selection of five highly cited articles from recent years,” these articles presumably combine methodological rigor with a focus on substantive topics of interest to the core of the journal’s scholarly community. The implicitly high standard of rigor in these standout articles makes this sample a “high bar” test for the relevance of mutually supportive mixed methods; nonetheless, I believe that use of case studies as a mutually supportive method alongside econometric analysis could in principle have further strengthened the empirics of some of these articles, which are summarized in table 1.

This article has argued that the collinearity of slow-moving or time-invariant features sometimes has consequences for estimating substantive quantities of interest in the field. It has also argued that this collinearity has the potential to make attribution of a given effect to particular mechanisms more difficult. Table 1 reviews each of the five articles and summarizes the extent to which each has a slow-moving or time-invariant feature, and whether mutually supportive mixed methods (MSMM in the table) as outlined in this article might have further strengthened the central claims of the article, had it been employed. As each of the five articles is a quantitative econometric exploration of its topic, this section focuses on the ability of qualitative data to complement the econometrics.

Of the five studies included in table 1, two ( Jilke, Van Ryzin, and Van de Walle 2016 ; Marvel 2016 ) involve randomized one-shot survey experiments conducted through the online platform MTurk. Although we could imagine case studies or other qualitative data that might address the broader subject of both articles, there is no obvious way for qualitative data to integrate with the existing experiments. This is not because there is no way for experiments to be part of a mutually supportive mixed methods design. In-depth examination of purposively selected cases might, for example, help explain heterogeneity in treatment effects. In Jilke et al., are consumers of some occupational or (drawing from Jilke et al.'s table 1A) sociodemographic groups more or less susceptible to choice overload? If so, mutually supportive mixed methods inquiry might have helped explain that heterogeneity. Perhaps wealthier people are less susceptible to choice overload, and in interviews and survey data this seems to be due to greater prior exposure to contexts with a great many technical choices.

However, neither experimental study shows a central concern with anything other than the overall mean treatment effect. As such, qualitative data likely has little to offer these articles, though it is of course possible that given a different empirical strategy the author(s) might have broadened the range of estimands they considered to be of theoretical interest. This highlights that it is not merely the match between methods that determines the relevance of mutually supportive mixed methods; methodological choices are endogenous to what the researcher wishes to accomplish. A variety of methods, including mutually supportive mixed methods, could be used to explore any broad topic area. But which are appropriate in a given study depends on the particular questions the researcher wishes to ask.

The remaining three nonexperimental articles ( Ennser-Jedenastik 2016 ; Favero, Meier, and O’Toole 2016 ; Ingold and Leifeld 2016 ) all involve survey, observational administrative, and/or panel data. In all three cases, I believe it is possible that carefully designed case studies fitting the same logic of causal inquiry as the existing econometric analysis might have further bolstered the central claims of these articles. That is, mutually supportive mixed methods might have helped the authors strengthen the central claims they wish to make. In parallel to the central illustration in the Time-Invariant or Slow-Moving Features of Agencies and Contexts: A Source of Holes in the Public Management Landscape and Aid Agencies’ Management Practices, Project Performance, and Fixed Effects sections, this is in part because in each of these three articles at least one key variable is slow-moving or time-invariant.

In Ennser-Jedenastik's (2016) exploration of the impact of legal independence on the politicization of appointments to regulatory agencies, the key independent variable—the Gilardi measure of formal independence used by the author—is time-invariant. 15 The author controls for many potential confounds (agency resources, agency age, rule of law, etc.), as well as country-level fixed effects, and models the data using mixed-effects models. However, collinearity between legal independence and unit-level fixed effects precludes the inclusion of agency-level fixed effects, which would absorb all possible fixed features of agencies other than their legal independence. As such, qualitative case studies of agencies at the extremes of the key independent variable (those with very low and very high levels of legal independence) might have allowed additional exploration of the link between legal independence and appointment decisions. This might have provided additional confidence that it was the legal independence of the agencies, rather than potential agency-level confounds, that was driving results, as well as strengthening the claim that the theorized mechanism (appointment of co-partisans to ensure independent agencies carry out the party’s desired policy) was in fact operative.
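The collinearity at issue is mechanical and can be seen directly in a toy design matrix (the independence scores below are invented for illustration): a time-invariant regressor is an exact linear combination of the agency dummies, so adding agency fixed effects makes the design matrix rank deficient and leaves the coefficient on legal independence unidentified.

```python
import numpy as np

n_agencies, n_years = 4, 6
agency = np.repeat(np.arange(n_agencies), n_years)

# hypothetical time-invariant formal-independence scores, one per agency
legal_indep = np.array([0.2, 0.5, 0.7, 0.9])[agency]

D = np.eye(n_agencies)[agency]           # agency fixed-effect dummies
X = np.column_stack([D, legal_indep])

# legal_indep = 0.2*D1 + 0.5*D2 + 0.7*D3 + 0.9*D4 exactly, so X loses rank
print(np.linalg.matrix_rank(X), "of", X.shape[1], "columns independent")
```

Because the rank falls one short of the number of columns, no amount of additional data within the same agencies can separate the effect of legal independence from the agencies' other fixed features; that is precisely the gap the suggested case studies would fill.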

The data Ingold and Leifeld (2016) use in their primary analysis of the determinants of perceived influence are largely drawn from a single, time-invariant survey (though an additional analysis of the one case for which it is available exploits longitudinal data from two survey waves). The use of a single survey wave for both the dependent and a key independent variable heightens the threat of omitted variable bias, in addition to introducing common source bias. 16 Their analysis is already at the case level. Qualitative case data about the process by which actors came to be perceived as influential would provide additional empirical leverage on their question of central interest, allowing estimates of how changes in institutional roles and/or structural positions led to changes in perceived influence. Qualitative case data might also link perceived influence to actual impact (if, e.g., a figure influential on their measure clearly carried the day at the end of a contentious policy debate), a link of the causal chain for which the authors currently rely on theory.

Favero, Meier, and O’Toole (2016) go some way to trying to address the inferential challenges posed by their measure of internal management practice and its relationship with student success. The authors use panel data, and address potential mismeasurement of their key dependent variable (via a “halo effect” where management practice is rated better by staff at higher performing schools independent of its “true” level). They also control for the previous year’s performance in a given school, lessening the risk that omitted variables such as more qualified teachers (the composition of a school’s teachers being unlikely to change radically from one year to the next) are leading both to better student performance and better internal management practices. As a final robustness check, the authors employ a model which includes fixed effects to “control for any possible omitted variable bias” (footnote 7, p. 336).

Favero, Meier, and O’Toole (2016) conclude there is “little doubt … that the positive effects of management in the New York City school system are real.” Although the existing econometric strategy is certainly strong, the article’s focus on causal inference suggests that the authors might themselves agree that were even richer, more nuanced data available for management practices and student achievement, using those data would have been even more convincing. Carefully selected case studies (e.g., of schools where student performance radically improved or fell) might have provided an additional temporal dimension to the analysis, helping to provide further confidence that the causal mechanisms work as the authors theorize. This examination might also have provided qualitative evidence of first differences: of the marginal impact on education provision produced by marginal changes in management practice. Such data would have strengthened even further the article’s central claim that management practices play a causal role in student achievement.

In three of the five articles in JPART’s “most cited” archive—and all the nonexperimental articles in the archive—case study empirics might have further strengthened the authors’ quantitative empirical strategies in making their core claims. This is not to suggest these articles are, at present, insufficiently rigorous; indeed, their inclusion in the JPART archive suggests much the opposite. Nor is it to suggest that the three articles for which case studies might have served as a mutually supportive mixed method are weaker than the two for which that is not the case. It does suggest the frequency and centrality of slow-moving and time-invariant variables of significance to the field, and the broad potential for case studies to serve as a mutually supportive mixed method in the field.

Further empirical exploration of these articles using case studies would certainly be costly in both time and money; many, perhaps even most, researchers would conclude that the econometric rigor of each of these five articles is sufficient. The claim here is not that all work in the field must include mutually supportive mixed methods; it is that empirical work can benefit from careful case study design that complements econometric analysis. This claim applies, at least in principle, to some of JPART’s strongest recent articles as determined by the field via citations and by the journal itself via inclusion in a special curated section of highly cited articles made freely available to the public as exemplars.

Of course, the notion of mixed methods work in the study of public agencies is not novel. In an attempt to further illustrate what may be useful in mutually supportive mixed methods, as opposed to simply the use of mixed methods more broadly, I turn to the five most cited mixed methods articles I was able to identify in JPART in the past 15 years, which are listed in table 2 . 17

Highly Cited Mixed-Methods Work in JPART, 2003–2018

These pieces illustrate both the varied nature and the empirical rigor of existing mixed methods work in the study of public agencies; however, of the five, only one (arguably Witko 2011 ) employs mutually supportive mixed methods, thus fully exploiting the potential of the methods employed to maximize empirical leverage. In the interests of space, I will not discuss all five of these articles in depth, focusing instead on a pair of articles ( Soss et al. 2011 ; Witko 2011 ) that provide a useful contrast and broadly parallel the types of inferential challenges on which the Time-Invariant or Slow-Moving Features of Agencies and Contexts: A Source of Holes in the Public Management Landscape and Aid Agencies’ Management Practices, Project Performance, and Fixed Effects sections focused.

The most cited piece, Soss et al. (2011), in many ways parallels the O’Toole and Meier (2014) model—a management practice M (NPM-inspired performance management practices) interacts with differential contexts C in Florida (e.g., liberal versus conservative regions) in producing O (sanctioning behavior)—using both in-depth interviews and econometric analysis. The qualitative data first inform the hypotheses and illustrate the context, and are then used to explore the mechanisms underlying the quantitative findings. Although this use of mixed methods is more effective than either qualitative or quantitative methods would be in isolation, it nonetheless may not fully exploit the qualitative data, which might also be thought of as hypothesis testing. Although the interviews do not have quite the fullness of case studies, one could imagine examining whether, for example, the authors’ “ideology hypothesis” (negative performance feedback increasingly stimulates sanctioning behavior in more politically conservative areas) holds in the qualitative data. The quantitative analysis, in turn, could include fixed effects by agency and examine how (in, e.g., a small modification of Soss et al.'s (2011) table 3) changes in performance feedback might differentially affect monthly closure rates while controlling for fixed features of agencies. Had the authors wished to use mutually supportive mixed methods, the qualitative “mechanism” and quantitative “primary effect” analyses could have been combined even more explicitly and tightly, with the qualitative analysis helping to directly address some of the quantitative limitations and both methods speaking to mechanisms and primary effects.

Witko (2011) explores whether campaign contributions have an effect on contracting processes. Witko’s case studies and econometric analysis both focus on exploring the variation in how contributions affect contracting. The case studies focus more on mechanisms, and seem to have been selected for their demonstration of different causal pathways. But in contrast to Soss, Fording, and Schram (2011), both the case studies and the econometric analysis are part of a single analytic logic, and both are framed as hypothesis testing in ways that might be seen as mutually reinforcing. The case studies explore primary effects—political influence on contracting—and the complex and varied ways that politics played a role in the contracting processes. The same could be said of the quantitative analysis, which explores both the primary effect and the conditions on which it depends. There may be ways Witko might have more tightly linked the empirical logic of these mutually supportive mixed methods; Witko integrates the findings from the two methods only very briefly in the conclusion, and the abstract is framed entirely around the quantitative results. That said, this piece might be said to have used mutually supportive mixed methods. This allows the piece to more fully leverage both qualitative and quantitative data, with both data sources informing the analysis of primary effects and of causal pathways and mechanisms.

The highly cited pieces in both tables 1 and 2 are clearly all exceptional pieces of scholarship. But only one ( Witko 2011 ) even arguably embodies the spirit of mutually supportive mixed methods; that is, bringing qualitative and quantitative data to bear in a way that is mutually reinforcing across the methods used. This is not to suggest these articles are ineffective; I mean only to illustrate additional ways that these pieces might have been further strengthened, and thus the ways in which future scholars might fruitfully adopt mutually supportive mixed methods.

Rare is the empirical study—qualitative or quantitative—that faces no inferential challenges. Challenges are a function of the data-generating environment, the tractability of the objects of inquiry to direct observation and/or manipulation, and the causal density of the context ( Woolcock 2013 ), among myriad other factors. Perhaps the only universal statement that can be made about the nature of inferential challenges is that, unhelpfully for prescribing particular responses, the inferential challenges of a given empirical strategy are deeply contextual and often affect only a very narrow slice of the empirical work with which the researcher is familiar.

What, then, is a researcher to do? The first step, perhaps, is to step back from the particular limitations of the data to a more abstract consideration of the flavor of empirical challenge with which the researcher is grappling. One way of identifying the inferential weaknesses of one’s own empirical strategy is to borrow from the toolkit of experimental economists, who sometimes speak of the “God experiment”: the experiment that would allow perfect identification and causal inference, were it possible to manipulate all the relevant features of the environment.

The often-unexamined intuition behind the God experiment is the notion that we can learn about our own strategy’s limitations, and work to address them in the design phase by considering the “breach” between the perfect and the possible. Step 1 of a general mutually supportive mixed methods design strategy, then, might be to consider what the perfect data (quantitative and/or qualitative) to test a given theory might be. Step 2 would be to think about what data is available, or what qualitative and quantitative empirical investigation might make available.

In some instances, researchers will determine that the best empirical strategy to address these challenges is entirely quantitative, or entirely qualitative. In many instances, however, both qualitative and quantitative empirics will be illuminating. One example is when a key quantity of interest is slow moving or time-invariant, leading to collinearity in econometric analysis and thus difficulty in determining quantities of interest. These quantities of interest may relate to substantive significance of findings or to mechanisms of action; or, as in the case of Aid Agencies’ Management Practices, Project Performance, and Fixed Effects section, may impact interpretation of both substantive significance and mechanisms. This collinearity is a problem of particular relevance to the econometric study of public agencies, given the slow-moving or time-invariant nature of both many measurement strategies (e.g., surveys) and many important features of both agencies themselves and their broader contexts.

This collinearity/fixed-effects challenge is but the tip of the iceberg of contexts where mutually supportive mixed methods may be useful, albeit a tip particularly visible and sharp in the study of public agencies. Mutually supportive mixed methods can help address omitted variable bias and measurement error, where measurement strategies differentially “omit” variables or use different measurement techniques. Mutually supportive mixed methods can simultaneously explore variation in mechanisms and aggregate effects by integrating qualitative and quantitative data in contexts where one method of inquiry (often econometric analysis of quantitative data) is better suited to exploring the variance in effects and another method (often process-tracing of case studies selected to maximize variation in the independent variables) is better suited to exploring mechanisms. Similarly, where important independent variables cannot be quantified, mutually supportive mixed methods can use controlled case comparisons to provide analytic leverage. Mutually supportive mixed methods can add a temporal dimension to data (via, e.g., historical case studies), turning a quantitative cross-sectional analysis into a mixed methods “panel” with the attendant benefits for examining how changes over time in independent variables have affected dependent variables.

The design of mutually supportive mixed methods strategies ought to be informed by the nature of each method’s weaknesses in a given context and by the logic of common inferential strategies, but need not focus exclusively on that challenge. A case study can, for example, confirm systematic results and explore mechanisms, leveraging the nuance that a case analysis approach (such as process tracing) can provide but a large-N study may not. An econometric analysis can confirm that the dynamics at play in case studies hold more generally and explore systematic tendencies that may not be apparent in a small-N qualitative study, leveraging the larger set of observations to draw stronger systematic conclusions. Although there may well be trade-offs in practice, in theory filling holes does not preclude complementary analyses from also adding layers, exploring mechanisms, or performing any other analytic function.

What makes mutually supportive mixed methods different from any other use of mixed methods? One way in which mutually supportive mixed methods differ from the logic of, for example, nested analysis (e.g., Lieberman 2005) is that the "small-N" qualitative work is not merely a fine-grained look at mechanisms from "large-N" analysis; rather, the small- and large-N analyses can be conceived of as mutually supportive hypothesis testing. Mutually supportive mixed methods also imply a purposive selection of cases endogenous to variation in independent variables in the quantitative analysis, to allow for hypothesis testing complementary to quantitative inquiry. More generally, mutually supportive mixed methods allow for, indeed demand, simultaneous design of qualitative and quantitative empirical strategies, rather than conceiving of the former as endogenous to the results of the latter. In this sense, the logic of mutually supportive mixed methods has much in common with Humphreys and Jacobs (2015), as it conceives of qualitative and quantitative evidence as complementary data whose optimal proportion depends on the nature of the data environment and inferential challenge.

Although there are particular types of methodological difficulties more common in the study of public agencies, it is not that public administration and management are unique in benefiting from mutually supportive mixed methods. It is rather that public administration and management are no exception to the general case, and the study of public agencies could benefit from greater use of mutually supportive mixed methods empirical strategies. Many disciplines face challenges in integrating qualitative and quantitative methods and seek to gain greater benefit from mixed methods work. From biostatistics, Rosenbaum and Silber (2001) argue for what might be thought of as the inverse of the Lieberman (2005) nested analysis approach: using "thick description" to improve matching strategies for quantitative analysis. In development economics, Blattman et al. (2016) argue for a form of what I would term mutually supportive mixed methods, integrating qualitative and quantitative data in a single logic of inference to better validate survey responses. Michael Woolcock, an international development scholar with roots in sociology, argues that mutually supportive mixed methods are critical to addressing internal and external validity concerns in understanding the effect of development projects (Woolcock 2013, 2018).

We have no idea how many studies of public agencies have never been attempted, or were placed in the proverbial file drawer at the concept stage, because scholars found econometric challenges insurmountable. As public administration moves toward large-N quantitative research (e.g., Boyne et al. 2006; Lynn, Heinrich, and Hill 2001; Walker, Boyne, and Brewer 2010) while also taking seriously management context (e.g., Andrews, Beynon, and McDermott 2016; Bullock, Stritch, and Rainey 2015; Meier et al. 2015; O'Toole and Meier 2014), the research design issues explored in this article will become more common. As the study of public management and public administration becomes more econometrically rigorous, the need to address the kinds of causal inference problems on which this article focuses will likely become more acute.

Econometric analysis is sometimes conceived of as coincident with rigor. But as more and more public management scholars take an econometric turn, both the strength and sophistication of econometric analysis on the one hand, and its potential limitations on the other, may become more apparent. Econometric analysis is an incredibly powerful tool, but it is nonetheless only a single tool in a toolkit, rather than the toolkit itself. If public administration and public management scholars respond to the field's methodological shifts by abandoning qualitative work, or by using qualitative work merely as an additional layer to quantitative analysis, the field may lose access to a great deal of important work and empirical settings. There are alternatives to consigning research ideas for which the best possible quantitative empirical strategies are less than fully satisfactory to the rubbish bin.

Where a researcher determines that quantitative and qualitative case study methods might be mutually helpful in examining the question and quantities of interest, it is critical to integrate these two parts of the empirical strategy. Integration involves carefully thinking through the logic of causal inference of both parts of the holistic empirical strategy: determining where the holes in each part of the strategy lie, and determining how the other part of the empirical strategy might best fill the hole. Rare will be the researcher who can come to the best strategy on first draft. Iteration and collaboration are critical in crafting a well-integrated, mutually supportive mixed methods strategy. One implication of this approach, then, is to invest greater time at the outset in considering a range of qualitative and quantitative empirical approaches than may be conventional in many corners of the academy.

Scholars who have starkly divergent underlying models of what causation is (ontologies) and how we might learn about what causes what (epistemologies) are not likely to embrace each other’s methods. The approach described in this article is of most help to qualitative and quantitative scholars who agree on epistemic and ontological matters, not a means of resolving tensions between scholars who do not agree. That said, ontological and epistemological divides exist within communities of primarily qualitative and primarily quantitative scholars, not merely between them. Many scholars recognize the benefit of multiple kinds of empirical strategies, even if they believe the methods they use to be of primary usefulness. Mutually supportive mixed methods hold the promise of allowing communities of scholars to focus on what unites them rather than what divides them. Although not the primary intent of this article, it is possible that mutually supportive mixed methods can help bridge ossified methodological stalemates.

In some ways, this article's main thrust is simple: when the purpose of one method of empirical inquiry is "filling holes" in another, in addition to "adding layers" to that analysis, how one proceeds depends on the nature of the hole to be filled. When qualitative case study research is "filling holes" in large-N observational analysis, what cases a researcher chooses and how cases are constructed and analyzed depend on the particulars of the hole. On case selection, this requires going beyond simply considering the differences between "most similar" and "most different" strategies, or typical versus extreme cases; it implies a particular kind of "prior stratification" (Seawright and Gerring 2008), a stratification endogenous to the particular econometric problem one faces. On case analysis, the choice implies an analytic approach that is determined in concert with a quantitative analytic approach. The same holds in reverse, where quantitative large-N analysis is conceived of as complementary to qualitative case studies.
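One simple way to make case selection endogenous to the quantitative model, in the spirit of the "prior stratification" described above, is to stratify candidate cases on their regression residuals: "typical" cases sit near the fitted line and "deviant" cases far from it. A hedged sketch on simulated data (the covariate, outcome, and sample size are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: an observed covariate x and outcome y for 50 agencies.
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)

# Fit OLS by least squares and compute residuals.
X = np.column_stack([np.ones(50), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# "Typical" cases have small |residual|; "deviant" cases have large
# |residual|. Selecting on residuals stratifies cases relative to the
# quantitative model itself rather than to raw variable values.
typical = np.argsort(np.abs(resid))[:2]
deviant = np.argsort(np.abs(resid))[-2:]
print(typical, deviant)
```

The same logic extends to stratifying on leverage, on values of an unquantified variable within matched strata, or on whatever quantity defines the particular econometric hole to be filled.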

This "two-way street" between methods necessitates co-creation of a single integrated empirical strategy involving multiple empirical strands. Mixed methods have normally been imagined to coexist like layers of wallpaper, one building on top of the other. This article argues they can coexist more like blending paint: the mixing needs to occur ex ante, before the paint is applied to the wall. The mixing is also likely to be iterated, with the desired hue achieved by careful examination and revision of research design before the first brush stroke, quantitative or qualitative, is made. In mixed methods work, case studies need not merely be a helpful addition, providing colorful illustration to econometric work. Qualitative and quantitative analyses can be co-designed in concert, with the whole greater than the sum of the parts.

Mutually supportive mixed methods are not a "method for all seasons," any more than is econometric analysis or any other method. But there is a potentially large number of contexts in which mutually supportive mixed methods make sense in the study of public agencies. For the field as a whole, adding mutually supportive mixed methods to the toolkit will expand both the breadth and rigor of analysis.

Many thanks to James Bisbee, Peter Hall, Rich Nielsen, Tom Pavone, David Steinberg, and Michael Woolcock for their helpful comments, and to Anthony Bertelli, Peter Hall, Michael Woolcock, and others for their encouragement to take up this project. I am also very thankful to Alison Decker for her research assistance, and to the JPART symposium editors and anonymous reviewers, whose contributions much improved this work.

This is arguably an elaboration of Seawright and Gerring's (2008) typology, which includes typical, diverse, extreme, deviant, influential, most similar, and most different cases.

This differs slightly from the original. In O'Toole and Meier's original formulation, C is a vector describing the context as a whole; in this article, C is a specific contextual feature. Similarly, M is, in this article, a specific management practice, rather than a vector of managerial actions. This adaptation also reverses β3 and β4 relative to the original, to put the terms of primary focus (β1, β2, and β3) in front. O and X are unchanged from the original, as is the functional form.

One solution here would be to employ a multilevel random effects model; however, there will be many situations where multilevel models either cannot be estimated or are inappropriate (e.g., because the need for unit fixed effects to control for what would otherwise be a source of bias is clear). This is dealt with in greater detail in the "Connective Tissue of Mutually Supportive Mixed Methods: Using Parallel Quantitative and Qualitative Approaches" section.

Plumper and Troeger (2007) put forward a three-stage procedure for, as they put it, "the estimation of time-invariant and rarely changing variables in panel data models with unit effects." The qualitative empirical strategy described later can be read as a complement to their vector decomposition model, with the degree to which we ought to update our priors (in a Bayesian sense) given additional weight by the combination of qualitative and vector decomposition empirical strategies.
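The three-stage vector decomposition procedure mentioned in this footnote can be sketched in a few lines: (1) a within regression on the time-varying regressor recovers unit effects; (2) the unit effects are regressed on the time-invariant variable, keeping the residual; (3) a pooled regression includes both the time-invariant variable and that residual. This is a minimal illustration on simulated data, not Plumper and Troeger's full estimator (it ignores, among other things, their standard-error corrections); all data and coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical panel: 20 units, 6 periods, one time-varying regressor x
# and one time-invariant regressor z per unit.
N, T = 20, 6
groups = np.repeat(np.arange(N), T)
z_unit = rng.normal(size=N)
x = rng.normal(size=N * T)
unit_eff = 1.5 * z_unit + rng.normal(size=N)  # unit effect correlated with z
y = 2.0 * x + np.repeat(unit_eff, T) + rng.normal(size=N * T)

# Stage 1: within (fixed-effects) regression of y on x; recover unit effects.
def demean(v):
    return v - np.array([v[groups == g].mean() for g in range(N)])[groups]

beta_fe = (demean(x) @ demean(y)) / (demean(x) @ demean(x))
u_hat = np.array([(y - beta_fe * x)[groups == g].mean() for g in range(N)])

# Stage 2: regress the estimated unit effects on the time-invariant z;
# the residual h is the part of the unit effect that z cannot explain.
Z = np.column_stack([np.ones(N), z_unit])
gamma, *_ = np.linalg.lstsq(Z, u_hat, rcond=None)
h = u_hat - Z @ gamma

# Stage 3: pooled OLS of y on x, z, and h, yielding a coefficient on z
# despite z being collinear with unit dummies in a pure fixed-effects model.
X3 = np.column_stack([np.ones(N * T), x, np.repeat(z_unit, T), np.repeat(h, T)])
beta3, *_ = np.linalg.lstsq(X3, y, rcond=None)
# Coefficients on x and z should land near the true values (2.0 and 1.5).
print(round(beta3[1], 2), round(beta3[2], 2))
```

The key move is stage 2: it decomposes the unit effect into a part explained by the time-invariant variable and an orthogonal remainder, which is exactly the decomposition a complementary qualitative strategy would probe case by case.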

The nine agencies in the database are the Asian Development Bank (AsDB), the UK's Department for International Development (DFID), the European Commission (EC), the German Society for International Cooperation (GIZ), the Global Fund to Fight AIDS, Tuberculosis and Malaria (GFATM), the International Fund for Agricultural Development (IFAD), the German Development Bank (KfW), the Japanese International Cooperation Agency (JICA), and the World Bank (WB).

Although the measure is in fact from multiple waves of the Paris Declaration Monitoring Surveys (OECD 2012), these surveys are quite proximate in time (2004, 2007, 2010); I conceive of them as multiple measures of the same construct rather than a true time series.

These dimensions include, but are not limited to, structural differences (e.g., independent agencies versus ministries) and cultural differences, for example, the tendency to bias evaluations upwards.

Adapted from Honig (2019), figure 2.

Honig (2018, chapter 6) provides some suggestive quantitative analysis by sector, but makes clear that sectors are largely poor proxies for external verifiability.

Adapted from Honig (2018), figure 1.2.

See Honig (2018, 2019) for citations for each of these quotes (some of which are anonymous) and a full list of interviewees. These case study paragraphs, and perhaps a few other sentences in this article, have minor textual overlap with Honig (2018); used with permission of the publisher.

Agency fixed effects are not the only way to address unobserved differences between units; random effects models can also partially account for these structural differences, and while these models cannot fully eliminate potential bias, they have other advantages. Clark and Linzer (2015) ably discuss the tradeoffs: random effects models use partial pooling, thus raising the possibility of some bias but reducing variance in expectation. The answer as to which side of the "bias-variance tradeoff" to choose rests in part on how likely it is that bias will be introduced by pooling. In cases where the key independent variable is at the unit (e.g., agency) level, as is a great deal of the potential omitted variable bias, bias seems the greater threat, and fixed (rather than random) effects are thus the methodological default. Clark and Linzer interestingly note as one of their "practical considerations" for using random effects that "it is very common for a researcher to want to include in the specification an important covariate of interest that does not vary within units" (p. 403). My view is that mutually supportive mixed methods can provide estimates of "covariates of interest" without incurring the clear risk of bias that often pertains in the empirical study of public agencies. That said, there are certainly settings in the empirical study of public agencies where concerns about variance ought rightly to outweigh those regarding bias, for example, when units are relatively homogeneous and are a small sample representing a larger whole (e.g., teams of police officers who are paired at random, in an observational study of a large police department). In these contexts, random effects might well be an appropriate way of dealing with multiunit (e.g., multiagency) data.
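The partial pooling that Clark and Linzer describe can be illustrated directly: a random-effects estimate shrinks each unit's mean toward the grand mean, trading a little bias for lower variance. A minimal sketch on simulated data (the group counts and variance components are hypothetical, and the variances are treated as known for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 10 agencies, 5 observations each, with true agency
# effects drawn from a common distribution (the random-effects assumption).
n_groups, n_per = 10, 5
true_effects = rng.normal(0, 1.0, n_groups)
y = np.repeat(true_effects, n_per) + rng.normal(0, 2.0, n_groups * n_per)
groups = np.repeat(np.arange(n_groups), n_per)

group_means = np.array([y[groups == g].mean() for g in range(n_groups)])
grand_mean = y.mean()

# Random effects partially pool each group mean toward the grand mean, with
# shrinkage governed by the ratio of between-group variance to total
# sampling-plus-between variance (here both assumed known).
sigma2_within = 2.0 ** 2
sigma2_between = 1.0 ** 2
shrink = sigma2_between / (sigma2_between + sigma2_within / n_per)
re_estimates = grand_mean + shrink * (group_means - grand_mean)

# The pooled estimates are pulled toward the grand mean (lower variance,
# possible bias) relative to the unpooled fixed-effects group means.
print(np.var(re_estimates) < np.var(group_means))  # True
```

In a real application the variance components would themselves be estimated, but the shrinkage logic, and hence the bias-variance tradeoff, is the same.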

To be clear, this is not a new or novel method, but rather perfectly conventional in panel data econometric analysis.

Available at https://academic.oup.com/jpart/pages/Impact_Factor . Articles as of mid-March 2018.

By coincidence, this measure is developed in Gilardi (2008) , the book discussed in this article’s introduction.

Common source bias is related to, but distinct from, broader omitted variable bias. Well explored in the public management and broader management literature (e.g., Favero and Bullock 2015; Meier and O'Toole 2013; Richardson, Simmering, and Sturman 2009), common source bias is a form of measurement error, stemming from a single "draw" measuring multiple variables rather than independent draws; to draw from Meier and O'Toole (2013), we might face overestimation of both organizational performance and management quality. But an omitted variable is also more likely when a single survey is taken. Even if both performance and management quality are accurately measured, they might both be temporarily high due to a temporary renewed focus on management quality, or recent attrition that led those most dissatisfied with management to exit and thus not participate in the survey, etc. Use of case studies as a mutually supportive mixed method can address both omitted variable and common source bias.
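The common source problem this footnote describes is easy to simulate: give two otherwise independent, accurately measured constructs a shared survey-level shock, and a spurious correlation appears between the measures. A hedged sketch (all variance magnitudes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

# A transient survey-level shock (e.g., respondent mood at survey time)
# that enters both measures because they come from the same "draw".
mood = rng.normal(size=n)
perf = rng.normal(size=n) + mood  # measured organizational performance
mgmt = rng.normal(size=n) + mood  # measured management quality

# The underlying constructs are independent by construction, but the
# shared shock induces a substantial positive correlation between the
# two measures.
r = np.corrcoef(perf, mgmt)[0, 1]
print(r > 0.3)  # True
```

A case study drawing on documents, interviews, and administrative records constitutes an independent "draw," which is why it can help diagnose whether such a shared shock is driving a survey-based association.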

Many thanks to Alison Decker for her work compiling this list. Citation counts as determined by Google Scholar in June 2018. Table 2 includes any article we identified that uses both qualitative and quantitative analysis, not merely articles that use econometric analysis and/or case studies. We did not examine every article in JPART in the past 15 years, only those picked up by a search string (e.g., "mixed methods" or "multiple methods" or the joint presence of "qualitative" and "quantitative"). As such, these are the five most cited articles we identified; it is possible we failed to identify the complete universe of mixed methods pieces.

This in turn has echoes of Adcock and Collier (2001) on shared measurement validity standards for qualitative and quantitative research.

That is, both Rosenbaum and Silber and Lieberman conceive of the analysis as iterative, but the former uses qualitative small-N research to inform the design of quantitative large-N research, while the latter uses quantitative large-N research to inform the design of qualitative small-N research.

Adcock, Robert, and David Collier. 2001. Measurement validity: A shared standard for qualitative and quantitative research. American Political Science Review 95: 529–46.

Aghion, Philippe, and Jean Tirole. 1997. Formal and real authority in organizations. Journal of Political Economy 105: 1–29.

Andrews, Rhys, Malcolm J. Beynon, and Aoife M. McDermott. 2016. Organizational capability in the public sector: A configurational approach. Journal of Public Administration Research and Theory 26: 239–58.

Barzelay, Michael, and Fred Thompson. 2010. Back to the future: Making public administration a design science. Public Administration Review 70: S295–7.

Bennett, Andrew, and Jeffrey Checkel. 2015. Process tracing. Cambridge, UK: Cambridge Univ. Press.

Biesenbender, Sophie, and Adrienne Héritier. 2014. Mixed-methods designs in comparative public policy research: The dismantling of pension policies. In Comparative policy studies, ed. Isabelle Engeli and Christine Rothmayr Allison, 237–64. London: Palgrave Macmillan.

Blatter, Joachim, and Till Blume. 2008. Co-variation and causal process tracing revisited: Clarifying new directions for causal inference and generalization in case study methodology. Qualitative Methods 6: 29–34.

Blattman, Christopher, Julian Jamison, Tricia Koroknay-Palicz, Katherine Rodrigues, and Margaret Sheridan. 2016. Measuring the measurement error: A method to qualitatively validate survey data. Journal of Development Economics 120: 99–112.

Boyne, George A., Kenneth J. Meier, Laurence J. O'Toole, Jr., and Richard M. Walker, eds. 2006. Public service performance: Perspectives on measurement and management. Cambridge, UK: Cambridge Univ. Press.

Brady, Henry E., and David Collier. 2004. Rethinking social inquiry. New York: Rowman & Littlefield.

Bullock, Justin B., Justin M. Stritch, and Hal G. Rainey. 2015. International comparison of public and private employees' work motives, attitudes, and perceived rewards. Public Administration Review 75: 479–89.

Center for Systemic Peace. 2014. State Fragility Index. Vienna: Center for Systemic Peace.

Clark, Tom S., and Drew A. Linzer. 2015. Should I use fixed or random effects? Political Science Research and Methods 3: 399–408.

Ennser-Jedenastik, Laurenz. 2016. The politicization of regulatory agencies: Between partisan influence and formal independence. Journal of Public Administration Research and Theory 26: 507–18.

Favero, Nathan, and Justin B. Bullock. 2015. How (not) to solve the problem: An evaluation of scholarly responses to common source bias. Journal of Public Administration Research and Theory 25: 285–308.

Favero, Nathan, Kenneth J. Meier, and Laurence J. O'Toole, Jr. 2016. Goals, trust, participation, and feedback: Linking internal management with performance outcomes. Journal of Public Administration Research and Theory 26: 327–43.

George, Alexander L., and Andrew Bennett. 2005. Case studies and theory development in the social sciences. Cambridge, MA: MIT Press.

Gerlak, Andrea, and Tanya Heikkila. 2011. Building a theory of learning in collaboratives: Evidence from the Everglades restoration program. Journal of Public Administration Research and Theory 21: 619–44.

Gerring, John. 2007. Case study research: Principles and practices. Cambridge: Cambridge Univ. Press.

Gerring, John, and Lee Cojocaru. 2016. Selecting cases for intensive analysis: A diversity of goals and methods. Sociological Methods & Research 45: 392–423.

Gilardi, Fabrizio. 2008. Delegation in the regulatory state: Independent regulatory agencies in action. Northampton, MA: Edward Elgar.

Greene, Jennifer C. 2007. Mixed methods in social inquiry, vol. 9. San Francisco: John Wiley & Sons.

Hall, Peter A. 2013. Tracing the progress of process tracing. European Political Science 12: 20–30.

Honig, Dan. 2018. Navigation by judgment: Why and when top-down management of foreign aid doesn't work. New York: Oxford Univ. Press.

Honig, Dan. Forthcoming 2019. When reporting undermines performance: The costs of politically constrained organizational autonomy in foreign aid implementation. International Organization.

Humphreys, Macartan, and Alan Jacobs. 2015. Mixing methods: A Bayesian approach. American Political Science Review 109: 653–73.

Ingold, Karin, and Philip Leifeld. 2016. Structural and institutional determinants of influence reputation: A comparison of collaborative and adversarial policy networks in decision making and implementation. Journal of Public Administration Research and Theory 26: 1–18.

Jilke, Sebastian, Gregg G. Van Ryzin, and Steven Van de Walle. 2016. Responses to decline in marketized public services: An experimental evaluation of choice overload. Journal of Public Administration Research and Theory 26: 421–32.

Johnson, R. Burke, Anthony J. Onwuegbuzie, and Lisa A. Turner. 2007. Toward a definition of mixed methods research. Journal of Mixed Methods Research 1: 112–33.

King, Gary, Robert Keohane, and Sidney Verba. 1994. Designing social inquiry. Princeton, NJ: Princeton Univ. Press.

Levy, Jack. 2008. Case studies: Types, designs, and logics of inference. Conflict Management and Peace Science 25: 1–18.

Lieberman, Evan. 2005. Nested analysis as a mixed-method strategy for comparative research. American Political Science Review 99: 435–52.

Lipsky, Michael. 1980. Street-level bureaucracy: Dilemmas of the individual in public service. New York: Russell Sage Foundation.

Lynn, Laurence E., Jr., Carolyn J. Heinrich, and Carolyn J. Hill. 2001. Improving governance: A new logic for empirical research. Washington, DC: Georgetown Univ. Press.

Mahoney, James. 2007. Qualitative methodology and comparative politics. Comparative Political Studies 40: 122–44.

Mahoney, James. 2010. After KKV. World Politics 62: 120–47.

Marvel, John D. 2016. Unconscious bias in citizens' evaluations of public sector performance. Journal of Public Administration Research and Theory 26: 143–58.

Meier, Kenneth, Simon Calmar Andersen, Laurence J. O'Toole, Jr., Nathan Favero, and Soren C. Winter. 2015. Taking managerial context seriously: Public management and performance in U.S. and Denmark schools. International Public Management Journal 18: 130–50.

Meier, Kenneth, and Laurence J. O'Toole, Jr. 2013. Subjective organizational performance and measurement error: Common source bias and spurious relationships. Journal of Public Administration Research and Theory 23: 429–56.

Mele, Valentina, and Paolo Belardinelli. Forthcoming 2018. Mixed methods in public administration research: Selecting, sequencing and connecting. Journal of Public Administration Research and Theory.

Mill, John Stuart. 1843. A system of logic, ratiocinative and inductive: Being a connected view of the principles of evidence and the methods of scientific investigation. London: John W. Parker, West Strand.

Morse, Janice M. 2010. Sampling in grounded theory. In The SAGE handbook of grounded theory, ed. Antony Bryant and Kathy Charmaz, 229–44.

Nielsen, Richard A. 2016. Case selection via matching. Sociological Methods and Research 45: 569–97.

Nohrstedt, Daniel. 2009. Do advocacy coalitions matter? Crisis and change in Swedish nuclear energy policy. Journal of Public Administration Research and Theory 20: 309–33.

Nowell, Branda, and Kate Albrecht. Forthcoming 2018. A reviewer's guide to qualitative rigor. Journal of Public Administration Research and Theory.

OECD. 2012. Aid effectiveness 2011: Progress in implementing the Paris Declaration. Paris: OECD Publishing.

O'Toole, Laurence, Jr., and Kenneth J. Meier. 2014. Public management, context, and performance: In quest of a more general theory. Journal of Public Administration Research and Theory 25: 237–56.

Pavone, Tommaso. 2017. Selecting cases for comparative sequential analysis: Novel uses for old methods. In The case for case studies, ed. Michael Woolcock, Jennifer Widner, and Daniel Ortega-Nieto, forthcoming. New York, NY: Cambridge Univ. Press.

Plumper, Thomas, and Vera E. Troeger. 2007. Efficient estimation of time-invariant and rarely changing variables in finite sample panel analyses with unit fixed effects. Political Analysis 15: 124–39.

Polanyi, Michael. 1966. The tacit dimension. Univ. of Chicago Press.

Raab, Jörg, Remco S. Mannak, and Bart Cambré. 2013. Combining structure, governance, and context: A configurational approach to network effectiveness. Journal of Public Administration Research and Theory 25: 479–511.

Richardson, Hettie A., Marcia J. Simmering, and Michael C. Sturman. 2009. A tale of three perspectives: Examining post hoc statistical techniques for detection and correction of common method variance. Organizational Research Methods 12: 762–800.

Rosenbaum, Paul R., and Jeffrey H. Silber. 2001. Matching and thick description in an observational study of mortality after surgery. Biostatistics 2: 217–32.

Seawright, Jason. 2016. Multi-method social science: Combining qualitative and quantitative tools. Cambridge, UK: Cambridge Univ. Press.

Seawright, Jason, and John Gerring. 2008. Case selection techniques in case study research: A menu of qualitative and quantitative options. Political Research Quarterly 61: 294–308.

Shangraw, Ralph F., Jr., Michael M. Crow, and E. Sam Overman. 1989. Public administration as a design science. Public Administration Review 49: 153–60.

Slater, Dan, and Daniel Ziblatt. 2013. The enduring indispensability of the controlled comparison. Comparative Political Studies 46: 1301–27.

Soss, Joe, Richard Fording, and Sanford F. Schram. 2011. The organization of discipline: From performance management to perversity and punishment. Journal of Public Administration Research and Theory 21: 203–32.

Stein, Jeremy C. 2002. Information production and capital allocation: Decentralized versus hierarchical firms. The Journal of Finance 57: 1891–1921.

Tashakkori, Abbas, and Charles Teddlie. 2003. Major issues and controversies in the use of mixed methods in the social and behavioral sciences. In Handbook of mixed methods in social and behavioral research, 3–50. London: Sage Publishing.

Tashakkori, Abbas, and John W. Creswell. 2007. Exploring the nature of research questions in mixed methods research. Journal of Mixed Methods Research 1: 207–11.

Walker, Richard M., George A. Boyne, and Gene A. Brewer. 2010. Public management and performance: Research directions. Cambridge, UK: Cambridge Univ. Press.

Witko, Christopher. 2011. Campaign contributions, access, and government contracting. Journal of Public Administration Research and Theory 21: 761–78.

Woolcock, Michael. 2013. Using case studies to explore the external validity of 'complex' development interventions. Evaluation 19: 229–48.

Woolcock, Michael. Forthcoming 2018. Reasons for using mixed methods in the evaluation of complex projects. In Philosophy and interdisciplinary social science: A dialogue, ed. Michiru Nagatsu and Attilia Ruzzene. London: Bloomsbury Academic.

  • Online ISSN 1477-9803
  • Print ISSN 1053-1858
  • Copyright © 2024 Public Management Research Association
  • About Oxford Academic
  • Publish journals with us
  • University press partners
  • What we publish
  • New features  
  • Open access
  • Institutional account management
  • Rights and permissions
  • Get help with access
  • Accessibility
  • Advertising
  • Media enquiries
  • Oxford University Press
  • Oxford Languages
  • University of Oxford

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  • Copyright © 2024 Oxford University Press
  • Cookie settings
  • Cookie policy
  • Privacy policy
  • Legal notice

This Feature Is Available To Subscribers Only

Sign In or Create an Account

This PDF is available to Subscribers Only

For full access to this pdf, sign in to an existing account, or purchase an annual subscription.

IMAGES

  1. 6 Types of Qualitative Research Methods

    qualitative methods for policy analysis case study research strategy

  2. Understanding Qualitative Research: An In-Depth Study Guide

    qualitative methods for policy analysis case study research strategy

  3. case study method qualitative research

    qualitative methods for policy analysis case study research strategy

  4. case study method of qualitative research

    qualitative methods for policy analysis case study research strategy

  5. Qualitative Research Methods: An Introduction

    qualitative methods for policy analysis case study research strategy

  6. Qualitative Research

    qualitative methods for policy analysis case study research strategy

VIDEO

  1. 2023 PhD Research Methods: Qualitative Research and PhD Journey

  2. Qualitative Approach

  3. Exploring Research Methodologies in the Social Sciences (4 Minutes)

  4. Qualitative and Quantitative Research Design

  5. Case Study Research

  6. 12

COMMENTS

The case study strategy, due to the abundance and variety of the corpus of data mobilised, and the research methods employed (qualitative, quantitative or mixed), most often allows for a rich description of the public policy(ies) being evaluated and the contexts of implementation.