• Research article
  • Open access
  • Published: 26 January 2017

Acceptability of healthcare interventions: an overview of reviews and development of a theoretical framework

  • Mandeep Sekhon   ORCID: orcid.org/0000-0002-5109-9536 1 ,
  • Martin Cartwright 1 &
  • Jill J. Francis 1  

BMC Health Services Research volume 17, Article number: 88 (2017)


It is increasingly acknowledged that ‘acceptability’ should be considered when designing, evaluating and implementing healthcare interventions. However, the published literature offers little guidance on how to define or assess acceptability. The purpose of this study was to develop a multi-construct theoretical framework of acceptability of healthcare interventions that can be applied to assess prospective (i.e. anticipated) and retrospective (i.e. experienced) acceptability from the perspective of intervention deliverers and recipients.

Two methods were used to select the component constructs of acceptability. 1) An overview of reviews was conducted to identify systematic reviews that claim to define, theorise or measure acceptability of healthcare interventions. 2) Principles of inductive and deductive reasoning were applied to theorise the concept of acceptability and develop a theoretical framework. Steps included (1) defining acceptability; (2) describing its properties and scope and (3) identifying component constructs and empirical indicators.

From the 43 reviews included in the overview, none explicitly theorised or defined acceptability. Measures used to assess acceptability focused on behaviour (e.g. dropout rates) (23 reviews), affect (i.e. feelings) (5 reviews), cognition (i.e. perceptions) (7 reviews) or a combination of these (8 reviews).

From the methods described above we propose a definition: Acceptability is a multi-faceted construct that reflects the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experienced cognitive and emotional responses to the intervention. The theoretical framework of acceptability (TFA) consists of seven component constructs: affective attitude, burden, perceived effectiveness, ethicality, intervention coherence, opportunity costs, and self-efficacy.

Despite frequent claims that healthcare interventions have assessed acceptability, it is evident that acceptability research could be more robust. The proposed definition of acceptability and the TFA can inform assessment tools and evaluations of the acceptability of new or existing interventions.


Acceptability has become a key consideration in the design, evaluation and implementation of healthcare interventions. Many healthcare interventions are complex in nature; for example, they can consist of several interacting components, or may be delivered at different levels within a healthcare organisation [ 1 ]. Intervention developers are faced with the challenge of designing effective healthcare interventions to guarantee the best clinical outcomes achievable with the resources available [ 2 , 3 ]. Acceptability is a necessary but not sufficient condition for effectiveness of an intervention. Successful implementation depends on the acceptability of the intervention to both intervention deliverers (e.g. patients, researchers or healthcare professionals) and recipients (e.g. patients or healthcare professionals) [ 4 , 5 ]. From the patient’s perspective, the content, context and quality of care received may all have implications for acceptability. If an intervention is considered acceptable, patients are more likely to adhere to treatment recommendations and to benefit from improved clinical outcomes [ 6 , 7 ]. From the perspective of healthcare professionals, if the delivery of a particular intervention to patients is considered to have low acceptability, the intervention may not be delivered as intended (by intervention designers), which may have an impact on the overall effectiveness of the intervention [ 8 , 9 ].

In the United Kingdom, the Medical Research Council (MRC) has published three guidance documents for researchers and research funders on appropriate methods for designing and evaluating complex interventions [ 10 – 12 ]. The number of references to acceptability has increased with each guidance publication, reflecting the growing importance of this construct. The 2000 MRC guidance document makes no reference to acceptability, whereas the 2015 guidance refers to acceptability 14 times but lacks a definition and fails to provide clear instructions on how to assess acceptability.

The 2015 guidance focuses on conducting process evaluations of complex interventions. It offers examples of how patients’ acceptability may be assessed quantitatively, by administering measures of acceptability or satisfaction, and qualitatively, by asking probing questions focused on understanding how patients are interacting with the intervention [ 12 ]. Nevertheless, it fails to offer a definition of acceptability or specific materials for operationalising it. Without a shared understanding of what acceptability refers to, it is unclear how intervention developers are to assess acceptability for those receiving and delivering healthcare interventions.

Attempts to define acceptability

Defining acceptability is not a straightforward matter. Definitions within the healthcare literature vary considerably, highlighting the ambiguity of the concept. Specific examples include the terms ‘treatment acceptability’ [ 13 – 15 ] and ‘social acceptability’ [ 16 – 18 ]. These terms indicate that acceptability can be considered from an individual perspective but may also reflect a more collectively shared judgement about the nature of an intervention.

Staniszewska and colleagues (2010) argue that social acceptability refers to “patients’ assessment of the acceptability, suitability, adequacy or effectiveness of care and treatment” ([ 18 ], p.312). However, this definition is partly circular, as it states that social acceptability entails acceptability. These authors also omit any guidance on how to measure patients’ assessment of care and treatment.

Sidani et al. (2009) propose that treatment acceptability is dependent on patients’ attitude towards treatment options and their judgement of perceived acceptability prior to participating in an intervention. Factors that influence patients’ perceived acceptability include the intervention’s “appropriateness in addressing the clinical problem, suitability to individual life style, convenience and effectiveness in managing the clinical problem” ([ 14 ], p.421). Whilst this conceptualisation of treatment acceptability can account for patients’ willingness to participate in an intervention and to complete treatment, it implies a static evaluation of acceptability. Others argue that perceptions of acceptability may change with actual experience of the intervention [ 19 ]. For example, the process of participating in an intervention, the content of the intervention, and the perceived or actual effectiveness of the intervention are all likely to influence patients’ perceptions of acceptability.

Theorising acceptability

The inconsistency in defining concepts can impede the development of valid assessment instruments [ 20 ]. Theorising the concept of acceptability would provide the foundations needed to develop assessment tools of acceptability.

Within the disciplines of health psychology, health services research and implementation science the application of theory is recognised as enhancing the development, evaluation and implementation of complex interventions [ 10 , 11 , 21 – 25 ]. Rimer and Glanz (2005) explain “a theory presents a systematic way of understanding events or situations. It is a set of concepts, definitions, and propositions that explain or predict these events or situations by illustrating the relationship between variables” ([ 26 ] p.4).

We argue that theorising the construct of acceptability will lead to a better understanding of: (1) what acceptability is (or is proposed to be) (specifically whether acceptability is a unitary or multi-component construct); (2) if acceptability is a multi-component construct, what its components are (or are proposed to be); (3) how acceptability as a construct is proposed to relate to other factors, such as intervention engagement or adherence; and (4) how it can be measured.

Aims and objectives

The aim of this article is to describe the inductive (empirical) and deductive (theoretical) methods applied to develop a comprehensive theoretical framework of acceptability. This is presented in two sequential studies. The objective of the first study was to review current practice and complete an overview of systematic reviews identifying how the acceptability of healthcare interventions has been defined, operationalised and theorised. The objective of the second study was to supplement evidence from study 1 with a deductive approach to propose component constructs in the theoretical framework of acceptability.

Study 1: Overview of reviews

Preliminary scoping searches identified no existing systematic review focused solely on the acceptability of healthcare interventions. However, systematic reviews were identified that considered the acceptability of healthcare and non-healthcare interventions alongside other factors such as effectiveness [ 27 ], efficacy [ 28 ] and tolerability [ 29 ]. We therefore decided to conduct an overview of systematic reviews of healthcare interventions that have included a focus on acceptability, alongside other factors (e.g. effectiveness, feasibility).

Search strategy

Systematic reviews published from May 2000 (the 2000 MRC guidance was published in April 2000) to February 2016 were retrieved through a single systematic literature search conducted in two phases (the initial phase 1 search was conducted in February 2014 and updated in phase 2 in February 2016). Two search strategies were applied in both phases. The first strategy was applied to the Cochrane Database of Systematic Reviews (CDSR), based on the appearance of the truncated term “acceptab*” in article titles. The second involved applying the relevant systematic review filter (Additional file 1 ) to the search engines OVID (Medline, Embase) and EBSCO Host (PsycINFO), and combining the review filter with the appearance of the term “acceptab*” in article titles. By searching for “acceptab*” within the article title only (rather than within the abstract or full text), we ensured that only reviews focused on acceptability as a key variable would be identified. Only reviews published in English were included, as the research question specifically considered the word “acceptability”; this word may have different shades of meaning when translated into other languages, which may in turn affect the definition and measurement issues under investigation.

Screening of citations

Duplicates were removed in Endnote. All abstracts were reviewed by a single researcher (MS) against the inclusion and exclusion criteria (Table  1 ). To assess reliability of the screening process, another researcher (MC) independently reviewed 10% of the abstracts. There was 100% agreement on the abstracts included for full text review.
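The 10% double-screening check described above amounts to a simple percent-agreement calculation between the two screeners. A minimal sketch (the include/exclude decision lists below are hypothetical, not the study's data):

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of screening decisions on which two raters agree."""
    if len(rater_a) != len(rater_b):
        raise ValueError("decision lists must be the same length")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical include/exclude decisions for a 10% reliability sample
ms_decisions = ["include", "exclude", "exclude", "include", "exclude"]
mc_decisions = ["include", "exclude", "exclude", "include", "exclude"]
print(percent_agreement(ms_decisions, mc_decisions))  # 1.0, i.e. the 100% agreement reported
```

Raw percent agreement suffices when agreement is perfect, as here; with imperfect agreement a chance-corrected statistic such as Cohen's kappa would usually be reported instead.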

Full text review and data extraction

One researcher (MS) retrieved all full text papers that met the inclusion criteria and extracted data using an extraction form. Two additional researchers (JF and MC) independently reviewed 10% of the included systematic reviews. The researchers extracted information on how acceptability had been defined, whether acceptability had been theorised, and when and how acceptability had been assessed. There were no disagreements in data extraction.

Assessment of quality

No quality assessment tool was applied, as even poor-quality systematic reviews might include information relevant to the study aims and objectives.

Definitions of acceptability: consensus group exercises

To identify how acceptability has been defined, one researcher (MS) extracted definitions from each of the systematic reviews. Where definitions of acceptability were unclear, a reasonable level of inference was used to identify an implicit definition, i.e. where review authors implied their understanding of acceptability without directly proposing a definition (see the results section for examples of such inferences).

To check the reliability of the coding of extracted text reflecting implicit or explicit definitions, seven research psychologists (including the three authors) were asked to classify the extracted text into the following categories: (1) Conceptual Definition (i.e. an abstract statement of what acceptability is); (2) Operational Definition (i.e. a concrete statement of how acceptability is measured); (3) Uncertain; and (4) No Definition. The consensus group was allowed to select one or more options that they considered applicable to each definition. All definitions from the included systematic review papers were extracted, tabulated and presented to the group, together with definitions of “conceptual” and “operational”. Explanations of these categories are presented in Table  2 . One researcher (MS) facilitated a short discussion at the beginning of the task to ensure participants understood the “conceptual” and “operational” definitions. The review authors subsequently repeated the same exercise for extracted definitions from the updated phase 2 search.
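Because judges could endorse more than one category per extract, each classification decision reduces to tallying endorsements and applying a majority threshold. A sketch of that tallying logic (the votes shown are invented for illustration; the thresholds used in the study are reported in the results):

```python
from collections import Counter

CATEGORIES = ["conceptual", "operational", "uncertain", "no definition"]

def classify(endorsements, threshold):
    """Return the categories endorsed by at least `threshold` judges.

    `endorsements` is one list per judge; judges may select more than
    one category, so each judge's selection is itself a list.
    """
    counts = Counter(cat for judge in endorsements for cat in set(judge))
    return [cat for cat in CATEGORIES if counts[cat] >= threshold]

# Hypothetical panel of seven judges rating one extracted definition
votes = [["operational"]] * 5 + [["uncertain"]] * 2
print(classify(votes, threshold=4))  # ['operational']
```

An extract for which no category reaches the threshold yields an empty list, corresponding to the "not reliably classified" outcome described in the results.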

No quantitative synthesis was conducted. All extracted data were analysed by applying the thematic synthesis approach [ 30 ].

Study 2: Development of a theoretical framework of acceptability

The methods applied to develop theory are not always described systematically in the healthcare and psychology literature [ 31 ]. Broadly, the most common approaches are data-driven (bottom-up/inductive) and theory-driven (top-down/deductive) processes [ 32 – 34 ]. The data-driven process focuses on observations from empirical data to form theory, whereas the theory-driven process works on the premise of applying existing theory in an effort to understand data. The process of theorising is enhanced when inductive and deductive processes are combined [ 35 , 36 ]. To theorise the concept of acceptability, we applied both inductive and deductive processes, taking an approach similar to that described by Hox [ 33 ].

Hox proposed that, in order to theorise, researchers must (1) decide on the concept for measurement; (2) define the concept; (3) describe the properties and scope of the concept (and how it differs from other concepts); and (4) identify the empirical indicators and subdomains (i.e. constructs) of the concept. We describe below how steps 1-4 were applied in developing a theoretical framework of acceptability.

Step 1: Concept for measurement

We first agreed on the limits of the construct to be theorised: acceptability of healthcare interventions.

Step 2: Defining the concept

To define the concept of acceptability we reviewed the results of the overview of reviews, specifically the conceptual and operational definitions identified by both consensus group exercises and the variables reported in the behavioural and self-report measures (identified from the included systematic reviews). Qualitatively synthesising these definitions, we proposed the following conceptual definition of acceptability:

A multi-faceted construct that reflects the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experienced cognitive and emotional responses to the intervention.

This definition incorporates the component constructs of acceptability (cognitive and emotional responses) and also provides a hypothesis (cognitive and emotional responses are likely to influence behavioural engagement with the intervention). This working definition of acceptability can be operationalised for the purpose of measurement.

Step 3: Describing the properties and scope of the concept

Based on the conceptual definition, we identified the properties and scope of the construct of acceptability, using inductive and deductive methods to determine which constructs best represented its core empirical indicators.

Inductive methods

The application of inductive methods involved reviewing the empirical data that emerged from the overview of reviews. First, variables identified in the consensus group task to define acceptability, and the variables reported in the observed behavioural measures and self-report measures of acceptability, were grouped together according to similarity. Next, we considered what construct label best described each of the variable groupings. For example, the variables “attitudinal measures” and “attitudes towards the intervention (how patients felt about the intervention)” were assigned the construct label “affective attitude”. Figure  1 presents our conceptual definition and component constructs of acceptability, offering examples of the variables they incorporate. This forms our preliminary theoretical framework of acceptability, TFA (v1).

The theoretical framework of acceptability (v1). Note: In bold font are the labels we assigned to represent the examples of the variables applied to operationalise and assess acceptability, based on the results from the overview (italic font). *The two control constructs emerged deductively from existing theoretical models

Deductive methods

The deductive process was conducted iteratively using the following three steps:

(1) We considered whether the coverage of the preliminary TFA (v1) could usefully be extended by reviewing the identified component constructs of acceptability against our conceptual definition of acceptability and the results of the overview of reviews.

(2) We considered a range of theories and frameworks from the health psychology and behaviour change literatures that have been applied to predict, explain or change health related behaviour.

(3) We reviewed the constructs from these theories and frameworks for their applicability to the TFA. Examples of theories and frameworks discussed include the Theory of Planned Behaviour (TPB) [ 37 ] (e.g. the construct of Perceived Behavioural Control) and the Theoretical Domains Framework (TDF) [ 38 ] (e.g. the constructs within the Beliefs About Capabilities domain). We discussed whether including additional constructs would add value to the framework in assessing acceptability, specifically whether the additional constructs could be measured as cognitive and/or emotional responses to the intervention. The TPB and the TDF focus on beliefs about performing a behaviour, whereas the TFA reflects a broader set of beliefs about the value of a healthcare intervention. We concluded that a more relevant theory, the Common Sense Model (CSM) of self-regulation of health and illness [ 37 ], provided a better fit with the TFA. The CSM focuses on beliefs about a health threat and the coping procedures that might control the threat. This approach is thus consistent with the focus of the TFA on acceptability of healthcare interventions. The CSM proposes that, in response to a perceived health threat, individuals spontaneously generate five kinds of cognitive representation of the illness, based around identity (i.e. associated symptoms), timeline, cause, control/cure, and consequences. Moss-Morris and colleagues [ 38 ] distinguished between personal control (i.e. the extent to which individuals perceive that they are able to control their symptoms or cure the disease) and treatment control (i.e. the extent to which the individual believes the treatment will be effective in curing the illness). The third step in the deductive process resulted in the inclusion of both treatment control and personal control as additional constructs within the TFA (v1) (Fig.  1 ). With these additions the framework appeared to include a parsimonious set of constructs that provided good coverage of acceptability as defined.

Step 4: Identifying the empirical indicators for the concept’s constructs

Having identified the component constructs of acceptability, we identified or wrote formal operational definitions for each of the constructs within the TFA (v1). This was done to check that the constructs were conceptually distinctive. We first searched the psychological literature for definitions. If a clear definition for a construct was not available in the psychological literature, standard English language dictionaries and other relevant disciplines (e.g. health economic literature for a definition of “opportunity costs”) were searched. For each construct, a minimum of two definitions were identified. Extracted definitions for the component constructs were required to be adaptable to refer directly to “the intervention” (see results section for examples). This process resulted in revisions to the TFA (v1) and the development of the revised TFA (v2).

Characteristics of included reviews

The database searches identified 1930 references, with 1637 remaining after de-duplication. After screening titles and abstracts, 53 full texts were retrieved for further examination. Of these, ten articles were excluded for the following reasons: seven focused on children’s and adolescents’ acceptability of the intervention, one could not be obtained in English, one focused on the social validity of treatment measures in education psychology, and one focused on the psychometric properties of exercise tests. Thus, a total of 43 publications were included in this overview (Additional file 2 ). The breakdown of the search process for phase 1 and phase 2 is represented in Fig.  2 .

PRISMA diagram of included papers for searches completed in February 2014 and 2016

The methodological quality of individual studies was assessed in 29 (67%) of the 43 reviews. The Cochrane Tool of Quality Assessment was applied most frequently [ 39 ] (18 reviews: 62%). Other assessment tools applied included the Jadad Scale [ 40 ] (three reviews: 10%), the Critical Appraisal Skills Programme (CASP) guidelines [ 41 ] (three reviews: 10%), the CONSORT guidelines [ 41 ] (two reviews: 6%), the GRADE scale [ 42 ] (one review: 3%), the Effective Public Health Practice Project (EPHPP) quality assessment tool [ 43 ] (one review: 3%) and the United States Preventive Services Task Force grading system [ 44 ] (one review: 3%).

Assessment of acceptability

Twenty-three (55%) reviews assessed acceptability using objective measures of behaviour as indicators: dropout rates, all-cause discontinuation, reasons for discontinuation and withdrawal rates (Additional file 3 ). Twelve (26%) of the reviews reported that they assessed acceptability using self-report measures, which included responses to hypothetical scenarios, satisfaction measures, attitudinal measures, individuals’ reports of their perceptions of, and experiences with, the intervention, and open-ended interview questions (Additional file 4 ). None of the reviews specified a threshold criterion, i.e. the number of participants that would need to withdraw from or discontinue treatment for the intervention to be considered unacceptable.

Eight (19%) reviews assessed acceptability using both objective measures of behaviour and self-report measures. These included two reviews measuring adherence and satisfaction [ 45 , 46 ], three reviews combining dropout rates, take-up rates and reasons for discontinuation with a satisfaction measure [ 47 – 49 ], one review combining the time taken for wound healing with a measure of satisfaction and comfort [ 29 ], and two reviews using semi-structured interviews to explore participants’ experience of the intervention alongside intervention take-up rates [ 50 , 51 ].

We also extracted data on the time at which studies in each of the reviews assessed acceptability relative to the delivery of the intervention (Additional file 5 ). Two of the reviews (5%) assessed acceptability pre-intervention, which involved participants agreeing to take part in screening for a brief alcohol intervention [ 52 ] and willingness to participate in HIV self-testing [ 53 ]. Seven (16%) of the reviews assessed acceptability during the intervention delivery period, while 17 (40%) assessed acceptability post-intervention. Fourteen reviews (33%) did not report when acceptability was measured, and in three (7%) of the reviews it was unclear when acceptability was measured. Within these three reviews, it was unclear whether interpretations of intervention acceptability were based on anticipated (i.e. prospective) acceptability or experienced (i.e. concurrent or retrospective) acceptability.
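The timing breakdown above can be cross-checked against the 43 included reviews with a quick tally (counts taken from the text; the rounded percentages reproduce those reported):

```python
# Counts of reviews by timing of acceptability assessment (from the text)
timing_counts = {
    "pre-intervention": 2,
    "during intervention": 7,
    "post-intervention": 17,
    "not reported": 14,
    "unclear": 3,
}

total = sum(timing_counts.values())
print(total)  # 43, matching the number of included reviews
for label, n in sorted(timing_counts.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {n}/{total} ({n / total:.0%})")
```

The five categories are mutually exclusive here, so the counts must sum exactly to the number of included reviews; a mismatch would flag a data-extraction error.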

Use of theory

There was no mention of theory in relation to acceptability in any of the 43 reviews. None of the review authors proposed any link between their definitions (when present) and assessments of acceptability and existing theory or theoretical models (i.e. scientific and citable theories/models). Nor did any of the reviews propose a link between implicit theories and their definitions and assessments of acceptability, or report theory emerging during the studies included in the reviews; this is unsurprising, as an implicit theory is, by definition, not articulated.

Definitions of acceptability: consensus group exercise

To be classified as operational or conceptual, an extracted definition of acceptability required endorsement from a minimum of four of the seven judges. Of the 29 extracts of text (phase 1 search results), the expert group identified 17 as operational definitions. Operational definitions included measurable factors such as dropout rates, all-cause discontinuation, treatment discontinuation and measures of satisfaction. Some reviews indicated that acceptability was measured according to a number of indicators, such as effectiveness and side effects. The remaining 12 extracted definitions were not reliably classified as either operational or conceptual and were disregarded. For the 14 extracted definitions based on the phase 2 search results, two endorsements (from the three judges) were required for a definition to be considered operational or conceptual. Seven definitions were considered operational definitions of acceptability, three were identified as conceptual and four extracts were not reliably classified as either. Conceptual definitions included: “acceptability, or how the recipients of (or those delivering the intervention) perceive and react to it” ([ 49 ] p. 2), “…patients reported being more willing to be involved” ([ 54 ] p. 2535) and “women were asked if they were well satisfied, unsatisfied or indifferent or had no response” with the intervention ([ 55 ] p. 504).

Study 2: Theoretical framework of acceptability

The process of identifying or writing explicit definitions for each of the proposed constructs in the theoretical framework of acceptability resulted in revisions to the TFA (v1) and the development of the revised TFA (v2) as we came to recognise inherent redundancy and overlap. Figure  3 presents the TFA (v2) comprising seven component constructs.

The theoretical framework of acceptability (v2) comprising seven component constructs. Note: The seven component constructs are presented alphabetically with their anticipated definitions. The extent to which they may cluster or influence each of the temporal assessments of acceptability is an empirical question

The inclusion of affective attitude as a construct in the TFA (v2) is in line with the findings of the overview of reviews, in which measures of attitude have been used to assess acceptability of healthcare interventions. Affective attitude is defined as “how an individual feels about taking part in an intervention”. The definition of burden was influenced by the Oxford dictionary definition of burden as a “heavy load”. We define burden as “the perceived amount of effort that is required to participate in the intervention”. The TFA construct of burden focuses on the burden associated with participating in the intervention (e.g. participation requires too much time, expense or cognitive effort, indicating the burden is too great) rather than the individual’s confidence in engaging in the intervention (see the definition of self-efficacy below).

Drawing on the health economics literature, opportunity costs are defined as “the extent to which benefits, profits, or values must be given up to engage in an intervention”. We changed the construct label “ethical consequences” to “ethicality”, based on the Oxford dictionary definition of ethical as “morally good or correct”. In the TFA (v2), ethicality is defined as “the extent to which the intervention has good fit with an individual’s value system”.

On reviewing the control items within the Illness Perception Questionnaire–Revised (IPQ-R), we realised that all items focus on an individual’s perceived control of the illness; for example, “there is a lot I can do to control my symptoms” ([ 56 ], p. 5). These items did not reflect the construct of personal control as we intended. We therefore considered how the relationship between confidence and personal control has been defined. Within the psychology literature, the construct of self-efficacy has been defined in relation to confidence: numerous authors have proposed that self-efficacy reflects confidence in the ability to exert control over one’s own motivation, behaviour and social environment [ 57 ]. We therefore considered a body of literature that groups control constructs together [ 38 ]. Self-efficacy is often operationalised as an individual’s confidence in his or her capability of performing a behaviour [ 58 , 59 ]. In the TFA (v2) we define the construct as “the participant’s confidence that they can perform the behaviour(s) required to participate in the intervention”.

The construct “intention” was removed from TFA (v2). This decision was taken upon a review of the extracted definitions of intention against our conceptual definition of acceptability. The Theory of Planned Behaviour [ 37 ] definition of intention states, “Intentions are assumed to capture the motivational factors that influence a behaviour; they are indications of how hard people are willing to try, of how much of an effort they are planning to exert, in order to perform the behaviour” ([ 37 ], p. 181). We propose that all other constructs within the TFA (v2) could be predictors of intention (e.g. willingness to participate in an intervention). If acceptability (assessed by measuring the component constructs in the TFA) is proposed to be a predictor of intention (to engage in the intervention), to avoid circularity it is important to retain a distinction between acceptability and intention.

We reviewed the definitions of the component constructs in TFA (v2) against our conceptual definition of acceptability to consider whether we were overlooking any important constructs that could further enhance the framework of acceptability. Drawing on our knowledge of health psychology theory we discussed how perceptions of acceptability may be influenced by participants’ and healthcare professionals’ understanding of a healthcare intervention and how it works in relation to the problem it targets. As a result, we propose an additional construct that we labelled “intervention coherence”. Our definition for this construct was informed by reviewing the illness perceptions literature. Moss-Morris et al., defined “illness coherence” as “the extent to which a patient’s illness representation provided a coherent understanding of the illness” (p. 2 [ 56 ]). Applying this definition within the TFA (v2), the construct of intervention coherence reflects an individual’s understanding of the perceived level of ‘fit’ between the components of the intervention and the intended aim of the intervention. We define intervention coherence as “the extent to which the participant understands the intervention, and how the intervention works”. Intervention coherence thus represents the face validity of the intervention to the recipient or deliverer.

Next we considered the applicability and relevance of the construct label “experience” for inclusion in the TFA (v2). Four of the constructs (affective attitude, burden, opportunity costs and perceived effectiveness) could include a definition that referred to acceptability of the intervention as experienced (Additional file 6 ) (e.g. opportunity costs: the benefits, profits, or values that were given up to engage in the intervention) as well as a definition that referred to the intervention as anticipated (as defined above). In TFA (v1) ‘experience’ was being used to distinguish between components of acceptability measured pre- or post-exposure to the intervention. In this sense, experience is best understood as a characteristic of the assessment context rather than a distinct construct in its own right. We therefore did not include ‘experience’ as a separate construct in the TFA (v2). However, the distinction between anticipated and experienced acceptability is a key feature of the TFA (v2). We propose that acceptability can be assessed from two temporal perspectives (i.e. prospective/forward-looking; retrospective/backward-looking) and at three different time points in relation to the intervention delivery period. The time points are (1) pre-intervention delivery (i.e. prior to any exposure to the intervention), (2) during intervention delivery (i.e. concurrent assessment of acceptability; when there has been some degree of exposure to the intervention and further exposure is planned), and (3) post-intervention delivery (i.e. following completion of the intervention or at the end of the intervention delivery period when no further exposure is planned). This feature of the TFA is in line with the findings of the overview of reviews, in which review authors had described the time at which acceptability was assessed as pre-intervention, during the intervention and post-intervention.

We have presented the development of a theoretical framework of acceptability that can be used to guide the assessment of acceptability from the perspectives of intervention deliverers and recipients, prospectively and retrospectively. We propose that acceptability is a multi-faceted construct, represented by seven component constructs: affective attitude, burden, perceived effectiveness, ethicality, intervention coherence, opportunity costs, and self-efficacy.
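The seven component constructs can be represented concretely in code. The sketch below is purely illustrative (our assumption, not the authors' validated instrument): it holds the construct labels named above and averages hypothetical 1-5 ratings, one per construct.

```python
# The seven TFA component constructs, as named in the framework.
TFA_CONSTRUCTS = (
    "affective attitude",
    "burden",
    "perceived effectiveness",
    "ethicality",
    "intervention coherence",
    "opportunity costs",
    "self-efficacy",
)

def mean_construct_score(construct_scores):
    """Average hypothetical 1-5 ratings keyed by TFA construct label.

    The scores and the simple-mean aggregation are illustrative assumptions;
    the TFA itself does not prescribe a scoring rule.
    """
    unknown = set(construct_scores) - set(TFA_CONSTRUCTS)
    if unknown:
        raise ValueError(f"unknown construct label(s): {unknown}")
    return sum(construct_scores.values()) / len(construct_scores)
```

For example, `mean_construct_score({"burden": 4, "ethicality": 2})` returns 3.0, while an unrecognised label raises an error.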

Overview of reviews

To our knowledge, this overview represents the first systematic approach to identifying how the acceptability of healthcare interventions has been defined, theorised and assessed. Most definitions offered within the systematic reviews were operational definitions of acceptability: number of dropouts, treatment discontinuation and other measurable variables such as side effects, satisfaction and uptake rates were used to infer the review authors’ definitions of acceptability. Measures applied in the reviews were mainly measures of observed behaviour. Whilst measures of observed behaviour do give an indication of how many participants initially agree to participate in a trial versus how many actually complete the intervention, reasons for discontinuation or withdrawal are often not reported. There are several reasons why patients withdraw their participation that may or may not be associated with acceptability of the intervention. For example, a participant may believe the intervention itself is acceptable; however, they may disengage from the intervention if they believe that the treatment has sufficiently ameliorated or cured their condition and is no longer required.

In the overview, only eight of 43 reviews combined observed behavioural and self-report measures in their assessments of acceptability. A combination of self–report measures and observed behaviour measures applied together may provide a clearer evaluation of intervention acceptability.
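One way to report behavioural and self-report indicators side by side, rather than inferring acceptability from dropout alone, is sketched below. The function and its output format are our illustrative assumptions, not a method prescribed in the overview.

```python
def acceptability_summary(enrolled, completed, self_report_scores):
    """Combine a behavioural indicator (completion rate) with a self-report
    indicator (mean of hypothetical 1-5 acceptability ratings).

    Reporting both avoids treating dropout alone as a proxy for
    (un)acceptability, since participants may withdraw for unrelated reasons.
    """
    completion_rate = completed / enrolled
    mean_self_report = sum(self_report_scores) / len(self_report_scores)
    return {
        "completion_rate": completion_rate,
        "mean_self_report": mean_self_report,
    }
```

For instance, a trial with 80 of 100 participants completing, but high self-reported acceptability, would be summarised as `{"completion_rate": 0.8, "mean_self_report": 4.0}` rather than as a single dropout figure.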

The overview shows that acceptability has sometimes been confounded with the construct of satisfaction. This is evident from the reviews that claim to have assessed acceptability using measures of satisfaction. However, while satisfaction with a treatment or intervention can only be assessed retrospectively, acceptability of a treatment or intervention can be assessed either prospectively or retrospectively. We therefore propose that acceptability is distinct from satisfaction, as individuals can report (anticipated) acceptability prior to engaging in an intervention; indeed, we argue that acceptability can and should be assessed at this stage.

There is evidence that acceptability can be assessed prior to engaging in an intervention [ 14 ]. Sidani and colleagues [ 14 ] propose that there are several factors that can influence participants’ perceptions of the acceptability of the intervention prior to participating in the intervention, which they refer to as treatment acceptability. Factors such as participants’ attitudes towards the intervention, appropriateness, suitability, convenience and perceived effectiveness of the intervention have been considered as indicators of treatment acceptability.

Theoretical framework of acceptability

The overview of reviews revealed no evidence of the development or application of theory as the basis for either operational or conceptual definitions of acceptability. This is surprising given that acceptability is not simply an attribute of an intervention but is rather a subjective evaluation made by individuals who experience (or expect to experience) or deliver (or expect to deliver) an intervention. The results of the overview highlight the need for a clear, consensual definition of acceptability. We therefore sought to theorise the concept of acceptability in order to understand what acceptability is (or is proposed to be) and what its components are (or are proposed to be).

The distinction between prospective and retrospective acceptability is a key feature of the TFA, reflecting the overview of reviews results, which showed that acceptability has been assessed before, during and after intervention delivery. We contend that prior to experiencing an intervention both patients and healthcare professionals can form judgements about whether they expect the intervention to be acceptable or unacceptable. These judgements may be based on the information provided about the intervention, or other factors outlined by Sidani et al. [ 14 ] in their conceptualisation of treatment acceptability. Assessment of anticipated acceptability prior to participation can highlight which aspects of the intervention could be modified to increase acceptability, and thus participation.

Researchers need to be clear about the purpose of acceptability assessments at different time points (i.e. pre-, during or post-intervention) and the stated purpose should be aligned to the temporal perspective adopted (i.e. prospective or retrospective acceptability). For example, when evaluating acceptability during the intervention delivery period (i.e. concurrent assessment) researchers have the option of assessing the experienced acceptability up to this point in time or assessing the anticipated acceptability in the future. Different temporal perspectives change the purpose of the acceptability assessment and may change the evaluation, e.g. when assessed during the intervention delivery period an intervention that is initially difficult to adjust to may have low experienced acceptability but high anticipated acceptability. Similarly post-intervention assessments of acceptability may focus on experienced acceptability based on participants’ experience of the intervention from initiation through to completion, or on anticipated acceptability based on participants’ views of what it would be like to continue with the intervention on an on-going basis (e.g. as part of routine care). These issues are outside the scope of this article but we will elaborate further in a separate publication presenting our measures of the TFA (v2) constructs.
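The temporal structure described above can be made explicit. In this sketch (the labels follow the framework; the function itself is our illustrative assumption), only prospective assessment is meaningful pre-intervention, whereas both perspectives are available during and after delivery:

```python
# Which temporal perspectives are available at each assessment time point.
# Pre-intervention, participants have no experience to report on, so only
# anticipated (prospective) acceptability can be assessed.
VALID_PERSPECTIVES = {
    "pre": {"prospective"},
    "during": {"prospective", "retrospective"},
    "post": {"prospective", "retrospective"},
}

def can_assess(time_point, perspective):
    """Return True if the given perspective is assessable at this time point."""
    return perspective in VALID_PERSPECTIVES[time_point]
```

So `can_assess("pre", "retrospective")` is False, while a post-intervention assessment may be either retrospective (experienced acceptability) or prospective (anticipated acceptability of continuing, e.g. in routine care).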

Limitations

Although we have aimed to be systematic throughout the process, certain limitations should be acknowledged. The overview of reviews included systematic review papers that claimed to assess the acceptability of an intervention. It is possible that some papers were not identified by the search strategy as some restrictions were put in place to make the overview feasible. Nonetheless, the overview does provide a useful synthesis of how acceptability of healthcare interventions has been defined, assessed and theorised in systematic reviews of the effectiveness of healthcare interventions. In particular, the review highlights a distinct need to advance acceptability research.

A key objective of this paper was to describe the procedures by which the TFA was developed. Often the methods applied to theorising are not clearly articulated or reported within the literature [ 31 ]. We have been transparent in reporting the methods we applied to develop the TFA. Our work in theorising the concept of acceptability follows the process outlined by Hox [ 33 ]. However, the theorising process was also iterative, as we continuously reviewed the results from the overview of reviews when making revisions from TFA (v1) to TFA (v2). We carefully considered the constructs in both TFA (v1) and TFA (v2) and how they represented our conceptual definition of acceptability. We also relied on and applied our own knowledge of health psychology theories in order to define the constructs. Given the large number of theories and models that contain an even larger number of constructs potentially relevant to acceptability, this deductive process should be viewed as inevitably selective and therefore open to bias.

Implications: The use of the TFA

We propose the TFA will be helpful in assessing the acceptability of healthcare interventions within the development, piloting and feasibility, outcome and process evaluation and implementation phases described by the MRC guidance on complex interventions [ 1 , 12 ]. Table  3 outlines how the TFA can be applied qualitatively and quantitatively to assess acceptability in the different stages of the MRC intervention development and evaluation cycle.

The development phase of an intervention requires researchers to identify or develop a theory of change (e.g. what changes are expected and how they will be achieved) and to model processes and outcomes (e.g. using analogue studies and other evidence to identify the specific outcomes and appropriate measures) [ 1 ]. Explicit consideration of the acceptability of the intervention, facilitated by the TFA, at this stage would help intervention designers make informed decisions about the form, content and delivery mode of the proposed intervention components.

The MRC framework suggests that acceptability should be assessed in the feasibility phase [ 1 ]. The TFA will help intervention designers to operationalise this construct and guide the methods used to evaluate it, e.g. by adapting a generic TFA questionnaire or an interview schedule that we have developed (to be published separately). A pilot study often represents the first attempt to deliver the intervention and the TFA can be used at this stage to determine whether anticipated acceptability, for deliverers and recipients of the intervention, corresponds to their experienced acceptability. Necessary changes to aspects of the intervention (e.g. if recruitment was lower or attrition higher than expected) could be considered in light of experienced acceptability.
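Comparing anticipated (pre-pilot) with experienced (post-pilot) acceptability, as suggested above, can be sketched as a simple per-construct comparison. Everything here is a hypothetical illustration (our assumption, not a published TFA analysis procedure): scores are 1-5 ratings and the flagging threshold is arbitrary.

```python
def flag_discrepancies(anticipated, experienced, threshold=1.0):
    """Return construct labels whose experienced score fell short of the
    anticipated score by at least `threshold`.

    `anticipated` and `experienced` map TFA construct labels to hypothetical
    1-5 ratings collected before and after a pilot, respectively. Constructs
    missing from `experienced` are treated as unchanged.
    """
    return sorted(
        construct
        for construct in anticipated
        if anticipated[construct]
        - experienced.get(construct, anticipated[construct])
        >= threshold
    )
```

A flagged construct (e.g. burden rated 4 beforehand but 2 afterwards) would indicate an aspect of the intervention to reconsider before a definitive trial.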

In the context of a definitive randomised controlled trial the TFA can be applied within a process evaluation to assess anticipated and experienced acceptability of the intervention to people receiving and/or delivering the healthcare intervention at different stages of intervention delivery. Findings may provide insights into reasons for low participant retention and implications for the fidelity of both delivery and receipt of the intervention [ 60 ]. High rates of participant dropout in trials may be associated with the burden of participating in research (e.g. filling out long follow-up questionnaires) and do not always reflect problems with acceptability of the intervention under investigation [ 61 , 62 ]. Insights about acceptability from process evaluations may inform the interpretation of trial findings (e.g. where the primary outcomes were not as expected, a TFA assessment may indicate whether this is attributable to low acceptability leading to low engagement, or an ineffective intervention).

The TFA can also be applied to assess acceptability in the implementation phase when an intervention is scaled-up for wider rollout in ‘real world’ healthcare settings (e.g. patient engagement with a new service being offered as part of routine care).

The acceptability of healthcare interventions to intervention deliverers and recipients is an important issue to consider in the development, evaluation and implementation phases of healthcare interventions. The theoretical framework of acceptability is innovative and provides conceptually distinct constructs that are proposed to capture key dimensions of acceptability. We have used the framework to develop quantitative (questionnaire items) and qualitative (topic guide) instruments for assessing the acceptability of complex interventions [ 63 ] (to be published separately). We offer the proposed multi-construct Theoretical Framework of Acceptability to healthcare researchers, to advance the science and practice of acceptability assessment for healthcare interventions.

References

1. Medical Research Council. Developing and evaluating complex interventions: new guidance. London: Medical Research Council; 2008.

2. Say RE, Thomson R. The importance of patient preferences in treatment decisions—challenges for doctors. BMJ. 2003;327(7414):542–5.

3. Torgerson D, Ryan M, Donaldson C. Effective Health Care bulletins: are they efficient? Qual Health Care. 1995;4(1):48.

4. Diepeveen S, Ling T, Suhrcke M, Roland M, Marteau TM. Public acceptability of government intervention to change health-related behaviours: a systematic review and narrative synthesis. BMC Public Health. 2013;13(1):756.

5. Stok FM, de Ridder DT, de Vet E, Nureeva L, Luszczynska A, Wardle J, Gaspar T, de Wit JB. Hungry for an intervention? Adolescents’ ratings of acceptability of eating-related intervention strategies. BMC Public Health. 2016;16(1):1.

6. Fisher P, McCarney R, Hasford C, Vickers A. Evaluation of specific and non-specific effects in homeopathy: feasibility study for a randomised trial. Homeopathy. 2006;95(4):215–22.

7. Hommel KA, Hente E, Herzer M, Ingerski LM, Denson LA. Telehealth behavioral treatment for medication nonadherence: a pilot and feasibility study. Eur J Gastroenterol Hepatol. 2013;25(4):469.

8. Borrelli B, Sepinwall D, Ernst D, Bellg AJ, Czajkowski S, Breger R, DeFrancesco C, Levesque C, Sharp DL, Ogedegbe G. A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research. J Consult Clin Psychol. 2005;73(5):852.

9. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health Ment Health Serv Res. 2009;36(1):24–34.

10. Medical Research Council (Great Britain), Health Services and Public Health Research Board. Framework for design and evaluation of complex interventions to improve health. BMJ. 2000;321(7262):694–6.

11. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

12. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, Moore L, O’Cathain A, Tinati T, Wight D. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.

13. Becker CB, Darius E, Schaumberg K. An analog study of patient preferences for exposure versus alternative treatments for posttraumatic stress disorder. Behav Res Ther. 2007;45(12):2861–73.

14. Sidani S, Epstein DR, Bootzin RR, Moritz P, Miranda J. Assessment of preferences for treatment: validation of a measure. Res Nurs Health. 2009;32(4):419.

15. Tarrier N, Liversidge T, Gregg L. The acceptability and preference for the psychological treatment of PTSD. Behav Res Ther. 2006;44(11):1643–56.

16. Dillip A, Alba S, Mshana C, Hetzel MW, Lengeler C, Mayumana I, Schulze A, Mshinda H, Weiss MG, Obrist B. Acceptability – a neglected dimension of access to health care: findings from a study on childhood convulsions in rural Tanzania. BMC Health Serv Res. 2012;12(1):113.

17. Doll R. Surveillance and monitoring. Int J Epidemiol. 1974;3(4):305–14.

18. Staniszewska S, Crowe S, Badenoch D, Edwards C, Savage J, Norman W. The PRIME project: developing a patient evidence-base. Health Expect. 2010;13(3):312–22.

19. Andrykowski MA, Manne SL. Are psychological interventions effective and accepted by cancer patients? I. Standards and levels of evidence. Ann Behav Med. 2006;32(2):93–7.

20. Bollen KA. Structural equations with latent variables. New York: Wiley; 1989.

21. Campbell M, Egan M, Lorenc T, Bond L, Popham F, Fenton C, Benzeval M. Considering methodological options for reviews of theory: illustrated by a review of theories linking income and health. Syst Rev. 2014;3(1):1–11.

22. Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24(3):228–38. doi:10.1136/bmjqs-2014-003627.

23. Giles EL, Sniehotta FF, McColl E, Adams J. Acceptability of financial incentives and penalties for encouraging uptake of healthy behaviours: focus groups. BMC Public Health. 2015;15(1):1.

24. ICEBeRG. Designing theoretically-informed implementation interventions. Implement Sci. 2006;1:4.

25. Michie S, Prestwich A. Are interventions theory-based? Development of a theory coding scheme. Health Psychol. 2010;29(1):1.

26. Rimer BK, Glanz K. Theory at a glance: a guide for health promotion practice. Bethesda: National Cancer Institute; 2005.

27. Berlim MT, McGirr A, Van den Eynde F, Fleck MPA, Giacobbe P. Effectiveness and acceptability of deep brain stimulation (DBS) of the subgenual cingulate cortex for treatment-resistant depression: a systematic review and exploratory meta-analysis. J Affect Disord. 2014;159:31–8.

28. Cipriani A, Furukawa TA, Salanti G, Geddes JR, Higgins JPT, Churchill R, Watanabe N, Nakagawa A, Omori IM, McGuire H, et al. Comparative efficacy and acceptability of 12 new-generation antidepressants: a multiple-treatments meta-analysis. Lancet. 2009;373:746–58.

29. Kedge EM. A systematic review to investigate the effectiveness and acceptability of interventions for moist desquamation in radiotherapy patients. Radiography. 2009;15:247–57.

30. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

31. Carpiano RM, Daley DM. A guide and glossary on postpositivist theory building for population health. J Epidemiol Community Health. 2006;60(7):564–70.

32. Epstein LH. Integrating theoretical approaches to promote physical activity. Am J Prev Med. 1998;15(4):257–65.

33. Hox JJ. From theoretical concept to survey question. In: Survey Measurement and Process Quality. New York: Wiley; 1997. p. 45–69.

34. Locke EA. Theory building, replication, and behavioral priming: where do we need to go from here? Perspect Psychol Sci. 2015;10(3):408–14.

35. Thompson JD. On building an administrative science. Adm Sci Q. 1956;1(1):102–11.

36. Weick KE. Drop your tools: an allegory for organizational studies. Adm Sci Q. 1996;41(2):301–13.

37. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.

38. Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005;14(1):26–33.

39. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Chichester: Wiley-Blackwell; 2008.

40. Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJM, Gavaghan DJ, McQuay HJ. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials. 1996;17(1):1–12.

41. Moher D, Schulz KF, Altman DG, CONSORT Group. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357(9263):1191–4.

42. Atkins D, Eccles M, Flottorp S, Guyatt GH, Henry D, Hill S, Liberati A, O’Connell D, Oxman AD, Phillips B. Systems for grading the quality of evidence and the strength of recommendations I: critical appraisal of existing approaches. The GRADE Working Group. BMC Health Serv Res. 2004;4(1):38.

43. Armijo-Olivo S, Stiles CR, Hagen NA, Biondo PD, Cummings GG. Assessment of study quality for systematic reviews: a comparison of the Cochrane Collaboration Risk of Bias Tool and the Effective Public Health Practice Project Quality Assessment Tool: methodological research. J Eval Clin Pract. 2012;18(1):12–8.

44. Harris RP, Helfand M, Woolf SH, Lohr KN, Mulrow CD, Teutsch SM, Atkins D, Methods Work Group, Third US Preventive Services Task Force. Current methods of the US Preventive Services Task Force: a review of the process. Am J Prev Med. 2001;20(3):21–35.

45. Andrews G, Cuijpers P, Craske MG, McEvoy P, Titov N. Computer therapy for the anxiety and depressive disorders is effective, acceptable and practical health care: a meta-analysis. PLoS One. 2010;5(10):e13196.

46. Blenkinsopp A, Hassey A. Effectiveness and acceptability of community pharmacy-based interventions in type 2 diabetes: a critical review of intervention design, pharmacist and patient perspectives. Int J Pharm Pract. 2005;13(4):231–40.

47. Kulier R, Helmerhorst FM, Maitra N, Gülmezoglu AM. Effectiveness and acceptability of progestogens in combined oral contraceptives – a systematic review. Reprod Health. 2004;1(1):1.

48. Kaltenthaler E, Sutcliffe P, Parry G, Rees A, Ferriter M. The acceptability to patients of computerized cognitive behaviour therapy for depression: a systematic review. Psychol Med. 2008;38:1521–30.

49. Brooke-Sumner C, Petersen I, Asher L, Mall S, Egbe CO, Lund C. Systematic review of feasibility and acceptability of psychosocial interventions for schizophrenia in low and middle income countries. BMC Psychiatry. 2015;15:19.

50. Muftin Z, Thompson AR. A systematic review of self-help for disfigurement: effectiveness, usability, and acceptability. Body Image. 2013;10(4):442–50.

51. El-Den S, O’Reilly CL, Chen TF. A systematic review on the acceptability of perinatal depression screening. J Affect Disord. 2015;188:284–303.

52. Littlejohn C. Does socio-economic status influence the acceptability of, attendance for, and outcome of, screening and brief interventions for alcohol misuse: a review. Alcohol Alcohol. 2006;41:540–5.

53. Figueroa C, Johnson C, Verster A, Baggaley R. Attitudes and acceptability on HIV self-testing among key populations: a literature review. AIDS Behav. 2015;19(11):1949–65.

54. Botella C, Serrano B, Baños RM, Garcia-Palacios A. Virtual reality exposure-based therapy for the treatment of post-traumatic stress disorder: a review of its efficacy, the adequacy of the treatment protocol, and its acceptability. Neuropsychiatr Dis Treat. 2015;11:2533–45.

55. Rodriguez MI, Gordon-Maclean C. The safety, efficacy and acceptability of task sharing tubal sterilization to midlevel providers: a systematic review. Contraception. 2014;89(6):504–11.

56. Moss-Morris R, Weinman J, Petrie K, Horne R, Cameron L, Buick D. The revised illness perception questionnaire (IPQ-R). Psychol Health. 2002;17(1):1–16.

57. Bandura A. Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev. 1977;84(2):191.

58. Lee C, Bobko P. Self-efficacy beliefs: comparison of five measures. J Appl Psychol. 1994;79(3):364.

59. Clement S. The self-efficacy expectations and occupational preferences of females and males. J Occup Psychol. 1987;60(3):257–65.

60. Rixon L, Baron J, McGale N, Lorencatto F, Francis J, Davies A. Methods used to address fidelity of receipt in health intervention research: a citation analysis and systematic review. BMC Health Serv Res. 2016;16:1.

61. Eborall HC, Stewart MCW, Cunningham-Burley S, Price JF, Fowkes FGR. Accrual and drop out in a primary prevention randomised controlled trial: qualitative study. Trials. 2011;12(1):7.

62. Sanders C, Rogers A, Bowen R, Bower P, Hirani SP, Cartwright M, Fitzpatrick R, Knapp M, Barlow J, Hendy J, et al. Exploring barriers to participation and adoption of telehealth and telecare within the Whole System Demonstrator trial: a qualitative study. BMC Health Serv Res. 2012;12:220.

63. Wickwar S, McBain H, Newman SP, Hirani SP, Hurt C, Dunlop N, Flood C, Ezra DG. Effectiveness and cost-effectiveness of a patient-initiated botulinum toxin treatment model for blepharospasm and hemifacial spasm compared to standard care: study protocol for a randomised controlled trial. Trials. 2016;17(1):1.

64. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535. doi:10.1136/bmj.b2535.

65. APA. Glossary of psychological terms [online]. 2017. Available at: http://www.apa.org/research/action/glossary.aspx?tab=3. Accessed 19 Jan 2017.

66. Glanz K, Rimer BK, Viswanath K. Health behavior and health education: theory, research, and practice. John Wiley & Sons; 2008.


Acknowledgements

Not applicable.

Availability of data and material

Data will be available via the supplementary files and on request by e-mailing the corresponding author.

Authors’ contributions

JF and MC conceived the study and supervised this work. MS conducted the overview of reviews and wrote the main body of the manuscript. MC completed reliability checks on the screening of citations and 10% of the full texts included in the overview of reviews. MC also contributed to the writing and editing of the manuscript. JF completed reliability checks on 10% of the full texts included in the overview of reviews. JF also contributed to the writing and editing of the manuscript. All three authors contributed intellectually to the theorising and development of the theoretical framework of acceptability. All authors approved the final version of the manuscript (manuscript file).

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable to this study.

Ethics approval and consent to participate

Author information

City, University of London, Northampton Square, London, EC1V 0JB, UK

Mandeep Sekhon, Martin Cartwright & Jill J. Francis


Corresponding author

Correspondence to Mandeep Sekhon.

Additional files

Additional file 1:

Systematic review filters. Description of data: List of each review filter applied to MEDLINE, Embase and PsycINFO. (DOCX 19 kb)

Additional file 2:

References. Description of data: Citation details of all the systematic reviews included in the overview of reviews. (DOCX 40 kb)

Additional file 3:

Behavioural assessments of acceptability. Description of data: How acceptability was assessed in the included systematic reviews based on measures of observed behaviour. (DOCX 12 kb)

Additional file 4:

Self report assessments of acceptability. Description of data: Summary of the self-report measures of acceptability reported in the overview of reviews. (DOCX 12 kb)

Additional file 5:

When was acceptability assessed? Description of data: Summary of the timing of acceptability assessments relative to start of intervention reported in the papers identified in the systematic reviews. (DOCX 12 kb)

Additional file 6:

Definition of TFA component constructs. Description of data: Definitions of each of the seven component constructs, including anticipated acceptability and experienced acceptability definitions for applicable constructs. (DOCX 13 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Sekhon, M., Cartwright, M. & Francis, J.J. Acceptability of healthcare interventions: an overview of reviews and development of a theoretical framework. BMC Health Serv Res 17 , 88 (2017). https://doi.org/10.1186/s12913-017-2031-8


Received : 15 August 2016

Accepted : 17 January 2017

Published : 26 January 2017

DOI : https://doi.org/10.1186/s12913-017-2031-8


  • Acceptability
  • Defining constructs
  • Theory development
  • Complex intervention
  • Healthcare intervention

BMC Health Services Research

ISSN: 1472-6963

Integration of a theoretical framework into your research study

Volume 22, Issue 2

  • Roberta Heale 1
  • Helen Noble 2
  • 1 Laurentian University, School of Nursing, Sudbury, Ontario, Canada
  • 2 Queens University Belfast, School of Nursing and Midwifery, Belfast, UK
  • Correspondence to Dr Roberta Heale, School of Nursing, Laurentian University, Ramsey Lake Road, Sudbury, P3E2C6, Canada; rheale{at}laurentian.ca

https://doi.org/10.1136/ebnurs-2019-103077

Often the most difficult part of a research study is preparing the proposal based around a theoretical or philosophical framework. Graduate students ‘…express confusion, a lack of knowledge, and frustration with the challenge of choosing a theoretical framework and understanding how to apply it’. 1 However, the importance in understanding and applying a theoretical framework in research cannot be overestimated.

The choice of a theoretical framework for a research study is often a reflection of the researcher’s ontological (nature of being) and epistemological (theory of knowledge) perspective. We will not delve into these concepts, or personal philosophy in this article. Rather we will focus on how a theoretical framework can be integrated into research.

The theoretical framework is a blueprint for your research project 1 and serves several purposes. It informs the problem you have identified, and the purpose and significance of your research, demonstrating how your research fits with what is already known (its relationship to existing theory and research). This provides a basis for your research questions, the literature review and the methodology and analysis that you choose. 1 Evidence of your chosen theoretical framework should be visible in every aspect of your research and should demonstrate the contribution of this research to knowledge. 2

What is a theory?

A theory is an explanation of a concept or an abstract idea about a phenomenon. An example is Bandura’s middle range theory of self-efficacy, 3 the level of confidence one has in achieving a goal. Self-efficacy determines the coping behaviours a person will exhibit when facing obstacles: those with high self-efficacy are likely to apply adequate effort, leading to successful outcomes, while those with low self-efficacy are more likely to give up earlier and ultimately fail. Researchers exploring concepts related to self-efficacy, or to the ability to manage difficult life situations, might apply Bandura’s theory as the framework for their study.

Using a theoretical framework in a research study

Example 1: the Big Five theoretical framework

The first example includes research which integrates the ‘Big Five’, a theoretical framework that includes concepts related to teamwork. These include team leadership, mutual performance monitoring, backup behaviour, adaptability and team orientation. 4 In order to conduct research incorporating a theoretical framework, the concepts need to be defined according to a frame of reference. This provides a means to understand the theoretical framework as it relates to a specific context and provides a mechanism for measurement of the concepts.

In this example, each concept of the Big Five was given a conceptual definition, which provided its broad meaning, and then an operational definition, which was more concrete. 4 From there, a survey reflecting the operational definitions was developed for teamwork in nursing: the Nursing Teamwork Survey (NTS). 5 In this case, the concepts of the theoretical framework, the Big Five, were used to develop a survey specific to teamwork in nursing.

The NTS was used in research with nurses at one hospital in northeastern Ontario. Survey questions were grouped into subscales for analysis that reflected the concepts of the Big Five. 6 For example, one finding of this study was that nurses on the surgical unit rated the items in the ‘team leadership’ subscale (one of the Big Five concepts) significantly lower than nurses on the other units did. The researchers looked back to the definition of this concept in the Big Five when interpreting the findings. Since the definition included a person(s) with the leadership skills to facilitate teamwork among the nurses on the unit, the study concluded that the surgical unit lacked a mentor, or facilitator, for teamwork. In this way, the theory of teamwork was presented through a set of concepts in a theoretical framework (TF). The TF was the foundation for developing a survey, related to a specific context, that measured each of the concepts within the TF. The analysis and results then circled back to the concepts within the TF, which guided the discussion and conclusions arising from the research.
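As an illustration of this kind of subscale analysis, the scoring step might be sketched as follows. Note that the item-to-subscale mapping and the ratings below are invented for illustration; they are not the actual NTS items or study data.

```python
from statistics import mean

# Hypothetical survey structure: each nurse rates items on a 1-5 scale,
# and items are grouped into subscales named after Big Five concepts.
# (Illustrative item IDs, not the real NTS mapping.)
subscales = {
    "team_leadership": ["q1", "q2", "q3"],
    "mutual_performance_monitoring": ["q4", "q5"],
}

# Hypothetical responses, grouped by hospital unit.
responses = {
    "surgical": [
        {"q1": 2, "q2": 3, "q3": 2, "q4": 4, "q5": 4},
        {"q1": 3, "q2": 2, "q3": 2, "q4": 5, "q5": 4},
    ],
    "medical": [
        {"q1": 4, "q2": 5, "q3": 4, "q4": 4, "q5": 5},
    ],
}

def subscale_means(unit_responses):
    """Average each subscale's items across all respondents on a unit."""
    return {
        name: mean(r[item] for r in unit_responses for item in items)
        for name, items in subscales.items()
    }

scores = {unit: subscale_means(rs) for unit, rs in responses.items()}
# In this toy data, the surgical unit's team-leadership subscale scores
# lower than the medical unit's, mirroring the pattern the study reports.
```

The point of the sketch is only that the analysis is organised around the framework's concepts: each subscale score maps directly back to one Big Five concept, which is what lets the interpretation "circle back" to the theory.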

Example 2: the Health Decisions Model

In another study, which explored adherence to intravenous chemotherapy in African-American and Caucasian women with early stage breast cancer, an adapted version of the Health Decisions Model (HDM) was the theoretical basis for the study. 7 The HDM, a revised version of the Health Belief Model, incorporates some aspects of the Health Belief Model along with factors relating to patient preferences. 8 The HDM consists of six interrelated, clearly defined components that might predict how well a person adheres to a health decision: sociodemographic factors, social interaction, experience, knowledge, general and specific health beliefs, and patient preferences. The model was used to explore factors that might influence adherence to chemotherapy in women with breast cancer; sociodemographic factors, social interaction, knowledge, personal experience and specific health beliefs were used as predictors of adherence.

The findings were reported using the theoretical framework to structure the discussion. The study found that delay to treatment, health insurance, depression and symptom severity were predictors of starting chemotherapy that could potentially be addressed through clinical interventions. The findings contribute to the existing body of literature on cancer nursing.
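A minimal sketch of how framework-derived predictors might be examined against adherence is shown below, using invented patient records rather than the study's data; the variable names and values are illustrative assumptions, not drawn from the HDM paper.

```python
from collections import defaultdict

# Hypothetical records (illustrative only): each patient has predictor
# variables suggested by the framework and an adherence outcome.
patients = [
    {"insured": True,  "delay_weeks": 2, "started": True},
    {"insured": True,  "delay_weeks": 3, "started": True},
    {"insured": False, "delay_weeks": 8, "started": False},
    {"insured": False, "delay_weeks": 5, "started": True},
    {"insured": True,  "delay_weeks": 9, "started": False},
]

def rate_by(predictor):
    """Proportion starting chemotherapy within each level of a predictor."""
    started = defaultdict(int)
    total = defaultdict(int)
    for p in patients:
        total[p[predictor]] += 1
        started[p[predictor]] += p["started"]  # True counts as 1
    return {level: started[level] / total[level] for level in total}

rates = rate_by("insured")
```

The design point is the same as in the previous example: because each predictor comes from a named component of the theoretical framework, any difference in adherence rates can be interpreted back through the framework rather than ad hoc.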

Example 3: the Nursing Role Effectiveness Model

In this final example, research was conducted to determine the nursing processes associated with unexpected intensive care unit admissions. 9 The framework was the Nursing Role Effectiveness Model, in which the concepts of Donabedian’s quality framework of structure, process and outcome are each defined according to nursing practice. 10 11 The processes defined in the Nursing Role Effectiveness Model were used to identify the nursing process variables measured in the study.

A theoretical framework should be logically presented and should represent the concepts, variables and relationships related to your research study, so that what will be examined, described or measured is clearly identified. This involves reading the literature, identifying a research question(s), and clearly defining the existing relationships between concepts and theories (related to your research question[s]) in the literature. You must then identify what you will examine or explore in relation to the concepts of the theoretical framework. Once you present your findings using the theoretical framework, you will be able to articulate how your study relates to, and may potentially advance, your chosen theory and add to knowledge.


Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Provenance and peer review Not commissioned; internally peer reviewed.

Patient and public involvement Not required.


  • Research article
  • Open access
  • Published: 19 January 2017

Making theory explicit - An analysis of how medical education research(ers) describe how they connect to theory

  • Klara Bolander Laksov 1 , 2 ,
  • Tim Dornan 3 , 4 &
  • Pim W. Teunissen 4 , 5  

BMC Medical Education volume  17 , Article number:  18 ( 2017 ) Cite this article

7582 Accesses

33 Citations

3 Altmetric

Metrics details

As medical education develops into a varied and well-developed field of research, the issue of quality research anchored in, or generating, theory has gained increasing importance. Medical education researchers have been criticized for not connecting their work to relevant theory. This paper set out to analyse how researchers can connect to theory in medical education. Its goal is to provide an accessible framework, for those entering medical education research, for how theory may become an integral part of one's work.

Fifteen purposefully selected researchers in medical education were asked to nominate papers they considered influential in medical education. Through this process 41 papers were identified and included in the study.

The papers were analysed with thematic content analysis, which resulted in three approaches to the use of theory: as close-up exploration; as a specific perspective; and as an overview. The approaches are exemplified by quotes from the papers included in our dataset and further illuminated by a metaphoric story.

Conclusions

We conclude by pointing at the importance of making explicit how theory is used in medical education as a way to collaboratively take responsibility for the quality of medical education research.

Peer Review reports

For over a decade, there have been expressions of concern about medical education research publications lacking an explicit theoretical basis [ 1 – 5 ]. Although there are signs of an increase in the use of theory in medical education [ 6 ], it is of interest not only to identify the issue, but to better understand and remedy it. The aim of this paper is to help researchers make better use of theory by examining how people have done so in the past and suggesting how others might do so in the future. First, this requires an elaboration of what we mean by theory.

A general description of theory is that it is a system of ideas intended to explain a phenomenon. This perspective on theory is consistent with the view often taken in biomedical and physical research, and treats theory as something that can be repeatedly tested and can hence guide activity in all cases. However, theory in medical education needs to be viewed differently from this biomedical view. Rather than emphasising an imperative of proof [ 7 ], the point of departure is participation in scientific dialogue around different explanations of phenomena, with a specific lens through which the inquiry is conducted, resulting in theory generation [ 8 ]. Reeves and colleagues (ibid.) define theory as “an organized, coherent, and systematic articulation of a set of issues that are communicated as a meaningful whole”. This definition is useful from a medical education as well as a biomedical view of theory.

The conceptualisation of theory in education can be placed historically during the 20th century [ 9 ] on a continuum covering different levels of abstraction: from high level theories at the turn of the 20th century, to middle range theories in the 1960s, to personal practice theories by the end of the 1900s. High level theories state the fundamental variables of systems and involve a high level of abstraction. Marxist theory, for example, is ’independent of the thing to be explained’ (social struggle, for example) to the extent that it might not arise from empirical research or lead directly, via testable ideas or hypotheses, to empirical research; it can, however, provide guidance for empirical enquiry.

In a seminal paper half a century ago, Merton (1968) introduced the idea that there are middle range theories – theories that lie between the minor but necessary working hypotheses that evolve in abundance during day-to-day research and the all-inclusive systematic efforts to develop a unified theory that will explain all the observed uniformities of social behavior, social organization, and social change ( [ 10 ] p. 39). Bordage argued that, in the medical education domain, programmatic research potentially leads to middle range theories [ 11 ]. This is an iterative process in which observations give rise to (or refine) a theory, which guides further empirical research, which in turn further refines the theory. At the most detailed and individual level, ’personal theories’ [ 12 ] guide the day-to-day activities of every one of us. Our choice of how to give feedback on student performance, for example, is most often guided by a highly individual theory of how to communicate and appraise performance. It is a personal theory that stands in a two-way relationship with empirical observations, even if it only tells us when to say what, and how, in relation to the student. In education, Donald Schön’s (1991) research has focused on these so-called theories-in-use, which teachers apply in everyday teaching, and on how they relate to their ’espoused theory’, which could comprise mid-range theories of feedback and communication patterns, together with principles of course design, learned in a faculty development course. The focus of this paper is on how middle range theory can be made explicit, since the contribution to the development of theory depends on how effectively the community of scholars ‘integrates inquiry frameworks to achieve practical relevance’ [ 13 ].

Whatever one’s paradigm, being clear about the theoretical assumptions that underlie research adds value to it. When people call for medical education research to be better theorised, they are asking researchers to position their work within some explicit theoretical framework, to be able to justify how and why they did so, and to use insights derived from the framework to help interpret empirical observations. Moving from philosophical considerations to more practical ones, Bordage (2009) explained how education researchers can use conceptual frameworks as ‘ways of thinking about a problem or study, or a way of representing how complex things work’. Such conceptual frameworks may guide researchers to look at problems in particular ways or to generate hypotheses to be tested [ 14 ], and are thus crucial links between theory and empirical data. They may arise from the researchers’ own or other people’s research, and a conceptual framework can be derived from a specific theory. When theories are adopted by many different researchers, they help the field build a coherent body of work that is transferable beyond the conditions in which individual studies were conducted.

As teachers of education research methodology, we have consistently found that Masters students, PhD students, and new medical education researchers find theory a difficult topic to engage with. It seemed logical to help them, not by writing another abstract paper about theory, but by developing an understanding of how authors describe their theory use in medical education research publications. To achieve this, we decided to look at a sample of influential medical education publications, which could help us explain how connections to conceptual frameworks are formed and used, and provide exemplars that would help neophyte researchers bridge theory and practice explicitly.

Conceptual orientation

A social constructivist approach [ 15 ] guided our research [ 16 – 18 ]. Social constructivism assumes that groups or communities create shared meaning as a result of their interactions. These shared meanings can be attributed to things, which are called ’artefacts’, such as a journal or a position or title, and together contribute to a shared culture. In this project, the research was social in that we regarded publications as artefacts produced by the collaborative efforts of the medical education community. It was constructivist in that an iterative process of data analysis and theory development between the three authors allowed the construction of an interpretation of how connections to conceptual frameworks were formed and used in the publications we included as data.

Data collection procedure

The dataset for this project consisted of a set of published papers deemed influential in the medical education domain. Data collection was carried out in two stages. In the first stage we used a Delphi-inspired approach: rather than selecting a set of papers based on our own interests or on potentially deceptive citation indices, we drew on our shared knowledge of the medical education field and its inhabitants to purposefully select fifteen medical education researchers to nominate papers. In light of our aim, fifteen was considered large enough to allow for some dropout and still leave us with a sufficient number of publications (more than forty) to analyse. The selected researchers varied in the methodological preferences usually applied in their own research, as well as in research topic, gender and geography (see Table  1 ). They received the following request:

Please nominate approximately 5 research papers you consider as influential in the field of medical education. For each paper, please write a few sentences saying why you chose it.

The scholars who nominated the papers were asked for papers that were influential to their work, rather than papers that used theory in a particularly good way. By influential we clarified that we meant “ research papers that have, in your opinion, impacted medical education practice or research in general or your own research or educational practice ”. We deliberately chose the term ‘influential’ because it allowed the nominators to choose papers whose research quality might be regarded as low as well as high, and so did not require explicit criteria for quality, something we viewed as problematic. We also believed that the emphasis on influence, rather than on explicit theory use, would provide richer data in terms of the variation in how the use of theory was described. Unfortunately, we did not receive enough responses about why the researchers had chosen the articles to be able to include that information in our analysis.

The analysis of the papers was conducted iteratively, based on principles of qualitative thematic content analysis [ 19 ]. The first step was a pilot exercise to develop a framework to be used for the main dataset. Three articles were selected and read by all three researchers. Each wrote a short analysis of how theory was used in the three articles; these analyses were discussed and led to the formulation of four main questions to guide the main analysis: 1) “What was the starting point of this article?” The starting point could be, for instance, a practical or theoretical problem, or the findings of previous research. 2) “What conceptual framework was used to approach the problem?” This is where we could see a more or less explicit linkage to theoretical concepts or frameworks. 3) “How did the paper address the problem methodologically?” Guba and Lincoln’s [ 20 ] typology of methodological approaches guided our analysis here. 4) “How did the article contribute to theory?” The questions were applied successfully to six more of the articles, after which we reviewed all interpretations in order to develop a set of heuristics that might be useful to other researchers. This resulted in a refined and elaborated set of questions, which we used to analyse the entire dataset:

What was the authors’ point of departure?

Where did the problem come from (e.g. practical issue, previous papers, theoretical problem, hypothesized based on theory)?

What route did the authors take?

How was the issue problematized and conceptualized?

How do the answers to questions 1 and 2 relate to each other?

What methodology did the authors use to tackle their problem and how explicit were they in considering their options?

Where did the authors arrive?

How did they suggest they had contributed to addressing the problem under investigation?

What is the apparent relationship between the different components of this scientific journey?

Although the use of theory was not always described explicitly, from our social constructivist perspective theory was implicitly or explicitly present, sometimes only as assumptions, and could be seen as a way of talking, a discourse, employed in the article. These assumptions, from our perspective, stemmed from the shared understanding in the medical education community of how certain methods or analysis procedures are understood to be of value.

This analytic framework was then applied to the entire dataset. All articles were read by two of us and we entered our answers into a web based database. Discussion of similarities and differences identified three linked metaphors of how theory was used in the included articles. Finally, we read all papers again and mapped them to the metaphors.

Ten of the 15 invited researchers, six men and four women, nominated a total of 41 papers. Two declined the invitation and three did not reply. The papers are listed in Additional file 1 : Appendix 1 and ranged from empirical papers to reviews, conceptual papers and editorials. As the nominating researchers had been asked to suggest influential research papers, we did not want to omit papers based on our own judgement of what counts as a research paper, and we therefore included all the nominated papers regardless of type of publication. We identified three distinct, but sometimes overlapping, ways in which authors connected to existing theory: close-up exploration; a specific perspective; and a distanced perspective. We explain each theme below, followed by examples to make the meaning of each category more substantial.

Close-up exploration

Here, researchers aimed to explain some specific phenomenon, such as how residents learn from practical experience. Instigated either by a local issue or by issues raised in other studies, they recognized a need or opportunity to add to the current understanding of that phenomenon. This allowed them to formulate a specific question, decide on a research plan, and set out to do the research. Middle range theory contributed to this process by helping them choose questions, methods, and a setting in which to conduct the research that would contribute to building a clearer or novel understanding of the topic of interest.

An example of a study in this category is a study by Lingard et al. (2004). Examining communication failures in operating rooms, Lingard and colleagues [ 21 ] took as their point of departure an issue stemming from previous research:

Recent evidence suggests that adverse events resulting from error happen at unacceptably high rates in the inpatient setting and that ineffective or insufficient communication among team members is often a contributing factor. (p.330)

By referring to a growing body of literature regarding the relationship between teamwork and safety in health care, and trends in the way it had been studied, the route taken by Lingard et al. identified a gap of knowledge:

While these models have reinforced the importance of communication in effective team function, their multidimensionality precludes in depth attention to the individual variable of communication. (p.330)

The authors continued by referring to the findings from studies on communication in the specific context of operating rooms, formulated as a ”lack of standardization and team integration”. Here, they drew on ways of thinking about communication in the aviation industry (i.e. theorization from another field) both to frame the issue at hand (communication failures) and to choose interventions to overcome the problem:

One potential solution to the described weaknesses in OR team communication is to adapt the checklist system currently in use for systematic preflight team communications in the aviation industry … we anticipate that a carefully adapted checklist system could promote safer, more effective communications in the OR team. (p.330)

The methods section aligned with the methodological gap identified at the outset of the paper and used a theory-based framework for analysis of the fieldnotes taken of the communication that was observed. This enabled the researchers to approach and identify the characteristics of communication failures and arrive at a more detailed understanding of the topic under exploration. It allowed them to analyse these failures in relation to the effects at system, process, and patient level and arrive at a detailed understanding of the landscape under investigation: communication in the operating room.

Another example of the first category is the study by Van Zanten [ 22 ]. It starts with an overview of existing knowledge on the topic of patient satisfaction in relation to physician ethnicity. The authors summarize what other people have discovered, not to reconceptualise the scientific landscape but to explain what part of it they want to explore and what they expect to find:

“While there are a number of identifiable standardized patient (SP) and physician characteristics that may influence the satisfaction ratings provided to the candidates, the purpose of this study was to look at possible differences in satisfaction ratings as a function of candidate and SP ethnicity. It was hypothesized that, given the simulated environment, the interaction of candidate and SP ethnicity would not significantly impact the ratings given to candidates. Evidence to the contrary, while in accord with most actual doctor–patient-encounter research studies, would suggest that the satisfaction ratings may be influenced by factors not related to the abilities of the candidates.” (p.15)

Van Zanten et al. (2004) asked trained standardized patients in a performance-based examination of international candidates applying for graduate entry to the United States to rate their overall satisfaction with physicians on five-point scales. The article shows that SPs generally gave higher satisfaction ratings to encounters in which there was racial concordance which the authors linked to the findings of other researchers.

A specific perspective

This category included research that intended to add to theory building from a deliberately chosen, fixed vantage point. Researchers argued for the advantages of applying a particular research perspective, derived from the psychological, sociological, anthropological, or philosophical domains, to an issue in the field of healthcare education.

Albert and colleagues [ 2 ] interviewed a number of key figures from the English-speaking medical education community to inform the ongoing debate on what types of research should be accepted for publication in medical education. They introduced Bourdieu’s concept of ’field’ as a specific perspective from which to shine light on medical education research. Building on the cartographic metaphor introduced in the previous section, they used this Bourdieuvian perspective to uncover particular aspects of the rocky landscape of medical education, as represented in empirical data. As a result, they were able to define ways of improving the quality of research in medical education. In that way, Albert et al. used the Bourdieuvian lens of ’field’ to bring weaknesses in medical education research to the surface.

The practical problem addressed by Kerosuo and Engeström [ 23 ] was the provision of care by multi-professional groups. They set out to examine how people in organisations learned to work collectively. To understand and change the health care environments they were working in, they took a Change Laboratory approach informed by Activity Theory, a theory that seeks to understand human activities as systemic and socially situated phenomena and hence bridges the gap between the individual subject and the social reality.

Change Laboratory is an interventionist methodology for work-based learning and development of the activity. In the currently analyzed data, ten medical doctors and nurses in the pilot group applied the new calendar, care map and care agreement tools in the care of one of their patients. These tools are conceptualized as potential integrated instrumentality for the patient care.

The researchers arrived at a description of common themes in the implementation process. They expanded notions about organizational learning by asserting that it also involved tool-creation and implementation of these tools. Here, the methodological approach of the change laboratory became the lens through which the project was designed, carried through and interpreted.

A distanced perspective

This third category operates at a relatively abstract level. Scholars scan an area of research, piecing together what others have previously mapped and identifying contradictions and areas that need further exploration. It would not be possible to do this type of work were it not for the efforts of researchers who have done close-up explorations of specific phenomena or looked at an issue from a specific perspective. However, sometimes one needs to take a step back and look at how the pieces of information fit together, or fail to. Typically, papers in this third category do not report new empirical data; instead, previous research findings are their data.

A systematic review by Steinert et al. [ 24 ] on faculty development started from the observation that a myriad of faculty development programs had been delivered without any clear understanding of differences in their effectiveness. By scanning the numerous pieces of knowledge produced by other scholars, the authors were able to map out how these pieces fitted together, overlapped, and left areas undiscovered. This led to a conceptual framework that synthesized the knowledge generated by previous research.

This framework acknowledges the different roles of faculty members, of which teaching is one. It also highlights the fact that many mediating factors beyond specific faculty development activities can influence teacher effectiveness, and that outcome can be observed at a number of levels. (p.500)

The authors used evidence about faculty development to produce a framework that contributes to people’s thinking about their actions.

A second example is provided by Schmidt, Norman and Boshuizen [ 25 ], who concluded from a review of the literature on clinical competence:

…a number of recurrent problems emerged, casting doubt on some of the fundamental assumptions about the nature of clinical competence. (p.611)

By taking an overview, they were able to produce a novel conceptualization of clinical skills, which explained existing research findings.

The theory we elaborate here rests on three assumptions. First, in acquiring expertise in medicine, students progress through several transitory stages, characterized by distinctively different knowledge structures underlying their performances. Second, these representations do not decay or become inert in the course of developing expertise but rather remain available for future use when the situation requires their activation. Third, experienced physicians, while diagnosing routine cases, are operating upon knowledge structures that we call ”illness scripts”. (p.613)

An organizational framework, presented by the authors as a theory, was thus laid out and illustrated with cases showing how problem solving develops. The example above is based on a literature review; indeed, when theory is approached as overview, some form of literature review is required. The difference lies in whether the aim is to provide guidance on how practice should change and/or to provide and develop theory.

Having analysed 41 medical education research articles, we propose that there are three qualitatively different ways of connecting with theory, defined as a system of ideas intended to explain a phenomenon. Although the three approaches were discerned from our primary data (the papers), some papers could not easily be placed in only one category, mostly because they had not made their theoretical point of departure explicit. There were also publications, in the form of commentaries, that did not fit any of the approaches.

As well as a categorisation, our analysis has produced a metaphor, which we hope will help explain how theory is used. The metaphor is of a person wanting to explore a coastal landscape and being able to do so from a boat, a lighthouse, or a plane. The coastal landscape represents the people, their behaviour, and the social processes that together constitute a field of inquiry. The boat, lighthouse, and plane provide three different perspectives, levels of detail, and types of illumination of the landscape. This ’story’ is outlined below.

A narrative explaining the system of metaphors used in this paper

Imagine you have to chart a far-off island. There are some crude, inaccurate maps of it made by people who lived there in the distant past. At a vantage point on the island stand the solitary remains of a lighthouse. The island is being surveyed because there may be valuable mineral deposits there. You have, at your disposal, three ways of surveying it. You can approach its rocky coast by boat, you can survey it from the top of the lighthouse or you can overfly it.
According to this metaphor, the island is a research topic. The valuable mineral deposits are a purpose for surveying it. The map represents the state of knowledge of the topic. The boat, lighthouse, and plane represent the three different ways theory can help refine the map, discussed in the findings section: theory as close-up exploration (boat); theory as a specific perspective (lighthouse); and theory as overview (plane). You would get very different types and levels of detail, and different perspectives on the rocky landscape, from each. In the same way, the crude map you inherited was influenced by the perspective from which the land was surveyed, and the sophisticated map you produce will likewise be influenced by the perspective you have chosen as well as by the topographical features of the island.
This metaphor illustrates a fundamental principle about research. There is no single, incontrovertible way of knowing a topic, just as there is no incontrovertible way of knowing a landscape. Whether we acknowledge it or not, “truths”, like maps, are influenced by the theoretical perspective from which they were gleaned. Ultimately, theory permeates our research in many ways, just as perspective and distance leave their indelible marks on a map. Even the decision that a topic is, like mineral deposits, worth exploring is influenced by theory. But let's stick with those three different perspectives and how they can help you achieve your goal.
The boat allows you to come close to the landscape; even touch it. You can get very fine detail. It would be invaluable if, for example, you wanted to plan where to build a dock for ships exporting the valuable mineral. But it would not be so good for putting the entire island into a coherent perspective. In research terms, this use of theory means identifying a specific area of interest, getting out there and investigating a specific piece of the map. It is better at giving fine detail of part of a topic than producing a coherent map of the topic as a whole. Surveying solely by boat could produce a patchy map of the field of interest with unresolved, conflicting results.
You would choose the lighthouse if its fixed vantage point helped you, for example, choose the route from the mineral mine to the dock across an undulating landscape. Likewise, theory can help you add a piece of information to the evolving map of scientific knowledge from a deliberately chosen, fixed vantage point. You might choose a specific psychological, sociological, anthropological or philosophical stance because you want to know what that stance will tell you about the topic. Surveying from the lighthouse, you would shed valuable new light on the topic, though perhaps not at the same level of detail as if you had surveyed it from a boat.
A plane allows you to overview the entire landscape and, for example, pull together the previous efforts of surveyors in boats and lighthouses into a more or less fitting whole. As a researcher, the plane perspective could help you identify misrepresentations or areas that need further exploration, though it would not allow you to examine topics in the same detail as either a lighthouse or a boat. It could give you valuable insights into the state of the maps so far and provide new insights that help drive future research agendas.

Applying these metaphors uncovers the strengths and weaknesses attached to different ways of using theory in medical education research. Being in a boat limits, for example, the scope of the quest; it works best when focusing on one question at a time in a well-defined area. The result of many researchers trying to answer different but related questions is a patchy map of the field of interest, with areas that are well defined, blind spots, and conflicting findings. The lighthouse perspective can be used to look at different areas and study their peculiarities. Areas that were previously researched by boat can be re-examined, which can lead to valuable enlightenment. However, researching the world from a lighthouse comes at the expense of flexibility. Areas on which the chosen perspective does not shed light cannot be explored, because the lighthouse cannot move around a research topic as a boat can. It is thus essential that lighthouse researchers describe their perspective thoroughly and acknowledge that a different perspective (or light) might have brought forward different findings. The plane approach provides an important resource to the research community by generalising and building on or critiquing different people's work. However, the distance from the area of interest results in loss of detail. On the other hand, the oversight one gets from a plane can lead to valuable outlines of the state of the map so far and even to new insights that drive future research agendas.

What this study adds

So, how do these three ways of making theory explicit inform our understanding of theory? Theory is not an automated result of empirical research but emerges from a choice on the part of the researcher [ 26 ]. In the positivist paradigm applied in biomedical research, theory functions as a tool for generating research ideas and predicting outcomes in empirical studies. In contrast, this study has exemplified the use of theory in medical education through the three approaches. Together, we suggest, they provide a basis for understanding the development and use of theory at the mid-range level, although we acknowledge that the boundaries between personal theory, mid-range theory and high-level theory are blurred.

By analysing how theory was approached in the articles, we could see variation in approaches. Firstly, the included articles approached theory at levels ranging from micro-level theory to mid-range theory [ 10 ]. Secondly, we saw a difference in the degree to which the articles worked with theory: at one end, theory was used to better understand a phenomenon, i.e. to generate research questions, methodology and interpretation; at the other, the articles contributed to theory as a result of an inductive process of data analysis. Several of the papers were based on a practical problem and aimed specifically to answer that question, without intentionally also contributing to mid-range theory. Here, theory was often viewed only as findings from previous research. However, there were also examples where the research question was framed in relation to particular theories, and the paper constituted an argument in relation to those theories and, as such, a contribution to a theoretical discussion. Finally, there was a difference in the way theory was introduced in a paper, ranging from a very subtle or implicit introduction of the theoretical stance to very clear and conceptual explanations of the theoretical perspective. The definition of theory referred to in the background [ 8 ] is less helpful when theory is not made explicit. Although it was possible to read between the lines regarding the theoretical stance taken by some authors, it became clear that papers in which theory was made explicit were participating in a scientific dialogue through a specific lens, rather than claiming to have found proof of something in a technical sense. As theory is gaining in importance [ 27 ], and should continue doing so, the way theory is introduced and discussed as part of the research process is vital.

Already in 1968, Habermas emphasized the need for diversity in how theory, or 'knowledge interests' as he labels the different approaches to knowledge and theory, is approached [ 28 ]. Different approaches are necessary and are in play to different degrees in different disciplines or scientific traditions. In medical education, although the field is to a large degree a social science, the aim of establishing objective truths has long dominated the research, something our findings also reflect, as exemplified by studies written in a (post-)positivist tradition. However, several of the papers included in this study challenged this view, and papers reflecting both hermeneutic and emancipatory knowledge interests were also included.

In a contribution by Hodges and Kuper [ 29 ], three kinds of theory are outlined as useful for graduate medical education: bioscience theories, learning theories and socio-cultural theories. The authors emphasize the merits of anchoring medical education problems to theory and elaborate on what the different theoretical stances have to offer, as well as their weaknesses. We believe the current article adds insight into how theories are used; the paper itself is a worked example of how researchers can learn from examining papers from this standpoint.

A number of articles guiding researchers who are new to the field of medical education research already exist. One such article is 'The research compass' [ 30 ], in which readers are guided through four categories of research approach: explorative, experimental, observational and translational studies. A main point made in that paper is that research should address researchable problems that lead to generalisable knowledge and are practically relevant. By asking simple questions and using simple methods, the approaches theory as close-up exploration and theory as specific perspective play a crucial role, both in providing a scholarly approach to the development of teaching and learning (scholarship of teaching and learning) and in providing the basis for the development of theory at a higher level, as when theory is used as overview . Finally, Thomas [ 26 ] argues for the need for more 'bricolage' in educational enquiry, giving room for multiple theoretical approaches in exploring the field of research. This need for multiple perspectives was recently noted to be increasing in medical education [ 27 ], in line with our view that research with theory as a specific perspective is increasing in medical education.

Methodological considerations

Our description of how theory is used in medical education has limitations. The starting point for our analysis was articles nominated by highly regarded researchers representing different paradigmatic stances. Had we instead asked a random group of less experienced researchers, or used a different set of papers, for instance selected on citation indices or as a random sample, the outcomes of this study may have been different. We therefore acknowledge the nomination process as an important aspect of the data generated, as it provided a broader scope in terms of the publications included in the study.

Secondly, uncovering the way researchers use theory in their work based on an analysis of papers may paint a distorted image. Papers are written to convey research findings and their implications, not the way the researchers interacted with theory. Our reading of their use of theory may not match the intentions of the authors of the papers discussed in this study.

Thirdly, as this study aimed to explore variation in how connections to conceptual frameworks are formed and used, we treated each article as a separate representation. It was hence beyond our scope to explore whether any of these approaches was more common than the others.

The continuing criticism of medical education research as a field that lacks a theoretical basis is becoming less justified. As the area is characterised by research carried out by researchers from multiple disciplinary and paradigmatic backgrounds, assumptions about how to treat theory in medical education research will probably remain contentious, depending on the perspective one brings to research. At a minimum, we argue, theory use needs to be made explicit. The current paper shows three different approaches to connecting to theory in medical education research: as close-up exploration, as specific perspective and as overview. Suggestions are given as to how researchers new to the field of medical education can clarify their theoretical basis.

Van Der Vleuten CPM, Dolmans DHJM, Scherpbier AJJA. The need for evidence in education. Med Teach. 2000;22(3):246.

Albert M, Hodges B, Regehr G. Research in medical education: balancing service and science*. Adv Health Sci Educ. 2007;12(1):103–15.

Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ. 2009;43(4):312–9.

Teunissen PW. On the transfer of theory to the practice of research and education. Med Educ. 2010;44(6):534–5.

Ringsted C, Hodges B, Scherpbier A. ‘The research compass’: An introduction to research in medical education: AMEE Guide No. 56. Med Teach. 2011;33(9):695–709.

Cleland J. Exploring versus measuring: Considering the fundamental differences between qualitative and quantitative research. In: Cleland J, Durning S, editors. Researching Medical Education. West Sussex: John Wiley & Sons; 2015. p. 1-19.

Brosnan C. Making sense of differences between medical schools through Bourdieu’s concept of ‘field’. Med Educ. 2010;44(7):645–52.

Reeves S, Albert M, Kuper A, Hodges BD. Why use theories in qualitative research. BMJ. 2008;337(7670):631–4.

Carr W. Education without theory. Br J Educ Stud. 2006;54(2):136–59.

Merton RK. Social theory and social structure. Simon and Schuster; 1968.

Bordage G. Moving the Field Forward: Going Beyond Quantitative–Qualitative*. Acad Med. 2007;82(10):S126–S8.

Handal G, Lauvas P. Promoting reflective teaching: Supervision in practice. Milton Keynes: Society for Research into Higher Education & Open University Press; 1987.

Cianciolo AT, Eva KW, Colliver JA. Theory development and application in medical education. Teach Learn Med. 2013;25(sup1):S75–80.

Chalmers AF. What is this thing called science? St. Lucia: Univ. of Queensland Press; 1999.

Denzin NK, Lincoln YS. The SAGE handbook of qualitative research. Sage; 2011.

Patton MQ. Qualitative research and evaluation methods. London: Sage; 2001.

Ricoeur P. Interpretation Theory: Discourse and the Surplus of Meaning. Fort Worth: Texas Christian University Press; 1976.

Gadamer H-G. Philosophical Hermeneutics. Linge DE, editor. Los Angeles: University of California Press; 1976.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

Guba EG, Lincoln YS. Competing paradigms in qualitative research. Handbook Qual Res. 1994;2:163–94.

Lingard L, Espin S, Whyte S, Regehr G, Baker G, Reznick R, et al. Communication failures in the operating room: an observational classification of recurrent types and effects. Qual Saf Health Care. 2004;13(5):330–4.

Van Zanten M, Boulet JR, McKinley DW. The influence of ethnicity on patient satisfaction in a standardized patient assessment. Acad Med. 2004;79(10):S15–S7.

Kerosuo H, Engeström Y. Boundary crossing and learning in creation of new work practice. J Work Learn. 2003;15(7/8):345–51.

Steinert Y, Mann K, Centeno A, Dolmans D, Spencer J, Gelula M, et al. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME Guide No. 8. Med Teach. 2006;28(6):497–526.

Schmidt H, Norman G, Boshuizen H. A cognitive perspective on medical expertise: theory and implication [published erratum appears in Acad Med 1992 Apr; 67 (4): 287]. Acad Med. 1990;65(10):611–21.

Thomas G. Education and theory: Strangers in paradigms. McGraw-Hill Education (UK); 2007.

Kuper A, Whitehead C. The practicality of theory. Acad Med. 2013;88(11):1594–5.

Colliver JA. Effectiveness of problem‐based learning curricula: research and theory. Acad Med. 2000;75(3):259–66.

Hodges BD, Kuper A. Theory and practice in the design and conduct of graduate medical education. Acad Med. 2012;87(1):25–33.

Boyer EL. The scholarship of teaching from: Scholarship reconsidered: Priorities of the professoriate. Coll Teach. 1991;39(1):11–3.

Acknowledgments

The authors would like to thank the researchers who took time to share the publications that they found influential in medical education research.

This work was not funded by any particular funding organisation. However, the authors' academic institutions provided the time for the study to be carried out.

Availability of data and materials

Data consist of published articles. A list of the articles used as data in this study is available in Additional file 1.

Authors’ contributions

All authors (KBL, TD and PT) participated and made substantial contributions to the conception and design, analysis of the data, and drafting and approval of the manuscript.

Authors’ information

Klara Bolander Laksov is associate professor at Karolinska Institutet, Stockholm, Sweden. Since 2015, she has also been director of the Centre for Advancement of Teaching and Learning at Stockholm University, Sweden.

Tim Dornan is Professor of Medical Education at Queen's University Belfast, UK, and Emeritus Professor at Maastricht University, the Netherlands.

Pim W. Teunissen is associate professor at Maastricht University, School of Health Professions Education (SHE), Faculty of Health, Medicine and Life Sciences, Maastricht, the Netherlands. He is also a gynaecologist at VU University Medical Center, Department of Obstetrics & Gynecology, Amsterdam, the Netherlands.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethical approval and consent to participate

Ethical approval was not required for this study type. As the data in this study consist of published articles, informed consent to participate was not applicable.

Declaration of interest

All authors declare that they have had no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work; and no other relationships or activities that could appear to have influenced the submitted work.

Author information

Authors and Affiliations

Department of Learning, Informatics, Management and Ethics (LIME), Karolinska Institutet, Stockholm, Sweden

Klara Bolander Laksov

Department of Education, Centre for the Advancement of University Teaching, Stockholm University, Stockholm, Sweden

Dentistry and Biomedical Sciences, School of Medicine, Queen’s University Belfast, Belfast, UK

School of Health Professions Education (SHE), Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands

Tim Dornan & Pim W. Teunissen

Department of Obstetrics & Gynecology, VU University Medical Center, Amsterdam, The Netherlands

Pim W. Teunissen

Corresponding author

Correspondence to Klara Bolander Laksov .

Additional file

Additional file 1:

List of publications. The publications that resulted from the nomination process are listed here. (DOC 46 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


Bolander Laksov, K., Dornan, T. & Teunissen, P.W. Making theory explicit - An analysis of how medical education research(ers) describe how they connect to theory. BMC Med Educ 17 , 18 (2017). https://doi.org/10.1186/s12909-016-0848-1

Received : 06 May 2016

Accepted : 15 December 2016

Published : 19 January 2017

DOI : https://doi.org/10.1186/s12909-016-0848-1


  • Medical Education
  • High Level Theory
  • Medical Education Research
  • Specific Perspective
  • Future Research Agenda


Medical Education Research and Scholarship: Theories


Theory, Theoretical Frameworks, and Conceptual Frameworks

MedEdMentor

  • Introduction to medical education theory and AI-driven theory finder
  • Note: This is free while in testing but may require a subscription in the future

Synopses of theories for new MedEd scholars:

  • International Clinician Educators blog
  • Some Learning Theories for Medical Educators 

Theories, theoretical frameworks, and conceptual frameworks:

  • Associated article:  The Distinctions Between Theory, Theoretical Framework, and Conceptual Framework
  • Slide deck:  The Distinctions Between Theory, Theoretical Framework, and Conceptual Framework

Another landmark article with a different view on conceptual frameworks:

  • Conceptual frameworks to illuminate and magnify
  • Open access
  • Published: 22 November 2022

Using a theoretical framework to inform implementation of the patient-centred medical home (PCMH) model in primary care: protocol for a mixed-methods systematic review

  • Deniza Mazevska   ORCID: orcid.org/0000-0003-0710-821X 1 ,
  • Jim Pearse 1 &
  • Stephanie Tierney 2  

Systematic Reviews volume 11, Article number: 249 (2022)


The patient-centred medical home (PCMH) was conceived to address problems that primary care practices around the world are facing, particularly in managing the increasing numbers of patients with multiple chronic diseases. The problems include fragmentation, lack of access and poor coordination. The PCMH is a complex intervention combining high-quality primary care with evidence-based disease management. Becoming a PCMH takes time and resources, and there is a lack of empirically informed guidance for practices. Previous reviews of PCMH implementation have identified barriers and enablers but failed to analyse the complex relationships between factors involved in implementation. Using a theoretical framework can help with this, giving a better understanding of how and why interventions work or do not work. This review will aim to refine an existing theoretical framework for implementing organisational change — the Consolidated Framework for Implementation Research (CFIR) — to apply to the implementation of the PCMH in primary care.

We will use the ‘best-fit’ framework approach to synthesise evidence for implementing the PCMH in primary care. We will analyse evidence from empirical studies against CFIR constructs. Where studies have identified barriers and enablers to implementing the PCMH not represented in the CFIR constructs, we will use thematic analysis to develop additional constructs to refine the CFIR. Searches will be undertaken in MEDLINE (Ovid), Embase (Ovid), Web of Science Core Collection (including Science Citation Index and Social Science Citation Index) and CINAHL. Gaps arising from the database search will be addressed through snowballing, citation tracking and review of reference lists of systematic reviews of the PCMH. We will accept qualitative, quantitative and mixed methods primary research studies published in peer-reviewed publications. A stakeholder group will provide input to the review.

The review will result in a refined theoretical framework that can be used by primary care practices to guide implementation of the PCMH. Narrative accompanying the refined framework will explain how the constructs (existing and added) work together to successfully implement the PCMH in primary care. The unpopulated CFIR constructs will be used to identify where further primary research may be needed.

Systematic review registration

PROSPERO CRD42021235960

Chronic diseases account for a large and growing proportion of disease burden worldwide [ 1 ]. In high-income countries, two out of every five people are estimated to be experiencing two or more chronic diseases concurrently [ 2 ]. This is set to rise as lifespans increase. Primary care is best placed to support people with chronic disease given its focus on generalist, whole-of-person care [ 3 , 4 ]. Nevertheless, in many countries, primary care does not work as well as it should. Fragmentation, lack of access and poor coordination are key issues [ 5 , 6 , 7 ].

The patient-centred medical home (PCMH) model was conceived to address problems with primary care [ 8 ]. It aims to achieve the ‘quadruple aims’ of improved health outcomes for patients, improved experiences of care, reduced healthcare costs and improved provider experience of care delivery [ 8 , 9 , 10 ].

Examples of PCMH models across countries include the Patient Aligned Care Teams (PACT) in the Veterans' Health Administration in the USA, the Patient Medical Home (PMH) in Canada and Health Care Homes in New Zealand. Features these models have in common include a formalised ongoing relationship between a physician and a patient, team-based care, patient-centred care, coordination, prompt access to care, preventative care and continuous quality improvement. These features tend to be enabled by payment reform (that is, use of capitation or bundled payments rather than fee for service) [ 11 , 12 ], regulation (including accreditation) and external facilitation and/or collaboration between practices.

Because of the multifaceted nature of the changes involved, transformation from a traditional primary care practice to a PCMH often takes years [ 13 , 14 ].

Systematic reviews of the PCMH have tended to focus on outcomes of the model for patients and practice staff and not on its implementation. Where reviews have focussed on implementation, they have thematically analysed data from primary studies rather than using a theoretical framework (for example Janamian, Jackson [ 15 ], Pearse and Mazevska [ 16 ], Miller, Weir [ 17 ]). Theoretical frameworks organise explanations of how things work into descriptive categories that represent a specific theory. In implementation science, theoretical frameworks are used to understand how and why interventions work or do not work [ 18 ] and are seen as critical for successfully implementing interventions [ 19 , 20 ]. Analysis of individual barriers and enablers without a theoretical framework fails to recognise the complex relationships between factors involved in implementation and their combined effect on outcomes [ 18 ].

In this review, the Consolidated Framework for Implementation Research (CFIR) will be used to synthesise evidence on implementing the PCMH. The CFIR has been selected as it is specifically designed for healthcare settings. It has also been applied to primary care change initiatives, including the PCMH. For example, Keith and Crosson [ 21 ] used it to evaluate the implementation of the Comprehensive Primary Care initiative, a PCMH model, amongst 21 primary care practices in the USA. The researchers used the tool to identify where adjustments or refinements to the intervention could be made for the participating practices for the next phase of the implementation. The current review will apply the CFIR to the large number and diverse range of PCMH implementation initiatives reported in the literature. A refined theoretical framework will be produced that can be used by primary care practices to guide implementation of the PCMH. The review will also identify gaps in studying the implementation of the PCMH that may be filled by future research.

Methods/design

This protocol is compliant with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) guideline recommended for systematic review protocols [ 22 ]. The PRISMA-P checklist is included in Additional file 1 .

Review design

We will use the ‘best-fit’ framework approach [ 23 , 24 , 25 ] to synthesise evidence for implementing the PCMH in primary care practices. This is an approach to evidence synthesis that allows reviewers to test and refine an existing theory against findings from practice [ 25 ]. It involves selecting an a priori (starting) framework that is the ‘best fit’ for the research question: it need not be perfect, just ‘good enough’ [ 25 ]. The concepts represented by the framework are then used to organise data extracted from the included literature. For data that do not align with the concepts, thematic analysis is used to identify further concepts, which are then added to the framework, modifying it. A final step is to explore the relationships between the concepts in the revised framework.

There are different approaches to selecting the a priori framework for best-fit synthesis, including identifying a single framework that may be the ‘obvious choice’ for the question or topic, or a systematic search for candidate published frameworks and synthesis to achieve a single ‘meta-framework’.

As mentioned above, this review will use the CFIR [ 26 ] as the a priori framework to synthesise evidence on the implementation of the PCMH. With 39 constructs (theoretical concepts), it is the obvious choice for studying the implementation of a complex intervention like the PCMH, and it is also a meta-framework, incorporating 19 theories of implementation. Its constructs are organised into five domains (Table 1 ).

Aims of the review

The review will aim to answer the question: what factors affect the implementation of the PCMH model in primary care practices and how? In answering this question, the review will also identify strategies that primary care practices have used in implementing the PCMH model to overcome barriers and to smooth transition to the model.

The outcome will be a refined theoretical model of implementing organisational change to specifically apply to implementing the PCMH in primary care.

Stakeholder group

We will establish a stakeholder group to provide input to the review, comprising a patient, a general practitioner, a practice manager and a practice facilitator (a role assisting primary care practices in redesigning business and clinical processes). These are roles that are either likely to use the tool arising from the review or be affected by its use. The group will meet four times over the course of the review (Table 2 ).

Eligibility criteria

Types of studies.

The review will include primary research using any study design (qualitative, quantitative or mixed methods), published in peer-reviewed publications (selected because they have undergone external review). Primary research published in grey literature (including evaluation reports), theses/dissertations, or conference abstracts or presentations will be excluded. Our preliminary investigations have shown that evaluations of large-scale PCMH implementations have been published in the peer-reviewed literature in addition to the grey literature. Systematic reviews, editorials, commentary and opinion pieces will also be excluded.

The domain studied is the PCMH model of care. Papers will be accepted if they include a statement that the study is related to a practice’s/health organisation’s/health system’s implementation of a practice-level model that includes the following: a formalised ongoing relationship between a physician and a patient, team-based care, patient-centred care, coordination, prompt access to care, preventative care and continuous quality improvement. All these features need to be present for a study to be included.

Participants

This review will focus on stand-alone primary care practices (that is, not part of a hospital) led by physicians (that is, not nurse practitioners or allied health professionals). It will also focus on primary care practices providing generalist primary care services (that is, not focussing on a specific population or disease/condition). In relation to the latter, studies may report on the implementation of elements of the model for a specific population cohort (for example, older people or people with diabetes), but the implementation should involve practices providing services to a range of patients across disease/condition groups. We will exclude studies involving specialist primary care services, such as paediatric primary care, behavioural/mental health care, addiction services, cancer care, palliative care and geriatrics.

Intervention

Studies will be included if they report on factors affecting the implementation of the PCMH. The study need not be focussed on implementation, but to be included, it must include amongst its findings factors affecting implementation. The topic of study may be one or more features of the PCMH (such as team-based care or preventative care). However, to be included, the feature(s) must be related to a practice’s/health organisation’s/health system’s implementation of the PCMH. Studies reporting on the uptake by a practice(s) of a feature associated with the PCMH, but not in relation to the implementation of the PCMH, will be excluded.

Comparators

There are no comparators for this review.

Outcomes

The primary outcome is factors affecting implementation of the PCMH. Factors are any determinants of implementation of the PCMH model or of one of its features. These will be classified as barriers and enablers.

Search strategy

Searches for this review will be through the following:

Ovid MEDLINE (1946–present)

Ovid Embase (1974–present)

Web of Science Core Collection (including Science Citation Index and Social Science Citation Index) (1900–present)

CINAHL EBSCOHost (1982–present)

The first two have been selected as general healthcare databases, the third as a multidisciplinary database, and the fourth as a specialist database relating to nursing and allied health, where additional studies on implementation from the perspectives of these disciplines might be found.

Search terms will relate to the setting/population (primary care) and the phenomenon of interest (PCMH). Gaps arising from the database search will be addressed through snowballing (following up references to other primary studies mentioned in included studies), citation tracking (tracking subsequently published studies that cite the included studies, using databases such as Google Scholar and Scopus) and review of reference lists of systematic reviews of the PCMH model (relevant reviews will be set aside for this purpose during the title and abstract screening). Also, any relevant study protocols identified during the title and abstract screen will be followed up to check whether the study has been published subsequently.

Where databases have a controlled vocabulary or subject heading list, this will be used, alongside keywords representing the key concepts.

Sample search terms and a search strategy using MEDLINE (Ovid) are in Additional file 2 .

Study selection, data extraction and analysis

EndNote X9 will be used to manage citations returned from the searches, and Covidence will be used for recording the outcomes of the title/abstract screen and full-text review.

One reviewer will screen the titles and abstracts of papers returned from the search, and a second reviewer will independently screen 20% of them. Where there are disagreements, the reviewers will discuss and decide jointly.
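The 20% independent second screen can be operationalised as a reproducible random sample of the screened records. A minimal sketch (the function name, seed and record counts are illustrative, not part of the protocol):

```python
import random

def sample_for_second_review(record_ids, fraction=0.2, seed=2022):
    """Randomly select a fraction of screened records for independent
    second review; a fixed seed keeps the sample reproducible and auditable."""
    k = round(len(record_ids) * fraction)
    rng = random.Random(seed)
    return sorted(rng.sample(list(record_ids), k))

# e.g. 500 records returned by the search -> 100 assigned to the second reviewer
second_review_set = sample_for_second_review(range(1, 501))
len(second_review_set)  # 100
```

Sampling from the full set of screened records (rather than, say, the first 20%) avoids ordering effects from the database export.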

All papers that make it through to the full-text review will be reviewed independently by the two reviewers. Disagreements will again be resolved through discussion.

Depending on the number of studies selected for synthesis, further criteria may be applied to narrow the selection and keep the review manageable. For example, included studies may be rated on the ‘richness’ of the information that they contribute to the review question, with only conceptually rich studies selected for synthesis.

Data extraction and analysis

MAXQDA, a qualitative analysis software package, will be used to code segments of the included papers and enter additional variables related to each study. Data entered/coded will include the following:

Information about the study : For example, author, year, study design, country, setting and participants

Details about the PCMH model : For example, features of the PCMH model, feature(s) that the study is focusing on, number of primary care practices involved, staff types involved and any special populations such as disadvantaged groups, older people or people with specific health conditions

Information related to quality assessment criteria : See below.

CFIR constructs : Factors affecting implementation of the PCMH relating to the constructs in the CFIR, organised into barriers and enablers, and how the constructs work together to achieve implementation and strategies that practices have used to overcome barriers within specific constructs

Other data : Other factors affecting the implementation of the PCMH element or model, not captured by the CFIR constructs. These will be analysed in a subsequent stage, described below.
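As an illustration only, the extraction template above could be represented as a structured record; all field names here are hypothetical stand-ins for the fields actually configured in MAXQDA:

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    """One included study's extraction entries (field names are illustrative)."""
    author: str
    year: int
    study_design: str
    country: str
    setting: str
    participants: str
    pcmh_features: list = field(default_factory=list)   # feature(s) the study focuses on
    n_practices: int = 0                                # number of practices involved
    quality_notes: str = ""                             # information for the MMAT appraisal
    cfir_codes: dict = field(default_factory=dict)      # CFIR construct -> coded segments
    miscellaneous: list = field(default_factory=list)   # data not mapped to the CFIR

# Hypothetical example study
record = ExtractionRecord(author="Smith", year=2020, study_design="qualitative",
                          country="USA", setting="stand-alone primary care practice",
                          participants="GPs and practice staff")
record.cfir_codes.setdefault("inner setting", []).append("leadership support was an enabler")
```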

One reviewer will code the data from the primary studies to the appropriate fields in MAXQDA and enter additional information about each study. The same reviewer will code any data relating to implementation that cannot be mapped to the CFIR constructs into a separate ‘miscellaneous’ category. The second reviewer will progressively review the coding of the first reviewer and suggest alternative or additional ideas for consideration, which will be discussed between the reviewers as the coding develops. The second reviewer will also focus on any unpopulated CFIR constructs to ensure that they have not been missed in the coding process.
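The coding rule described above — map each segment to a CFIR category where possible, otherwise file it under ‘miscellaneous’ for later thematic analysis — can be sketched as follows. The category names stand in for the full CFIR construct list (the CFIR organises its constructs under five domains); everything else is illustrative:

```python
from typing import Optional

# Stand-ins for the CFIR categories; the real codebook would list the
# individual constructs under these five domains.
CFIR_CATEGORIES = {"intervention characteristics", "outer setting", "inner setting",
                   "characteristics of individuals", "process"}

def code_segment(segment: str, category: Optional[str], codebook: dict) -> None:
    """File a coded segment under its CFIR category, or under 'miscellaneous'
    if it cannot be mapped (to be thematically analysed in a later stage)."""
    key = category if category in CFIR_CATEGORIES else "miscellaneous"
    codebook.setdefault(key, []).append(segment)

codebook = {}
code_segment("leadership engagement enabled team-based care", "inner setting", codebook)
code_segment("segment not matching any CFIR category", None, codebook)
```

The ‘miscellaneous’ fallback guarantees that no implementation-related data is dropped simply because it does not fit the a priori framework.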

Thematic analysis will be used to develop additional constructs from the data coded to the ‘miscellaneous’ category, using the approach detailed by Braun and Clarke [ 27 ]. This will be done by one reviewer; however, another reviewer will sense check the work of the first reviewer and raise alternative views for discussion.

The unpopulated CFIR constructs will be used to identify where further primary research may be required and explore the potential for publication bias.

Quality assessment

The mixed-methods appraisal tool (MMAT) [ 28 , 29 ] will be used to assess the methodological quality of the included studies. The MMAT allows appraisal of heterogeneous study designs within a single tool.

The results of the quality assessment will be used to test the synthesis (see ‘Testing the synthesis’). Lower-quality studies will be those without a ‘yes’ response to one of the two screening questions (applied to all study types), or with two or fewer ‘yes’ responses to the five questions related to the study design.
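The lower-quality rule can be expressed directly as a predicate. This is a sketch of the rule as stated in the protocol, not part of the MMAT itself:

```python
def is_lower_quality(screening, design_items):
    """Classify a study as lower quality per the protocol's rule: no 'yes' on
    one of the two MMAT screening questions, or two or fewer 'yes' responses
    on the five design-specific questions. Responses are booleans (True = 'yes')."""
    assert len(screening) == 2 and len(design_items) == 5
    failed_screening = not all(screening)
    few_yes = sum(design_items) <= 2
    return failed_screening or few_yes

is_lower_quality([True, True], [True, True, True, False, False])  # False: three 'yes'
is_lower_quality([True, False], [True, True, True, True, True])   # True: failed screening
```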

Data synthesis

The review will use a data-based convergent synthesis design as per the typology outlined by Hong et al. [ 30 ]. This approach analyses qualitative and quantitative findings together using the same synthesis method (qualitative or quantitative) and presents them as a single set of results. For this review, we are using a qualitative synthesis approach. Hence, quantitative data will be transformed into themes against the CFIR constructs, or against other categories developed through the thematic analysis where the data do not fit the CFIR constructs.

The CFIR will be modified using the additional categories identified during the thematic analysis of the residual data not able to be mapped to the existing CFIR constructs. Narrative of the results of the synthesis will be accompanied by a diagrammatic representation of the revised framework, showing the differences from the CFIR.

Testing the synthesis

Carroll and Booth [ 24 ] suggest that a final step in a best-fit framework synthesis is to test it by comparing the a priori framework with the framework arising from the synthesis. This means explaining any unpopulated constructs from the CFIR as well as explaining the added constructs/refinements. Differences between the two may be due to the differences between the PCMH and implementations of other types of interventions (namely the types on which CFIR has been based) or publication bias.

Further testing of the synthesis will be through removing lower quality studies to see if their omission makes a difference to the new framework.
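This sensitivity check might be sketched as follows, flagging constructs whose supporting evidence comes solely from lower-quality studies (the data structures are hypothetical):

```python
def sensitivity_constructs(studies):
    """Return constructs supported only by lower-quality studies, i.e. those
    that would disappear from the revised framework if lower-quality studies
    were omitted.

    studies: iterable of (study_id, is_lower_quality, populated_constructs).
    """
    all_constructs = set().union(*(c for _, _, c in studies)) if studies else set()
    higher_quality = [s for s in studies if not s[1]]
    hq_constructs = set().union(*(c for _, _, c in higher_quality)) if higher_quality else set()
    return all_constructs - hq_constructs

studies = [
    ("s1", False, {"inner setting"}),             # higher quality
    ("s2", True, {"inner setting", "process"}),   # lower quality
]
sensitivity_constructs(studies)  # {'process'}: supported only by a lower-quality study
```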

Systematic reviews of the effectiveness of the PCMH [ 31 , 32 , 33 , 34 , 35 , 36 ] and individual studies [ 37 , 38 , 39 , 40 , 41 , 42 ] show mixed results. Sinaiko and Landrum [ 36 ] comment as follows:

… Recent work identifies five domains to consider when interpreting findings of practice transformation: the practice setting, the organizational setting, the external environment, the implementation pathway, and the motivation for transformation. Understanding which specific components of the PCMH contribute most to success is critical to determining how to invest resources in primary care transformation .

Essentially, the authors are arguing for a theoretical framework for implementing the PCMH, identifying the critical components for successful implementation and how they work together. The CFIR is a good starting point for identifying constructs that may be important for implementing change in primary care practices. However, the CFIR is a generic tool, and it is necessary to see how it applies specifically to implementing the PCMH model. The ‘best-fit’ framework synthesis will offer an opportunity to test and refine the CFIR, as well as a systematic way to organise the evidence from primary studies.

Any amendments made to this protocol when conducting the review will be outlined in PROSPERO and reported in the final manuscript. Results will be disseminated through publication in a peer-reviewed journal.

Availability of data and materials

Not applicable

Abbreviations

CFIR: Consolidated Framework for Implementation Research

CINAHL: Cumulative Index to Nursing and Allied Health

PCMH: Patient-centred medical home

PRISMA-P: Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols

PROSPERO: International prospective register of systematic reviews

References

Vos T, Lim SS, Abbafati C, Abbas KM, Abbasi M, Abbasifard M, et al. Global burden of 369 diseases and injuries in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet. 2020;396(10258):1204–22.


Nguyen H, Manolova G, Daskalopoulou C, Vitoratou S, Prince M, Prina AM. Prevalence of multimorbidity in community settings: a systematic review and meta-analysis of observational studies. J Comorb. 2019;9:2235042x19870934.


Starfield B. Challenges to primary care from co- and multi-morbidity. Prim Health Care Res Dev. 2011;12(1):1–2.


Starfield B, Shi L, Macinko J. Contribution of primary care to health systems and health. Milbank Q. 2005;83(3):457–502.

Schoen C, Osborn R, Huynh PT, Doty M, Peugh J, Zapert K. On the front lines of care: primary care doctors' office systems, experiences, and views in seven countries. Health Aff (Millwood). 2006;25(6):w555–71.

Schoen C, Osborn R, Squires D, Doty M, Pierson R, Applebaum S. New 2011 survey of patients with complex care needs in eleven countries finds that care is often poorly coordinated. Health Aff (Millwood). 2011;30(12):2437–48.

Ellner AL, Phillips RS. The coming primary care revolution. J Gen Intern Med. 2017;32(4):380–6.

Arend J, Tsang-Quinn J, Levine C, Thomas D. The patient-centered medical home: history, components, and review of the evidence. Mt Sinai J Med. 2012;79(4):433–50.

Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood). 2008;27(3):759–69.

Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12(6):573–6.

Nutting PA, Crabtree BF, Miller WL, Stange KC, Stewart E, Jaen C. Transforming physician practices to patient-centered medical homes: lessons from the national demonstration project. Health Aff. 2011;30(3):439–45.

Nutting PA, Miller WL, Crabtree BF, Jaen CR, Stewart EE, Stange KC. Initial lessons from the first national demonstration project on practice transformation to a patient-centered medical home. Ann Fam Med. 2009;7(3):254–60.

Sugarman JR, Phillips KE, Wagner EH, Coleman K, Abrams MK. The safety net medical home initiative: transforming care for vulnerable populations. Med Care. 2014;52(Supplement 4):S1–S10.

Quigley DD, Predmore ZS, Chen AY, Hays RD. Implementation and sequencing of practice transformation in urban practices with underserved patients. Qual Manag Health Care. 2017;26(1):7–14.

Janamian T, Jackson CL, Glasson N, Nicholson C. A systematic review of the challenges to implementation of the patient-centred medical home: lessons for Australia. Med J Austr. 2014;201(3 Suppl):S69–73.


Pearse J, Mazevska D. The patient centered medical home: barriers and enablers: an evidence check rapid review brokered by the Sax Institute for COORDINARE. Sydney: Sax Institute; 2018.

Miller R, Weir C, Gulati S. Transforming primary care: scoping review of research and practice. J Integr Care (Brighton). 2018;26(3):176–88.

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.

Sales A, Smith J, Curran G, Kochevar L. Models, strategies, and tools. Theory in implementing evidence-based findings into health care practice. J Gen Intern Med. 2006;21 Suppl 2(Suppl 2):S43–S9.


van Achterberg T, Schoonhoven L, Grol R. Nursing implementation science: how evidence-based nursing requires evidence-based implementation. J Nurs Scholarsh. 2008;40(4):302–10.

Keith RE, Crosson JC, O'Malley AS, Cromp D, Taylor EF. Using the Consolidated Framework for Implementation Research (CFIR) to produce actionable findings: a rapid-cycle evaluation approach to improving implementation. Implement Sci. 2017;12(1):15.

Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.

Carroll C, Booth A, Cooper K. A worked example of "best fit" framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11(1):29.

Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13(1):37.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Qual Saf. 2015;24(11):700.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

Hong QN, Gonzalez-Reyes A, Pluye P. Improving the usefulness of a tool for appraising the quality of qualitative, quantitative and mixed methods studies, the Mixed Methods Appraisal Tool (MMAT). J Eval Clin Pract. 2018;24(3):459–67.

Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, et al. Improving the content validity of the mixed methods appraisal tool: a modified e-Delphi study. J Clin Epidemiol. 2019;111:49–59.e1.

Hong QN, Pluye P, Bujold M, Wassef M. Convergent and sequential synthesis designs: implications for conducting and reporting systematic reviews of qualitative and quantitative evidence. Syst Rev. 2017;6(1):61.

John JR, Jani H, Peters K, Agho K, Tannous WK. The effectiveness of patient-centred medical home-based models of care versus standard primary care in chronic disease management: a systematic review and meta-analysis of randomised and non-randomised controlled trials. Int J Environ Res Public Health. 2020;17(18):6886.


Peikes D, Zutshi A, Genevro JL, Parchman ML, Meyers DS. Early evaluations of the medical home: building on a promising start. Am J Manag Care. 2012;18(2):105–16.

Williams JW, Jackson GL, Powers BJ, Chatterjee R, Bettger JP, Kemper AR, et al. Closing the quality gap: revisiting the state of the science (vol. 2: the patient-centered medical home). Evid Rep Technol Assess (Full Rep). 2012;(208.2):1–210.

Jackson GL, Powers BJ, Chatterjee R, Bettger JP, Kemper AR, Hasselblad V, et al. Improving patient care. The patient centered medical home. A systematic review. Ann Intern Med. 2013;158(3):169–78.

O'Loughlin M, Mills J, McDermott R, Harriss L. Review of patient-reported experience within patient-centered medical homes: insights for Australian Health Care Homes. Aust J Prim Health. 2017;23(5):429–39.

Sinaiko AD, Landrum MB, Meyers DJ, Alidina S, Maeng DD, Friedberg MW, et al. Synthesis of research on patient-centered medical homes brings systematic differences into relief. Health Aff (Millwood). 2017;36(3):500–8.

Fishman PA, Johnson EA, Coleman K, Larson EB, Hsu C, Ross TR, et al. Impact on seniors of the patient-centered medical home: evidence from a pilot study. Gerontologist. 2012;52(5):703–11.

Mosquera RA, Avritscher EB, Samuels CL, Harris TS, Pedroza C, Evans P, et al. Effect of an enhanced medical home on serious illness and cost of care among high-risk children with chronic illness: a randomized clinical trial. JAMA. 2014;312(24):2640–8.


Friedberg MW, Rosenthal MB, Werner RM, Volpp KG, Schneider EC. Effects of a Medical home and shared savings intervention on quality and utilization of care. JAMA Intern Med. 2015;175(8):1362–8.

Rosenthal MB, Sinaiko AD, Eastman D, Chapman B, Partridge G. Impact of the Rochester Medical Home Initiative on primary care practices, quality, utilization, and costs. Med Care. 2015;53(11):967–73.

Fifield J, Forrest DD, Martin-Peele M, Burleson JA, Goyzueta J, Fujimoto M, et al. A randomized, controlled trial of implementing the patient-centered medical home model in solo and small practices. J Gen Intern Med. 2013;28(6):770–7.

Reddy A, Gunnink E, Taylor L, Wong E, Batten AJ, Fihn SD, et al. Association of high-cost health care utilization with longitudinal changes in patient-centered medical home implementation. JAMA Netw Open. 2020;3(2):e1920500.


Acknowledgements

Nia Wyn Roberts, Outreach Librarian, Bodleian Health Care Libraries, University of Oxford, reviewed and contributed to the search strategy that will be used for this review.

The authors did not receive funding for this work.

Author information

Authors and Affiliations

Health Policy Analysis, PO Box 403, St Leonards, NSW, 1590, Australia

Deniza Mazevska & Jim Pearse

Radcliffe Primary Care Building, Radcliffe Observatory Quarter, University of Oxford, Woodstock Road, Oxford, OX2 6GG, UK

Stephanie Tierney


Contributions

DM conceived the study, and ST provided ideas to focus it. DM wrote the first draft of the protocol, and ST and JP contributed to subsequent drafts and read and approved the final manuscript. DM set up the search strategy and undertook the preliminary searches. JP tested and refined the eligibility criteria.

Corresponding author

Correspondence to Deniza Mazevska .

Ethics declarations

Ethics approval and consent to participate; consent for publication; competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

PRISMA-P 2015 Checklist

Additional file 2.

Ovid Medline search terms (1)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Mazevska, D., Pearse, J. & Tierney, S. Using a theoretical framework to inform implementation of the patient-centred medical home (PCMH) model in primary care: protocol for a mixed-methods systematic review. Syst Rev 11 , 249 (2022). https://doi.org/10.1186/s13643-022-02132-x


Received : 14 May 2021

Accepted : 08 November 2022

Published : 22 November 2022

DOI : https://doi.org/10.1186/s13643-022-02132-x


Keywords

  • Implementation
  • ‘Best-fit’ framework synthesis

Systematic Reviews

ISSN: 2046-4053



Theoretical Frameworks in Medical Education: Using a Systematic Review of Ophthalmology Education Research to Create a Theory of Change Model

Affiliations

  • 1 is a Medical Student, Warren Alpert Medical School, Brown University.
  • 2 is a Research Librarian, University of Maryland School of Pharmacy and University of Maryland Health and Human Services Library.
  • 3 is a Professor of Ophthalmology and Public Health Sciences, Penn State College of Medicine.
  • 4 is Deputy Chief Academic Affiliations Officer, Office of Academic Affiliations, United States Department of Veterans Affairs, and Professor of Surgery (Ophthalmology), Warren Alpert Medical School, Brown University.
  • PMID: 36274766
  • PMCID: PMC9580314
  • DOI: 10.4300/JGME-D-22-00115.1

Background: Theoretical frameworks provide a lens to examine questions and interpret results; however, they are underutilized in medical education.

Objective: To systematically evaluate the use of theoretical frameworks in ophthalmic medical education and present a theory of change model to guide educational initiatives.

Methods: Six electronic databases were searched for peer-reviewed, English-language studies published between 2016 and 2021 on ophthalmic educational initiatives employing a theoretical framework. Quality of studies was assessed using the Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) approach; risk of bias was evaluated using the Medical Education Research Study Quality Instrument (MERSQI) and the Accreditation Council for Graduate Medical Education (ACGME) guidelines for evaluation of assessment methods. Abstracted components of the included studies were used to develop a theory of change model.

Results: The literature search yielded 1661 studies: 666 were duplicates, 834 studies were excluded after abstract review, and 132 after full-text review; 29 studies (19.2%) employing a theoretical framework were included. The theories used most frequently were the Dreyfus model of skill acquisition and Messick's contemporary validity framework. GRADE ratings were predominantly "low," the average MERSQI score was 10.04, and the ACGME recommendation for all assessment development studies was the lowest recommendation. The theory of change model outlined how educators can select, apply, and evaluate theory-based interventions.

Conclusions: Few ophthalmic medical education studies employed a theoretical framework; their overall rigor was low as assessed by GRADE, MERSQI, and ACGME guidelines. A theory of change model can guide integration of theoretical frameworks into educational initiatives.

Publication types

  • Systematic Review

MeSH terms

  • Education, Medical*
  • Education, Medical, Graduate
  • Internship and Residency*
  • Ophthalmology*

J Grad Med Educ. 2022 Oct;14(5)


Sophia L. Song

Sophia L. Song, ScB, is a Medical Student, Warren Alpert Medical School, Brown University

Zane Z. Yu, AB, is a Medical Student, Warren Alpert Medical School, Brown University

Laura Pavlech

Laura Pavlech, DVM, MSLS, is a Research Librarian, University of Maryland School of Pharmacy and University of Maryland Health and Human Services Library

Ingrid U. Scott

Ingrid U. Scott, MD, MPH, is a Professor of Ophthalmology and Public Health Sciences, Penn State College of Medicine

Paul B. Greenberg

Paul B. Greenberg, MD, MPH, is Deputy Chief Academic Affiliations Officer, Office of Academic Affiliations, United States Department of Veterans Affairs, and Professor of Surgery (Ophthalmology), Warren Alpert Medical School, Brown University


Introduction

A theory is a set of logically related propositions that describe relationships among concepts and help explain phenomena. 1 In medical education, theories serve as the basis of theoretical frameworks that provide a lens to explore questions, design initiatives, evaluate outcomes, measure impact, and disseminate findings. 2 Studies grounded in theory guide best practices and may serve as “clarification” studies that evoke depth of understanding and propel the field forward. 2 , 3 For example, the Shannon and Weaver Model of Communication has been used to analyze opportunities for error in clinician handoffs, 4 and Ericsson's deliberate practice theory has been used to design a simulation course to teach advanced life support skills. 5

However, theoretical frameworks are underutilized in medical education research. 3 , 6 Many educational initiatives, especially within subspecialty medical education, continue to be developed based on the traditional teacher-apprentice model. 2 , 7 Lack of theory-based educational initiatives can preclude meaningful interpretation of study methods and results, as theories ground new scholarly work within current literature, allow application of findings to other settings, and provide a framework for adaptation of existing theories or development of new theories. 3 , 6 Additionally, there is a dearth of studies on the prevalence of theoretical framework usage in subspecialty medical education. 8

This article has 2 purposes: to systematically review the role of theoretical frameworks in subspecialty medical education, using ophthalmology as an example, and to use the findings to construct a theory of change model 9 for guiding the development of theory-based graduate medical education curricula. Our primary questions are: What is the prevalence of theoretical framework use in ophthalmic medical education, and how can educators best integrate theory-based educational initiatives? Our findings may benefit educators by highlighting the state of theoretical framework use in subspecialty medical education and by extending these findings into a theory of change model to encourage the more widespread use of theoretical frameworks.

Search Strategy

A research librarian (L.P.) was consulted to develop a comprehensive search strategy. Following updated Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines 10 on the conduct and reporting of systematic reviews, we searched 6 online databases (PubMed, Embase, Web of Science, CINAHL, PsycINFO, and ERIC) for articles published between January 1, 2016 and January 16, 2021 ( Figure 1 ). We selected a 5-year period prior to the writing of this review to capture current practices in medical education. Our searches included database-specific thesaurus terms, such as medical subject headings (MeSH) and Emtree, as well as keywords relevant to ophthalmic education and theoretical frameworks (online supplementary data).

Figure 1. PRISMA Flow Diagram

Selection Criteria

Eligibility criteria included peer-reviewed, English-language studies discussing educational initiatives in an ophthalmology setting that employed a theoretical framework at the onset of the initiative. We used the definition of theoretical framework by Varpio et al: “a logically developed and connected set of concepts and premises—developed from one or more theories—that a researcher creates to scaffold a study.” 1 Educational initiatives included development of curricula, learning interventions, training strategies, and evaluation methods (eg, rubrics). We also included studies that referenced initiatives informed by a theoretical framework and studies that assessed learners with clinical evaluation methods, such as rubrics, employing a theoretical framework. We excluded reviews, studies that were not explicitly informed a priori by a theoretical framework, and studies that focused on populations other than medical students, ophthalmology trainees, or ophthalmologists. We also excluded studies that employed best practice models without describing a theoretical framework.

Eligible studies were de-duplicated in EndNote (Clarivate Analytics, Philadelphia, PA) using the method by Bramer et al 11 and imported into the systematic review software Covidence (Melbourne, Victoria, Australia) for screening, full-text review, and data extraction. Two reviewers (S.L.S., Z.Z.Y.) conducted abstract screening and full-text review independently and in duplicate, with disagreements arbitrated by the senior author (P.B.G.).

Data Extraction

A data extraction template developed in Covidence was used to extract relevant information, including year of publication, location, study design, characteristics of study participants, sample size, educational initiatives, theoretical frameworks, underlying theories, outcomes, and results. Data extraction was completed independently and in duplicate by 2 reviewers (S.L.S., Z.Z.Y.), with disagreements arbitrated by the senior author (P.B.G.).

Quality Assessment

The Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) guidelines 12 were used to evaluate the overall quality of the studies. The GRADE approach scores quality of evidence based on risk of bias, inconsistency, indirectness, imprecision, and publication bias; studies can be upgraded by demonstrating large effects, plausible confounding, and dose response gradients. The GRADEPro Guideline Development Tool (Evidence Prime, Ontario, Canada) was used to create a GRADE evidence profile for outcomes.

Comprehensive risk of bias (methodological quality) for experimental, quasi-experimental, and observational studies was measured using the Medical Education Research Study Quality Instrument (MERSQI). 13 MERSQI scores medical education studies on 10 questions across 6 domains for a maximum of 18 points.
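The MERSQI structure described above (10 items across 6 domains, 18-point maximum) can be sketched as a simple scoring function. The per-domain 3-point cap and the example item values below are illustrative assumptions, not the official rubric weights.

```python
# Minimal sketch of MERSQI-style scoring: 6 domains, each capped at 3 points,
# for a maximum total of 18. Values are illustrative, not the official rubric.
DOMAIN_MAX = 3.0

def mersqi_total(domain_scores):
    """Sum per-domain scores, capping each of the 6 domains at 3 points."""
    if len(domain_scores) != 6:
        raise ValueError("MERSQI has 6 domains")
    return sum(min(s, DOMAIN_MAX) for s in domain_scores)

# Hypothetical example: a single-group study with self-reported outcomes.
print(mersqi_total([1.5, 1.5, 3.0, 1.0, 2.0, 1.5]))  # 10.5
```

A score near this example would sit close to the 10.04 average reported in the Results.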

Comprehensive risk of bias (methodological quality) for studies that developed clinical assessment methods (eg, rubrics) was determined using guidelines developed by the Accreditation Council for Graduate Medical Education (ACGME). 14 Unlike the GRADE standards, which evaluate the overall quality of studies based on outcomes, the ACGME guidelines are the only published method to date that evaluates the quality of clinical assessment methods. 15 Studies are assigned a letter grade ranging from A to C on 6 domains (reliability, validity, ease of use, resources required, ease of interpretation, and educational impact), an overall level of evidence, and an overall recommendation for uptake into a program's evaluation system. All components of quality assessment and risk of bias analysis were completed independently and in duplicate by 2 co-authors (S.L.S., Z.Z.Y.), with disagreements arbitrated by the senior author (P.B.G.). This study adhered to the tenets of the Declaration of Helsinki.

Developing a Theory of Change Model

Theory of change models are commonly used in large-scale projects 16 , 17 to delineate the steps and interventions needed to achieve a set of long-term outcomes by backwards mapping the required preconditions, assumptions, rationale, and interventions necessary to achieve these outcomes. We used our findings to construct a theory of change model 9 depicting the steps and resources required for an educational system to develop theory-based initiatives.
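The backwards-mapping step described above can be pictured as walking a graph from a long-term outcome back through its preconditions. The node names below are hypothetical, invented for illustration; they are not taken from Figure 2.

```python
# Illustrative sketch of "backwards mapping" in a theory of change model:
# start from a long-term outcome and traverse back through its preconditions.
# All node names are hypothetical examples.
preconditions = {
    "improved exam performance": ["theory-based intervention delivered"],
    "theory-based intervention delivered": ["framework selected",
                                            "protected time allocated"],
    "framework selected": ["educators trained in educational theory"],
}

def backwards_map(outcome, graph):
    """Return the outcome followed by every precondition reachable from it."""
    ordered, stack = [], [outcome]
    while stack:
        node = stack.pop()
        if node not in ordered:
            ordered.append(node)
            stack.extend(graph.get(node, []))
    return ordered

print(backwards_map("improved exam performance", preconditions))
```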

Study Selection

A total of 1661 results were identified: 700 from PubMed and 961 from the other electronic databases ( Figure 1 ). After excluding 666 duplicates, 995 potential studies were identified; 834 were excluded following title and abstract screening. We reviewed 161 articles in full. We excluded 10 articles that were not studies or were not ophthalmology specific. Of the remaining 151 articles discussing educational initiatives in ophthalmology research, 29 (19.2%) were explicitly informed by a theoretical framework and made up the final analytic sample.
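The flow counts above are internally consistent, as a quick arithmetic check shows:

```python
# Sanity check of the PRISMA flow counts reported in the text.
identified = 700 + 961          # PubMed + other electronic databases
assert identified == 1661

after_dedup = identified - 666  # duplicates excluded
assert after_dedup == 995

full_text = after_dedup - 834   # excluded at title/abstract screening
assert full_text == 161

analytic_pool = full_text - 10  # not studies, or not ophthalmology specific
included = 29                   # explicitly informed by a theoretical framework
print(round(100 * included / analytic_pool, 1))  # 19.2
```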

Study Quality

According to the GRADE approach for rating certainty of outcomes, 10 outcomes were rated as “very low” certainty, 7 were rated as “low” certainty, 1 as “moderate” certainty, and 1 as “high” certainty; this is consistent with reported ratings for non-randomized studies. 12 The online supplementary data contain a GRADE evidence table for the 7 most important outcomes, rated by 3 authors (S.L.S., Z.Z.Y., P.B.G.) using the GRADE guidelines.

The average MERSQI score for all applicable studies was 10.04 out of 18 points; by comparison, recently published mean MERSQI scores ranged from 9.05 to 12 in other surgical subspecialties. 18 , 19 The online supplementary data list MERSQI scores for each applicable study.

For studies that developed clinical assessment methods, overall ACGME guideline scores were mixed for reliability, relatively high for validity, high for ease of use, very high for resources, relatively high for ease of interpretation, and unclear for educational impact. In the absence of large-scale studies or randomized trials, the overall recommendation for all applicable studies was judged as “Class 3” (provisional usage as a component of a program's evaluation system), the lowest rating. These scores are consistent with other reviews of clinical skill assessment methods. 15 , 20 The online supplementary data list ACGME guideline ratings for clinical assessment development studies.

Characteristics of Included Studies and Interventions

The most common study types were prospective cohort 21 - 29 and cross-sectional. 30 - 36 Studies were most commonly conducted in the United States, 27 , 37 - 42 Denmark, 23 , 28 , 36 , 43 and the United Kingdom. 25 , 26 , 44 , 45 Only 7 studies 22 , 26 , 31 , 32 , 35 , 44 , 46 had sample sizes over 50 participants (range 52-311). Ten studies 22 , 29 , 34 , 36 , 38 , 44 , 45 , 47 - 49 included attending ophthalmologists, 9 studies 21 , 27 , 30 , 33 , 37 , 39 - 42 included residents, 6 studies 24 , 26 , 31 , 32 , 35 , 46 included medical students, and 4 studies 23 , 25 , 28 , 43 included a mixed selection of participants. Seven educational intervention studies were conducted in a hospital/clinic, 21 , 22 , 29 , 32 , 37 , 39 , 40 6 in an in-person classroom, 24 , 30 , 31 , 35 , 41 , 46 9 in simulation centers, 23 , 25 , 28 , 33 , 36 , 40 , 42 , 44 , 45 1 in a virtual classroom, 26 and 1 at an academic conference. 34 Table 1 contains characteristics of the included studies.

Characteristics of Included Studies

Abbreviations: SDL, self-directed learning; PBL, problem-based learning; ENT, otolaryngology; ICO-OSCAR, International Council of Ophthalmology–Ophthalmology Surgical Competency Assessment Rubric; ICO-OSCAR: Phaco, International Council of Ophthalmology–Ophthalmology Surgical Competency Assessment Rubric: Phacoemulsification; VARK, visual, aural, read/write, kinesthetic; OR, operating room; CITC, capsulorhexis intensive training curriculum; OCEX, Ophthalmic Clinical Evaluation Exercise.

Theories and Theoretical Frameworks

Studies used a variety of theoretical frameworks ( Table 1 ). The most commonly used theoretical frameworks were the Dreyfus model of skill acquisition, 22 , 27 , 29 , 33 , 38 , 44 , 45 , 47 - 49 Messick's contemporary validity framework, 23 , 28 , 36 , 43 and Bloom's taxonomy of cognitive abilities. 24 , 31 , 34 , 46

Outcomes and Results

Table 2 describes outcomes and study results. Most studies measured components of surgical performance and skill, such as intraoperative complications, 21 , 22 , 29 , 37 , 40 surgery performance, 22 , 27 - 29 , 36 aesthetic grade, 42 surgery completion, 22 and surgical efficiency. 39 , 42 Several studies investigated learning outcomes, such as examination performance, 24 , 31 , 43 learning readiness, 35 learning style, 30 and learning barriers. 33 Nine studies examined components of validity. 23 , 28 , 38 , 43 - 45 , 47 - 49 Studies also assessed subjective participant evaluation of the initiative; 8 studies 24 , 26 , 27 , 32 , 34 , 41 , 42 , 46 surveyed participants, and 1 study surveyed surgical teams. 25

Outcome Measures and Study Results

Abbreviations: PBL, problem-based learning; SDL, self-directed learning; ENT, otolaryngology; ICO-OSCAR: VIT, International Council of Ophthalmology–Ophthalmology Surgical Competency Assessment Rubric: Vitrectomy; VARK, visual, aural, read/write, kinesthetic; CCC, continuous curvilinear capsulorhexis; CITC, capsulorhexis intensive training curriculum; NOTSS, Non-technical Skills for Surgeons; NOTECHS, Non-technical skills; ANTS, Anesthetists' Non-Technical Skills; OTAS, Observational Teamwork Assessment for Surgery; ICO-OSCAR: Phaco, International Council of Ophthalmology–Ophthalmology Surgical Competency Assessment Rubric: Phacoemulsification.

Theory of Change Model

Our theory of change model ( Figure 2 ) aimed to provide a framework to guide educators in developing, implementing, and evaluating theory-based educational interventions. We abstracted key components of ophthalmic educational initiatives based on theoretical frameworks. Additionally, we analyzed studies that described how theoretical framework usage informed study design to reveal preconditions for designing theory-based initiatives. Given the relative dearth of studies that transparently reported theoretical framework usage, we also referenced literature on theory of change models and the Center for Theory of Change's guidelines 50 to further inform the development of our model.

Figure 2: Theory of Change Model

Assumptions that must hold true for developing theory-based interventions successfully include flexibility of the curriculum to accommodate change, educators' willingness to learn about and employ theoretical frameworks, participants' willingness to trial curricular interventions, and administrators' willingness and ability to support educators and participants. Resources include educators, participants, administrators, material resources, educational resources, and data collection systems.

In our hypothetical example of an ophthalmology residency curriculum initiative, an area for curriculum improvement or change must first be identified by analyzing performance trends and summative or formative evaluations or conducting a needs assessment. For example, if 40% of first-year residents in an ophthalmology program scored poorly on their national training examination, educators may be asked to develop an educational intervention to rectify the low scores.

Prior to developing the intervention, educators may undergo training in educational theory to better select and apply theoretical frameworks. Administrators may set aside protected time for learning and make funding available to provide educators with learning resources such as webinars and reading lists. Educators are then better equipped to conduct a literature review and select appropriate theories to inform their intervention. Educators may, for example, select Vygotsky's collaborative learning theory, 51 which suggests that peer-to-peer learning fosters deeper thinking. With administrative support, they may review available resources and plan to dedicate 30 minutes at the end of weekly didactics for resident-led examination practice. After each session, residents may be invited to fill out evaluations on their satisfaction with the initiative within 72 hours.

Intermediate outcomes include satisfaction with the initiative, improved standardized evaluation metrics (eg, proportion of residents who score well on the national examination), and increased number of learning or graduation competencies fulfilled. Long-term outcomes for learners include an improved knowledge base and better performance as residents and practicing physicians. 52 Long-term outcomes for educators include increased use of conceptual frameworks in educational initiatives, which may translate to increased scholarly output and funding. 3 Ultimately, achieving these outcomes will support the goal of increasing theory-based educational interventions throughout an educational system.

Finally, the initiative development process is iterative, and performance data may be routinely reviewed to inform future modifications. For example, residents may prefer more timed examination simulations; educators may then reexamine the initiative using another theoretical framework. 53

The primary purpose of this systematic review was to evaluate theoretical frameworks in subspecialty medical education, using ophthalmology as an example. We found that less than 20% of ophthalmic medical education studies published between 2016 and 2021 were informed by a theoretical framework. When included studies used frameworks, they often named the theory without describing how it framed the research question, informed the methods, or elucidated the results. 6 Several studies incorporated previously designed theory-based courses or evaluation methods into their medical education initiative but did not further describe the theoretical framework.

Few studies have investigated the prevalence of conceptual frameworks in medicine and surgery. Schwartz et al reviewed the use of conceptual frameworks in the study of duty hours regulations for residents and found that several made contradictory predictions. 54 Davis et al reviewed the conceptual underpinnings of pediatrics quality-of-life instruments and found that only 7.9% (3 of 38) were based in theory. 55

Our findings are consistent with other studies investigating use of theoretical frameworks in medical education. A review by Bajpai et al on the use of learning theories in digital health professions education reported that 33.4% (81 of 242) were informed by theory. 56 Similarly, a review by Hauer et al on behavior change curricula for medical trainees demonstrated that 35.7% (39 of 109) used a theoretical framework. 57 In addition, a review by van Gaalen et al on gamification in health professions education found that only 15.9% (7 of 44) of studies employed a theoretical framework, 58 and a review by Leslie et al on faculty development programs in medical education found that only 18.2% (4 of 22) of studies employed a theoretical framework. 59 Of note, some studies employed different definitions of a theory-based approach, 56 , 58 and others did not define theory or theoretical framework. 57 , 59 These discrepancies obscure accurate prevalence data and highlight the need to adhere to a standardized set of definitions. 1 , 2 , 53 , 60 - 62

The secondary purpose of this systematic review was to use our findings to construct a theory of change model to guide educators in creating theory-based initiatives. Given the complexity and heterogeneity of medical education systems across institutions, theory of change models are excellent tools to map large-scale initiatives, especially those with multiple outcomes. 50 Our theory of change model illustrated the comprehensive process of selecting, integrating, and evaluating theory-based interventions, including the resources required and the underlying assumptions.

There are several limitations of this study. Our definition of a theoretical framework may differ from that of other studies; there is a need for researchers to adopt standardized definitions of the following terms: theory, theoretical framework, and conceptual framework. 1 We used the definitions by Varpio et al, as they provide the most current model of these terms, informed by literature review, for health professions education research. 1 We also limited our review to studies published between 2016 and 2021 in order to focus our search on current work in ophthalmic medical education 6 ; however, this may have masked trends over time. Due to the heterogeneity in study initiatives and outcomes, we were unable to evaluate the efficacy of theoretical framework use and its effects on learner performance. Additionally, given the relatively poor overall quality of included studies and the heterogeneity in reporting use of theoretical frameworks, we were unable to assess the impact of theoretical frameworks on ophthalmic medical education. Moreover, it is possible that effective theoretical frameworks employed in certain study settings (eg, a classroom) may not translate to real-world practical settings (eg, an operating room).

We were also unable to fully determine applicability of many domains of the ACGME guidelines due to ambiguity in their wording; however, reviewers remained consistent in their application of this tool. Other studies have reported similar challenges in evaluating clinical assessment methods, such as evaluation tools for surgical skills, using the ACGME guidelines. 15 , 20 Further investigation is needed into the efficacy of assessment and evaluation tools for surgical subspecialties. Finally, it is possible that some authors used theoretical frameworks without reporting them or without being consciously aware of using them 53 ; for a study to be included, authors must have reported usage of a theoretical framework or employed a named intervention or methodology based in theory.

Educators interested in designing curricular interventions or longitudinal programs based on theoretical frameworks may benefit from examining questions and results through several “lenses” of theoretical frameworks and using standardized evaluation and assessment systems. In addition, medical educators may consider testing interventions in more than one study setting or institution. Future studies can be improved by transparently reporting theoretical frameworks, including the rationale for selecting a particular framework and how it informed study design and setting.

In summary, theoretical frameworks are underutilized in ophthalmic medical education research, and many studies that employ them do not do so transparently; in the few studies that integrated a theoretical framework, overall study rigor was low as assessed by GRADE, MERSQI, and ACGME guidelines. A theory of change model may guide educators in selecting, applying, and evaluating theory-based initiatives.

Supplementary Material

Health Care Coordination Theoretical Frameworks: a Systematic Scoping Review to Increase Their Understanding and Use in Practice

  • Review Paper
  • Published: 16 May 2019
  • Volume 34, pages 90–98 (2019)

  • Kim Peterson MS 1 ,
  • Johanna Anderson MPH 1 ,
  • Donald Bourne MPH 1 ,
  • Martin P. Charns DBA 2 , 3 ,
  • Sherri Sheinfeld Gorin PhD 4 , 5 ,
  • Denise M. Hynes MPH, PhD, RN 6 , 7 ,
  • Kathryn M. McDonald MM, PhD 8 ,
  • Sara J. Singer MBA, PhD 8 , 9 &
  • Elizabeth M. Yano PhD, MSPH 10 , 11  

19k Accesses

42 Citations

11 Altmetric

Care coordination is crucial to avoid potential risks of care fragmentation in people with complex care needs. While there are many empirical and conceptual approaches to measuring and improving care coordination, use of theory is limited by its complexity and the wide variability of available frameworks. We systematically identified and categorized existing care coordination theoretical frameworks in new ways to make the theory-to-practice link more accessible.

To identify relevant frameworks, we searched MEDLINE®, Cochrane, CINAHL, PsycINFO, and SocINDEX from 2010 to May 2018, and various other nonbibliographic sources. We summarized framework characteristics and organized them using categories from the Sustainable intEgrated chronic care modeLs for multi-morbidity: delivery, FInancing, and performancE (SELFIE) framework. Based on expert input, we then categorized available frameworks on consideration of whether they addressed contextual factors, what locus they addressed, and their design elements. We used predefined criteria for study selection and data abstraction.

Among 4389 citations, we identified 37 widely diverse frameworks, including 16 recent frameworks unidentified by previous reviews. Few led to development of measures (39%) or initiatives (6%). We identified 5 that are most relevant to primary care. The 2018 framework by Weaver et al., describing relationships between a wide range of primary care-specific domains, may be the most useful to those investigating the effectiveness of primary care coordination approaches. We also identified 3 frameworks focused on locus and design features of implementation that could prove especially useful to those responsible for implementing care coordination.

This review identified the most comprehensive frameworks and their main emphases for several general practice-relevant applications. Greater application of these frameworks in the design and evaluation of coordination approaches may increase their consistent implementation and measurement. Future research should emphasize implementation-focused frameworks that better identify factors and mechanisms through which an initiative achieves impact.

Clinical care of complex patients often requires input from multiple providers across a variety of clinical disciplines and social services. 1 , 2 Lack of deliberate organization, cooperation, and information-sharing among patients and providers can lead to fragmented care, which can jeopardize the effectiveness, safety, and efficiency of health care delivery. 2 , 3 Improving care coordination for clinically complex patients could potentially improve their health care quality. A wide range of complex multicomponent care coordination initiatives have been developed, often featuring forms of case management and enhanced multidisciplinary teamwork. 4 However, their effects on clinically relevant patient outcomes have been mixed. 4 Some experts have suggested that the suboptimal outcomes of some care coordination initiatives may stem from their not being grounded in theoretical frameworks that outline the broad range of factors that may influence care coordination, their mechanisms, and how to know whether care coordination is working.

Many theoretical frameworks exist to provide guidance in improving, implementing, and evaluating care coordination. However, framework use is currently limited by their complexity and the wide variability in their focus. 5 Theoretical frameworks provide valuable resources for understanding the pathways to effective care coordination. 6 Coordination initiatives that lack such theoretical grounding risk wasted resources and insufficient changes to care coordination processes. 3

Previous reviews of care coordination theoretical frameworks have detailed key coordination concepts and elements. 5 However, they provide insufficient information to enable users to understand their focus and to identify which frameworks are most relevant in different settings. As a result, potential users are faced with a dizzying array of options without clear guidance on how to advance their aims. To select the most relevant frameworks, helpful information may include knowing whether a framework addresses contextual factors (i.e., external, immutable), an initiative’s locus (e.g., setting, level, purpose), or elements of its design (e.g., personal, relationship-oriented, technical means of coordination). Contextual factors may impact the adoption, implementation, and effectiveness of an initiative, and as such deserve consideration by program developers, implementers, or managers when designing, leading, or evaluating an initiative. Questions about locus, including an initiative’s intended purpose, setting, and scope, may arise when choosing among opportunities to enhance coordination or ensuring an initiative is sufficiently comprehensive. Key for implementation and scale is an understanding of an initiative’s design through the strategies employed and mechanisms of action, as well as relationships to health outcomes. Prior reviews have compared coordination frameworks, but have not been designed explicitly to achieve these utilitarian objectives. We extend previous reviews by systematically reviewing the literature to identify new care coordination theoretical frameworks published since 2010, to categorize their key components, approaches, and impact (i.e., led to development of measures or initiatives), and to compare frameworks in new ways.

Overview of Review Process

We conducted this review in two steps: (1) we performed a rapid evidence review to provide an initial overview of frameworks’ key components and (2) we incorporated input from subject matter experts (including researchers who have developed frameworks and surveys for care coordination and integration and researchers and clinicians who have used them in designing, implementing, synthesizing, and evaluating care coordination initiatives) through telephone discussions to complete a more detailed analysis of frameworks’ key components and purposes. We report the methodological steps taken in the rapid evidence review step according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (PRISMA-ScR) guidelines. 7

We conducted the initial rapid review 8 in response to an urgent request by a US Department of Veterans Affairs’ Health Services Research & Development (HSR&D) Care Coordination State-Of-The-Art (SOTA) planning committee for use in informing a national conference (March 2018). Although guided by current standard Agency for Healthcare Research & Quality (AHRQ) systematic review methods, 9 in order to meet a condensed timeframe of 3 months, we streamlined our process in two ways—both consistent with current rapid review standards 10 : (1) we limited our search to articles published subsequent to 2010, which was the end date covered by the most recent systematic review 5 ; and (2) to minimize bias and error in all stages of the review, we used second-reviewer checking (i.e., “sequential review”) instead of independent dual review processes. For the second step, between May and August of 2018, we facilitated biweekly semi-structured telephone discussions among a smaller group of subject matter experts (MC, SSG, DH, KM, SS, EY) to increase the usefulness of our review in identifying which coordination frameworks are most relevant to clinical care. We shared written call summaries with the group to ensure consensus on our approach to distinguishing among framework components.

Search and Framework Selection

To identify frameworks, we searched MEDLINE®, Cochrane, CINAHL, PsycINFO, and SocINDEX from 2010 to May 2018, using terms for care coordination (e.g., coordinated care, integrated care, theory, framework, model, concept). Databases and search terms were selected based on use in previous reviews of this topic 11 , 12 (see our report 8 for full search strategy). To identify additional frameworks, we also searched numerous other sources, including hand-searching reference lists and relevant journals, and queried experts selected to participate in the VA State-Of-The-Art Conference on Care Coordination. We used prespecified eligibility criteria developed in consultation with experts for study selection and data abstraction. We included frameworks referring explicitly to care coordination or related terms such as integration, which were developed with a purpose related to guiding or evaluating care coordination research or practice in adults. We limited the search to English-language articles involving human subjects.

Framework Assessment

For our initial categorization, we extracted data on several key characteristics and impacts, 8 including each framework's theoretical underpinnings (e.g., none specified, organizational design theory), definition of care coordination, objective, key components, setting, target population, bibliometric impact (i.e., number of forward citations), and whether it had led to development of initiatives or measurement tools.

In the second phase, we further assessed the extent to which each framework addressed the six widely applied World Health Organization (WHO) health system components (i.e., service delivery, leadership and governance, workforce, financing, technologies and medical products, information and research); the micro (care team), meso (organizational infrastructure and resources), and macro (regulatory, market, and policy environment) levels; and whether individuals and their environments were at the center.

Finally, to distill the frameworks to facilitate use in policy and practice, we summarized all the characteristics under three domains focused on a framework’s purpose as recommended by experts (see Table 1 ). These three expert-created purpose domains include contextual factors, an initiative’s locus, and its design elements. Second-reviewer checking was used for full-text review and data abstraction (Fig. 1 ). A third reviewer resolved disagreements. 8 As no standardized tool exists for assessing the validity of theoretical frameworks, we did not assess the risk of bias of individual studies or across studies.

Figure 1: Literature flow chart

Our synthesis included quantitative analysis of the frequencies of key characteristics, as well as qualitative synthesis of similarities and differences among frameworks. We organized our discussion of frameworks based on their focus, including which frameworks addressed primary care, teams, measurement, implementation, and quality improvement/management. Within these categories, we generally highlighted the most comprehensive frameworks based on the number of components and the number of expert-created purpose domains they addressed. Although the goal of our original project was to broadly distinguish care coordination frameworks, regardless of the setting (e.g., primary care, intensive care), for this article, we additionally highlight the frameworks most relevant to primary care. Our original report provides additional details on all the theoretical frameworks that were reviewed. 8

Overview of Characteristics and Components

Among 7267 citations, we identified 37 original frameworks, including 16 recent frameworks unidentified by previous reviews. Frameworks reflected a wide range of conceptual and structural diversity (Table 3). Among the 33 frameworks for which we had full texts, 54.5% were developed in the USA, 63.6% addressed overall health versus a specific disease (e.g., communicable disease) or setting (e.g., hospice, palliative care, intensive care), and 33.3% were considered patient-centered (i.e., explicitly naming patients/individuals as a key component placed at the center of the framework). Only one-third of frameworks explicitly identified a formal definition for care coordination or integration, with the McDonald et al. AHRQ definition the most frequently cited. 11 General theoretical bases for care coordination-specific frameworks were highly variable, with organizational design theory, 58 which describes organization structure, the most commonly cited (24.2%). The process used to select components for frameworks ranged from being unclear in the majority of frameworks to being based on formal literature review plus key informant discussions in a quarter of cases. Frameworks most commonly emphasized means of coordination (e.g., personal and relationship-oriented mechanisms) (38%, Table 2) and most commonly (97%) included service delivery concepts, such as organizational and structural integration and person-centering (Table 3).
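The percentages above are consistent with counts out of the 33 full-text frameworks; the counts in this check are inferred from the percentages, not stated in the text:

```python
# Back-calculated counts out of 33 full-text frameworks (inferred, not stated).
n = 33

def pct(k):
    return round(100 * k / n, 1)

print(pct(18), pct(21), pct(11), pct(8))  # 54.5 63.6 33.3 24.2
```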

Comprehensiveness of Frameworks

The SELFIE framework was the most comprehensive in terms of the number of care coordination concepts it included (N = 56; e.g., named coordinator, remote monitoring, shared information systems). 1 Most frameworks contained 50% or fewer of the SELFIE components. By structuring a wide range of care coordination concepts from micro to macro levels, the SELFIE framework offers a nomenclature that can be used as a starting point to describe and compare initiatives.

The 2018 framework by Singer et al. 36 uses the related term “integration” and most comprehensively addresses relationships among five types of integration: structural, functional, normative, interpersonal, and process integration. It proposes three hypothetical relationships: (1) contextual factors are precursors to organizational and social integration; (2) greater structural integration is associated with greater functional integration, which in turn is associated with more normative, interpersonal, and process integration; and (3) these five types of integration impact outcomes. This framework can be used to distinguish the main emphasis of an initiative or to identify which types of integration are most relevant in different circumstances.

Primary Care-Focus of Frameworks

Most relevant to US primary care were three frameworks 3 , 16 , 41 derived from primary care settings. The framework by Weaver and colleagues 3 is the most comprehensive, addressing context, locus, and design domains, as well as service delivery, leadership and governance, and workforce domains (see full report). Its main purpose was to examine the factors leading to improved patient outcomes by distinguishing relationships among coordination mechanisms, processes, integrating conditions, and outcomes across multi-team systems. The 2012 primary care-focused framework by Kates et al. 26 from Canada has a similar objective of describing key elements of high-performing primary care and supports required to attain it. 26 Benzer et al.’s 2015 framework provides insights into how to facilitate the integration of mental health and primary care. 16

Coordination of Care with External Partners as Focus of Frameworks

Most relevant to health care organizations coordinating care with external partners are five frameworks (15%) 1 , 3 , 22 , 23 , 39 that explicitly emphasized distinctions among coordination levels. The SELFIE framework comprehensively organizes components across micro, meso, and macro levels. 1 By contrast, Gittell’s Relational Coordination Framework, 22 Gittell’s Multi-level Framework, 23 and the framework by Weaver et al. 3 provide details on mechanisms linking intra- and inter-organization coordination. The Rainbow Model of Integrated Care (RMIC) provides an overview of both the six WHO types of integration and how they interact with different levels of care (micro, meso, macro). 39

Team-Focus of Frameworks, Without Regard to Setting

Three care coordination frameworks were team-focused. 24 , 27 , 33 The frameworks were from Australia, 24 Canada, 27 and the UK. 33 Among these, the most comprehensive was the Integrated Team Effectiveness Model (ITEM), which addressed context, locus (setting and purpose), and design (mechanisms) domains and included service delivery, leadership and governance, workforce, and technologies and medical products primarily at the meso level. The team performance framework from Reader et al. was unique in that it focused on the intensive care unit. 33

Measurement-Focus of Frameworks

Four frameworks were self-described as measurement-focused. 34 , 35 , 36 , 41 Three are from the USA 35 , 36 , 41 and one is from the UK. 34 Among these, the 2018 framework by Singer et al. 36 is the most recent and most comprehensive, encompassing all expert-defined domains and subdomains of context, locus, and design and 12 SELFIE components in service delivery, leadership and governance, workforce, financing, and information and research. 36 It provides clear definitions of five types of integration (i.e., structural, functional, normative, interpersonal, and process), describes how they interrelate, and proposes how to measure them. Among the other measurement-focused frameworks, Shigayeva et al.’s was the second most comprehensive, describing examples of four general levels of increasing integration based on TB and HIV/AIDS program integration. 34 Singer et al. describe ideal targets for each of five coordination dimensions and two patient-centeredness dimensions, 35 and Zlateva et al. suggest short-term and long-term outcomes specific to five care coordination domains essential to the Patient Centered Medical Home (PCMH). 41

Measurement Tools or Initiatives Deriving from Frameworks

Minkman’s Development Model for Integrated Care (DMIC) is the only framework we identified that has both led to the development of a partially validated survey (face and construct validity) and to the formation of multidisciplinary teams incorporating the DMIC into stroke, acute myocardial infarction (AMI), or dementia care. 29 Otherwise, we identified measures or tools stemming from 39% of the included frameworks. 19 , 36 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 Most were surveys of health care providers, and most had some to extensive levels of validation. Other frameworks that showed potential for measure development or field use include several with qualitative assessments of a framework concept 21 , 23 , 24 , 29 and several that hinted at future measures. 1 , 3 , 31 , 32 , 36 Oliver’s Integrative Model is the only other framework we identified that has explicitly led to the development of an initiative, which involved incorporating telemedicine for hospice patients and caregivers. 30

Several previous reviews have provided frameworks for summarizing care coordination measures. 8 , 15 , 30 These reviews identified improvement in measurement quality as a future research need. 11 , 15 , 38 The McDonald 2014 AHRQ Measures Atlas increased access to existing care coordination measures aligned with theoretical frameworks and noted that professional and system perspectives are missing in existing measures. 11

Implementation-Focus of Frameworks

Only three frameworks described implementation strategies, for settings in Australia, 24 Canada, 26 and the UK. 18 Of these, the Kates et al. and Bradbury frameworks are the most comprehensive. 18 , 26 Kates et al. are unique in proposing an implementation strategy that includes a quality improvement “coach,” an effective spread strategy, and a description of system-level enablers. Bradbury is unique in describing actual experiences of translating theory into practice. 18

Quality Improvement or Management Focus of Frameworks

Three frameworks focused on quality improvement/quality management and highlighted conditions thought to be associated with effective integration. 21 , 26 , 29 All three address design concepts and variably address context and locus domains. The frameworks share several similar components, such as patient engagement, innovation, measurement and improvement, and partnerships. Among these, Minkman’s Development Model for Integrated Care (DMIC) is the most comprehensive, containing the greatest number of components. 29

Focus of Frameworks on Care Coordination in Specialty Settings

Several frameworks focused on coordination in specialty settings, 22 , 23 , 28 , 30 , 32 , 37 such as integrating family involvement into hospice interdisciplinary team meetings, 30 describing cognitive workflow in critical care, 28 assessing a patient’s need for coordination based on interdisciplinarity, biological susceptibility, and procedural intensity, 25 mapping best-practice care coordination approaches to clinical, educational, and administrative work activities in surgery, 40 and characterizing elements of PCMH coordination initiatives. 41 Although some situational factors addressed in these frameworks are unique to the specialty setting, such as the specific clinical workflow for managing an emergency in an intensive care unit, they demonstrate and reinforce the many mechanism and mediation concepts common to all frameworks, such as trust, accountability, and communication.

This scoping review advanced previous work by making the theory-to-practice link more accessible. We did this by systematically classifying a large number of existing theoretical frameworks pertaining to care coordination in four ways: (1) comprehensiveness, in terms of the number and types of concepts they address (e.g., coordinators, remote monitoring); (2) the types of relationships they address (e.g., context, locus, design); (3) their intended use; and (4) their impact. Our intent was to enable users, including general internists and administrators, to more easily identify and select frameworks based on their needs and potential applications. To further support use of applicable care coordination theory, in a companion Perspectives article, subject matter experts lay out an approach for health care providers, researchers, and other stakeholders to apply relevant care coordination theory to four use cases.

Among the 37 care coordination theoretical frameworks we identified, few have led to the development of measures (39%) or initiatives (6%). Although not all were intended to support measurement or initiative development, underuse of those that were may limit implementation of effective coordination approaches and their consistent measurement. The majority of frameworks identified the means of coordination as a major focus, highlighting the importance of coordination mechanisms in implementing care coordination processes and the need to consider and understand their mediators and moderators. For example, a unique contribution of the primary care-based framework by Weaver et al. 3 was its identification of accountability, predictability, common understanding, and trust as potentially important mediators that may limit or enhance coordination processes.

We also identified three implementation-focused frameworks that could prove especially useful to those seeking to implement care coordination initiatives. 18 , 24 , 26 Because these frameworks describe underlying mechanisms of action, they may help implementers act according to an initiative’s intent. For care coordination implementers designing an initiative or evaluating its impact, these frameworks 18 , 24 , 26 could help identify factors that may influence the success or failure of an initiative.

The 16 frameworks highlighting care coordination design features could also help implementers develop more comprehensive initiatives by expanding their knowledge of the diverse range of available care coordination types and mechanisms (Leijten 2018; Owens 2010; Agency for Healthcare Research and Quality 2014; Tushman 1978; Shigayeva 2010; Kates 2012; Andersen 1995; Calciolari 2016; Gittell 2004; Singer 2011; Valentijn 2013; Young 1997; Bradbury 2014; Evans 2016; Malhotra 2007; Minkman 2012). These frameworks can also assist in the evaluation of initiatives. Qualitative or mixed-methods investigations could add understanding of existing mechanisms. Quantitative investigations could draw on relevant theory and supportive evidence to avoid making ineffective changes in care coordination processes 3 that waste resources.

We identified Minkman’s DMIC 29 as the only framework that has led to both the development of a partially validated survey and an initiative. Thus, the DMIC may be considered an example of how to apply relevant theory in developing and measuring an initiative. However, the DMIC had a relatively narrow disease focus, on care coordination mechanisms within stroke, acute myocardial infarction (AMI), and dementia settings in the Netherlands. Therefore, for guidance on how to measure a broader range of care coordination concepts, users may consider the measurement-focused framework by Singer et al., which comprehensively addresses all context, locus, and design domains and 12 SELFIE components in “Service Delivery,” “Leadership and Governance,” “Workforce,” “Financing,” and “Information and Research.” 36

Potential limitations of our review methods include our literature search parameters and approach, our sequential review process, and our domain formation. For the literature search, limiting to English-language studies, coupled with the inconsistent terminology used in the literature on care coordination theoretical frameworks, may have increased our risk of missing relevant studies. We addressed this challenge by including a wide variety of terminology in our search strategy and by consulting with experts. Second, although sequential dual review is a widely used method, it has not been empirically compared with independent dual review and may have increased the risk of error and bias. Third, because our process of developing the context, locus, and design domains was somewhat informal (based on expert deliberations), it must be considered preliminary. Further development using more formalized processes may lead to domain refinement, which could affect assessment of major focus, primary care relevance, and comprehensiveness. There are likely various ways of categorizing frameworks depending on users’ needs; for example, separation by motivation (policy/government/regional versus operational/delivery system versus a mix) could be relevant in certain circumstances. Finally, because this review was not designed to identify all available measures, only those associated with frameworks, future research is needed to identify other measures that may exist in general and that provide system representation perspectives.

One of the main gaps in the care coordination frameworks reviewed herein was the limited guidance offered on comprehensive program implementation. We identified implementation-specific frameworks that can guide implementation of initiatives from a locus and design perspective, but none incorporated contextual factors, which may be key in coordinating with external partners, and none were from US settings. Few of the frameworks identified in this review have led to the development of initiatives for improving care coordination or of measures that evaluate the system representation perspective. These gaps were also identified, with a research agenda proposed for the VA, in several papers from a recent Special Issue on the Coordination of Chronic Care. 6 , 59 , 60 , 61

This review advanced previous reviews by comparing theoretical frameworks for care coordination in practical ways to increase their use. By distilling the care coordination theoretical frameworks into three expert-developed domains of context, locus, and design, we made these theories more accessible, particularly for primary care settings. Future research on care coordination frameworks should provide more guidance on how to implement care coordination in health systems and should maximize the use of existing frameworks for developing initiatives.

Leijten FRM , Struckmann V , van Ginneken E , et al. The SELFIE framework for integrated care for multi-morbidity: Development and description. Health Policy 2018;122(1):12–22.

Owens M. Medicine I of MR on E-B. Costs of uncoordinated care. In: Yong PL, Saunders RS, Olsen LA, eds. The Healthcare Imperative: Lowering Costs and Improving Outcomes: Workshop Series Summary. Washington (DC): National Academies Press (US); 2010:131–140.

Weaver SJ , Che XX , Petersen LA , Hysong SJ . Unpacking care coordination through a multiteam system lens: a conceptual framework and systematic review. Med Care 2018;56(3):247–259.

Smith SM , Soubhi H , Fortin M , Hudon C , O'Dowd T . Managing patients with multimorbidity: systematic review of interventions in primary care and community settings. BMJ 2012;345:e5205.

Van Houdt S , Heyrman J , Vanhaecht K , Sermeus W , De Lepeleire J . An in-depth analysis of theoretical frameworks for the study of care coordination. Int J Integr Care 2013;13:e024-e024.

Sheinfeld Gorin S , Haggstrom D. The coordination of chronic care: an introduction. Transl Behav Med 2018;8(3):313-317.

Tricco AC , Lillie E , Zarin W , et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018;169(7):467–473.

Peterson K , Anderson J , Bourne D , Boundy E . Scoping Brief: Care Coordination Theoretical Models and Frameworks. 2018;VA ESP Project #09–199.

Agency for Healthcare Research and Quality. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. AHRQ Publication No. 10(14)-EHC063-EF. Rockville, MD. 2014.

Tricco A , Langlois E , Straus S , editors. Rapid reviews to strengthen health policy and systems: a practical guide. Geneva: World Health Organization;2017. License: CC BY-NC-SA 3.0 IGO.

McDonald KM , Schultz E , Albin L , et al. Care Coordination Atlas Version 4 (Prepared by Stanford University under subcontract to American Institutes for Research on Contract No. HHSA290-2010-00005I). AHRQ Publication No. 14-0037- EF. Rockville, MD. 2014.

McDonald KM , Sundaram V , Bravata DM , Lewis R , Lin N , Kraft S , McKinnon M , Paguntalan H , Owens DK . Care Coordination. Vol 7 of: Shojania KG , McDonald KM , Wachter RM , Owens DK , editors. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies. Technical Review 9 (Prepared by the Stanford University-UCSF Evidence-based Practice Center under contract 290-02-0017). AHRQ Publication No. 04(07)-0051-7. Rockville, MD. 2007.

Andersen RM . Revisiting the behavioral model and access to medical care: does it matter? J Health Soc Behav 1995;36(1):1–10.

Bainbridge D , Brazil K , Krueger P , Ploeg J , Taniguchi A. A proposed systems approach to the evaluation of integrated palliative care. BMC Palliat Care 2010;9(1):8.

Bautista MAC , Nurjono M , Lim YW , Dessers E , Vrijhoef HJ . Instruments measuring integrated care: a systematic review of measurement properties. Milbank Q 2016;94(4):862–917.

Benzer JK , Cramer IE , Burgess JF , Mohr DC , Sullivan JL , Charns MP . How personal and standardized coordination impact implementation of integrated care. BMC Health Serv Res 2015;15(1).

Billings J , Leichsenring K. Methodological development of the interactive INTERLINKS Framework for Long-term Care. Int J Integr Care 2014;14:e021.

Bradbury E. Integrated care communities: putting change theory into practice. J Integr Care 2014;22(4):132–141.

Calciolari S , Ilinca S. Unraveling care integration: Assessing its dimensions and antecedents in the Italian health system. Health Policy 2016;120(1):129–138.

Donabedian A . Evaluating the quality of medical care. Milbank Mem Fund Q 1966;44(3, Suppl):166–206.

Evans JM , Grudniewicz A , Baker GR , Wodchis WP . Organizational Context and Capabilities for Integrating Care: A framework for improvement. Int J Integr Care 2016;16(3):15.

Gittell J. Coordinating mechanisms in care provider groups: Relational coordination as a mediator and input uncertainty as a moderator of performance effects. Manag Sci 2002;48(11):1408–1426.

Gittell JH , Weiss L. Coordination networks within and across organizations: A multi-level Framework. J Manag Stud 2004;41(1):127–153.

Hepworth J , Marley JE . Healthcare teams - a practical framework for integration. Aust Fam Physician 2010;39(12):969–971.

Hodgson A , Etzkorn L , Everhart A , Nooney N , Bestrashniy J. Exploring the validity of developing an interdisciplinarity score of a patient's needs: care coordination, patient complexity, and patient safety indicators. J Healthc Qual 2017;39(2):107–121.

Kates N , Hutchison B , O'Brien P , Fraser B , Wheeler S , Chapman C . Framework for advancing improvement in primary care. Healthc Pap 2012;12(2):8–21.

Lemieux-Charles L , McGuire WL . What do we know about health care team effectiveness? A review of the literature. Med Care Res Rev 2006;63(3):263–300.

Malhotra S , Jordan D , Shortliffe E , Patel VL . Workflow modeling in critical care: piecing together your own puzzle. J Biomed Inform 2007;40(2):81–92.

Minkman MM . Developing integrated care. Towards a development model for integrated care. Int J Integr Care 2012;12.

Oliver D , Demiris G , Wittenberg-Lyles E , Porock D. The use of videophones for patient and family participation in hospice interdisciplinary team meetings: a promising approach. Eur J Cancer Care 2010;19(6):729–735.

Palmer K , Marengoni A , Forjaz MJ , et al. Multimorbidity care model: recommendations from the consensus meeting of the Joint Action on Chronic Diseases and Promoting Healthy Ageing across the Life Cycle (JA-CHRODIS). Health Policy 2018;122(1):4–11.

Radwin LE , Castonguay D , Keenan CB , Hermann C. An expanded theoretical framework of care coordination across transitions in care settings. J Nurs Care Qual 2016;31(3):269–274.

Reader TW , Flin R , Mearns K , Cuthbertson BH . Developing a team performance framework for the intensive care unit. Crit Care Med 2009;37(5):1787–1793.

Shigayeva A , Atun R , McKee M , Coker R . Health systems, communicable diseases and integration. Health Policy Plan 2010;25(suppl_1):i4-i20.

Singer SJ , Burgers J , Friedberg M , Rosenthal MB , Leape L , Schneider E. Defining and measuring integrated patient care: promoting the next frontier in health care delivery. Med Care Res Rev 2011;68(1):112–127.

Singer SJ , Kerrissey M , Friedberg M , Phillips R. A comprehensive theory of integration. Med Care Res Rev. 2018:1077558718767000.

Siouta N , Van Beek K , Van der Eerden ME , et al. Integrated palliative care in Europe: a qualitative systematic literature review of empirically-tested models in cancer and chronic disease. BMC Palliat Care 2016;15:56.

Strandberg-Larsen M , Krasnik A. Measurement of integrated healthcare delivery: a systematic review of methods and future research directions. Int J Integr Care 2009;9:e01.

Valentijn PP , Schepman SM , Opheij W , Bruijnzeels MA . Understanding integrated care: a comprehensive conceptual framework based on the integrative functions of primary care. Int J Integr Care 2013;13:e010.

Young GJ , Charns MP , Daley J , Forbes MG , Henderson W , Khuri SF . Best practices for managing surgical services: the role of coordination. Health Care Manag Rev 1997;22(4):72–81.

Zlateva I , Anderson D , Coman E , Khatri K , Tian T , Fifield J. Development and validation of the Medical Home Care Coordination Survey for assessing care coordination in the primary care setting from the patient and provider perspectives. BMC Health Serv Res 2015;15:226.

Alter C , Hage J . Organizations working together. Vol 191. Newbury Park: Sage Publications, Inc; 1993.

Klein G . Features of team coordination. In: McNeese M, Salas E, Endesley M, eds. New trends in cooperative activities: Understanding system dynamics in complex environments. Santa Monica: Human Factors & Ergonomics Society; 2001:68–95.

Nadler D , Tushman M . Strategic organization design: Concepts, tools & processes. Glenview: Scott Foresman & Co; 1988.

Watzlawick P , Beavin JH , Jackson DD. Menschliche Kommunikation: Formen, Störungen, Paradoxien. Huber; 2000.

Advancing Quality Alliance. System Integration Framework Assessment. 2014. Available from: https://www.aquanw.nhs.uk/resources/integration/integrated-caretoolkit/AQuA%20Framework%20Assessment.pdf .

Agency for Healthcare Research and Quality. Care Coordination Measure for Primary Care Survey. Prepared under Contract No. HHS290–2010-00005I. AHRQ Publication No. 16–0042-1-EF2016, Rockville: Agency for Healthcare Research and Quality.

Angus L , Valentijn PP . From micro to macro: assessing implementation of integrated care in Australia. Aust J Prim Health 2017.

Bainbridge D , Brazil K , Krueger P , Ploeg J , Taniguchi A , Darnay J. Measuring horizontal integration among health care providers in the community: an examination of a collaborative process within a palliative care network. J Interprof Care 2015;29(3):245–252.

Gittell JH , Fairfield KM , Bierbaum B , et al. Impact of relational coordination on quality of care, postoperative pain and functioning, and length of stay: a nine-hospital study of surgical patients. Med Care 2000;38(8):807–819.

Nurjono M , Valentijn PP , Bautista MA , Wei LY , Vrijhoef HJ . A prospective validation study of a rainbow model of integrated care measurement tool in Singapore. Int J Integr Care 2016;16(1):1.

Oliver DP , Wittenberg-Lyles EM , Day M. Measuring interdisciplinary perceptions of collaboration on hospice teams. Am J Hosp Palliat Med 2007;24(1):49–53.

Singer SJ , Friedberg MW , Kiang MV , Dunn T , Kuhn DM . Development and preliminary validation of the Patient Perceptions of Integrated Care survey. Med Care Res Rev 2013;70(2):143–164.

Valentijn P , Angus L , Boesveld I , Nurjono M , Ruwaard D , Vrijhoef H. Validating the Rainbow Model of Integrated Care Measurement Tool: results from three pilot studies in the Netherlands, Singapore and Australia. Int J Integr Care 2017;17(3).

Van Dijk-de Vries AN , Duimel-Peeters IG , Muris JW , Wesseling GJ , Beusmans GH , Vrijhoef HJ . Effectiveness of teamwork in an integrated care setting for patients with copd: development and testing of a self-evaluation instrument for interprofessional teams. Int J Integr Care 2016;16(1):9.

Young GJ , Charns MP , Desai K , et al. Patterns of coordination and clinical outcomes: a study of surgical services. Health Serv Res 1998;33(5 Pt 1):1211–1236.

World Health Organization. Key components of a well functioning health system. 2010. Available from: https://www.who.int/healthsystems/EN_HSSkeycomponents.pdf?ua=1 .

Tushman ML , Nadler DA . Information processing as an integrating concept in organizational design. Acad Manag Rev 1978;3(3):613–624.

Kilbourne AM , Hynes D , O’Toole T , Atkins D . A research agenda for care coordination for chronic conditions: aligning implementation, technology, and policy strategies. Transl Behav Med 2018;8(3):515–521.

Weaver SJ , Jacobsen PB . Cancer care coordination: opportunities for healthcare delivery research. Transl Behav Med 2018;8(3):503–508.

Elwood WN , Huss K , Roof RA , et al. NIH research opportunities for the prevention and treatment for chronic conditions. Transl Behav Med 2018;8(3):509–514.

Acknowledgements

We would like to thank Julia Haskin, MA, for editorial support.

This material is based upon work supported by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Quality Enhancement Research Initiative (QUERI), Evidence-Based Synthesis Program (ESP). Dr. Yano’s time was supported by a VA HSR&D Senior Research Career Scientist Award (project no. RCS 05-195).

Author information

Authors and affiliations.

Department of Veterans Affairs, VA Portland Health Care System, Evidence-based Synthesis Program (ESP) Coordinating Center, Portland, OR, USA

Kim Peterson MS, Johanna Anderson MPH & Donald Bourne MPH

VA HSR&D Center for Healthcare Organization and Implementation Research (CHOIR), VA Boston Healthcare System, Boston, MA, USA

Martin P. Charns DBA

Boston University School of Public Health, Boston, MA, USA

New York Physicians against Cancer (NYPAC), New York, NY, USA

Sherri Sheinfeld Gorin PhD

The University of Michigan Medical School, Ann Arbor, MI, USA

Department of Veterans Affairs, VA Portland Health Care System, Portland, OR, USA

Denise M. Hynes MPH, PhD, RN

Oregon State University, Corvallis, OR, USA

Stanford University School of Medicine, Stanford, CA, USA

Kathryn M. McDonald MM, PhD & Sara J. Singer MBA, PhD

Stanford University Graduate School of Business, Stanford, CA, USA

Sara J. Singer MBA, PhD

VA HSR&D Center for the Study of Healthcare Innovation, Implementation & Policy, VA Greater Los Angeles Healthcare System, Boston, MA, USA

Elizabeth M. Yano PhD, MSPH

Department of Health Policy & Management, UCLA Fielding School of Public Health, Los Angeles, CA, USA

Corresponding author

Correspondence to Kim Peterson MS .

Ethics declarations

Conflict of interest.

KP, JA, DB, DH, SSG, KM, SS, and BY declare no conflicts of interest. MC wishes to disclose a VA HSR&D IIR 12-346 Patient Experienced Integrated Care for Veterans with Multiple Chronic Conditions grant that ended 9/30/17 as a potential conflict of interest.

The views expressed in this article are those of the authors and do not necessarily represent the position or policy of the Department of Veterans Affairs or the United States government.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Prior presentations: None

About this article

Peterson, K., Anderson, J., Bourne, D. et al. Health Care Coordination Theoretical Frameworks: a Systematic Scoping Review to Increase Their Understanding and Use in Practice. J GEN INTERN MED 34 (Suppl 1), 90–98 (2019). https://doi.org/10.1007/s11606-019-04966-z

Published : 16 May 2019

Issue Date : 15 May 2019

DOI : https://doi.org/10.1007/s11606-019-04966-z

  • care coordination
  • integrated care
  • theoretical model
  • theoretical framework

  9. Theoretical Models and Frameworks

    Mixed method is characterized by a focus on research problems that require, 1) an examination of real-life contextual understandings, multi-level perspectives, and cultural influences; 2) an intentional application of rigorous quantitative research assessing magnitude and frequency of constructs and rigorous qualitative research exploring the ...

  10. Integration of a theoretical framework into your research study

    Often the most difficult part of a research study is preparing the proposal based around a theoretical or philosophical framework. Graduate students '…express confusion, a lack of knowledge, and frustration with the challenge of choosing a theoretical framework and understanding how to apply it'.1 However, the importance in understanding and applying a theoretical framework in research ...

  11. Making theory explicit

    For over a decade, there have been expressions of concern about medical education research publications lacking an explicit theoretical basis [1-5].Although there are signs of an increase in use of theory in medical education [], it is of interest to not only identifying the issue, but to better understand and remedy it.The aim of this paper is to help researchers make better use of theory ...

  12. Medical Education Research and Scholarship: Theories

    Theories, theoretical frameworks, and conceptual frameworks: Podcast. Associated article: The Distinctions Between Theory, Theoretical Framework, and Conceptual Framework; Slide deck: The Distinctions Between Theory, Theoretical Framework, and Conceptual Framework

  13. Health Care Coordination Theoretical Frameworks: a Systematic Scoping

    Many theoretical frameworks exist to provide guidance in improving, implementing, and evaluating care coordination. ... leadership and governance, workforce, financing, technologies and medical products, information and research), micro (care team), meso (organizational infrastructure and resources), and macro levels (regulatory, market, and ...

  14. Using a theoretical framework to inform implementation of the patient

    Using a theoretical framework can help with this, giving a better understanding of how and why interventions work or do not work. ... Maeng DD, Friedberg MW, et al. Synthesis of research on patient-centered medical homes brings systematic differences into relief. Health Aff (Millwood). 2017;36(3):500-8. Article PubMed Google Scholar ...

  15. Health Care Coordination Theoretical Frameworks: a Systematic ...

    6 The University of Michigan Medical School, Ann Arbor, MI, USA. 7 Department of Veterans Affairs ... We systematically identified and categorized existing care coordination theoretical frameworks in new ways to make the theory-to-practice link more accessible. ... Future research should emphasize implementation-focused frameworks that better ...

  16. What Is a Theoretical Framework?

    A theoretical framework is a foundational review of existing theories that serves as a roadmap for developing the arguments you will use in your own work. Theories are developed by researchers to explain phenomena, draw connections, and make predictions. In a theoretical framework, you explain the existing theories that support your research ...

  17. Applying Conceptual and Theoretical Frameworks to Health ...

    Introduction. Calls for improved rigor in health professions education (HPE) research have often focused on the need to incorporate theoretical and conceptual frameworks in research design, implementation, and reflective critique. 1,2 Theories, which explain how/why things are related to each other, and frameworks, which explain where a study originates and the implications on study design ...

  18. Theoretical Frameworks in Medical Education: Using a Systematic Review

    Background: Theoretical frameworks provide a lens to examine questions and interpret results; however, they are underutilized in medical education. Objective: To systematically evaluate the use of theoretical frameworks in ophthalmic medical education and present a theory of change model to guide educational initiatives. Methods: Six electronic databases were searched for peer-reviewed ...

  19. Theoretical Frameworks in Medical Education: Using a Systematic Review

    However, theoretical frameworks are underutilized in medical education research. 3, 6 Many educational initiatives, especially within subspecialty medical education, continue to be developed based on the traditional teacher-apprentice model. 2, 7 Lack of theory-based educational initiatives can preclude meaningful interpretation of study ...

  20. Theoretical framework (Chapter 1)

    Summary. This chapter presents the conceptual framework underpinning our research. It looks at elements of the legal and political context that influence the role of medical doctors in healthcare reforms. It then analyses scholarly work on the sociology of professions and the interface between professions and organisations in order to better ...

  21. Health Care Coordination Theoretical Frameworks: a ...

    Many theoretical frameworks exist to provide guidance in improving, implementing, and evaluating care coordination. ... leadership and governance, workforce, financing, technologies and medical products, information and research), micro (care team), meso (organizational infrastructure and resources), and macro levels (regulatory, market, and ...

  22. PDF Theoretical Frameworks and Philosophies of Care

    guide study design. This provides a framework to compare and integrate the find-ings in relation to other research. Theory also drives the formation of hypotheses and subsequent interpretation of the findings. Finally, theory provides a framework for linking variables: they must have empirical or theoretical support for coexis-tence and testing.

  23. Theoretical Framework Example for a Thesis or Dissertation

    Theoretical Framework Example for a Thesis or Dissertation. Published on October 14, 2015 by Sarah Vinz . Revised on July 18, 2023 by Tegan George. Your theoretical framework defines the key concepts in your research, suggests relationships between them, and discusses relevant theories based on your literature review.

  24. Theoretical Framework

    Theoretical Framework. Definition: Theoretical framework refers to a set of concepts, theories, ideas, and assumptions that serve as a foundation for understanding a particular phenomenon or problem. It provides a conceptual framework that helps researchers to design and conduct their research, as well as to analyze and interpret their findings.