Quality improvement in healthcare: the need for valid, reliable and efficient methods and indicators.

Mohaimenul Islam, Yu-Chuan (Jack) Li, Quality improvement in healthcare: the need for valid, reliable and efficient methods and indicators, International Journal for Quality in Health Care, Volume 31, Issue 7, August 2019, Pages 495–496, https://doi.org/10.1093/intqhc/mzz077

Quality of care and patient safety are now recognized globally as healthcare priorities. Adverse events (AEs) are a serious patient safety issue, and concern has been raised about the quality of care provided worldwide. AEs are reported to add 13–16% to hospital costs through prolonged hospital stays alone, and the annual cost of prolonged hospital stays caused by AEs is ~£2 billion in the UK [1]. Moreover, other consequences, such as pain and suffering, patients’ loss of independence and productivity, and the costs of litigation and settlement of medical negligence claims, are often ignored when calculating the total economic burden of AEs. AEs also have detrimental effects on both patients and healthcare providers, including physical and mental harm, and reduce the credibility of the healthcare system. Because AEs impose a substantial burden on patients and healthcare providers, it is important to identify and measure them in order to prioritize problems and develop ideas for better patient care [2]. Although there is no gold standard for measuring AEs, a significant number of studies have used the Harvard Medical Practice Study (HMPS) approach as a standard methodology [3]. Trigger tools such as the Global Trigger Tool (GTT), introduced by the Institute for Healthcare Improvement, have been developed to identify and measure AEs. The GTT is a straightforward, relatively labor-efficient two-stage method based on retrospective manual review of patient charts: first, two nurses independently screen patient records for specific triggers and identify potential AEs associated with those triggers; second, physicians verify the AEs against a standard definition [4].

A study based on local analysis at the hospital level reported that the GTT had higher sensitivity and specificity than other methodologies such as the HMPS [5]. Moreover, the GTT correctly detected most of the AEs that were missed by the Agency for Healthcare Research and Quality’s Patient Safety Indicators. Mevik et al. [6] evaluated a modified GTT method, in which only automatically triggered records undergo manual review, using the original GTT method as the gold standard. The modified method proved more reliable and efficient for monitoring and accurately identifying AEs. In terms of review time, the modified GTT required far less effort than the original GTT (a total of 23 h to complete the manual review of 658 automatically triggered records, compared with 411 h to review 1233 records with the original GTT), yet both methods identified the same rate of AEs (35 AEs per 1000 patient-days). The modified GTT would therefore be a sound choice for larger samples, as it offers a valid, efficient and less time-consuming way to identify and monitor AEs. In the future, an automatic trigger-identification system integrated with electronic health records might further enhance utility by assessing triggers in real time to reduce the risk of AEs.
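
To make the trigger-based screening and the efficiency figures above concrete, here is a minimal sketch of automatic trigger identification followed by the reported review-time arithmetic. It is an illustration only, not the implementation used by Mevik et al.; the trigger terms and the record fields are hypothetical.

```python
# Minimal sketch (illustration only) of automatic trigger identification before manual review.
# The trigger terms and the 'chart' field are hypothetical, not the actual GTT trigger list.

TRIGGER_TERMS = ["naloxone", "transfusion", "readmission", "fall", "pressure ulcer"]

def flag_records(records):
    """Return only records whose chart text contains at least one trigger term."""
    flagged = []
    for record in records:
        text = record["chart"].lower()
        hits = [term for term in TRIGGER_TERMS if term in text]
        if hits:
            flagged.append({**record, "triggers": hits})
    return flagged  # only these records go on to the two-stage manual review

sample = [
    {"id": 1, "chart": "Uneventful stay. Discharged on day 2."},
    {"id": 2, "chart": "Naloxone given after oversedation; prolonged stay."},
]
print(flag_records(sample))  # -> only record 2 is passed to the reviewers

# Review-time arithmetic from the comparison reported above:
minutes_per_record_modified = 23 * 60 / 658    # ~2 minutes per automatically triggered record
minutes_per_record_original = 411 * 60 / 1233  # ~20 minutes per record reviewed manually
print(round(minutes_per_record_modified, 1), round(minutes_per_record_original, 1))
```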

The health of pregnant women and children remains a serious public health issue. Despite comprehensive efforts and investments in the healthcare sector, maternal and child mortality remains unacceptably high [7]. Antenatal care (ANC) is of paramount importance for ensuring optimal care of pregnant women and reducing the risk of stillbirths and neonatal deaths. High-quality care during pregnancy and childbirth can help to reduce pregnancy complications and improve the survival and health of babies. According to the updated WHO recommendations, women should have their first ANC visit within the first trimester, with seven additional visits recommended thereafter [8]. Raised awareness, trained healthcare workers, and strong national information and surveillance systems are needed to ensure proper monitoring and timely, respectful care.

Morón-Duarte et al. [9] conducted a systematic review to describe the indicators used to assess ANC quality globally under the WHO framework. A total of 86 original studies were included, which described the ANC model and ANC quality indicators such as use of services, clinical or laboratory diagnostic procedures, and educational and prophylactic interventions. The quality of the included studies was evaluated according to the ‘Checklist for Measuring Quality’ proposed by Downs and Black [10]. A highly diverse and region-specific set of indicators was observed, and their relevance and use depended on the country-specific context. Only 8.7% of the articles reported healthy-eating counseling and 52.2% reported iron and folic acid supplementation in line with the updated WHO recommendations. The evaluated indicators for maternal and fetal interventions were syphilis testing (55.1%), HIV testing (47.8%), gestational diabetes mellitus screening (40.6%) and ultrasound (27.5%). Reporting of essential ANC activities ranged from 26.1% for fetal heart sound assessment to 50.7% for maternal weight and 63.8% for blood pressure. Concern has been raised about the quality assessment of ANC content, especially regarding the utilization of services across countries. It is important to use health indicators based on international guidelines, but the appropriateness of the suggested indicators must be examined and structured, standardized indices must be constructed and implemented across countries to allow international comparability and monitoring.

In summary, various trigger tools have been implemented and evaluated to improve drug therapy assessment and monitoring in hospitalized patients, nutrition support practice and tertiary care hospitals, but these tools need to be validated in varied patient populations. Moreover, improved quality of care and patient safety initiatives are essential to reduce AEs and maternal and newborn mortality.

References

1. Rafter N, Hickey A, Condell S, et al. Adverse events in healthcare: learning from mistakes. QJM 2014;108:273–7.
2. Jha A, Pronovost P. Toward a safer health care system: the critical need to improve measurement. JAMA 2016;315:1831–2.
3. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study I. N Engl J Med 1991;324:370–6.
4. Naessens JM, O'Byrne TJ, Johnson MG, et al. Measuring hospital adverse events: assessing inter-rater reliability and trigger performance of the Global Trigger Tool. Int J Qual Health Care 2010;22:266–74.
5. Classen DC, Resar R, Griffin F, et al. ‘Global Trigger Tool’ shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff 2011;30:581–9.
6. Mevik K, Hansen TE, Deilkås EC, et al. Is a modified Global Trigger Tool method using automatic trigger identification valid when measuring adverse events? A comparison of review methods using automatic and manual trigger identification. Int J Qual Health Care 2018.
7. Moller A-B, Petzold M, Chou D, et al. Early antenatal care visit: a systematic analysis of regional and global levels and trends of coverage from 1990 to 2013. Lancet Glob Health 2017;5:e977–83.
8. World Health Organization. WHO Recommendations on Antenatal Care for a Positive Pregnancy Experience. Geneva: World Health Organization, 2016.
9. Morón-Duarte LS, Ramirez Varela A, Segura O, et al. Quality assessment indicators in antenatal care worldwide: a systematic review. Int J Qual Health Care 2018.
10. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health 1998;52:377–84.


Can quality improvement improve the quality of care? A systematic review of reported effects and methodological rigor in plan-do-study-act projects

  • Søren Valgreen Knudsen (ORCID: 0000-0002-3792-8983) 1, 3,
  • Henrik Vitus Bering Laursen 3,
  • Søren Paaske Johnsen 1,
  • Paul Daniel Bartels 4,
  • Lars Holger Ehlers 3 &
  • Jan Mainz 1, 2, 5, 6

BMC Health Services Research, volume 19, Article number: 683 (2019)

The Plan-Do-Study-Act (PDSA) method is widely used in quality improvement (QI) strategies. However, previous studies have indicated that methodological problems are frequent in PDSA-based QI projects. Furthermore, it has been difficult to establish an association between the use of PDSA and improvements in clinical practices and patient outcomes. The aim of this systematic review was to examine whether recently published PDSA-based QI projects show self-reported effects and are conducted according to key features of the method.

A systematic literature search was performed in the PubMed, Embase and CINAHL databases. QI projects using PDSA published in peer-reviewed journals in 2015 and 2016 were included. Projects were assessed to determine the reported effects and the use of the following key methodological features: an iterative cyclic method, continuous data collection, small-scale testing and use of a theoretical rationale.

Of the 120 QI projects included, almost all reported improvement (98%). However, only 32 (27%) described a specific, quantitative aim and reached it. A total of 72 projects (60%) documented PDSA cycles sufficiently for inclusion in a full analysis of key features. Of these, only three (4%) adhered to all four key methodological features.

Even though a majority of the QI projects reported improvements, widespread low adherence to key methodological features in the individual projects poses a challenge to the legitimacy of PDSA-based QI. This review indicates that there is a continued need for improvement in quality improvement methodology.

Plan-Do-Study-Act (PDSA) cycles are widely used for quality improvement (QI) in most healthcare systems where tools and models inspired by industrial management have become influential [ 1 ]. The essence of the PDSA cycle is to structure the process of improvement in accordance with the scientific method of experimental learning [ 2 , 3 , 4 , 5 ]. It is used with consecutive iterations of the cycle constituting a framework for continuous learning through testing of changes [ 6 , 7 , 8 , 9 , 10 ].

The concept of improvement through iterative cycles has formed the basis for numerous structured QI approaches including Total Quality Management, Continuous Quality Improvement, Lean, Six Sigma and the Model for Improvement [4, 6, 10]. These “PDSA models” have different approaches but essentially consist of improvement cycles as the cornerstone combined with a bundle of features from the management literature. Especially within healthcare, several PDSA models have been proposed for QI, adding other methodological features to the basic principles of iterative PDSA cycles. Key methodological features include the use of continuous data collection [2, 6, 8, 9, 10, 11, 12, 13], small-scale testing [6, 8, 10, 11, 14, 15, 16] and use of a theoretical rationale [5, 9, 17, 18, 19, 20, 21, 22]. Most projects are initiated in the complex social context of daily clinical work [12, 23]. In these settings, a focus on the use of these key methodological features ensures quality and consistency by supporting adaptation of the project to the specific context and minimizing the risk of introducing harmful or wasteful unintended consequences [10]. Thus, the PDSA cycle is not sufficient as a standalone method [4], and the integration of the full bundle of key features is often simply referred to as the PDSA method (Fig. 1).

Figure 1. Plan-Do-Study-Act (PDSA) based quality improvement. Each cycle informs the subsequent cycle. Ideally, the complexity and size of the intervention are scaled up iteratively as time passes, knowledge is gained and quality of care is improved.
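
As a rough illustration of the structure in Fig. 1, the toy simulation below runs consecutive PDSA cycles, compares each observed result with a prediction, and scales up the test between cycles. The compliance figures, aim and scaling rule are invented for the example and are not drawn from any of the cited projects.

```python
import random

def run_pdsa(target=0.95, max_cycles=6, seed=1):
    """Toy simulation of iterative PDSA cycles (illustration only, made-up numbers)."""
    random.seed(seed)
    measure = 0.60   # baseline measurement before the first cycle
    scale = 5        # start by testing the change on a handful of patients/records
    for cycle in range(1, max_cycles + 1):
        predicted = min(measure + 0.10, 1.0)                       # Plan: predict the effect
        observed = min(measure + random.uniform(0.02, 0.12), 1.0)  # Do: test and measure
        gap = observed - predicted                                 # Study: compare with prediction
        print(f"cycle {cycle}: scale={scale:>3}, observed={observed:.2f}, gap={gap:+.2f}")
        if observed >= target:                                     # Act: adopt once the aim is met
            return "adopt"
        measure = observed    # Act: adapt the change idea based on what was learned...
        scale *= 2            # ...and scale up the next test
    return "rethink the change idea"

print(run_pdsa())
```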

Since its introduction to healthcare in the 1990s, numerous QI projects have been based on the PDSA method [10, 24]. However, the scientific literature indicates that the evidence for its effect is limited [10, 25, 26, 27, 28, 29, 30]. The majority of published PDSA projects have been hampered by severe design limitations, insufficient data analysis and incomplete reporting [12, 31]. A 2013 systematic review revealed that only 2/73 projects reporting use of the PDSA cycle applied the PDSA method in accordance with the methodological recommendations [10]. These methodological limitations have led to an increased awareness of the need for more methodological rigor when conducting and reporting PDSA-based projects [4, 10]. This challenge is addressed by the emergent field of Improvement Science (IS), which attempts to systematically examine the methods and factors that best facilitate QI by drawing on a range of academic disciplines and encouraging rigorous use of scientific methods [5, 12, 32, 33]. It is important to make a distinction between local QI projects, where the primary goal is to secure a change, and IS, where the primary goal is directed at evaluation and scientific advancement [12].

In order to improve local QI projects, Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines have been developed to provide a framework for reporting QI projects [ 18 , 34 ]. Still, it remains unclear to what extent the increasing methodological awareness is reflected in PDSA-based QI projects published in recent years. Therefore, we performed a systematic review of recent peer-reviewed publications reporting QI projects using the PDSA methodology in healthcare and focused on the use of key features in the design and on the reported effects of the projects.

The key features of PDSA-based QI projects were identified, and a simple but comprehensive framework was constructed. The review was conducted in adherence with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [ 35 ].

The framework

Informed by recommendations for key features in the use and support of PDSA from the literature specific to QI in healthcare, the following key features were identified:

Use of an iterative cyclic method [ 6 , 7 , 8 , 9 , 10 ]

Use of continuous data collection [ 2 , 6 , 8 , 9 , 10 , 11 , 12 , 13 ]

Small-scale testing [ 6 , 8 , 10 , 11 , 14 , 15 , 16 ]

Explicit description of the theoretical rationale of the projects [ 5 , 9 , 17 , 18 , 19 , 20 , 21 , 22 ]

Aiming for conceptual simplicity, we established basic minimum requirements for the presence of the key features, operationalizing them into binary (yes/no) variables. General characteristics and supplementary data that elaborated on the use of the key features were operationalized and registered as categorical variables. See Table 1 for an overview of the framework and Additional file 1 for a more in-depth elaboration of the definitions used for the key features. Since a theoretical rationale can take multiple forms, the definition for this feature was taken from the recent version of the SQUIRE guidelines [18].

Since no formal standardized requirements for reporting PDSA-based QI projects have been established across journals, not all projects report the individual PDSA cycles in detail. To ensure that variation in the use of key features was inherent in the conduct of the projects and not just due to differences in reporting, sufficient documentation of PDSA cycles was set as a requirement for analysis against the full framework.

Self-reported effects

A pre-specified, quantitative aim can help to facilitate evaluation of whether changes represent clinically relevant improvements when using the PDSA method [16]. Self-reported effects of the projects were registered using four categories: (1) quantitative aim set and reached; (2) no quantitative aim set, improvement registered; (3) quantitative aim set but not reached; (4) no quantitative aim and no improvement registered.
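
As an illustration of how a single reviewed project could be coded against the framework and the effect categories above, consider the sketch below. The record layout and values are hypothetical; the actual definitions and coding rules are those given in Table 1 and Additional file 1.

```python
# Hypothetical coding of one reviewed project against the framework (illustration only).

KEY_FEATURES = [
    "iterative_cyclic_method",
    "continuous_data_collection",
    "small_scale_testing",
    "theoretical_rationale",
]

EFFECT_CATEGORIES = {
    1: "Quantitative aim set and reached",
    2: "No quantitative aim set, improvement registered",
    3: "Quantitative aim set but not reached",
    4: "No quantitative aim and no improvement registered",
}

example_project = {
    "id": "example-project",            # made-up example, not one of the 120 included projects
    "iterative_cyclic_method": True,    # each key feature coded as a binary yes/no variable
    "continuous_data_collection": True,
    "small_scale_testing": False,
    "theoretical_rationale": False,
    "effect_category": 2,
}

print(sum(example_project[f] for f in KEY_FEATURES))        # -> 2 of 4 key features applied
print(EFFECT_CATEGORIES[example_project["effect_category"]])
```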

Systematic review of the literature

The target of the literature search was peer-reviewed publications that applied the PDSA cycle as the main method for a QI project in a healthcare setting. The search consisted of the terms ([‘PDSA’ OR ‘plan-do-study-act’] AND [‘quality’ OR ‘improvement’]), searched for in title and abstract. No relevant MeSH terms were available. To capture the contemporary status of the QI field, the search was limited to QI projects published in 2015 and 2016. The PubMed, Embase and CINAHL databases were searched, with the last search performed on 2 March 2017.

Study selection

The following inclusion criteria were used: Peer-reviewed publications reporting QI projects using the PDSA methodology in healthcare, published in English. Exclusion criteria were: IS studies, editorials, conference abstracts, opinions and audit articles, reviews or projects solely involving teaching the PDSA method.

Two reviewers (SVK and HVBL) performed the screening process independently. Title and abstract were screened for inclusion followed by an assessment of the full text according to the eligibility criteria. This was performed in a standardized manner with the Covidence software. Disagreements were resolved by consensus.

Data collection process

A data collection sheet was developed and pilot tested. The subsequent refinement resulted in a standardized sheet into which data were extracted independently by SVK and HVBL.

Data from the key and supplementary features were extracted in accordance with the framework. The binary data were used to grade QI projects on a scale of 0–4, based on how many of the four key features were applied. Data were analyzed in STATA (version 15.0, StataCorp LLC).
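
The grading step is simple enough to sketch. The analysis in this review was performed in STATA; the snippet below is a rough Python equivalent for illustration, using made-up coded projects rather than the real extraction data.

```python
from collections import Counter

KEY_FEATURES = ["iterative_cyclic_method", "continuous_data_collection",
                "small_scale_testing", "theoretical_rationale"]

# Made-up extraction-sheet rows (True = key feature applied); not the real data.
coded_projects = [
    {"iterative_cyclic_method": True,  "continuous_data_collection": True,
     "small_scale_testing": False, "theoretical_rationale": False},
    {"iterative_cyclic_method": True,  "continuous_data_collection": False,
     "small_scale_testing": False, "theoretical_rationale": True},
    {"iterative_cyclic_method": False, "continuous_data_collection": False,
     "small_scale_testing": False, "theoretical_rationale": False},
]

def grade(project):
    """Count how many of the four key features a project applied (0-4)."""
    return sum(bool(project[feature]) for feature in KEY_FEATURES)

grades = [grade(p) for p in coded_projects]
print(Counter(grades))  # frequency of each grade, as summarised later in Fig. 3b
```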

Selection process

The search identified 311 QI projects of which 195 remained after duplicate removal. A total of 40 and 35 projects were discarded after screening abstracts and full texts, respectively. Hence, a total of 120 projects met the inclusion criteria and were included in the review (see Fig.  2 ).

Figure 2. PRISMA flow diagram.

An overview of the general characteristics, supplementary features and self-reported effects of the included projects is presented in Table 2.

General characteristics

Country and journal

The included QI projects originated from 18 different countries, including the USA (n = 52), the UK (n = 43), Canada (n = 6), Singapore (n = 5), Saudi Arabia (n = 4), Australia (n = 2) and one each from eight other countries. Fifty different journals had published QI projects, with the largest share (n = 53) appearing in BMJ Quality Improvement Reports. See Additional file 2 for a full summary of the findings.

Area and specialty

In terms of reach, most were local (n = 103), followed by regional (n = 13) and nationwide (n = 3). The areas of healthcare were primarily at departmental (n = 68) and hospital level (n = 36). Many different specialties were represented, the most common being pediatrics (n = 28), intensive or emergency care (n = 13), surgery (n = 12), psychiatry (n = 11) and internal medicine (n = 10).

Supporting framework

Most QI projects did not state that they used a supporting framework (n = 70). When a framework was stated, most used the Model for Improvement (n = 40); the remaining projects (n = 10) used Lean, Six Sigma or other frameworks.

Reported effects

All 120 included projects were assessed for self-reported effects. Overall, 118/120 (98%) projects reported improvement. Thirty-two (27%) achieved a pre-specified aim set in the planning process, whereas 68 (57%) reported an improvement without a pre-specified aim. Eighteen projects (15%) reported setting an aim and not reaching it, while two projects (2%) did not report a pre-specified aim and did not register any improvement.

Documentation

Seventy-two projects had sufficient documentation of the PDSA cycles. Sixty of these contained information on the individual stages of the cycles, while 12 additionally presented detailed information on all four stages of the PDSA cycles.

Application of key features of PDSA

The application of the key PDSA features appeared to be highly inconsistent. The iterative method was used in 57 projects (79%), continuous data collection in 48 (67%), an explicit theoretical rationale was present in 26 projects (36%) and small-scale testing was carried out by 10 (14%) (Fig. 3a). All key features of the method were applied in 3/72 projects (4%), while 20 (28%), 26 (36%) and 18 (25%) used three, two and one feature, respectively. Five projects (7%) lacked all features (Fig. 3b). See Additional file 3 for a full summary of the findings.

Figure 3. a) Bar chart depicting how often the four key features were used across the projects. b) Bar chart depicting the number of projects that used zero to four key features.

Iterative cycles

Fifty-seven projects (79%) had a sequence of cycles where one informed the actions of the next. A single iterative chain of cycles was used in 41 (57%), while four (5%) had multiple isolated iterative chains and 12 (17%) had a mix of iterative chains and isolated cycles. Of the 15 projects using non-iterative cycles, two reported a single cycle while 13 used multiple isolated cycles. The majority (55/72, 76%) tested one change per cycle.

Small scale testing

Testing of changes on a small scale was carried out by 10 projects (14%); seven did so at an increasing scale, two kept testing at the same scale, and it was unclear which type of scaling was used in the remaining project. Sixty-two projects (86%) carried out testing on an entire department or engaged in full-scale implementation before having tested the improvement intervention.

Continuous data collection

Continuous measurements over time, with three or more data points at regular intervals, were used by 48 (67%) of the 72 projects. Of these 48, half used run charts, while the other half used control charts. Other types of data measurement, such as before-and-after or per-PDSA-cycle measurement, were used by 18 projects (25%), while five (7%) had a single data point as the outcome after the cycle(s). One project did not report its data. Sixty-five projects (90%) used a baseline measurement for comparison.
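
To illustrate the two charting approaches mentioned above, the sketch below computes a run-chart centre line (the median) and the limits of an individuals (XmR) control chart, one common type of control chart, from made-up weekly measurements. The data and the choice of chart type are illustrative assumptions, not taken from any included project.

```python
import statistics

# Made-up weekly measurements (e.g. % of patients receiving a care bundle); illustration only.
weekly_values = [62, 58, 65, 70, 68, 74, 72, 77, 80, 79, 83, 85]

# Run chart: plot the points in time order against the median as the centre line.
median_line = statistics.median(weekly_values)

# Individuals (XmR) control chart: centre line is the mean; limits are derived from the
# average moving range using the standard 2.66 constant for individuals charts.
moving_ranges = [abs(later - earlier) for earlier, later in zip(weekly_values, weekly_values[1:])]
mean_value = statistics.fmean(weekly_values)
average_moving_range = statistics.fmean(moving_ranges)
upper_control_limit = mean_value + 2.66 * average_moving_range
lower_control_limit = mean_value - 2.66 * average_moving_range

print(f"run chart median: {median_line}")
print(f"control limits: {lower_control_limit:.1f} to {upper_control_limit:.1f} (mean {mean_value:.1f})")
```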

Theoretical rationale

Twenty-six (36%) of the 72 projects explicitly stated the theoretical rationale of the project, describing why it was predicted to lead to improvement in their specific clinical context. In terms of inspiration for the need for improvement, 68 projects (94%) referred to the scientific literature. For the QI interventions used in the projects, 26 (36%) found inspiration in externally existing knowledge in the form of scientific literature, previous QI projects or benchmarking. Twenty-one (29%) developed the projects themselves, 10 (14%) used existing knowledge in combination with their own ideas, while 15 (21%) did not state the source.

In this systematic review, nearly all PDSA-based QI projects reported improvements. However, only approximately one in four projects had defined a specific quantitative aim and reached it. In addition, only a small minority of the projects reported adhering to all four key features recommended in the literature to ensure the quality and adaptability of a QI project.

The claim that PDSA leads to improvement should be interpreted with caution. The methodological limitations in many of the projects make it difficult to draw firm conclusions about the size and causality of the reported improvements in quality of care, and they call into question the legitimacy of PDSA as an effective improvement method in healthcare. The widespread lack of a theoretical rationale and of continuous data collection makes it difficult to track and correct the process as well as to relate an improvement to the use of the method [10, 11]. The apparently limited use of the iterative approach and of small-scale testing constitutes an additional methodological limitation. Without these tools for testing and adapting, one risks introducing unintended consequences [1, 36]; QI initiatives may tamper with the system in unforeseen ways, creating more harm and waste than improvement. The low use of small-scale testing could perhaps originate in a widespread misunderstanding that one must test at large scale to obtain proper statistical power; however, this is not necessarily the case with PDSA [15].

There is no simple answer to this lack of adherence to the key methodological features. Some scholars claim that even though the concept of PDSA is relatively simple, it is difficult to master in reality [4]. Explanations that have been offered include an urge to favour action over evidence [36], an inherent messiness in the actual use of the method [11], its inability to address “big and hairy” problems [37], an oversimplification of the method, and an underestimation of the resources and support required to conduct a PDSA-based project [4].

In some cases, the lack of adherence to the methodological recommendations may be a problem of documentation rather than of methodological rigor; for example, the frequent absence of small-scale pilot testing may reflect the authors considering the information irrelevant to report, even though it was performed in the projects.

Regarding our framework, one could argue that it has too many or too few key features to encompass the PDSA method. The same can be said about the supplementary features, where additional features could also have been assessed, e.g. the use of Specific, Measurable, Attainable, Relevant and Timebound (SMART) goals [14]. It was important for us to operationalize the key features so that their presence can be identified easily and accurately. Simplification carries a risk of information loss, but this can be outweighed by the benefits of a clear and applicable framework.

This review has some limitations. We only included PDSA projects reported in peer-reviewed journals, which represent just a fraction of all QI projects being conducted around the globe. Further, it might be difficult to publish projects that do not document improvements, which may introduce publication bias. Future studies could use the framework to examine the grey literature of evaluation reports and similar documents to see whether the pattern of methodological limitations is consistent. The fact that a majority of the projects reported positive change could also indicate a potential bias: for busy QI practitioners, the process of translating a clinical project into a publication may well be motivated by a positive finding, with projects showing negative effects going unreported. However, we should not forget that a negative outcome of a PDSA project may still contribute valuable learning and competence building [4, 6].

The field of IS and collaboration between practitioners and scholars have the potential to deliver crucial insight into the complex process of QI, including the difficulties of replicating projects with promising effects [5, 12, 20, 32]. Rigorous methodological adherence may be experienced as a restriction by practitioners, which could discourage engagement in QI initiatives. However, by strengthening the use of the key features and improving documentation, PDSA projects will be more likely to contribute to IS, including reliable meta-analyses and systematic reviews [10]. This could in turn provide QI practitioners with evidence-based knowledge [5, 38]. In this way, rigor in performing and documenting QI projects benefits the whole QI community in the long run. It is important that new knowledge becomes readily available and application oriented, in order for practitioners to be motivated to use it. An inherent part of using the PDSA method consists of acknowledging the complexity of creating lasting improvement. Here, the scientific ideals of planning, executing, hypothesizing, managing data and documenting with rigor and high quality should serve as inspiration.

Our framework could imply that the presence of all four features will inevitably result in the success of an improvement project. This is clearly not the case; no “magic bullets” exist in QI [39]. QI is about implementing complex projects in complex social contexts. Here, adherence to the key methodological recommendations and rigorous documentation can help to ensure better quality and reproducibility. This review can serve as a reminder of these features and of how rigor in individual QI projects can assist the work of IS, which in turn can offer new insight for the benefit of practitioners.

This systematic review documents that substantial methodological challenges remain in the conduct and reporting of PDSA projects. These challenges pose a problem for the legitimacy of the method. Individual improvement projects should strive to contribute to a scientific foundation for QI by being conducted and documented with greater rigor. There seems to be a continued need for methodological improvement when conducting and reporting on QI initiatives.

Availability of data and materials

All data generated or analysed during this review are included in this published article and its supplementary information files.

Abbreviations

IS: Improvement Science
PDSA: Plan-Do-Study-Act
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
QI: Quality Improvement
SMART: Specific, Measurable, Attainable, Relevant and Timebound
SQUIRE: Standards for QUality Improvement Reporting Excellence

References

1. Nicolay CR, Purkayastha S, Greenhalgh A, Benn J, Chaturvedi S, Phillips N, et al. Systematic review of the application of quality improvement methodologies from the manufacturing industry to surgical healthcare. Br J Surg. 2012;99(3):324–35.
2. Speroff T, O'Connor GT. Study designs for PDSA quality improvement research. Qual Manag Health Care. 2004;13(1):17–32.
3. Moen R. Foundation and history of the PDSA cycle. Assoc Process Improv. 2009. Available from: https://deming.org/uploads/paper/PDSA_History_Ron_Moen.pdf.
4. Reed JE, Card AJ. The problem with plan-do-study-act cycles. BMJ Qual Saf. 2016;25(3):147–52.
5. Portela MC, Lima SML, Martins M, Travassos C. Improvement Science: conceptual and theoretical foundations for its application to healthcare quality improvement. Cad Saude Publica. 2016;32(sup 2):e00105815.
6. Langley GJ, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The improvement guide. 2nd ed. Jossey-Bass; 2009.
7. Berwick DM. The science of improvement. JAMA. 2008;299(10):1182–4.
8. Berwick DM, Nolan TW. Developing and testing changes in delivery of care. Ann Intern Med. 1998;128(1):651–6.
9. Perla RJ, Provost LP, Parry GJ. Seven propositions of the science of improvement: exploring foundations. Qual Manag Health Care. 2013;22(3):170–86.
10. Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan-do-study-act method to improve quality in healthcare. BMJ Qual Saf. 2013;0:1–9.
11. Ogrinc G. Building knowledge, asking questions. BMJ Qual Saf. 2014;23(4):265–7.
12. Portela MC, Pronovost PJ, Woodcock T, Carter P, Dixon-Woods M. How to study improvement interventions: a brief overview of possible study types. Postgrad Med J. 2015;91(1076):343–54.
13. Thor J, Lundberg J, Ask J, Olsson J, Carli C, Härenstam KP, et al. Application of statistical process control in healthcare improvement: systematic review. Qual Saf Health Care. 2007;16(5):387–99.
14. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.
15. Etchells E, Ho M, Shojania KG. Value of small sample sizes in rapid-cycle quality improvement projects. BMJ Qual Saf. 2016;25(3):202–6.
16. Berwick DM. A primer on leading the improvement of systems. BMJ. 1996;312(7031):619.
17. Speroff T, James BC, Nelson EC, Headrick LA, Brommels M. Guidelines for appraisal and publication of PDSA quality improvement. Qual Manag Health Care. 2004;13(1):33–9.
18. Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. Standards for QUality Improvement Reporting Excellence 2.0: revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2016;25:986–92.
19. Moonesinghe SR, Peden CJ. Theory and context: putting the science into improvement. Br J Anaesth. 2017;118(4):482–4.
20. Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24(3):228–38.
21. Foy R, Ovretveit J, Shekelle PG, Pronovost PJ, Taylor SL, Dy S, et al. The role of theory in research to develop and evaluate the implementation of patient safety practices. BMJ Qual Saf. 2011;20(5):453–9.
22. Walshe K. Pseudoinnovation: the development and spread of healthcare quality improvement methodologies. Int J Qual Health Care. 2009;21(3):153–9.
23. Walshe K. Understanding what works-and why-in quality improvement: the need for theory-driven evaluation. Int J Qual Health Care. 2007;19(2):57–9.
24. Powell AE, Rushmer RK, Davies HT. A systematic narrative review of quality improvement models in health care. Glasgow: Quality Improvement Scotland (NHS QIS); 2009.
25. Groene O. Does quality improvement face a legitimacy crisis? Poor quality studies, small effects. J Health Serv Res Policy. 2011;16(3):131–2.
26. Dixon-Woods M, Martin G. Does quality improvement improve quality? Future Hosp J. 2016;3(3):191–4.
27. Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q. 1998;76(4):625–48.
28. Dellifraine JL, Langabeer JR, Nembhard IM. Assessing the evidence of six sigma and lean in the health care industry. Qual Manag Health Care. 2010;19(3):211–25.
29. D'Andreamatteo A, Ianni L, Lega F, Sargiacomo M. Lean in healthcare: a comprehensive review. Health Policy. 2015;119(9):1197–209.
30. Moraros J, Lemstra M, Nwankwo C. Lean interventions in healthcare: do they actually work? A systematic literature review. Int J Qual Health Care. 2016:150–65.
31. Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. Health Aff. 2005;24(1):138–50.
32. Marshall M, Pronovost P, Dixon-Woods M. Promotion of improvement as a science. Lancet. 2013;381(9881):419–21.
33. The Health Foundation. Improvement science. Health Foundation evidence scan. 2011. Available from: http://www.health.org.uk/sites/health/files/ImprovementScience.pdf.
34. Davidoff F, Batalden P, Stevens D, Ogrinc G, Mooney S. Publication guidelines for quality improvement in health care: evolution of the SQUIRE project. Qual Saf Health Care. 2008;17(Suppl 1):i3–9.
35. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. BMJ. 2009;339:b2700.
36. Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to improve care and knowing how to do it. N Engl J Med. 2007;357(6):608–13.
37. Dixon-Woods M, Martin G, Tarrant C, Bion J, Goeschel C, Pronovost P, et al. Safer clinical systems: evaluation findings. The Health Foundation; 2014. Available from: https://www.health.org.uk/publications/safer-clinical-systems-evaluation-findings.
38. Dixon-Woods M, Bosk CL, Aveling EL, Goeschel CA, Pronovost PJ. Explaining Michigan: developing an ex post theory of a quality improvement program. Milbank Q. 2011;89(2):167–205.
39. Oxman AD, Thomson MA, Davis DA, Haynes B. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ. 1995;153(10):1423–31.

Acknowledgements

Not applicable.

Funding

The review was funded by Aalborg University, Denmark. The funding body had no influence on the study design, on the collection, analysis and interpretation of data or on the design of the manuscript.

Author information

Authors and affiliations

Danish Center for Clinical Health Services Research (DACS), Department of Clinical Medicine, Aalborg University, Mølleparkvej 10, 9000, Aalborg, Denmark

Søren Valgreen Knudsen, Søren Paaske Johnsen & Jan Mainz

Psychiatry, Aalborg University Hospital, The North Denmark Region Mølleparkvej 10, 9000, Aalborg, Denmark

Danish Center for Healthcare Improvements (DCHI), Aalborg University, Fibigerstræde 11, 9220, Aalborg Øst, Denmark

Søren Valgreen Knudsen, Henrik Vitus Bering Laursen & Lars Holger Ehlers

Danish Clinical Registries, Denmark, Nrd. Fasanvej 57, 2000, Frederiksberg, Denmark

Paul Daniel Bartels

Department for Community Mental Health, Haifa University, Haifa, Israel

Department of Health Economics, University of Southern Denmark, Odense, Denmark

Contributions

All authors have made substantive intellectual contributions to the review. SVK and HVBL have been the primary authors and have made substantial contributions to conception and design, acquisition, analysis and interpretation of the data as well as developing drafts of the manuscript. LHE and JM have been primary supervisors and have contributed substantially with intellectual feedback and manuscript revision. SPJ and PB have made substantial contributions by revising the manuscript critically for intellectual content. Each author agrees to be accountable for all aspects of the work and all authors have given final approval of the version to be published.

Corresponding author

Correspondence to Søren Valgreen Knudsen .

Ethics declarations

Ethics approval and consent to participate; consent for publication; competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Description of variables and coding. (DOCX 24 kb)

Additional file 2:

Projects identified in the search that used PDSA method. (DOCX 204 kb)

Additional file 3:

Projects identified in the search that describe the PDSA method in sufficient detail to be included in the full analysis against the framework. (DOCX 145 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Knudsen, S.V., Laursen, H.V.B., Johnsen, S.P. et al. Can quality improvement improve the quality of care? A systematic review of reported effects and methodological rigor in plan-do-study-act projects. BMC Health Serv Res 19 , 683 (2019). https://doi.org/10.1186/s12913-019-4482-6

Received: 07 January 2019

Accepted: 28 August 2019

Published: 04 October 2019

DOI: https://doi.org/10.1186/s12913-019-4482-6


Keywords: Plan-do-study-act, Health services research, Quality improvement


Quality improvement into practice

  • Adam Backhouse, quality improvement programme lead 1,
  • Fatai Ogunlayi, public health specialty registrar 2
  • 1 North London Partners in Health and Care, Islington CCG, London N1 1TH, UK
  • 2 Institute of Applied Health Research, Public Health, University of Birmingham, B15 2TT, UK
  • Correspondence to: A Backhouse adam.backhouse{at}nhs.net

What you need to know

Thinking of quality improvement (QI) as a principle-based approach to change provides greater clarity about (a) the contribution QI offers to staff and patients, (b) how to differentiate it from other approaches, and (c) the benefits of using QI together with other change approaches

QI is not a silver bullet for all changes required in healthcare: it has great potential to be used together with other change approaches, either concurrently (using audit to inform iterative tests of change) or consecutively (using QI to adapt published research to local context)

As QI becomes established, opportunities for these collaborations will grow, to the benefit of patients.

The benefits to front line clinicians of participating in quality improvement (QI) activity are promoted in many health systems. QI can represent a valuable opportunity for individuals to be involved in leading and delivering change, from improving individual patient care to transforming services across complex health and care systems. 1

However, it is not clear that this promotion of QI has created greater understanding of QI or widespread adoption. QI largely remains an activity undertaken by experts and early adopters, often in isolation from their peers. 2 There is a danger of a widening gap between this group and the majority of healthcare professionals.

This article aims to help those new to QI understand what it is, where it fits with other approaches to improving care (such as audit or research), and when it is best to use a QI approach, making it easier to see the relevance and usefulness of QI in delivering better outcomes for patients.

How this article was made

AB and FO are both specialist quality improvement practitioners and have developed their expertise working in QI roles for a variety of UK healthcare organisations. The analysis presented here arose from AB and FO’s observations of the challenges faced when introducing QI, with healthcare providers often unable to distinguish between QI and other change approaches, making it difficult to understand what QI can do for them.

How is quality improvement defined?

There are many definitions of QI ( box 1 ). The BMJ ’s Quality Improvement series uses the Academy of Medical Royal Colleges definition. 6 Rather than viewing QI as a single method or set of tools, it can be more helpful to think of QI as based on a set of principles common to many of these definitions: a systematic continuous approach that aims to solve problems in healthcare, improve service provision, and ultimately provide better outcomes for patients.

Definitions of quality improvement

Improvement in patient outcomes, system performance, and professional development that results from a combined, multidisciplinary approach in how change is delivered. 3

The delivery of healthcare with improved outcomes and lower cost through continuous redesigning of work processes and systems. 4

Using a systematic change method and strategies to improve patient experience and outcome. 5

To make a difference to patients by improving safety, effectiveness, and experience of care by using understanding of our complex healthcare environment, applying a systematic approach, and designing, testing, and implementing changes using real time measurement for improvement. 6

In this article we discuss QI as an approach to improving healthcare that follows the principles outlined in box 2 ; this may be a useful reference to consider how particular methods or tools could be used as part of a QI approach.

Principles of QI

Primary intent— To bring about measurable improvement to a specific aspect of healthcare delivery, often with evidence or theory of what might work but requiring local iterative testing to find the best solution. 7

Employing an iterative process of testing change ideas— Adopting a theory of change which emphasises a continuous process of planning and testing changes, studying and learning from comparing the results to a predicted outcome, and adapting hypotheses in response to results of previous tests. 8 9

Consistent use of an agreed methodology— Many different QI methodologies are available; commonly cited methodologies include the Model for Improvement, Lean, Six Sigma, and Experience-based Co-design. 4 Systematic review shows that the choice of tools or methodologies has little impact on the success of QI provided that the chosen methodology is followed consistently. 10 Though there is no formal agreement on what constitutes a QI tool, it would include activities such as process mapping that can be used within a range of QI methodological approaches. NHS Scotland’s Quality Improvement Hub has a glossary of commonly used tools in QI. 11

Empowerment of front line staff and service users— QI work should engage staff and patients by providing them with the opportunity and skills to contribute to improvement work. Recognition of this need often manifests in drives from senior leadership or management to build QI capability in healthcare organisations, but it also requires that frontline staff and service users feel able to make use of these skills and take ownership of improvement work. 12

Using data to drive improvement— To drive decision making by measuring the impact of tests of change over time and understanding variation in processes and outcomes. Measurement for improvement typically prioritises this narrative approach over concerns around exactness and completeness of data. 13 14

Scale-up and spread, with adaptation to context— As interventions tested using a QI approach are scaled up and the degree of belief in their efficacy increases, it is desirable that they spread outward and be adopted by others. Key to successful diffusion of improvement is the adaption of interventions to new environments, patient and staff groups, available resources, and even personal preferences of healthcare providers in surrounding areas, again using an iterative testing approach. 15 16

What other approaches to improving healthcare are there?

Taking considered action to change healthcare for the better is not new, but QI as a distinct approach to improving healthcare is a relatively recent development. There are many well established approaches to evaluating and making changes to healthcare services in use, and QI will only be adopted more widely if it offers a new perspective or an advantage over other approaches in certain situations.

A non-systematic literature scan identified the following other approaches for making change in healthcare: research, clinical audit, service evaluation, and clinical transformation. We also identified innovation as an important catalyst for change, but we did not consider it an approach to evaluating and changing healthcare services so much as a catch-all term for describing the development and introduction of new ideas into the system. A summary of the different approaches and their definitions is shown in box 3. Many have elements in common with QI, but there are important differences in both intent and application. To be useful to clinicians and managers, QI must find a role within healthcare that complements research, audit, service evaluation, and clinical transformation while retaining the core principles that differentiate it from these approaches.

Alternatives to QI

Research— The attempt to derive generalisable new knowledge by addressing clearly defined questions with systematic and rigorous methods. 17

Clinical audit— A way to find out if healthcare is being provided in line with standards and to let care providers and patients know where their service is doing well, and where there could be improvements. 18

Service evaluation— A process of investigating the effectiveness or efficiency of a service with the purpose of generating information for local decision making about the service. 19

Clinical transformation— An umbrella term for more radical approaches to change; a deliberate, planned process to make dramatic and irreversible changes to how care is delivered. 20

Innovation— To develop and deliver new or improved health policies, systems, products and technologies, and services and delivery methods that improve people’s health. Health innovation responds to unmet needs by employing new ways of thinking and working. 21

Why do we need to make this distinction for QI to succeed?

Improvement in healthcare is 20% technical and 80% human. 22 Essential to that 80% is clear communication, clarity of approach, and a common language. Without this shared understanding of QI as a distinct approach to change, QI work risks straying from the core principles outlined above, making it less likely to succeed. If practitioners cannot communicate clearly with their colleagues about the key principles and differences of a QI approach, there will be mismatched expectations about what QI is and how it is used, lowering the chance that QI work will be effective in improving outcomes for patients. 23

There is also a risk that the language of QI is adopted to describe change efforts regardless of their fidelity to a QI approach, either due to a lack of understanding of QI or a lack of intention to carry it out consistently. 9 Poor fidelity to the core principles of QI reduces its effectiveness and makes its desired outcome less likely, leading to wasted effort by participants and decreasing its credibility. 2 8 24 This in turn further widens the gap between advocates of QI and those inclined to scepticism, and may lead to missed opportunities to use QI more widely, consequently leading to variation in the quality of patient care.

Without articulating the differences between QI and other approaches, there is a risk of not being able to identify where a QI approach can best add value. Conversely, we might be tempted to see QI as a “silver bullet” for every healthcare challenge when a different approach may be more effective. In reality it is not clear that QI will be fit for purpose in tackling all of the wicked problems of healthcare delivery and we must be able to identify the right tool for the job in each situation. 25 Finally, while different approaches will be better suited to different types of challenge, not having a clear understanding of how approaches differ and complement each other may mean missed opportunities for multi-pronged approaches to improving care.

What is the relationship between QI and other approaches such as audit?

Academic journals, healthcare providers, and “arms-length bodies” have made various attempts to distinguish between the different approaches to improving healthcare. 19 26 27 28 However, most comparisons do not include QI or compare QI to only one or two of the other approaches. 7 29 30 31 To make it easier for people to use QI approaches effectively and appropriately, we summarise the similarities, differences, and crossover between QI and other approaches to tackling healthcare challenges ( fig 1 ).

Fig 1. How quality improvement interacts with other approaches to improving healthcare.

QI and research

Research aims to generate new generalisable knowledge, while QI typically involves a combination of generating new knowledge or implementing existing knowledge within a specific setting. 32 Unlike research, including pragmatic research designed to test effectiveness of interventions in real life, QI does not aim to provide generalisable knowledge. In common with QI, research requires a consistent methodology. This method is typically used, however, to prove or disprove a fixed hypothesis rather than the adaptive hypotheses developed through the iterative testing of ideas typical of QI. Both research and QI are interested in the environment where work is conducted, though with different intentions: research aims to eliminate or at least reduce the impact of many variables to create generalisable knowledge, whereas QI seeks to understand what works best in a given context. The rigour of data collection and analysis required for research is much higher; in QI a criterion of “good enough” is often applied.

Relationship with QI

Though the goal of clinical research is to develop new knowledge that will lead to changes in practice, much has been written on the lag time between publication of research evidence and system-wide adoption, leading to delays in patients benefitting from new treatments or interventions. 33 QI offers a way to iteratively test the conditions required to adapt published research findings to the local context of individual healthcare providers, generating new knowledge in the process. Areas with little existing knowledge requiring further research may be identified during improvement activities, which in turn can form research questions for further study. QI and research also intersect in the field of improvement science, the academic study of QI methods which seeks to ensure QI is carried out as effectively as possible. 34

Scenario: QI for translational research

Newly published research shows that a particular physiotherapy intervention is more clinically effective when delivered in short, twice-daily bursts rather than longer, less frequent sessions. A team of hospital physiotherapists wish to implement the change but are unclear how they will manage the shift in workload and how they should introduce this potentially disruptive change to staff and to patients.

Before continuing reading think about your own practice— How would you approach this situation, and how would you use the QI principles described in this article?

Adopting a QI approach, the team realise that, although the change they want to make is already determined, the way in which it is introduced and adapted to their wards is for them to decide. They take time to explain the benefits of the change to colleagues and their current patients, and ask patients how they would best like to receive their extra physiotherapy sessions.

The change is planned and tested for two weeks with one physiotherapist working with a small number of patients. Data are collected each day, including reasons why sessions were missed or refused. The team review the data each day and make iterative changes to the physiotherapist’s schedule, and to the times of day the sessions are offered to patients. Once an improvement is seen, this new way of working is scaled up to all of the patients on the ward.

The findings of the work are fed into a service evaluation of physiotherapy provision across the hospital, which uses the findings of the QI work to make recommendations about how physiotherapy provision should be structured in the future. People feel more positive about the change because they know colleagues who have already made it work in practice.

QI and clinical audit

Clinical audit is closely related to QI: it is often used with the intention of iteratively improving the standard of healthcare, albeit in relation to a pre-determined standard of best practice. 35 When used iteratively, interspersed with improvement action, the clinical audit cycle adheres to many of the principles of QI. However, in practice clinical audit is often used by healthcare organisations as an assurance function, making it less likely to be carried out with a focus on empowering staff and service users to make changes to practice. 36 Furthermore, academic reviews of audit programmes have shown audit to be an ineffective approach to improving quality due to a focus on data collection and analysis without a well developed approach to the action section of the audit cycle. 37 Clinical audits, such as the National Clinical Audit Programme in the UK (NCAPOP), often focus on the management of specific clinical conditions. QI can focus on any part of service delivery and can take a more cross-cutting view which may identify issues and solutions that benefit multiple patient groups and pathways. 30

Audit is often the first step in a QI process and is used to identify improvement opportunities, particularly where compliance with known standards for high quality patient care needs to be improved. Audit can be used to establish a baseline and to analyse the impact of tests of change against the baseline. Also, once an improvement project is under way, audit may form part of rapid cycle evaluation, during the iterative testing phase, to understand the impact of the idea being tested. Regular clinical audit may be a useful assurance tool to help track whether improvements have been sustained over time.
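To make this concrete, here is a minimal sketch, using entirely hypothetical audit data and field names, of how weekly compliance with a standard might be computed and compared against a pre-change baseline during rapid cycle evaluation. It illustrates the general idea rather than a prescribed audit method.

```python
# Minimal sketch (hypothetical data): weekly audit of compliance with a care
# standard, used as a baseline and then tracked during tests of change.
from collections import defaultdict

# Each record: (ISO week, standard met for this case?) -- illustrative only.
audit_records = [
    ("2024-W01", True), ("2024-W01", False), ("2024-W01", True),
    ("2024-W02", False), ("2024-W02", True), ("2024-W02", True),
    ("2024-W05", True), ("2024-W05", True), ("2024-W05", True), ("2024-W05", False),
]

def weekly_compliance(records):
    """Return {week: proportion of audited cases meeting the standard}."""
    totals, met = defaultdict(int), defaultdict(int)
    for week, compliant in records:
        totals[week] += 1
        met[week] += compliant
    return {week: met[week] / totals[week] for week in sorted(totals)}

baseline_weeks = {"2024-W01", "2024-W02"}  # audited before the first test of change
rates = weekly_compliance(audit_records)
baseline = sum(rates[w] for w in baseline_weeks) / len(baseline_weeks)

for week, rate in rates.items():
    note = "" if week in baseline_weeks else f" (baseline {baseline:.0%})"
    print(f"{week}: {rate:.0%}{note}")
```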

Scenario: Audit and QI

A foundation year 2 (FY2) doctor is asked to complete an audit of a pre-surgical pathway by looking retrospectively through patient documentation. She concludes that adherence to best practice is mixed and recommends: “Remind the team of the importance of being thorough in this respect and re-audit in 6 months.” The results are presented at an audit meeting, but a re-audit a year later by a new FY2 doctor shows similar results.

Before reading further, think about your own practice: how would you approach this situation, and how would you use the QI principles described in this article?

Contrast the above with a team-led, rapid cycle audit in which everyone contributes to collecting and reviewing data from the previous week, discussed at a regular team meeting. Though surgical patients are often transient, their experience of care and ideas for improvement are captured during discharge conversations. The team identify and test several iterative changes to care processes. They document and test these changes between audits, leading to sustainable change. Some of the surgeons involved work across multiple hospitals, and spread some of the improvements, with the audit tool, as they go.

QI and service evaluation

In practice, service evaluation is not subject to the same rigorous definition or governance as research or clinical audit, meaning that there are inconsistencies in the methodology for carrying it out. While the primary intent for QI is to make change that will drive improvement, the primary intent for evaluation is to assess the performance of current patient care. 38 Service evaluation may be carried out proactively to assess a service against its stated aims or to review the quality of patient care, or may be commissioned in response to serious patient harm or red flags about service performance. The purpose of service evaluation is to help local decision makers determine whether a service is fit for purpose and, if necessary, identify areas for improvement.

Service evaluation may be used to initiate QI activity by identifying opportunities for change that would benefit from a QI approach. It may also evaluate the impact of changes made using QI, either during the work or after completion to assess sustainability of improvements made. Though likely planned as separate activities, service evaluation and QI may overlap and inform each other as they both develop. Service evaluation may also make a judgment about a service’s readiness for change and identify any barriers to, or prerequisites for, carrying out QI.

QI and clinical transformation

Clinical transformation involves radical, dramatic, and irreversible change—the sort of change that cannot be achieved through continuous improvement alone. As with service evaluation, there is no consensus on what clinical transformation entails, and it may be best thought of as an umbrella term for the large scale reform or redesign of clinical services and the non-clinical services that support them. 20 39 While it is possible to carry out transformation activity that uses elements of a QI approach, such as effective engagement of the staff and patients involved, QI, which rests on iterative tests of change, cannot itself deliver the one-off, irreversible change that defines transformation.

There is opportunity to use QI to identify and test ideas before full scale clinical transformation is implemented. This has the benefit of engaging staff and patients in the clinical transformation process and increasing the degree of belief that clinical transformation will be effective or beneficial. Transformation activity, once completed, could be followed up with QI activity to drive continuous improvement of the new process or allow adaption of new ways of working. As interventions made using QI are scaled up and spread, the line between QI and transformation may seem to blur. The shift from QI to transformation occurs when the intention of the work shifts away from continuous testing and adaptation into the wholesale implementation of an agreed solution.

Scenario: QI and clinical transformation

An NHS trust’s human resources (HR) team is struggling to manage its junior doctor placements, rotas, and on-call duties, which is causing tension and has led to concern about medical cover and patient safety out of hours. A neighbouring trust has launched a smartphone app that supports clinicians and HR colleagues to manage these processes, with great success.

This problem feels ripe for a transformation approach—to launch the app across the trust, confident that it will solve the trust’s problems.

Before reading further, think about your own organisation: what do you think will happen, and how would you use the QI principles described in this article in this situation?

Outcome without QI

Unfortunately, the HR team haven’t taken the time to understand the underlying problems with their current system, which revolve around poor communication and a lack of clarity from the HR team: staff do not know who to contact, and their questions go unanswered. HR assume that because the app has been a success elsewhere, it will work here as well.

People get excited about the new app and the benefits it will bring, but no consideration is given to the processes and relationships that need to be in place to make it work. The app is launched with a high profile campaign and adoption is high, but the same issues continue. The HR team are confused as to why things didn’t work.

Outcome with QI

Although the app has worked elsewhere, rolling it out without adapting it to local context is a risk – one which application of QI principles can mitigate.

HR pilot the app in a volunteer specialty after spending time speaking to clinicians to better understand their needs. They carry out several tests of change, ironing out issues with the process as they go, using issues logged and clinician feedback as a source of data. When they are confident the app works for them, they expand to a directorate, then to a division, and finally take the transformational step of an organisation-wide rollout.

Education into practice

Next time you are faced with what looks like a quality improvement (QI) opportunity, consider asking:

How do you know that QI is the best approach to this situation? What else might be appropriate?

Have you considered how to ensure you implement QI according to the principles described above?

Is there opportunity to use other approaches in tandem with QI for a more effective result?

How patients were involved in the creation of this article

This article was conceived and developed in response to conversations with clinicians and patients working together on co-produced quality improvement and research projects in a large UK hospital. The first iteration of the article was reviewed by an expert patient, and, in response to their feedback, we have sought to make clearer the link between understanding the issues raised and better patient care.

Contributors: This work was initially conceived by AB. AB and FO were responsible for the research and drafting of the article. AB is the guarantor of the article.

Competing interests: We have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.

Provenance and peer review: This article is part of a series commissioned by The BMJ based on ideas generated by a joint editorial group with members from the Health Foundation and The BMJ , including a patient/carer. The BMJ retained full editorial control over external peer review, editing, and publication. Open access fees and The BMJ ’s quality improvement editor post are funded by the Health Foundation.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/ .

  • Olsson-Brown A
  • Dixon-Woods M ,
  • Batalden PB ,
  • Berwick D ,
  • Øvretveit J
  • Academy of Medical Royal Colleges
  • Nelson WA ,
  • McNicholas C ,
  • Woodcock T ,
  • Alderwick H ,
  • ↵ NHS Scotland Quality Improvement Hub. Quality improvement glossary of terms. http://www.qihub.scot.nhs.uk/qi-basics/quality-improvement-glossary-of-terms.aspx .
  • McNicol S ,
  • Solberg LI ,
  • Massoud MR ,
  • Albrecht Y ,
  • Illingworth J ,
  • Department of Health
  • ↵ NHS England. Clinical audit. https://www.england.nhs.uk/clinaudit/ .
  • Healthcare Quality Improvement Partnership
  • McKinsey Hospital Institute
  • ↵ World Health Organization. WHO Health Innovation Group. 2019. https://www.who.int/life-course/about/who-health-innovation-group/en/ .
  • Sheffield Microsystem Coaching Academy
  • Davidoff F ,
  • Leviton L ,
  • Taylor MJ ,
  • Nicolay C ,
  • Tarrant C ,
  • Twycross A ,
  • ↵ University Hospitals Bristol NHS Foundation Trust. Is your study research, audit or service evaluation. http://www.uhbristol.nhs.uk/research-innovation/for-researchers/is-it-research,-audit-or-service-evaluation/ .
  • ↵ University of Sheffield. Differentiating audit, service evaluation and research. 2006. https://www.sheffield.ac.uk/polopoly_fs/1.158539!/file/AuditorResearch.pdf .
  • ↵ Royal College of Radiologists. Audit and quality improvement. https://www.rcr.ac.uk/clinical-radiology/audit-and-quality-improvement .
  • Gundogan B ,
  • Finkelstein JA ,
  • Brickman AL ,
  • Health Foundation
  • Johnston G ,
  • Crombie IK ,
  • Davies HT ,
  • Hillman T ,
  • ↵ NHS Health Research Authority. Defining research. 2013. https://www.clahrc-eoe.nihr.ac.uk/wp-content/uploads/2014/04/defining-research.pdf .



  • Perspective
  • Open access
  • Published: 09 April 2024

The potential for artificial intelligence to transform healthcare: perspectives from international health leaders

  • Christina Silcox 1 ,
  • Eyal Zimlichmann 2 , 3 ,
  • Katie Huber   ORCID: orcid.org/0000-0003-2519-8714 1 ,
  • Neil Rowen 1 ,
  • Robert Saunders 1 ,
  • Mark McClellan 1 ,
  • Charles N. Kahn III 3 , 4 ,
  • Claudia A. Salzberg 3 &
  • David W. Bates   ORCID: orcid.org/0000-0001-6268-1540 5 , 6 , 7  

npj Digital Medicine, volume 7, Article number: 88 (2024)


  • Health policy
  • Health services

Artificial intelligence (AI) has the potential to transform care delivery by improving health outcomes, patient safety, and the affordability and accessibility of high-quality care. AI will be critical to building an infrastructure capable of caring for an increasingly aging population, utilizing an ever-increasing knowledge of disease and options for precision treatments, and combatting workforce shortages and burnout of medical professionals. However, we are not currently on track to create this future. This is in part because the health data needed to train, test, use, and surveil these tools are generally neither standardized nor accessible. There is also universal concern about the ability to monitor health AI tools for changes in performance as they are implemented in new places, used with diverse populations, and over time as health data may change. The Future of Health (FOH), an international community of senior health care leaders, collaborated with the Duke-Margolis Institute for Health Policy to conduct a literature review, expert convening, and consensus-building exercise around this topic. This commentary summarizes the four priority action areas and recommendations for health care organizations and policymakers across the globe that FOH members identified as important for fully realizing AI’s potential in health care: improving data quality to power AI, building infrastructure to encourage efficient and trustworthy development and evaluations, sharing data for better AI, and providing incentives to accelerate the progress and impact of AI.


Introduction

Artificial intelligence (AI), supported by timely and accurate data and evidence, has the potential to transform health care delivery by improving health outcomes, patient safety, and the affordability and accessibility of high-quality care 1 , 2 . AI integration is critical to building an infrastructure capable of caring for an increasingly aging population, utilizing an ever-increasing knowledge of disease and options for precision treatments, and combatting workforce shortages and burnout of medical professionals. However, we are not currently on track to create this future. This is in part because the health data needed to train, test, use, and surveil these tools are generally neither standardized nor accessible. This is true across the international community, although there is variable progress within individual countries. There is also universal concern about monitoring health AI tools for changes in performance as they are implemented in new places, used with diverse populations, and over time as health data may change.

The Future of Health (FOH) is an international community of senior health care leaders representing health systems, health policy, health care technology, venture funding, insurance, and risk management. FOH collaborated with the Duke-Margolis Institute for Health Policy to conduct a literature review, expert convening, and consensus-building exercise. In total, 46 senior health care leaders were engaged in this work, from eleven countries in Europe, North America, Africa, Asia, and Australia. This commentary summarizes the four priority action areas and recommendations for health care organizations and policymakers that FOH members identified as important for fully realizing AI’s potential in health care: improving data quality to power AI, building infrastructure to encourage efficient and trustworthy development and evaluations, sharing data for better AI, and providing incentives to accelerate the progress and impact of AI.

Powering AI through high-quality data

“Going forward, data are going to be the most valuable commodity in health care. Organizations need robust plans about how to mobilize and use their data.”

AI algorithms will only perform as well as the accuracy and completeness of key underlying data, and data quality is dependent on actions and workflows that encourage trust.

To begin to improve data quality, FOH members agreed that an initial priority is identifying and assuring reliable availability of high-priority data elements for promising AI applications: those with the most predictive value, those of the highest value to patients, and those most important for analyses of performance, including subgroup analyses to detect bias.
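As a simple illustration of the subgroup analyses mentioned above, the sketch below uses entirely hypothetical records to compare a tool's sensitivity across patient subgroups. The subgroup labels, record layout, and metric choice are assumptions made for the example, and such checks are only possible when the underlying subgroup data elements are reliably captured.

```python
# Minimal sketch (hypothetical data): compare a tool's sensitivity by subgroup
# to surface possible bias. Requires the subgroup field to be reliably recorded.
from collections import defaultdict

# Each record: (subgroup label, model flagged the case?, condition truly present?)
cases = [
    ("group_a", True, True), ("group_a", False, True), ("group_a", True, False),
    ("group_b", False, True), ("group_b", False, True), ("group_b", True, True),
]

def sensitivity_by_subgroup(records):
    """Return {subgroup: true positives / all true positives present} per subgroup."""
    tp, pos = defaultdict(int), defaultdict(int)
    for group, flagged, truth in records:
        if truth:
            pos[group] += 1
            tp[group] += flagged
    return {group: tp[group] / pos[group] for group in pos}

for group, sens in sensitivity_by_subgroup(cases).items():
    print(f"{group}: sensitivity {sens:.0%}")
```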

Leaders should also advocate for aligned policy incentives to improve the availability and reliability of these priority data elements. There are several examples of efforts across the world to identify and standardize high-priority data elements for AI applications and beyond, such as the multinational project STANDING Together, which is developing standards to improve the quality and representativeness of data used to build and test AI tools 3 .

Policy incentives that would further encourage high-quality data collection include (1) aligned payment incentives for measures of health care quality and safety, and ensuring the reliability of the underlying data, and (2) quality measures and performance standards focused on the reliability, completeness, and timeliness of collection and sharing of high-priority data itself.

Trust and verify

“Your AI algorithms are only going to be as good as the data and the real-world evidence used to validate them, and the data are only going to be as good as the trust and privacy and supporting policies.”

FOH members stressed the importance of showing that AI tools are both effective and safe within their specific patient populations.

This is a particular challenge with AI tools, whose performance can differ dramatically across sites and over time, as health data patterns and population characteristics vary. For example, several studies of the Epic Sepsis Model found both location-based differences in performance and degradation in performance over time due to data drift 4 , 5 . However, real-world evaluations are often much more difficult for algorithms that are used for longer-term predictions, or to avert long-term complications from occurring, particularly in the absence of connected, longitudinal data infrastructure. As such, health systems must prioritize implementing data standards and data infrastructure that can facilitate the retraining or tuning of algorithms, test for local performance and bias, and ensure scalability across the organization and longer-term applications 6 .
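A minimal sketch of the kind of ongoing local monitoring this implies is shown below. It tracks a model's discrimination (area under the ROC curve) over successive time windows and flags windows that fall below a locally agreed threshold; the data, threshold, and window size are illustrative assumptions, not details of the sepsis model studies cited.

```python
# Minimal sketch (hypothetical data): monitor a deployed model's AUROC in
# successive calendar windows and flag apparent performance drift.
from sklearn.metrics import roc_auc_score

# Each window: (label, true outcomes, model risk scores) -- illustrative only.
windows = [
    ("2023-Q1", [1, 0, 1, 0, 1, 0], [0.9, 0.2, 0.8, 0.3, 0.7, 0.1]),
    ("2023-Q2", [1, 0, 1, 0, 1, 0], [0.7, 0.4, 0.6, 0.5, 0.4, 0.3]),
    ("2023-Q3", [1, 0, 1, 0, 1, 0], [0.5, 0.6, 0.4, 0.7, 0.3, 0.2]),
]

ALERT_THRESHOLD = 0.75  # locally agreed minimum acceptable discrimination

for label, y_true, y_score in windows:
    auc = roc_auc_score(y_true, y_score)
    status = "OK" if auc >= ALERT_THRESHOLD else "REVIEW: possible drift"
    print(f"{label}: AUROC={auc:.2f} -> {status}")
```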

There are efforts to help leaders and health systems develop consensus-based evaluation techniques and infrastructure for AI tools, including HealthAI: The Global Agency for Responsible AI in Health, which aims to build and certify validation mechanisms for nations and regions to adopt; and the Coalition for Health AI (CHAI), which recently announced plans to build a US-wide health AI assurance labs network 7 , 8 . These efforts, if successful, will assist manufacturers and health systems in complying with new laws, rules, and regulations being proposed and released that seek to ensure AI tools are trustworthy, such as the EU AI Act and the 2023 US Executive Order on AI.

Sharing data for better AI

“Underlying these challenges is the investment required to standardize business processes so that you actually get data that’s usable between institutions and even within an institution.”

While high-quality internal data may enable some types of AI-tool development and testing, this is insufficient to power and evaluate all AI applications. To build truly effective AI-enabled predictive software for clinical care and predictive supports, data often need to be reliably shared and interoperable across health systems, so that they can build a diverse picture of patients’ health across geographies.

FOH members recommended that health care leaders work with researchers and policymakers to connect detailed encounter data with longitudinal outcomes, and pilot opportunities across diverse populations and systems to help assure valid outcome evaluations as well as address potential confounding and population subgroup differences—the ability to aggregate data is a clear rate-limiting step. The South African National Digital Health Strategy outlined interventions to improve the adoption of digital technologies while complying with the 2013 Protection of Personal Information Act 9 . Although challenges remain, the country has made progress on multiple fronts, including building out a Health Patient Registration System as a first step towards a portable, longitudinal patient record system and releasing a Health Normative Standards Framework to improve data flow across institutional and geographic boundaries 10 .

Leaders should adopt policies in their organizations, and encourage adoption in their province and country, that simplify data governance and sharing while providing appropriate privacy protections – including building foundations of trust with patients and the public as previously discussed. Privacy-preserving innovations include ways to “share” data without movement from protected systems using approaches like federated analyses, data sandboxes, or synthetic data. In addition to exploring privacy-preserving approaches to data sharing, countries and health systems may need to consider broad and dynamic approaches to consent 11 , 12 . As we look to a future where a patient may have thousands of algorithms churning away at their data, efforts to improve data quality and sharing should include enabling patients’ access to and engagement with their own data to encourage them to actively partner in their health and provide transparency on how their data are being used to improve health care. For example, the Understanding Patient Data program in the United Kingdom produces research and resources to explain how the National Health Service uses patients’ data 13 . Community engagement efforts can further assist with these efforts by building trust and expanding understanding.
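To illustrate one very simple form of the federated analysis mentioned above, the sketch below has each hypothetical site reduce its patient-level data to aggregate counts locally, so that only summary statistics are shared and pooled. The site names, data, and outcome measure are assumptions made for the example, not a description of any named programme.

```python
# Minimal sketch (hypothetical sites and counts): a very simple federated
# analysis in which only aggregate statistics leave each site.
from dataclasses import dataclass

@dataclass
class SiteSummary:
    site: str
    n_patients: int        # denominator, computed locally
    n_with_outcome: int    # numerator, computed locally

def local_summary(site_name, patient_level_outcomes):
    """Run inside each site: reduce patient-level data to aggregates only."""
    return SiteSummary(site_name, len(patient_level_outcomes), sum(patient_level_outcomes))

# Patient-level data never leaves the sites; only the summaries are shared.
shared = [
    local_summary("site_a", [1, 0, 0, 1, 0]),
    local_summary("site_b", [0, 0, 1, 0, 0, 0, 1]),
]

pooled_rate = sum(s.n_with_outcome for s in shared) / sum(s.n_patients for s in shared)
print(f"Pooled outcome rate across sites: {pooled_rate:.1%}")
```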

FOH members also stressed the importance of timely data access. Health systems should work together to establish re-usable governance and privacy frameworks that allow stakeholders to clearly understand what data will be shared and how it will be protected to reduce the time needed for data use agreements. Trusted third-party data coordinating centers could also be used to set up “precertification” systems around data quality, testing, and cybersecurity to support health organizations with appropriate data stewardship to form partnerships and access data rapidly.

Incentivizing progress for AI impact

“Unless it’s tied to some kind of compensation to the organization, the drive to help implement those tools and overcome that risk aversion is going to be very high… I do think that business driver needs to be there.”

AI tools and data quality initiatives have not moved quickly in health care, owing to the lack of direct payment and, often, the misalignment of financial incentives and supports for high-quality data collection and predictive analytics. This affects both the ability to purchase and safely implement commercial AI products and the development of “homegrown” AI tools.

FOH members recommended that leaders should advocate for paying for value in health – quality, safety, better health, and lower costs for patients. This better aligns the financial incentives for accelerating the development, evaluation, and adoption of AI as well as other tools designed to either keep patients healthy or quickly diagnose and treat them with the most effective therapies when they do become ill. Effective personalized health care requires high-quality, standardized, interoperable datasets from diverse sources 14 . Within value-based payments themselves, data are critical to measuring quality of care and patient outcomes, adjusted or contextualized for factors outside of clinical control. Value-based payments therefore align incentives for (1) high-quality data collection and trusted use, (2) building effective AI tools, and (3) ensuring that those tools are improving patient outcomes and/or health system operations.

Data have become the most valuable commodity in health care, but questions remain about whether there will be an AI “revolution” or “evolution” in health care delivery. Early AI applications in certain clinical areas have been promising, but more advanced AI tools will require higher quality, real-world data that is interoperable and secure. The steps health care organization leaders and policymakers take in the coming years, starting with short-term opportunities to develop meaningful AI applications that achieve measurable improvements in outcomes and costs, will be critical in enabling this future that can improve health outcomes, safety, affordability, and equity.

Data availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Abernethy, A. et al. The promise of digital health: then, now, and the future. NAM Perspect. 6 (2022).

Akpakwu, E. Four ways AI can make healthcare more efficient and affordable. World Economic Forum https://www.weforum.org/agenda/2018/05/four-ways-ai-is-bringing-down-the-cost-of-healthcare/ (2018).

STANDING Together. https://www.datadiversity.org/home .

Wong, A. et al. External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Intern Med 181 , 1065–1070 (2021).


Ross, C. STAT and MIT rooted out the weaknesses in health care algorithms. Here’s how we did it. STAT https://www.statnews.com/2022/02/28/data-drift-machine-learning/ (2022).

Locke, T., Parker, V., Thoumi, A., Goldstein, B. & Silcox, C. Preventing bias and inequities in AI-enabled health tools . https://healthpolicy.duke.edu/publications/preventing-bias-and-inequities-ai-enabled-health-tools (2022).

Introducing HealthAI. The International Digital Health and AI Research Collaborative (I-DAIR) https://www.i-dair.org/news/introducing-healthai (2023).

Shah, N. H. et al. A nationwide network of health AI assurance laboratories. JAMA 331 , 245 (2024).

Singh, V. AI & Data in South Africa’s Health Sector . https://policyaction.org.za/sites/default/files/PAN_TopicalGuide_AIData6_Health_Elec.pdf (2020).

Zharima, C., Griffiths, F. & Goudge, J. Exploring the barriers and facilitators to implementing electronic health records in a middle-income country: a qualitative study from South Africa. Front. Digit. Health 5 , 1207602 (2023).


Lee, A. R. et al. Identifying facilitators of and barriers to the adoption of dynamic consent in digital health ecosystems: a scoping review. BMC Med. Ethics 24 , 107 (2023).


Stoeklé, H. C., Hulier-Ammar, E. & Hervé, C. Data medicine: ‘broad’ or ‘dynamic’ consent? Public Health Ethics 15 , 181–185 (2022).


Understanding Patient Data. Understanding Patient Data http://understandingpatientdata.org.uk/ .

Chén, O. Y. & Roberts, B. Personalized health care and public health in the digital age. Front. Digit. Health 3 , 595704 (2021).


Acknowledgements

The authors acknowledge Oranit Ido and Jonathan Gonzalez-Smith for their contributions to this work. This study was funded by The Future of Health, LLC. The Future of Health, LLC, was involved in all stages of this research, including study design, data collection, analysis and interpretation of data, and the preparation of this manuscript.

Author information

Authors and affiliations.

Duke-Margolis Institute for Health Policy, Duke University, Washington, DC, USA & Durham, NC, USA

Christina Silcox, Katie Huber, Neil Rowen, Robert Saunders & Mark McClellan

Sheba Medical Center, Ramat Gan, Israel

Eyal Zimlichmann

Future of Health, Washington, DC, USA

Eyal Zimlichmann, Charles N. Kahn III & Claudia A. Salzberg

Federation of American Hospitals, Washington, DC, USA

Charles N. Kahn III

Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, MA, USA

David W. Bates

Harvard Medical School, Boston, MA, USA

Department of Health Policy and Management, Harvard T. H. Chan School of Public Health, Boston, MA, USA


Contributions

C.S., K.H., N.R., and R.S. conducted initial background research and analyzed qualitative data from stakeholders. All authors (C.S., E.Z., K.H., N.R., R.S., M.M., C.K., C.A.S., and D.B.) assisted with conceptualization of the project and strategic guidance. C.S., K.H., and N.R. wrote initial drafts of the manuscript. All authors contributed to critical revisions of the manuscript and read and approved the final manuscript.

Corresponding author

Correspondence to David W. Bates .

Ethics declarations

Competing interests.

C.S., K.H., N.R., and C.A.S. declare no competing interests. E.Z. reports personal fees from Arkin Holdings, personal fees from Statista and equity from Valera Health, Profility and Hello Heart. R.S. has been an external reviewer for The John A. Hartford Foundation, and is a co-chair for the Health Evolution Summit Roundtable on Value-Based Care for Specialized Populations. M.M. is an independent director on the boards of Johnson & Johnson, Cigna, Alignment Healthcare, and PrognomIQ; co-chairs the Guiding Committee for the Health Care Payment Learning and Action Network; and reports fees for serving as an adviser for Arsenal Capital Partners, Blackstone Life Sciences, and MITRE. C.K. is a Profility Board member and additionally reports equity from Valera Health and MDClone. D.W.B. reports grants and personal fees from EarlySense, personal fees from CDI Negev, equity from Valera Health, equity from Clew, equity from MDClone, personal fees and equity from AESOP, personal fees and equity from Feelbetter, equity from Guided Clinical Solutions, and grants from IBM Watson Health, outside the submitted work. D.W.B. has a patent pending (PHC-028564 US PCT), on intraoperative clinical decision support.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Silcox, C., Zimlichmann, E., Huber, K. et al. The potential for artificial intelligence to transform healthcare: perspectives from international health leaders. npj Digit. Med. 7 , 88 (2024). https://doi.org/10.1038/s41746-024-01097-6


Received : 30 October 2023

Accepted : 29 March 2024

Published : 09 April 2024

DOI : https://doi.org/10.1038/s41746-024-01097-6






Volume 11, Issue 2

Quality improvement research: understanding the science of change in health care


  • R Baker 1 ,
  • 1 Guest Editors, Quality Improvement Research Series
  • 2 Editor in Chief, Quality and Safety in Health Care
  • Correspondence to:
 Professor R Grol, Centre for Research on Quality in Health Care (WOK), PO Box 9101, 6500 HB Nijmegen, The Netherlands;
R.Grol@hsv.kun.nl

https://doi.org/10.1136/qhc.11.2.110


  • quality improvement research
  • change management

Essential for all who want to improve health care.

Expectations of healthcare services are ever increasing and those delivering care no longer hold the monopoly of opinion on what constitutes good or best care. To earn the label “good enough”, care must meet standards expected by consumers as well as those of expert providers. Headlines in newspapers, statements in policy documents, and many analyses, surveys and reports repeatedly highlight serious problems in healthcare delivery related to underuse, overuse, or misuse of care. 1 Health systems are sometimes unsafe and frequently we harm patients who have trusted us with their care. There is an endemic failure to engage patients with decisions about their care. We know there are problems; we just need to change so that care can be made safer and better.

Everyone—authorities, policy makers, and professionals—seems to accept the need for change. New initiatives aiming to cure our ailing systems come in droves. This is an international phenomenon. Many initiatives are linked to programmes that capture a particular approach—for example, evidence based medicine; accreditation and (external) accountability; total quality management; professional development and revalidation; risk management and error prevention; organisational development and leadership enhancement; disease management and managed care; complex adaptive systems; and patient empowerment. They may differ in perspective. Some focus on changing professionals, others on changing organisations or interactions between parts of the system; some emphasise self-regulation, others external control and incentives; some advocate “bottom up” and others “top down” methods. Despite their differences, however, each aims to contribute to better patient care—and they might, but the evidence for understanding their likely impact is not robust and many seem based more on belief than rigorous research of value, efficacy, or feasibility. 2 From what we know, no quality improvement programme is superior and real sustainable improvement might require implementation of some aspects of several approaches—perhaps together, perhaps consecutively. We just do not know which to use, when to use them, or what to expect.

More evidence and understanding are required. At least 40 good systematic reviews and numerous controlled trials are available, 3, 4 but many of the trials can be criticised because, for example, randomisation or analysis was conducted at the patient level while the intervention focused on professionals or teams, and outcome parameters are often poorly chosen or are difficult to compare. Most studies were conducted in the USA, limiting generalisations to other systems. Some strategies are better studied than others. We know more about continuing medical education (CME), audit and feedback, reminders, and computerised decision support than about organisational, economic, administrative, and patient mediated interventions. Newer approaches, such as problem based education, portfolio learning, TQM, breakthrough projects, risk management methods, business process redesign, leadership enhancement, and shared decision making with patients, are not well studied. Studying the effects of specific strategies in controlled trials will provide some answers to some questions about effective change, but will not address some of the basic questions about the critical success factors in change processes; such trials need to be complemented by observational and qualitative studies.

Health care is becoming increasingly complex and the problems are large. It is unrealistic to expect that one specific approach can solve everything. A qualitative study by Solberg et al 5 of critical factors supporting implementation of change showed that a mixture of professional and organisational factors is crucial. “Give attention to many different factors and use multiple strategies” is the message. 6 Although we may know that multifaceted strategies combining different actions and measures linked to specific obstacles to change are usually more successful than single interventions, 7 we know little about which components of such complex interventions are effective in different target groups. So, while there is some general knowledge, there is little detailed understanding of the “black box” of change.

We need to learn about change in the real world of health care and the crucial determinants of successful improvement. New thinking about healthcare settings as complex adaptive systems emphasises the importance of experimenting with multiple approaches and discovering what works best. 8 Small changes can sometimes have large effects—but we have little understanding about which small changes to use in which settings and their likely impact.

For real change and sustained improvement a tailored research methodology is essential. The full range of methodology has yet to be established, but will include contributions from epidemiology, behavioural sciences, educational research, organisational and management studies, economics, and statistics (box 1). Theoretical models of evaluations of complex interventions propose a phased approach (theoretical phase, definition of the components of the intervention, small scale explanatory trial, followed by larger trials and research into long term implementation). Clearly, different research methods are required for different phases, 9 but it is essential that, despite the eclectic base of the research, researchers from different faculties and disciplines come together to collaborate in this complex field and that the vogue for “quick fixes” is replaced with sustained research.

Box 1 Some research approaches for quality improvement research

Observational studies of existing change processes

In-depth qualitative studies on critical success factors and barriers to change in improvement programmes

Systematic reviews of both the impact of different strategies and the influence of specific factors on change

Well designed cluster randomised trials

Systematic sampling and interpretation of experiences of change

Methods for developing valid and sensitive indicators for measuring change

Meta-analyses of large samples of improvement projects

Methods for evaluation of large scale implementation and change programmes

Economic analyses of resources needed for effective change and improvement of care

Statistical process control (see the sketch below)
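To illustrate the statistical process control item in box 1, the sketch below applies a basic p-chart with 3-sigma control limits to hypothetical monthly proportions of patients receiving guideline-recommended care, so that special-cause change can be distinguished from ordinary month to month variation. The data and threshold are illustrative only, not drawn from any study cited here.

```python
# Minimal sketch (hypothetical data): a p-chart, one basic form of statistical
# process control, applied to a monthly proportion measure.
import math

# (month, numerator, denominator) -- illustrative counts only.
monthly = [("Jan", 41, 60), ("Feb", 44, 58), ("Mar", 39, 62),
           ("Apr", 47, 61), ("May", 55, 63), ("Jun", 57, 60)]

p_bar = sum(n for _, n, _ in monthly) / sum(d for _, _, d in monthly)  # centre line

for month, num, den in monthly:
    sigma = math.sqrt(p_bar * (1 - p_bar) / den)
    lcl, ucl = max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)
    p = num / den
    signal = " <-- special-cause signal" if not (lcl <= p <= ucl) else ""
    print(f"{month}: p={p:.2f} (limits {lcl:.2f}-{ucl:.2f}){signal}")
```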

To stimulate and support debate about research on quality improvement and change management in health care we have commissioned a series of papers to provide an overview of some relevant methodologies. The first two papers are published in this issue and more will follow. Pope et al 10 explore some of the qualitative methods that can be used to gather information about the delivery of good quality care, and Wensing and Elwyn 11 consider some of the key issues related to measurement of patients' views. Forthcoming issues of QSHC will include papers that describe research methods for indicator development in primary care; a methodology for evaluating small scale improvement projects; methods for evaluating quality improvement programmes; research designs for randomised controlled trials in quality improvement; and economic evaluations of change management.

There is a recognised process for the development of new drugs, their introduction into routine practice, and their establishment in the treatment of defined conditions. As knowledge about a drug is accrued, new and better patterns of treatment gradually become established. Similar measured approaches are needed to help develop and establish better, safer systems of care. “Change management” is a discipline central to health care. The academic base that supports change management and quality improvement in health care should underpin all clinical and managerial learning programmes. The science of change management is not new, but there is a long way to go before we will understand enough to be able to transform care so that it is “good enough” to meet everyone's expectations of quality and safety.

  • ↵ Bodenheimer T. The American health care system. The movement for improved quality in health care. N Engl J Med 1999;340:488–92.
  • ↵ Grol R. Beliefs and evidence in changing clinical practice. BMJ 1997;315:418–21.
  • ↵ Grol R. Improving the quality of medical care. Building bridges among professional pride, payer profit, and patient satisfaction. JAMA 2001;286:2578–85.
  • ↵ Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care 2001;39(8 Suppl 2):II2–45.
  • ↵ Solberg L, Brekke M, Fasio J, et al. Lessons from experienced guideline implementers: attend to many factors and use multiple strategies. Jt Comm J Qual Improv 2000;26:171–88.
  • ↵ Solberg L. Guideline implementation: what the literature doesn't tell us. Jt Comm J Qual Improv 2000;26:525–37.
  • ↵ Wensing M, van der Weijden T, Grol R. Implementing guidelines and innovations in general practice: which interventions are effective? Br J Gen Pract 1998;48:991–7.
  • ↵ Plsek PE, Greenhalgh T. The challenge of complexity in health care. BMJ 2001;323:625–8.
  • ↵ Campbell M, Fitzpatrick R, Haines A, et al. Framework for design and evaluation of complex interventions to improve health. BMJ 2000;321:694–6.
  • ↵ Pope C, van Royen P, Baker R. Qualitative methods in research on healthcare quality. Qual Saf Health Care 2002;11:148–52.
  • ↵ Wensing M, Elwyn G. Research on patients' views on the evaluation and improvement of quality of care. Qual Saf Health Care 2002;11:153–7.



Published on 23.4.2024 in Vol 16 (2024)

The Effect of Using a Client-Accessible Health Record on Perceived Quality of Care: Interview Study Among Parents and Adolescents


Original Paper

  • Janine Benjamins 1, 2, 3 , MD, PhD   ; 
  • Emely de Vet 1, 4 , PhD   ; 
  • Chloe A de Mortier 5, 6, 7 , MSc   ; 
  • Annemien Haveman-Nies 1, 8 , PhD  

1 Chairgroup Consumption and Healthy Lifestyles, Wageningen University & Research, Wageningen, Netherlands

2 Icare JGZ, Meppel, Netherlands

3 Stichting Jeugd Noord Veluwe, Nunspeet, Netherlands

4 University College Tilburg, Tilburg University, Tilburg, Netherlands

5 Department of Health Services Research, Care and Public Health Research Institute, Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, Netherlands

6 School of Health Professions Education, Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, Netherlands

7 Knowledge Institute of Medical Specialists, Utrecht, Netherlands

8 GGD Noord-en Oost Gelderland, Warnsveld, Netherlands

Corresponding Author:

Janine Benjamins, MD, PhD

Stichting Jeugd Noord Veluwe

Stationsplein 18E

Nunspeet, 8071 CH

Netherlands

Phone: 31 612329494

Email: [email protected]

Background: Patient-accessible electronic health records (PAEHRs) are assumed to enhance the quality of care, expressed in terms of safety, effectiveness, timeliness, person centeredness, efficiency, and equity. However, research on the impact of PAEHRs on the perceived quality of care among parents, children, and adolescents is largely lacking. In the Netherlands, a PAEHR (Iuvenelis) was developed for preventive child health care and youth care. Parents and adolescents had access to its full content, could manage appointments, ask questions, and comment on written reports.

Objective: This study aims to assess whether and how using this PAEHR contributes to perceived quality of care from a client’s perspective.

Methods: We chose a qualitative design with a phenomenological approach to explore how parents and adolescents perceived the impact of using a PAEHR on quality of care. In-depth interviews with 1 to 3 participants at a time were conducted in 2021. In total, 20 participants were included in the study, representing parents and adolescents, both sexes, different educational levels, different native countries, and all participating municipalities. Within this group, 7 of 13 (54%) parents had not previously been informed about the existence of a client portal. Their expectations of using the client portal, in relation to quality of care, were discussed after a demonstration of the portal.

Results: Parents and adolescents perceived that using Iuvenelis contributed to the quality of care because they felt better informed and more involved in the care process than before the introduction of Iuvenelis. Moreover, they experienced more control over their health data and faster and simpler access to their health information, and they found it easier to manage appointments or ask questions at their convenience. Parents from a migratory background, among whom 6 of 7 (86%) had not previously been informed about the portal, expected that portal access would enhance their understanding of and control over their care processes. Parents expressed concerns about equity, because parents from a migratory background might have less access to the service. Nevertheless, portal usability was regarded as high. Furthermore, both parents and adolescents saw room for improvement in the broader interdisciplinary use of Iuvenelis and the quality of reporting.

Conclusions: Using Iuvenelis can contribute to the client-experienced quality of care, more specifically to perceived person centeredness, timeliness, safety, efficiency, and integration of care. However, some quality aspects, such as equity, still need addressing. In general, client information about the portal needs to be improved, specifically focusing on people in vulnerable circumstances, such as those from migratory backgrounds. In addition, to maximize the potential benefit of using Iuvenelis, stimulating a person-centered attitude among professionals is important. Considering the small number of adolescent participants (n=7), adding quantitative data from a structured survey could strengthen the available evidence.

Introduction

In the implementation and optimization of health care services, assessing the quality of care is an important topic. Quality of care is a broad concept, and it encompasses various aspects of health care. Most commonly used is the Institute of Medicine’s definition of quality of care, which distinguishes 6 different domains: safety, effectiveness, timeliness, patient centeredness, efficiency, and equity [ 1 ]. Patient safety refers to the notion that provided care should prevent patients from harm [ 1 ]. Effectiveness reflects the use of appropriate interventions and treatments [ 1 ]. Timeliness refers to delivering health care services on time [ 1 ]. Patient centeredness is about tailoring care to the unique patient’s needs and preferences and engaging them and their proxies in decision-making [ 1 , 2 ]. Efficiency deals with how well resources are used and about avoiding waste [ 1 ]. Equity ensures everyone has equal access to the best possible care, independent of personal characteristics or geographic location [ 1 ]. Traditionally, quality of care has been approached from a professional’s perspective, aiming to increase the likelihood of desired health outcomes. In 2015, the World Health Organization (WHO) reformulated the term patient centeredness into person centeredness, emphasizing that patients are more than just their health condition and proposing a broadened scope for health and well-being [ 3 ]. With this pivot shift from conventional biomedical health care models to a more holistic approach, patient experiences have become an important health care quality outcome, and patient-reported experiences have evolved into important indicators for quality of care [ 4 , 5 ].

Patient-accessible electronic health records (PAEHRs) are assumed to enhance the quality of care because they provide users with information about their health and health care [ 6 - 8 ]. Information can be provided in a one-way manner, by sharing health data in a patient portal or interactively when the system supports messaging between patient and care provider [ 9 - 12 ]. Either way, providing patients with their health data promotes empowerment and enhances people’s engagement in their care plans [ 6 , 7 , 13 ]. Consequently, health consciousness (ie, the inclination to take health actions), therapy adherence, and self-management of health improve, all of which contribute to better health outcomes [ 8 , 9 , 13 - 16 ]. Moreover, transparency of PAEHRs is reported to enhance patient safety, for instance, because patients can identify errors in their health records and have them corrected [ 12 , 17 , 18 ].

PAEHRs in Adolescent Health Care

The growing body of literature reporting the effect of using PAEHRs on quality of care predominantly stems from adult health care. Research on the impact of using PAEHRs on the quality of care among children, adolescents, and their parents is limited because the development of PAEHRs for these target groups is delayed by age-specific challenges regarding autonomy and confidentiality [ 19 , 20 ]. Meeting these challenges during the development of PAEHRs is important because research shows that adolescents only share information with professionals who assure their confidentiality [ 21 - 23 ].

The protection of confidentiality and access to health information differs depending on the country or state. While there are different legal measures in place to safeguard confidentiality, all health care systems face the challenge of transferring access rights from parents to adolescents [ 20 , 24 , 25 ]. Initially, parents have the right to their child’s health information, but as children grow into adolescence, and therefore in capacity and autonomy, these rights are transferred to the adolescent [ 26 , 27 ]. This transfer, varying across and within countries, can be gradual, with both parents and adolescents having access, or occur at a specific age [ 20 , 24 , 25 ]. Solutions for the emerging autonomy and confidentiality issues aim to balance adolescent autonomy and confidentiality with parental involvement [ 26 - 28 ]. In the United States, laws explicitly safeguard parents’ rights to access their children’s health information [ 25 , 29 ]. Contrastingly, countries such as Canada, the United Kingdom, the Netherlands, and most Scandinavian nations more strongly emphasize the rights of adolescents, depending on their capacity and maturity [ 24 , 30 ]. Restrictions on access to health information for both parents and adolescents vary globally, from shared access during a specified period to no access at all during adolescence [ 20 , 24 ]. The age at which adolescents can access their health information differs from any age in Finland and Estonia to 18 years in Austria and New Zealand [ 20 , 24 ]. Consent from either the adolescent or the parent may be necessary, with certain jurisdictions permitting adolescents to restrict parental access [ 20 , 25 ].

In the Netherlands, a PAEHR named Iuvenelis has been developed for children, adolescents, and their parents. Iuvenelis is used in an interdisciplinary manner in preventive child health care and youth care. It is accessible to adolescents aged ≥12 years and to parents of children aged from 0 to 16 years. Investigating the impact of using Iuvenelis on perceived quality of care among adolescents and parents will contribute to knowledge about using PAEHRs in an age group that is evolving toward autonomous adulthood. This study aimed to investigate how Dutch parents and adolescents visiting preventive health care and youth care perceived the impact of using a client-accessible interdisciplinary health record on quality of care, exploring both the experiences of active users and the expectations or first impressions of nonusers.

Research Design

A qualitative design with a phenomenological approach was chosen to explore how parents and adolescents perceived the impact of using Iuvenelis on the quality of care [ 31 ]. A total of 12 in-depth interviews with 1 to 3 participants at a time were conducted between October 11 and November 25, 2021. We reported our qualitative study according to the COREQ (Consolidated Criteria for Reporting Qualitative Research) [ 32 ]. Multimedia Appendix 1 contains the completed COREQ checklist for this study.

Study Setting

The Dutch North Veluwe region consists of 6 municipalities. These municipalities commissioned 2 organizations providing preventive child health care to children aged 0 to 3 years and children aged 4 to 18 years and 1 organization providing youth care to integrate their services in the Centre for Youth and Family (CJG). The CJG is a network organization that houses professionals from the 3 parent organizations involved. Since 2015, the CJG has provided preventive health care to all 38,000 children aged from 0 to 18 years in the region and provided additional youth care for children and families with behavioral or sociopsychological problems [ 33 ]. Both preventive child health care and youth care refer to parents, children, and adolescents as clients rather than as patients. Using a participatory approach, the CJG in 2016 developed a quality standard for their services, following the European “Quality 4 Children” protocol [ 34 ]. In dialogue sessions with parents and adolescents, they jointly wrote a document that defined quality of care from a client’s perspective [ 35 ]. The document establishes 3 core values for quality—“child-centredness,” “partnership between family and professionals,” and “families in charge when decisions are made”—and describes the corresponding supportive professional behavior for each value [ 35 ]. Supporting the integration of services, the electronic health record “Iuvenelis” was built, to which all CJG professionals report. Furthermore, to support client autonomy and collaboration between professionals and families, Iuvenelis includes a tethered client portal in which parents and adolescents can read everything professionals report, such as visit notes, measurements, test results, and referrals. They can manage appointments, send secure messages to professionals, ask questions, comment on written reports, and request corrections of errors. Compliant with Dutch legislation, adolescents receive automatic access to the portal at the age of 12 years [ 36 ]. At the same moment, the portal closes for parents, who have a legal right to access Iuvenelis until their child is 16 years of age. However, this right can only be effectuated when their child personally grants permission. When parents are granted access to their child’s record between 12 and 16 years of age, their child can still have single visit reports shielded from them. Iuvenelis was introduced in September 2019.
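The age and consent rules described above can be summarised as a small piece of access logic. The sketch below is an illustrative reconstruction of those rules rather than the actual Iuvenelis implementation: adolescents gain automatic access at 12, parents can view records between 12 and 16 only with the adolescent's permission and never for shielded visit reports, and parental access ends at 16.

```python
# Minimal sketch: an illustrative reconstruction of the portal access rules
# described above (not the actual Iuvenelis implementation).
def adolescent_has_access(child_age: int) -> bool:
    # Adolescents receive automatic portal access from age 12.
    return child_age >= 12

def parent_can_view_report(child_age: int, child_grants_access: bool,
                           report_shielded_by_child: bool) -> bool:
    if child_age < 12:
        return True                       # portal is open to parents
    if child_age < 16:
        # Parents have a legal right to access, but only if the adolescent
        # grants permission, and never for reports the adolescent has shielded.
        return child_grants_access and not report_shielded_by_child
    return False                          # portal closed to parents from age 16

# Example: a 14-year-old grants parental access but shields one visit report.
print(parent_can_view_report(14, child_grants_access=True, report_shielded_by_child=False))  # True
print(parent_can_view_report(14, child_grants_access=True, report_shielded_by_child=True))   # False
```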

Study Population and Inclusion

The study included the parents of children aged 0 to 16 years and adolescents aged ≥12 years, living in the North Veluwe region, further referred to as clients. Clients who visited the CJG in September 2021 were invited personally by CJG professionals, and some general characteristics were reported, such as sex, age, educational level, and native country. Clients who expressed interest in participating were contacted by email or phone to explain the nature and purpose of the interview and to make an appointment. Where feasible, clients were invited to join focus group interviews at a CJG location. Those unable to attend a group session were offered an individual or dual interview, either in person at a location of their choice or on the web. Purposive sampling ensured a varied group representing both sexes, parents and adolescents, various educational levels, active users of Iuvenelis and nonusers, both visitors of preventive health care and youth care, and inhabitants from all participating municipalities. We included parents from native Dutch and migratory backgrounds. In this paper, we use the term migratory background for immigrants who moved to the Netherlands, regardless of their command of the Dutch language. In total, 12 interviews were conducted with 20 participants. Apart from 7 (58%) individual interviews, 2 (17%) double and 3 (25%) triple interviews were conducted. Except for 1 (8%) triple interview with a mother and her 2 teenage children, group interviews consisted of only parents or only adolescents, and respondents did not know each other.

Data Collection

To create an interview topic guide ( Multimedia Appendix 2 ), a working session was convened with an interdisciplinary expert panel of 8 professionals. On the basis of the CJG quality standard and the overarching Institute of Medicine framework [ 1 ], they explored what aspects of client-perceived quality of care could be influenced by using Iuvenelis. Textbox 1 presents the main topics from the semistructured interview guide.

  • Are participants acquainted with Iuvenelis?
  • How have their experiences been in general?
  • If they were not acquainted, what are their first impressions?
  • How do participants feel about security of their data?
  • How do participants feel about detecting errors?
  • How do participants value the view log?

Effectiveness

  • How do participants experience completeness and understandability of reports in Iuvenelis?
  • How do participants value professional expertise?
  • How do participants experience the possibility of 24/7 access to their health data?
  • How do participants experience the possibility to manage their own appointments?
  • How do participants experience the possibility to ask questions at their convenience?

Person centeredness

  • To what extent do participants perceive an influence of using Iuvenelis on client-professional collaboration or communication?
  • To what extent do participants perceive an influence of using Iuvenelis on equal relationship?
  • To what extent do participants perceive an influence of using Iuvenelis on sense of ownership?
  • How do participants experience collaboration between disciplines through Iuvenelis?
  • How do participants experience the use of interdisciplinary shared care plans?
  • How do participants experience ease of access and ease of use?
  • How do participants experience comprehensibility of record content?
  • Were participants informed about the existence of Iuvenelis?

All participants were interviewed once by an experienced female interviewer (JB). For the first 6 of the 12 (50%) interviews, a female research assistant (CAdM) assisted as an observer and note-taker. Individual interviews lasted 30 to 60 minutes, and double and triple interviews lasted 90 minutes. When the participants were not acquainted with the client portal, the first part of the interview was used to demonstrate its functionalities in real time, followed by the main interview, which then focused on expectations and first impressions instead of experiences. Every interview was audio recorded and supplemented by note-taking; web-based interviews were also video recorded.

Data Analysis

The interviewers transcribed all interviews verbatim for analysis. A member check was conducted with all participants to affirm transcript accuracy. Data were analyzed in ATLAS.ti (version 9; ATLAS.ti Scientific Software Development GmbH). On the basis of the topic list with the 6 domains of quality of care as a framework, a preliminary codebook was written. In accordance with best practices, data collection and analysis were conducted in an iterative, cyclical process, checking for data saturation. The interviewing authors (JB and CAdM) conducted a thematic analysis, rereading and coding all transcripts independently [ 37 , 38 ]. After coding a full transcript, the 2 researchers discussed discrepancies in coding until consensus was reached. Simultaneously, in a continuous process, additional codes were added to the codebook, coding definitions were refined, and transcripts were recoded when necessary. Saturation was discussed during analysis and was reached after 12 interviews. Subsequently, JB and CAdM grouped all codes into major themes and discussed the interpretation of themes with all authors.
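As an illustration of the independent double-coding step described above, the sketch below flags quotations to which two coders assigned different codes so that they can be discussed until consensus is reached. The data structures and code labels are hypothetical; the study itself used ATLAS.ti rather than custom scripts.

```python
# Hypothetical sketch of the double-coding consensus step; the study used
# ATLAS.ti, and these example data structures and code labels are invented.

# Code assignments per quotation for each coder: quotation id -> set of codes.
coder_a = {"q1": {"safety/view log"}, "q2": {"person centeredness/ownership"}}
coder_b = {"q1": {"safety/view log"},
           "q2": {"person centeredness/ownership", "equity/information"}}

def coding_discrepancies(a: dict, b: dict) -> dict:
    """Return, per quotation, the codes applied by only one of the two coders."""
    discrepancies = {}
    for quotation in sorted(set(a) | set(b)):
        diff = a.get(quotation, set()) ^ b.get(quotation, set())  # symmetric difference
        if diff:
            discrepancies[quotation] = diff
    return discrepancies

print(coding_discrepancies(coder_a, coder_b))
# {'q2': {'equity/information'}} -> to be discussed until consensus is reached
```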

Research Team and Reflexivity

The interviews were conducted by a researcher working as a policy advisor at the CJG and a research assistant, both trained in qualitative research. Although 1 interviewer worked in the CJG, no working relationship had been established with any of the participants before the study. Every interview started with an introduction of the interviewers and an explanation of the study goal. Combining an experienced researcher with inside knowledge of the CJG and Iuvenelis (JB) with a young researcher from outside the CJG (CAdM) had 2 advantages: first, when present during the interviews with adolescents, the younger researcher could identify easily with the participants and vice versa; second, during analysis, comparing observations and discussing interpretations from both inside and outside perspectives enriched the process of interpretation and limited the risk of bias.

Ethical Considerations

The study was carried out following relevant guidelines and regulations, complying with the Netherlands Code of Conduct for Scientific Practice. On these grounds, the research protocol was approved by the Social Sciences Ethics Committee of Wageningen University (2018-24-Benjamins). All participants received an invitation beforehand with information about the study and gave explicit verbal consent at the beginning of the interview. Each interview was recorded and transcribed verbatim, including verbal consent.

General Characteristics

Of the 20 participants, 13 (65%) parents and 7 (35%) adolescents were interviewed individually (n=7, 35%), in pairs (n=4, 20%), or in triplets (n=9, 45%). Initially, 23 participants were included, of whom 3 (13%) dropped out because of scheduling conflicts. The participants included both sexes, parents and adolescents with different educational levels, people from native Dutch and migratory backgrounds, inhabitants of all involved municipalities, and users of both preventive child health care and youth care services. All adolescents were making use of youth care services (Table 1).

A total of 35% (7/20) of the participants were not acquainted with the client portal before the interview, and 86% (6/7) of them were from a migratory background. Of the participants who were acquainted with the client portal, 46% (6/13) had received information from a CJG professional and 54% (7/13) had discovered the portal through a questionnaire about Iuvenelis. In total, 30% (6/20) of the participants came to the CJG office, 50% (10/20) were interviewed in their own homes, and 20% (4/20) had web-based interviews.

Interview Outcomes

A code tree (Multimedia Appendix 3) was created with branches for all 6 aspects of quality of care: safety, effectiveness, timeliness, person centeredness, efficiency, and equity [ 1 ]. One additional theme emerged, related to professional attitude and behavior. Because this theme is linked with person centeredness, we divided the theme of person centeredness into 2 subthemes: client perspective and professional attitude. Most expressions from the participants could be coded in the domain of person centeredness (668/1749, 38.19%), followed by safety (382/1749, 21.84%), equity (337/1749, 19.27%), timeliness (158/1749, 9.03%), and efficiency (135/1749, 7.72%), whereas effectiveness was mentioned the least (69/1749, 3.95%). When experiences across the quality-of-care domains were compared, predominantly positive experiences were expressed for person centeredness, safety, and timeliness, whereas the domains of equity and effectiveness evoked predominantly expressions of concern. The participants expressed mixed feelings about the domain of efficiency. In the following paragraphs, more in-depth analyses of the participants’ reflections on the individual dimensions of quality of care are presented, starting with the domain that generated the highest number of codes.
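The reported distribution of coded expressions follows directly from the per-domain code counts; the short sketch below simply recomputes the percentages quoted above from those counts (the dictionary is a restatement of the reported numbers, not additional data).

```python
# Recomputing the reported share of coded expressions per quality-of-care domain.
code_counts = {
    "person centeredness": 668,
    "safety": 382,
    "equity": 337,
    "timeliness": 158,
    "efficiency": 135,
    "effectiveness": 69,
}
total = sum(code_counts.values())  # 1749
for domain, count in code_counts.items():
    print(f"{domain}: {count}/{total} = {count / total:.2%}")
# person centeredness: 668/1749 = 38.19%, safety: 382/1749 = 21.84%, ...
```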

Person Centeredness

Subtheme A: Client Perspective

Both parents and adolescents reported that rereading information in the client portal contributed to person centeredness because it helped them to recollect what had been discussed during a visit, to get an overview over a longer period, and to prepare for the next visit:

Sometimes it is so crowded in my head. Then I start thinking: what was it all about? [Mother, 2 children, respondent 7.2]
It’s more like when I am struggling with something that we have discussed earlier that I think: Hey, wait a minute. Didn’t we already talk about this once? And I can reread our conversation. [Female adolescent, aged 17 years, respondent 10]

Using the client portal to get an overview was even more important for the parents with a migratory background, although only 1 of them had been using the portal before the interview. However, after watching the portal demonstration and accessing their own child’s health record, all parents from a migratory background considered access to the client portal to be very valuable. They expected that both rereading and reading with others would be vital. Rereading, and using a web-based translation tool when they did not comprehend the Dutch text, would help them to get a better understanding of what was discussed during a previous visit. A total of 50% (3/6) of the mothers with a migratory background had partners who understood Dutch better than they did. Rereading together after a mother’s visit to the CJG would provide the father with all relevant information and would help the mother recollect what was discussed or provide her with information that she had not grasped yet during the visit:

This one (client portal), this is good! My husband always asks: “How big was his head, how tall was he and how many kilos.” And then I go: “Oh my goodness, I forgot! Do I need to memorize that?” Now I can say: “Hey, you can log in and see for yourself what has happened.” [Mother, 1 child, respondent 2.1]

Involving relatives in one’s care was an aspect of person centeredness that not only the parents with a migratory background reported as a benefit from access to the client portal. Most parents valued that a partner who had not been present at the physician’s visit could read the notes afterward. For adolescents, it felt easier to have parents read a visit report than to recall the whole conversation themselves, although they also valued the possibility of actively withholding information from their parents if they wanted to. Finally, rereading with relatives or friends was reported as helpful as well, when preparing for a next visit, or when decisions had to be made about the care process:

I have a Syrian friend who does not speak Dutch. Her daughter has a growth problem. I helped her and we took the information from the growth chart in this portal, bringing it with us to the hospital. [Mother, 3 children, respondent 11]

Being able to reread information, the parents and the adolescents felt well informed and engaged in their care plan. They also valued being part of the reporting process, discussing beforehand what should be reported and how. The combination of reporting together and rereading information enhanced their sense of ownership and contributed to equal client-professional collaboration:

Now I know, because I can check myself, when my children need vaccinations [Father, 5 children, respondent 3.2]
You construct the report together, so to speak, and you can both navigate the plan a little. [Mother, 2 children, respondent 7.3]

Both parents and adolescents would like to have more ownership than was facilitated by the client portal. Some parents expressed the need to add more information to Iuvenelis to create a full overview of all health and welfare issues concerning their child. Adolescents wanted to be more in control of who accessed their health records; they wanted to actively give access to professionals or at least be able to see beforehand who had access to their record instead of reading afterward in their view log who had accessed their health information:

At least I want to see beforehand which professional is authorized to access my health record, instead of seeing who has accessed my record afterwards. [Male adolescent, aged 17 years, respondent 5.1]

Subtheme B: Professional Attitude

Numerous participants emphasized that a professional attitude was an important underlying condition to deliver person-centered care and to experience the possible benefits of using Iuvenelis. The transparency of Iuvenelis contributed to a sense of trust, but only if professionals reported respectfully, showing that they did take clients seriously. Being able to see in a view log who accessed your health record was considered reassuring and enhanced trust. A mother stated the following:

You should consider very carefully how you report, because you are inviting me: “Go ahead, read it.” You are giving full access to the health record. [Mother, 2 children, respondent 7.3]

On the other hand, trust could be damaged if professionals did not report respectfully or did not respect a client’s privacy. After experiencing numerous instances where professionals were speaking about her, 1 parent chose not to access the client portal, to protect herself from losing trust in her current care provider:

I have decided that I trust “X” completely. Why should I read my health record when I do not need to and take the risk to read something that might harm that trust? [Mother, 2 children, Respondent 9]

Safety

Both parents and adolescents were satisfied with the security of their health data and the way professional authorization was organized. They generally valued the possibility to see in their view log who had accessed their health record. All adolescents valued their right to decide about access for their parents. Knowing how data safety was safeguarded was an important factor contributing to their trust in the system:

This afternoon I saw that someone had accessed my daughter’s record. But I remembered I approved that person. It’s nice to know that my approval is needed beforehand. [Mother, 4 children, respondent 7.1]
I had problems with my parents, and I don’t know if that’s still in all those documents. Then it is nice indeed that you can decide, what they can and can’t see. [Male adolescent, 17 years, respondent 5.1]

However, only half of the portal-using participants were well informed about the privacy and data security measures and knew where to find the view log. For 1 adolescent, the view log was a reminder that professionals were discussing her situation without her being present, which she did not appreciate:

Although I like seeing who has accessed my health information, it also gives me stress. Because once they discussed my condition in a meeting with several people and I was not there. They were talking about me without me, so to speak, and that’s not okay. When I check the view log that situation comes back in mind. [Female adolescent, aged 18 years, respondent 5.2]
Can other people [outside the CJG] see my child’s record? How do I know that you don’t give it to other people? Because everything is web-based. [Mother, 1 child, respondent 2.1]

Correcting errors is generally considered part of the safety domain [ 12 ]. Across the interviews, 2 adolescents and 3 parents reported having encountered registration errors or missed appointments without follow-up when checking their portal. They said that identifying errors did not upset them; quite the reverse, they appreciated the possibility to detect errors, report them, and have them corrected. Moreover, being able to correct mistakes increased their sense of ownership over their care process. The parents said it was important to have the errors they found corrected, whereas the adolescents said they would not ask for correction:

Sometimes things go wrong. For example, E had missed a vaccination. So now we can check the record ourselves and see which vaccination he needs. [Father, 5 children, respondent 3.2]

Equity

Independent of their native country and educational level, participants thought very positively of the client portal’s usability. The portal was experienced as easy to use and intuitive. The parents and the adolescents could log on to the system easily using their digital ID (DigiD), because people had familiarized themselves with this verification procedure during the COVID-19 pandemic. Usability on mobile phones was also considered good:

Logging in with DigiD makes things easier actually, solving the whole hassle of passwords. [Mother, 4 children, respondent 7.1]
For me, it must be well-organised and then it’s good. The way it is constructed right now, it’s clear, uncluttered and you can read everything. I think I will look more often. [Mother, 2 children, respondent 7.3]

The parents and the adolescents also considered most recorded content comprehensible. However, some portal features, for example, the vaccination overview and planning appointments, required explanation, and the parents and the adolescents sometimes encountered jargon or incomprehensible abbreviations:

I understood most things I read. But I thought about some information from when I was a little kid, some expressions: that must be only for doctors. [Female adolescent, aged 18 years, respondent 12]

The most serious concern expressed by parents was that not all clients were informed equally about the existence of Iuvenelis. A total of 7 (35%) out of 20 participants had not received any information about Iuvenelis before the interview, and 86% (6/7) of them were from a migratory background. One parent from a migratory background did use the client portal to manage appointments but was not aware that she could also reread visit reports:

If I had not been here, I would not have known anything about it at all, and that’s a shame. [Mother, 1 child, respondent 2.3]

The parents presented many options for improving communication. Emphasizing the importance of providing more equal information to all population groups, 1 parent offered to participate in information meetings with mothers from migratory backgrounds:

Some mothers (with a migratory background) are unsure about their language proficiency. For them, it is easier to do it through the internet. [Mother, 3 children, respondent 11]

Timeliness

The client portal’s 24/7 accessibility did not contribute to faster access to care. However, it did provide parents and adolescents with the opportunity to ask questions or schedule appointments easily and at their convenience. Parents in particular valued this opportunity, as well as the immediate access to their health information without the mediation of a CJG professional, as time saving:

Suppose I get very anxious during the weekend about certain behaviour I observed. I would prefer to search for information right then and there, instead of sending an email and waiting several days until someone responds. I think it’s a plus that I can check the client portal and ask my questions immediately. [Mother, 2 children, Respondent 7.3]
I rescheduled my appointment once through the portal. Very convenient and timesaving! [Mother, 2 children, respondent 7.2]

Efficiency

In Iuvenelis, all CJG professionals had access to all relevant information stored in the same place, which was considered an advantage contributing to efficiency. Consequently, the parents and the adolescents did not have to repeat their stories when visiting a new professional in the CJG:

I think it is very convenient when you visit several people in the same period that all information is in one place. So, they can make use of each other’s information. [Female adolescent, 15 years, respondent 6]

However, both parents and adolescents saw room for improvement in expanding Iuvenelis toward other care providers and in a more active role for themselves in uploading information from other care providers in their client portal. They felt that if all their health data were stored in one place and accessible to all their care providers, it would be easier for both care providers and clients themselves to create a clear overview and manage their care:

I hope lines between all professionals will be shorter. Eventually, I hope my children will have all their health data in this record, that this will be their complete and only health record. [Mother, 2 children, respondent 8.3]

Effectiveness

Parents and adolescents did not associate using Iuvenelis with effectiveness. Although a fully accessible health record allows clients to engage in the management of their care process, none of the participants commented on the actual care process and whether the right choices had been made.

Parents and adolescents did comment on the process and quality of reporting, which they felt could be improved. Some reports contained mistakes, and some were incomplete or missing. One parent expressed the concern that reports were sometimes biased, elaborating on risk factors while neglecting protective factors:

They only report what is wrong. Do you know what could really help? If you would read in your child’s record what is going well if someone would write down what a lovely little boy he is. [Mother, 2 children, respondent 9]

Principal Findings

With this study, we explored how parents and adolescents visiting preventive health care and youth care perceived the quality of care when using Iuvenelis. Both the experiences of active users and the expectations or first impressions of nonusers were included. The results suggest that using Iuvenelis contributed to some, but not all, aspects of quality of care. On the positive side, parents and adolescents felt better informed and expressed more engagement in the care process than before the introduction of Iuvenelis. They felt more in control of their health data, reported having faster and simpler access to their health information, and found it easier to manage appointments or ask questions at their convenience. Portal usability and data safety were regarded as high, and interdisciplinary collaboration in Iuvenelis was considered to enhance efficiency. The parents from a migratory background expected that portal access would give them a better understanding of and more control over their care processes.

However, parents expressed concerns about possible unequal access due to a lack of information for the parents from a migratory background. Furthermore, both parents and adolescents saw room for improvement in the broader interdisciplinary use of Iuvenelis. Finally, they felt that effectiveness could be improved by more complete reporting regarding protective factors as well as risk factors.

Comparison With Prior Work

Overall Contribution to Quality of Care

Previous research investigating quality of care in relation to using PAEHRs predominantly focused on adult health care. These studies reported largely the same outcomes as our study, although described from a care provider’s perspective. Using a PAEHR was reported to contribute to person centeredness [ 7 , 39 , 40 ], safety, and efficiency [ 16 , 39 , 40 ]. Contrary to this study, prior studies also show a positive impact of using a PAEHR on effectiveness [ 16 , 39 , 40 ]. Some studies report that patient portals enhance timeliness through messaging functionalities or quicker access to results [ 41 - 45 ].

Person Centeredness and Professional Perspective

Some participants emphasized the importance of a person-centered professional attitude, which they considered fundamental for Iuvenelis’ contribution to quality of care. When professionals reported respectfully in Iuvenelis, this enhanced the client’s trust in their care providers, whereas earlier experiences with professionals not respecting a client’s privacy damaged that trust. An extensive review by Scholl et al [ 46 ] generated a patient-centered care model that places a professional’s attitude central in the delivery of person-centered care. In this model, delivering patient-centered care relies on professionals embracing a person-centered attitude characterized by respecting a patient’s unique preferences and needs, building a professional-patient relationship based on equality, and viewing a patient’s health from a biopsychosocial perspective [ 46 ]. Leeuwis and Aarts [ 47 ] stated that complex interventions, such as technological innovations, usually require change on different levels. These changes, on a technological, organizational, and professional level, are considered interdependent [ 47 ]. In this case, implementing a PAEHR to enhance person centeredness is not only about introducing the technological tool; the implementation needs to address professional attitude and behavior as well. In turn, changes in professional behavior and attitude require adjustments at the organizational or institutional level. These interdependencies should be anticipated when organizations start implementing a PAEHR, and the necessary changes on an organizational and professional level should be planned and facilitated in addition to the development and implementation of the tool itself.

Equity

Equity emerged in this study as an issue of concern because most participants with a migratory background appeared to be unaware of the existence of a client portal, compared with only 1 participant with a native Dutch background. Anecdotal evidence from this study suggests that professionals may have hesitated to inform clients about the existence of the client portal when they noticed that a client’s knowledge of Dutch was limited. Unawareness of the existence of a patient portal has been reported as a main barrier to using a patient portal [ 48 , 49 ] and could be resolved by provider encouragement, which is an important contributor to portal use [ 50 - 52 ]. However, when providers selectively encourage certain groups of people to use a patient portal and neglect others, they could increase disparities. Previous research shows that persons living in vulnerable circumstances, such as lower-educated people or persons from a migratory background, make less use of patient portals than average [ 17 , 42 , 53 - 58 ]. The literature on the digital divide reports that social exclusion can lead to digital exclusion and that the introduction of new technology might then unintentionally reinforce already existing health disparities [ 59 - 61 ]. In total, 2 studies investigating a provider’s role in patient portal use reported that professionals play a role in this reinforcement: higher-educated and White patients were more likely to report being encouraged by health care providers to use a client portal than lower-educated patients and patients from migratory backgrounds [ 50 , 51 ]. Antonio et al [ 62 ] stated in a review that “healthcare providers’ prejudgments may further exclude populations that are already underserved.”

This is an important issue to address because research shows that people, especially those living in vulnerable circumstances, experience benefits from using a PAEHR [ 42 , 43 , 63 , 64 ]. In our study, parents from migratory backgrounds reported that rereading their health information and sharing it with family members or friends would provide them with a better understanding of the care process and would increase their engagement in care. We concluded that ensuring that all clients are equally informed about the existence of a client portal is not only necessary to prevent further disparities but could even diminish existing disparities [ 65 ]. This may require adapted measures for specific population groups, for example, using informal meetings with the parents from migrant backgrounds to inform them in their own language about Iuvenelis. In addition, professionals need to be made aware of the risk of the digital divide and of their crucial role in bridging it.

Confidentiality

On the basis of the known bottlenecks to developing PAEHRs for adolescents [ 26 , 27 ], we expected data safety, confidentiality, and privacy to be an issue of concern for at least some of our participants. However, surprisingly, participants did not express concerns about their data safety. Adolescents highly valued how their confidentiality was protected and reported that this contributed to their trust in their care provider. Comparably, recent studies investigating adolescent use of PAEHRs suggest that adolescents are not concerned about their confidentiality when using a PAEHR [ 8 , 13 , 18 , 23 ]. A recent review investigated the experiences of parents and adolescents using a PAEHR in hospital, primary, and mental health care settings versus the expectations of parents and adolescents without access to a PAEHR. In this review, the authors found that parents and adolescents without access to a PAEHR anticipated confidentiality issues when using a PAEHR, whereas parents and adolescents using a PAEHR did not experience these issues [ 66 ]. In a similar vein, research that compared professionals’ general concerns about using PAEHRs beforehand with experiences after a period of using a PAEHR shows that anticipated worries were not always justified. For example, an expected increase in workload and excessively anxious patients did not occur after introducing PAEHRs [ 67 - 69 ]. Confidentiality issues could have been one of the expected problems that did not materialize. Another explanation for the contrast between expected bottlenecks and real experiences may be that the explicit focus on confidentiality issues in the literature raised specific awareness of this topic during the development of Iuvenelis and led to the implementation of successful solutions.

Integrated Care

The participants considered the interdisciplinary use of Iuvenelis a contribution to efficiency and even expressed a need to expand the use of Iuvenelis to other disciplines outside the CJG. This would allow them to view all their health data in one place. Parents and adolescents stated that, in their opinion, this would contribute to efficiency. However, with their remarks, participants drew upon an additional aspect of quality of care, integrated care, that the WHO has added recently [ 1 , 70 ]. The WHO defines integrated care as “providing care that is coordinated across levels and providers and makes available the full range of health services throughout the life course.” The parents and the adolescents even challenged the CJG organizations to extend opportunities for interdisciplinary collaboration within Iuvenelis, enabling them to gather all their health information there. With that challenge, the parents and the adolescents confirmed the value of the Dutch aim for integrated care in child health care and youth care [ 71 ]. This aim is also reflected in the recently established Healthy and Active Living Agreement between the Dutch government, municipalities, and public health associations [ 72 ], although it is not yet common practice throughout the country.

Differences Between Parents’ and Adolescents’ Experiences

Although parents’ and adolescents’ perceptions were similar in many aspects, differences were reported as well. Parents considered it more important than adolescents did to correct errors and valued the web-based options to ask questions and manage appointments more highly than adolescents did. Comparably, recent studies among adolescent patients show that adolescents are less likely to speak up about mistakes in their records than their parents [ 73 , 74 ] and are more reluctant than adults to send direct messages to their caregivers in the PAEHR [ 13 , 23 ]. Both parents and adolescents liked to share record content with those close to them, but adolescents also valued the opportunity to shield specific content from their parents when needed. Adolescents considered deciding who had access to their health information vital to exercising ownership over that information. In line with this, a recent review reports that teens believe they should have control over what remains confidential in their medical records and what their parents can access through proxy portal accounts [ 23 ].

Strengths and Limitations

Recruiting a well-balanced group of participants was a strength of this study compared with our previous studies on Iuvenelis, in which adolescents were represented in small numbers and participants with migratory backgrounds could not be included [ 75 , 76 ]. Including participants with this range of key characteristics enabled us to explore different client perspectives. Choosing a qualitative research design made it possible to collect rich, in-depth information about clients’ expectations of and actual experiences with using Iuvenelis.

Due to the COVID-19 pandemic, organizing focus groups proved difficult. Although some triple interviews could be organized, most participants were interviewed individually or in pairs. Consequently, our study lacked some of the interaction that is usually generated in larger groups, which could be considered a limitation [ 77 ]. We partly managed to overcome this limitation by collecting and analyzing data in a continuous, iterative process. This meant that topics that were brought up in the first interview could be explored further in the following interviews.

As JB had a role as a policy advisor at the CJG, she was able to introduce Iuvenelis to participants who were not yet acquainted with the client portal, which allowed us to include more parents with a migratory background and to add valuable information to our data. However, combining a portal demonstration with an interview about how clients perceived the quality of care when using this portal might have created respondent bias: the interviewer’s positive attitude toward the client portal could have evoked socially desirable answers. To enhance trustworthiness, the interviewers followed the interview guide as closely as possible, allowing some adaptation to the conversational flow. A member check was conducted, transcripts were coanalyzed by a researcher with no connections to Iuvenelis or the CJG, and reporting followed the COREQ checklist [ 32 , 78 ].

Conclusions

Using Iuvenelis is expected to contribute to experienced quality of care from the perspectives of both parents and adolescents, specifically to the aspects of person centeredness, timeliness, and safety. Parents and adolescents feel better informed, experience a greater sense of ownership, and are satisfied with data security and portal usability. Clients also report that using Iuvenelis contributes to integrated care. Some quality aspects, however, such as equity in portal access, still need addressing. In general, client information about the portal needs to be improved, specifically focusing on people in vulnerable circumstances, such as those from migratory backgrounds. In addition, to maximize the potential benefit of using Iuvenelis, stimulating a person-centered attitude among professionals is important. With our study, we have investigated parents’ and adolescents’ perspectives regarding all domains of quality of care. However, considering the small number of adolescent participants, adding quantitative data from a structured survey could strengthen the available evidence.

Data Availability

As interview transcripts contain sensitive information, these will not be published in a separate data set.

Authors' Contributions

JB, EdV, and AH-N conceived and designed the study. JB and CAdM collected and analyzed the data. JB drafted the manuscript as first author. All authors provided critical feedback, helped shape the analysis and manuscript, and have read and approved the final manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1: Completed COREQ (Consolidated Criteria for Reporting Qualitative Studies) checklist.

Multimedia Appendix 2: Interview topic list.

Multimedia Appendix 3: Code tree displaying all applied codes, grouped in colors around each aspect of quality of care.

  • Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC. The National Academies Press; 2001.
  • WHO global strategy on people-centred and integrated health services: interim report. World Health Organization. 2015. URL: https://www.afro.who.int/sites/default/files/2017-07/who-global-strategy-on-pcihs-main-document_final.pdf [accessed 2024-04-05]
  • People-centred and integrated health services: an overview of the evidence. World Health Organization. 2015. URL: https://apps.who.int/iris/bitstream/handle/10665/155004/WHO_HIS_SDS_2015.7_eng.pdf?sequence=1 [accessed 2024-04-05]
  • Price D, Edwards M, Davies F, Cooper A, McFadzean J, Carson-Stevens A, et al. Patients' experiences of attending emergency departments where primary care services are located: qualitative findings from patient and clinician interviews from a realist evaluation. BMC Emerg Med. Jan 22, 2022;22(1):12. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Baumhauer JF, Bozic KJ. Value-based healthcare: patient-reported outcomes in clinical decision making. Clin Orthop Relat Res. Jun 2016;474(6):1375-1378. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Davis Giardina T, Menon S, Parrish DE, Sittig DF, Singh H. Patient access to medical records and healthcare outcomes: a systematic review. J Am Med Inform Assoc. 2014;21(4):737-741. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Benjamins J, Haveman-Nies A, Gunnink M, Goudkuil A, de Vet E. How the use of a patient-accessible health record contributes to patient-centered care: scoping review. J Med Internet Res. Jan 11, 2021;23(1):e17655. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Dohil I, Cruz R, Sweet H, Huang JS. Sharing notes with adolescents and young adults admitted to an inpatient psychiatry unit. J Am Acad Child Adolesc Psychiatry. Mar 2021;60(3):317-320. [ CrossRef ] [ Medline ]
  • Earnest MA, Ross SE, Wittevrongel L, Moore LA, Lin CT. Use of a patient-accessible electronic medical record in a practice for congestive heart failure: patient and physician experiences. J Am Med Inform Assoc. Sep 01, 2004;11(5):410-417. [ CrossRef ]
  • Cimino JJ, Patel VL, Kushniruk AW. The patient clinical information system (PatCIS): technical solutions for and experience with giving patients access to their electronic medical records. Int J Med Inform. Dec 18, 2002;68(1-3):113-127. [ CrossRef ] [ Medline ]
  • Honeyman A, Cox B, Fisher B. Potential impacts of patient access to their electronic care records. Inform Prim Care. 2005;13(1):55-60. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mold F, de Lusignan S, Sheikh A, Majeed A, Wyatt JC, Quinn T, et al. Patients' online access to their electronic health records and linked online services: a systematic review in primary care. Br J Gen Pract. Mar 2015;65(632):e141-e151. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hong MK, Wilcox L, Feustel C, Wasileski-Masker K, Olson TA, Simoneaux SF. Adolescent and caregiver use of a tethered personal health record system. AMIA Annu Symp Proc. 2016;2016:628-637. [ FREE Full text ] [ Medline ]
  • Bao C, Bardhan IR, Singh H, Meyer BA, Kirksey K. Patient–provider engagement and its impact on health outcomes: a longitudinal study of patient portal use. MIS Q. Jun 1, 2020;44(2):699-723. [ CrossRef ]
  • Kruse CS, Bolton K, Freriks G. The effect of patient portals on quality outcomes and its implications to meaningful use: a systematic review. J Med Internet Res. Feb 10, 2015;17(2):e44. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Carini E, Villani L, Pezzullo AM, Gentili A, Barbara A, Ricciardi W, et al. The impact of digital patient portals on health outcomes, system efficiency, and patient attitudes: updated systematic literature review. J Med Internet Res. Sep 08, 2021;23(9):e26189. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • de Lusignan S, Mold F, Sheikh A, Majeed A, Wyatt JC, Quinn T, et al. Patients' online access to their electronic health records and linked online services: a systematic interpretative review. BMJ Open. Sep 08, 2014;4(9):e006021. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Huang JS, Yueh R, Ma S, Cruz R, Bauman L, Choi LJ. Adolescents' and young adults' satisfaction with and understanding of medical notes from a pediatric gastroenterology practice: a cross-sectional cohort study. J Pediatr. Dec 2019;215:264-266. [ CrossRef ] [ Medline ]
  • Sarabu C, Pageler N, Bourgeois F. OpenNotes: toward a participatory pediatric health system. Pediatrics. Oct 18, 2018;142(4):e20180601. [ CrossRef ] [ Medline ]
  • Hagström J, Scandurra I, Moll J, Blease C, Haage B, Hörhammer I, et al. Minor and parental access to electronic health records: differences across four countries. Stud Health Technol Inform. May 25, 2022;294:495-499. [ CrossRef ] [ Medline ]
  • Bergman DA, Brown NL, Wilson S. Teen use of a patient portal: a qualitative study of parent and teen attitudes. Perspect Health Inf Manag. 2008;5(13):13. [ FREE Full text ] [ Medline ]
  • Klein JD, McNulty M, Flatau CN. Adolescents' access to care: teenagers' self-reported use of services and perceived access to confidential care. Arch Pediatr Adolesc Med. Jul 1998;152(7):676-682. [ CrossRef ] [ Medline ]
  • Sethness JL, Golub S, Evans YN. Adolescent patient portals and concerns about confidentiality. Curr Opin Pediatr. Aug 01, 2023;35(4):430-435. [ CrossRef ] [ Medline ]
  • Essén A, Scandurra I, Gerrits R, Humphrey G, Johansen MA, Kierkegaard P, et al. Patient access to electronic health records: differences across ten countries. Health Policy Technol. Mar 2018;7(1):44-56. [ CrossRef ]
  • Sharko M, Wilcox L, Hong MK, Ancker JS. Variability in adolescent portal privacy features: how the unique privacy needs of the adolescent patient create a complex decision-making process. J Am Med Inform Assoc. Aug 01, 2018;25(8):1008-1017. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bayer R, Santelli J, Klitzman R. New challenges for electronic health records: confidentiality and access to sensitive health information about parents and adolescents. JAMA. Jan 06, 2015;313(1):29-30. [ CrossRef ] [ Medline ]
  • Bourgeois FC, DesRoches CM, Bell SK. Ethical challenges raised by OpenNotes for pediatric and adolescent patients. Pediatrics. Jun 18, 2018;141(6):e20172745. [ CrossRef ] [ Medline ]
  • Calman N, Pfister HR, Lesnewski R, Hauser D, Shroff N. Electronic access to adolescents' health records: legal, policy, and practice implications. Fam Pract Manag. 2015;22(2):11-14. [ FREE Full text ] [ Medline ]
  • Does the HIPAA privacy rule allow parents the right to see their children’s medical records? US Department of Health and Human Services. URL: https://www.hhs.gov/hipaa/for-professionals/faq/227/can-i-access-medical-record-if-i-have-power-of-attorney/index.html [accessed 2024-04-05]
  • Nehel A. Privacy of a child’s healthcare: do parents have a right to access a child’s healthcare records? Procido LLP. URL: https://procido.com/2023/09/20/privacy-of-a-childs-healthcare-do-parents-have-a-right-to-access-a-childs-healthcare-records/ [accessed 2024-04-05]
  • Moustakas C. Phenomenological Research Methods. Thousand Oaks, CA. Sage Publications; 1999.
  • Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. Dec 2007;19(6):349-357. [ CrossRef ] [ Medline ]
  • Transformatieplan samenwerking noord veluwe 2018-2021. Regio Noord Veluwe. 2017. URL: https://vng.nl/files/vng/noord_veluwe.pdf [accessed 2024-03-04]
  • van Beek F, Rutjes L. Beschrijving van de Kwaliteitsstandaarden Jeugdzorg Q4C. In: van Beek F, Rutjes L, editors. Kwaliteitsstandaarden Jeugdzorg Q4C: Wat Kinderen en Jongeren Belangrijk Vinden als ze Niet Thuis Wonen. Houten, the Netherlands. Bohn Stafleu van Loghum; 2009;29-85.
  • Kwaliteitswaarden CJG. Centrum voor Jeugd en Gezin, Noord Veluwe. URL: https://www.cjgoldebroek.nl/documents/230/20161202_CJG_Kwaliteitswaarden_def_sQO5BpM.pdf [accessed 2024-03-04]
  • Wet geneeskundige behandel overeenkomst (WGBO). Dutch Ministry of Justice and Security. 2006. URL: https://wetten.overheid.nl/BWBR0005290/2020-07-01/ [accessed 2024-03-04]
  • Braun V, Clarke V. What can "thematic analysis" offer health and wellbeing researchers? Int J Qual Stud Health Well-being. Oct 16, 2014;9:26152. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. Jan 2006;3(2):77-101. [ CrossRef ]
  • Neves AL, Freise L, Laranjo L, Carter AW, Darzi A, Mayer E. Impact of providing patients access to electronic health records on quality and safety of care: a systematic review and meta-analysis. BMJ Qual Saf. Dec 12, 2020;29(12):1019-1032. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lear R, Freise L, Kybert M, Darzi A, Neves AL, Mayer EK. Perceptions of quality of care among users of a web-based patient portal: cross-sectional survey analysis. J Med Internet Res. Nov 17, 2022;24(11):e39973. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hefner JL, MacEwan SR, Biltz A, Sieck CJ. Patient portal messaging for care coordination: a qualitative study of perspectives of experienced users with chronic conditions. BMC Fam Pract. May 03, 2019;20(1):57. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gerard M, Fossa A, Folcarelli PH, Walker J, Bell SK. What patients value about reading visit notes: a qualitative inquiry of patient experiences with their health information. J Med Internet Res. Jul 14, 2017;19(7):e237. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Jackson SL, Mejilla R, Darer JD, Oster NV, Ralston JD, Leveille SG, et al. Patients who share transparent visit notes with others: characteristics, risks, and benefits. J Med Internet Res. Nov 12, 2014;16(11):e247. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rief JJ, Hamm ME, Zickmund SL, Nikolajski C, Lesky D, Hess R, et al. Using health information technology to foster engagement: patients' experiences with an active patient health record. Health Commun. Mar 2017;32(3):310-319. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rexhepi H, Åhlfeldt RM, Cajander Å, Huvila I. Cancer patients' attitudes and experiences of online access to their electronic medical records: a qualitative study. Health Informatics J. Jun 2018;24(2):115-124. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Scholl I, Zill JM, Härter M, Dirmaier J. An integrative model of patient-centeredness - a systematic review and concept analysis. PLoS One. Sep 17, 2014;9(9):e107828. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Leeuwis C, Aarts N. Rethinking communication in innovation processes: creating space for change in complex systems. J Agric Educ Ext. Feb 2011;17(1):21-36. [ CrossRef ]
  • Mishuris RG, Stewart M, Fix GM, Marcello T, McInnes DK, Hogan TP, et al. Barriers to patient portal access among veterans receiving home-based primary care: a qualitative study. Health Expect. Dec 12, 2015;18(6):2296-2305. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ronda MC, Dijkhorst-Oei LT, Rutten GE. Reasons and barriers for using a patient portal: survey among patients with diabetes mellitus. J Med Internet Res. Nov 25, 2014;16(11):e263. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mukhopadhyay S, Basak R, Khairat S, Carney TJ. Revisiting provider role in patient use of online medical records. Appl Clin Inform. Oct 15, 2021;12(5):1110-1119. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Shimoga SV, Lu YZ. Role of provider encouragement on patient engagement via online portals. J Am Med Inform Assoc. Oct 01, 2019;26(10):968-976. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Patel V, Johnson C. Individuals' use of online medical records and technology for health needs. ONC Data Brief. URL: http://resource.nlm.nih.gov/9918332985706676 [accessed 2024-03-04]
  • Yamin CK, Emani S, Williams DH, Lipsitz SR, Karson AS, Wald JS, et al. The digital divide in adoption and use of a personal health record. Arch Intern Med. Mar 28, 2011;171(6):568-574. [ CrossRef ] [ Medline ]
  • Aljabri D, Dumitrascu A, Burton MC, White L, Khan M, Xirasagar S, et al. Patient portal adoption and use by hospitalized cancer patients: a retrospective study of its impact on adverse events, utilization, and patient satisfaction. BMC Med Inform Decis Mak. Jul 27, 2018;18(1):70. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Cromer R, Denneson LM, Pisciotta M, Williams H, Woods S, Dobscha SK. Trust in mental health clinicians among patients who access clinical notes online. Psychiatr Serv. May 01, 2017;68(5):520-523. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Crouch PB, Rose CD, Johnson M, Janson SL. A pilot study to evaluate the magnitude of association of the use of electronic personal health records with patient activation and empowerment in HIV-infected veterans. PeerJ. 2015;3:e852. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fossa AJ, Bell SK, DesRoches C. OpenNotes and shared decision making: a growing practice in clinical transparency and how it can support patient-centered care. J Am Med Inform Assoc. Sep 01, 2018;25(9):1153-1159. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Grossman LV, Masterson Creber RM, Benda NC, Wright D, Vawdrey DK, Ancker JS. Interventions to increase patient portal use in vulnerable populations: a systematic review. J Am Med Inform Assoc. Aug 01, 2019;26(8-9):855-870. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Goedhart NS, Zuiderent-Jerak T, Woudstra J, Broerse JE, Betten AW, Dedding C. Persistent inequitable design and implementation of patient portals for users at the margins. J Am Med Inform Assoc. Feb 15, 2021;28(2):276-283. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Helsper EJ. A corresponding fields model for the links between social and digital exclusion. Commun Theor. Oct 15, 2012;22(4):403-426. [ CrossRef ]
  • Latulippe K, Hamel C, Giroux D. Social health inequalities and eHealth: a literature review with qualitative synthesis of theoretical and empirical studies. J Med Internet Res. Apr 27, 2017;19(4):e136. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Antonio MG, Petrovskaya O, Lau F. Is research on patient portals attuned to health equity? A scoping review. J Am Med Inform Assoc. Aug 01, 2019;26(8-9):871-883. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bell SK, Mejilla R, Anselmo M, Darer JD, Elmore JG, Leveille S, et al. When doctors share visit notes with patients: a study of patient and doctor perceptions of documentation errors, safety opportunities and the patient-doctor relationship. BMJ Qual Saf. Apr 2017;26(4):262-270. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gerard M, Chimowitz H, Fossa A, Bourgeois F, Fernandez L, Bell SK. The importance of visit notes on patient portals for engaging less educated or nonwhite patients: survey study. J Med Internet Res. May 24, 2018;20(5):e191. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Helsper EJ, van Deursen AJ. Do the rich get digitally richer? Quantity and quality of support for digital engagement. Inf Commun Soc. Jun 29, 2016;20(5):700-714. [ CrossRef ]
  • Hagström J, Blease C, Haage B, Scandurra I, Hansson S, Hägglund M. Views, use, and experiences of web-based access to pediatric electronic health records for children, adolescents, and parents: scoping review. J Med Internet Res. Nov 22, 2022;24(11):e40328. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Delbanco T, Walker J, Bell SK, Darer JD, Elmore JG, Farag N, et al. Inviting patients to read their doctors' notes: a quasi-experimental study and a look ahead. Ann Intern Med. Oct 02, 2012;157(7):461-470. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Pell JM, Mancuso M, Limon S, Oman K, Lin CT. Patient access to electronic health records during hospitalization. JAMA Intern Med. May 2015;175(5):856-858. [ CrossRef ] [ Medline ]
  • Petersson L, Erlingsdóttir G. Open notes in Swedish psychiatric care (part 2): survey among psychiatric care professionals. JMIR Ment Health. Jun 21, 2018;5(2):e10521. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Delivering quality health services: a global imperative for universal health coverage. World Health Organization, Organisation for Economic Co-operation and Development, and The World Bank. URL: https://www.worldbank.org/en/topic/universalhealthcoverage/publication/delivering-quality-health-services-a-global-imperative-for-universal-health-coverage [accessed 2024-03-04]
  • Beleidskader transformatie jeugdzorg noord-veluwe, 'in een keer goed', 2015-2018. De raad der gemeente Elburg. 2015. URL: https://lokaleregelgeving.overheid.nl/CVDR340352/1 [accessed 2024-03-04]
  • GALA - gezond en actief leven akkoord. Dutch Ministry of Public Health, Welfare and Sports. 2023. URL: https://open.overheid.nl/documenten/ronl-e8e739b2e77bf92b7bfed78d4569ae4ecbce8dac/pdf [accessed 2024-03-04]
  • Lam BD, Bourgeois F, DesRoches CM, Dong Z, Bell SK. Attitudes, experiences, and safety behaviours of adolescents and young adults who read visit notes: opportunities to engage patients early in their care. Future Healthc J. Nov 29, 2021;8(3):e585-e592. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hagström J, Blease C, Kharko A, Scandurra I, Hägglund M. Adolescents identifying errors and omissions in their electronic health records: a national survey. Stud Health Technol Inform. May 18, 2023;302:242-246. [ CrossRef ] [ Medline ]
  • Benjamins J, de Vet E, Jordaan G, Haveman-Nies A. Effect of using client-accessible youth health records on experienced autonomy among parents and adolescents in preventive child healthcare and youth care: a mixed methods intervention study. J Child Health Care. May 25, 2023:13674935231177782. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Benjamins J, Duinkerken JG, den Hamer-Jordaan G, Canfijn R, Koster R, de Vet E, et al. Implementation of EPR-youth, a client-accessible and multidisciplinary health record; a mixed-methods process evaluation. Int J Integr Care. 2023;23(2):26. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kitzinger J. Focus groups. In: Pope C, Mays N, editors. Qualitative Research in Health Care. 3rd edition. Williston, VT. BMJ Books; 2006;21-31.
  • Korstjens I, Moser A. Series: practical guidance to qualitative research. Part 4: trustworthiness and publishing. Eur J Gen Pract. Dec 05, 2018;24(1):120-124. [ FREE Full text ] [ CrossRef ] [ Medline ]

Abbreviations

CJG: Centre for Youth and Family (Centrum voor Jeugd en Gezin)
COREQ: Consolidated Criteria for Reporting Qualitative Studies
PAEHR: patient-accessible electronic health record
WHO: World Health Organization

Edited by S Woods; submitted 10.07.23; peer-reviewed by J Hagström, Y Chu; comments to author 30.08.23; revised version received 11.12.23; accepted 20.03.24; published 23.04.24.

©Janine Benjamins, Emely de Vet, Chloe A de Mortier, Annemien Haveman-Nies. Originally published in Journal of Participatory Medicine (https://jopm.jmir.org), 23.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in Journal of Participatory Medicine, is properly cited. The complete bibliographic information, a link to the original publication on https://jopm.jmir.org, as well as this copyright and license information must be included.

Published on 29.4.2024 in Vol 26 (2024)

Digital Self-Management Platform for Adult Asthma: Randomized Attention-Placebo Controlled Trial

Authors of this article:

Original Paper

  • Aaron Kandola 1, 2 , BSc, MSc, PhD   ; 
  • Kyra Edwards 3 , BSc, MSc   ; 
  • Joris Straatman 2 , BA, MSc   ; 
  • Bettina Dührkoop 2 , DipKFM   ; 
  • Bettina Hein 2 , MA, MSc, Ref.iur   ; 
  • Joseph Hayes 2, 3, 4 , MBChB, MSc, MRCPsych, PhD  

1 Medical Research Council Unit of Lifelong Health and Aging, University College London, London, United Kingdom

2 juli Health, Hull, MA, United States

3 Division of Psychiatry, University College London, London, United Kingdom

4 Camden and Islington NHS Foundation Trust, London, United Kingdom

Corresponding Author:

Joseph Hayes, MBChB, MSc, MRCPsych, PhD

Division of Psychiatry

University College London

Maple House

149 Tottenham Court Road

London, W1T 7BN

United Kingdom

Phone: 44 2089288300

Email: [email protected]

Background: Asthma is one of the most common chronic conditions worldwide, with a substantial individual and health care burden. Digital apps hold promise as a highly accessible, low-cost method of enhancing self-management in asthma, which is critical to effective asthma control.

Objective: We conducted a fully remote randomized controlled trial (RCT) to assess the efficacy of juli, a commercially available smartphone self-management platform for asthma.

Methods: We conducted a pragmatic single-blind, RCT of juli for asthma management. Our study included participants aged 18 years and older who self-identified as having asthma and had an Asthma Control Test (ACT) score of 19 or lower (indicating uncontrolled asthma) at the beginning of the trial. Participants were randomized (1:1 ratio) to receive juli for 8 weeks or a limited attention-placebo control version of the app. The primary outcome measure was the difference in ACT scores after 8 weeks. Secondary outcomes included remission (ACT score greater than 19), minimal clinically important difference (an improvement of 3 or more points on the ACT), worsening of asthma, and health-related quality of life. The primary analysis included participants using the app for 8 weeks (per-protocol analysis), and the secondary analysis used a modified intention-to-treat (ITT) analysis.

Results: We randomized 411 participants between May 2021 and April 2023: a total of 152 (37%) participants engaged with the app for 8 weeks and were included in the per-protocol analysis, and 262 (63.7%) participants completed the week-2 outcome assessment and were included in the modified ITT analysis. Total attrition between baseline and week 8 was 259 (63%) individuals. In the per-protocol analysis, the intervention group had a higher mean ACT score (17.93, SD 4.72) than the control group (16.24, SD 5.78) by week 8 (baseline adjusted coefficient 1.91, 95% CI 0.31-3.51; P =.02). Participants using juli had greater odds of achieving or exceeding the minimal clinically important difference at 8 weeks (adjusted odds ratio 2.38, 95% CI 1.20-4.70; P =.01). There were no between-group differences in the other secondary outcomes at 8 weeks. The results from the modified ITT analyses were similar.

Conclusions: Users of juli had improved asthma symptom control over 8 weeks compared with users of a version of the app with limited functionality. These findings suggest that juli is an effective digital self-management platform that could augment existing care pathways for asthma. The retention of patients in RCTs and real-world use of digital health care apps is a major challenge.

Trial Registration: International Standard Randomised Controlled Trial Number (ISRCTN) registry ISRCTN87679686; https://www.isrctn.com/ISRCTN87679686

Introduction

Asthma is one of the most common chronic conditions worldwide, with an increasing prevalence that currently affects 1 in 10 people at some time [ 1 - 3 ]. The inflammatory disease causes mild-to-severe respiratory symptoms, including shortness of breath, chest tightness, wheezing, and cough. It significantly burdens patients and health care services, including the need for long-term treatment, emergency care, and hospitalizations that will cost the US economy an estimated US $300 billion over the next 20 years in direct health care expenditure [ 4 ]. Effective asthma control is necessary to reduce these costs and improve the quality of life for people with the condition.

Asthma management is based on achieving symptom control and reducing the frequency and severity of exacerbations [ 5 ]. This involves the use of inhaled anti-inflammatory medications and the avoidance of asthma triggers. Symptom control is associated with improved quality of life, reduced health care costs, and better work performance [ 6 ]. However, a significant proportion of individuals with asthma have suboptimal control because of poor adherence to medication, insufficient recognition of triggers, comorbidities (such as rhinitis or obesity), health behaviors (such as smoking), and inadequate information about treatment [ 7 ]. Mobile apps may address some of these treatment challenges by enabling people with asthma to more easily and consistently self-manage their condition compared to existing treatment plans. For example, digital apps can offer timely reminders to improve medication adherence or real-time feedback to identify and adapt to possible triggers and health behaviors [ 8 , 9 ].

A 2017 systematic review and meta-analysis of randomized controlled trials (RCTs) of mobile, web-based, and messaging service apps to support asthma self-management [ 9 ] concluded that these interventions could improve asthma control, but that effectiveness and important features of the apps varied. The majority of these apps included combinations of medication prompts, patient education, digital diaries, action plans, and professional support facilitation [ 9 ]. A similar 2018 review of RCTs and observational studies concluded that, in adults with asthma, mobile apps were more effective than other types of digital interventions, such as web-based interventions [ 10 ]. Studies of app-based interventions published since these reviews have generally been feasibility trials or small underpowered RCTs [ 11 - 14 ]. A 2022 Cochrane review examined the effect of digital apps on asthma medication adherence, concluding they were likely to be useful in poorly adherent populations, but again highlighting heterogeneity among mobile or web-based interventions [ 8 ]. Despite the mixed evidence for effectiveness, several apps are publicly available. These apps frequently incorporate behavior change techniques and gamification. Reviews of these apps have highlighted that they vary considerably in quality, use a range of behavior change techniques, struggle with adequate engagement and retention, and lack clinical validation of efficacy [ 15 - 17 ]. The Global Strategy for Asthma Management and Prevention (Global Initiative for Asthma [GINA]) highlights that, despite the use of digital technologies rapidly increasing in patients with asthma, “high-quality studies are needed to evaluate their utility and effectiveness” [ 3 ].

We aimed to address the fundamental issue that commercially available apps require sufficient evaluation of their effectiveness by conducting an RCT of juli. This is a digital health app that aims to support people with asthma by combining numerous approaches that have been shown to be effective in research-grade apps for asthma, including symptom tracking; medication reminders; trigger identification (including geolocated weather, pollen, and air pollution data); data visualization of respiratory symptoms, mood, exercise, activity, sleep, and heart rate variability; and behavioral activation recommendations about how to improve these parameters [ 18 , 19 ]. Our RCT was fully remote, increasing time efficiency, cost-effectiveness, and reach. We hypothesized that participants randomized to juli would have a greater reduction in asthma symptoms at 8 weeks than those randomized to the attention-placebo control.

Study Design and Participants

We conducted a fully remote, pragmatic, single-blind, placebo-controlled RCT to test the efficacy of juli in adults with asthma. The trial was open to individuals from anywhere in the world, provided they were aged 18 to 65 years, English-speaking, had access to a smartphone, and self-identified as having asthma. We also only included people with asthma symptoms that were uncontrolled, defined as a score of 19 or lower on the Asthma Control Test (ACT) at baseline. An ACT score of 19 or lower is consistent with GINA-defined uncontrolled asthma [ 20 ].

Recruitment

Recruitment ran from May 2021 until April 2023. We recruited via self-help groups for asthma, online adverts, and social media posts. For the duration of the RCT, we modified the app's onboarding so that recruitment was automated, with study information provided to participants within the app. The study team supported potential participants interested in the RCT via email.

Ethical Considerations

The University College London Ethics Committee gave full ethical approval (19413/001). All participants supplied written informed consent within the app, with additional information on a dedicated web page. Data required for the RCT were stored separately in an anonymized format. The juli app is Health Insurance Portability and Accountability Act, Service Organization Control Type 2, and General Data Protection Regulation compliant. Participants in both arms of the RCT were entered into a prize draw at 2, 4, 6, and 8 weeks with the possibility of winning US $20 at each time point. The trial was entered on the International Standard Randomised Controlled Trial Number (ISRCTN) registry (ISRCTN87679686). At the same time, we were running an RCT of the juli app for depression. This RCT had a similar design and analysis [ 21 ].

Randomization and Masking

We assigned participants in a 1:1 ratio to either an attention-placebo control or the full version of juli. We automated and conducted randomization within the app, using random block sizes ranging from 4 to 8. To ensure data integrity, the treatment allocation was concealed from both the research team and independent statisticians until the analysis was finalized.
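
To make the allocation scheme concrete, the following is a minimal sketch of 1:1 block randomization with random block sizes between 4 and 8; it assumes even block sizes and illustrative names, and is not the app's actual implementation.

```python
# Minimal sketch of 1:1 block randomization with random block sizes (4, 6, or 8).
# Illustration only; this is not the juli app's actual randomization code.
import random

def block_randomization(n_participants: int, seed: int = 42) -> list[str]:
    """Return an allocation sequence of 'intervention'/'control' labels."""
    rng = random.Random(seed)
    sequence: list[str] = []
    while len(sequence) < n_participants:
        block_size = rng.choice([4, 6, 8])  # even block sizes keep the 1:1 ratio within each block
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

if __name__ == "__main__":
    allocations = block_randomization(411)
    print(allocations[:8], allocations.count("intervention"), allocations.count("control"))
```

Because allocation is balanced within every block, the two arms stay close to equal in size throughout recruitment, which is consistent with the 204 versus 207 split reported below.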

Intervention

The juli app was developed by gamification experts in collaboration with patients, a psychiatrist, and a pulmonologist. A patient with asthma and a psychiatrist with expertise in the interface between mental and physical health are the chief technical officer and chief medical officer of juli, respectively. We held development and user-testing interviews with 10 patients with asthma (5 female individuals, aged 18-65 years). The app underwent multiple iterations following feedback from these patient panel interviews and discussions with a pulmonologist. Our trial used the full version of the juli app for the intervention group and a limited version in the attention-placebo control group. Participants with the complete juli app received automatic prompts to open the app each day at a user-inputted time. The app asked participants how their asthma was affecting them on a 5-face emoji scale, about their emergency inhaler use that day, how often they had an episode of shortness of breath, and whether they woke in the night due to shortness of breath. Individuals could also track various factors they regarded as relevant to their asthma symptoms, such as tobacco smoke exposure [ 19 ]. The app connects to smart peak flow meters (such as Smart Peak Flow [Smart Respiratory Products Ltd] or MIR Smart One [Smart One]) through Google Fit or Apple HealthKit, or participants could enter this information manually.

The app presented participants with regular, geolocated weather, pollen, and air pollution data relevant to their asthma [ 22 ]. All participants could also access passively gathered smartphone data on relevant health-related factors, including activity, menstrual cycle, and sleep. Participants could check this information daily and see associations with their asthma [ 23 - 25 ]. Participants who chose to connect a wearable to the app saw additional data on workouts and heart rate variability, as well as improved data on activity and sleep; however, lack of access to a wearable was not an exclusion criterion.

The app also uses behavioral activation techniques to provide personalized recommendations about these factors to encourage healthy behaviors. The app includes customizable medication reminders to improve medication adherence [ 26 ]. The juli app also encouraged participants to use the positive affect journaling function [ 27 ]. The design of the juli app guides participants toward all elements of the app but allows them to flexibly choose where they want to engage.

Attention-Placebo Control

Participants in the control arm had a limited version of the app. The app prompted participants to open it each day and rate how they were feeling on the 5 emoji scale, but they did not have access to any further functionality or intervention. There was no change to usual care in either arm.

Assessment Tools

Participants in both arms completed baseline assessments and follow-up assessments at 2, 4, 6, and 8 weeks remotely from within the app. Assessments included the ACT for asthma symptoms and the 12-Item Short Form Health Survey (SF-12) for health-related quality of life. The ACT is a widely used, self-completed asthma symptom scale that is responsive to change, with scores ranging from 5 to 25 [ 28 ]. A cutoff score of 19 or lower identifies patients with uncontrolled asthma. The SF-12 is a self-reported measure assessing the impact of health on an individual's everyday life. Scores range from 0 to 100, with higher scores indicating a better quality of life [ 29 ].

The total ACT score at 8 weeks was our primary outcome. Secondary outcomes were continuous ACT score at 2, 4, 6, and 8 weeks in a repeated measures analysis using mixed-effect models; remission, defined as a score of >19 at 8 weeks; remission at 2, 4, 6, and 8 weeks in a repeated measures analysis; SF-12 physical and mental component scores at 8 weeks; and SF-12 physical and mental component scores at 4 and 8 weeks in a repeated measures analysis.

We added achieving a minimal clinically important difference (MCID) at 8 weeks (a 3-point increase on the ACT) [ 30 ] and a worsening of asthma symptoms (ie, a decrease in ACT scores from baseline) as post hoc outcomes.
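
As an illustration of how these binary outcomes follow from the ACT definitions above, the sketch below derives remission, MCID, and worsening from a baseline and week-8 score; the function and variable names are ours, not the trial's code.

```python
# Sketch of the binary outcomes defined above, derived from ACT scores:
# remission (week-8 ACT > 19), MCID (improvement of 3 or more points),
# and worsening (any decrease from baseline). Names are illustrative only.
def classify_outcomes(baseline_act: int, week8_act: int) -> dict[str, bool]:
    change = week8_act - baseline_act
    return {
        "remission": week8_act > 19,  # above the uncontrolled-asthma cutoff
        "mcid": change >= 3,          # minimal clinically important difference
        "worsening": change < 0,      # any drop from baseline
    }

print(classify_outcomes(baseline_act=13, week8_act=18))
# {'remission': False, 'mcid': True, 'worsening': False}
```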

Sample Size Estimation

The best MCID estimate for the ACT is between 2.2 and 3.0 (SD 3.1 to 4.7) [ 30 ]. A 2-sided 5% significance level at 80% power requires a total sample size of 146 for an MCID of 3. We aimed to recruit 90 participants per arm, allowing for 23% attrition [ 31 ].
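
As a hedged illustration only (the exact standard deviation assumption behind the quoted total is not restated here), a two-sample power calculation of this kind can be run as follows; the effect size shown, a 2.2-point difference with SD 4.7 taken from the ranges above, yields roughly 73 per arm (146 in total).

```python
# Hedged sketch of a two-sample power calculation consistent with the totals quoted above.
# The inputs shown (a 2.2-point difference with SD 4.7) are assumptions drawn from the
# quoted MCID and SD ranges, not necessarily the authors' exact values.
from math import ceil
from statsmodels.stats.power import TTestIndPower

effect_size = 2.2 / 4.7  # standardized difference (Cohen's d), approximately 0.47
n_per_arm = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
print(ceil(n_per_arm), 2 * ceil(n_per_arm))  # approximately 73 per arm, 146 in total
```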

Statistical Analyses

We preprinted the analysis plan on UCL Discovery [ 32 ] and preregistered the RCT on the ISRCTN registry with a description of the primary and secondary outcomes before the trial started. In reporting and analyzing our data we followed the CONSORT (Consolidated Standards of Reporting Trials) guidelines [ 33 ].

Our primary outcome was the difference in total ACT score at 8 weeks between the control and intervention groups in a per-protocol analysis. We estimated this difference with a linear regression model adjusted for baseline ACT and any imbalanced baseline covariates. We tested how robust the result was to model specification by also using a Poisson model and adjusting for any variables not balanced at baseline. We used logistic regression to calculate the odds ratio (OR) of remission at 8 weeks (ACT>19), achieving the MCID (≥3-point ACT improvement), and worsening of asthma, adjusting for baseline ACT. We completed the repeated measures analyses using linear or logistic mixed-effect models adjusting for ACT at baseline.
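
The per-protocol models described above can be sketched as follows; the dataset file, column names, and 0/1 group coding are assumptions for illustration, and the trial's actual analyses were run in Stata and R.

```python
# Illustrative sketch of the models described above: an ANCOVA-style linear regression
# for the primary outcome and a logistic regression for the MCID outcome, both adjusted
# for baseline ACT. Column names ('act_week8', 'act_baseline', 'group', 'mcid') and the
# group coding (0 = control, 1 = intervention) are assumptions, not the trial's code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("per_protocol.csv")  # hypothetical per-protocol analysis dataset

# Primary outcome: week-8 ACT regressed on treatment group, adjusted for baseline ACT.
primary = smf.ols("act_week8 ~ group + act_baseline", data=df).fit()
print(primary.params["group"], primary.conf_int().loc["group"])

# Secondary outcome: odds of achieving the MCID, adjusted for baseline ACT.
mcid_model = smf.logit("mcid ~ group + act_baseline", data=df).fit()
print(mcid_model.params["group"])  # log-odds ratio; exponentiate to obtain the OR
```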

We repeated the analysis of all outcomes in a modified intention-to-treat (ITT) analysis. This analysis included all randomized participants with a complete baseline and week 2 ACT score, dropping participants who were randomized but never used the app (see Figure 1 ). We imputed the missing ACT scores first using multiple imputation models and then using the last observation carried forward [ 34 ]. The multiple imputation models included predictive mean matching with 5 nearest neighbors and 50 iterations. This method means that only plausible values are imputed and is more robust to model misspecification than fully parametric imputation [ 35 ].
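
The imputation step can be sketched as below, using the MICE implementation in statsmodels with predictive mean matching and 5 nearest neighbours as a stand-in for the trial's R/Stata workflow; the dataset, column names, and the exact mapping of iteration settings are assumptions for illustration.

```python
# Hedged sketch of predictive-mean-matching multiple imputation of missing week-8 ACT
# scores, followed by pooling of the baseline-adjusted outcome model across imputations.
# This is an illustration with assumed column names, not the trial's actual code.
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

df = pd.read_csv("itt_dataset.csv")  # hypothetical modified-ITT dataset
imp = MICEData(df[["act_week8", "act_baseline", "group"]], k_pmm=5)  # PMM, 5 neighbours

# Fit the outcome model on each imputed dataset and pool the results (Rubin's rules).
mice = MICE("act_week8 ~ group + act_baseline", sm.OLS, imp)
pooled = mice.fit(n_burnin=10, n_imputations=50)
print(pooled.summary())
```

A last-observation-carried-forward sensitivity analysis, by contrast, simply replaces a missing week-8 score with the participant's most recent earlier assessment.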

An independent statistician with no conflicts of interest with the company providing juli completed the analyses. All analyses were conducted using Stata (version 17; StataCorp) and R (version 4.3.1 for Windows; R Foundation for Statistical Computing).


Participants

Of 1199 participants who completed the baseline ACT, 411 (34.3%) participants met eligibility criteria. The 411 participants were randomized: 204 (49.6%) to the intervention arm and 207 (50.4%) to the active control arm. Of the 411 participants randomized, 325 (79.1%) were from the United States. Attrition was similar in both arms: 71 (34.8%) out of 204 participants in the intervention arm and 78 (37.7%) out of 207 participants in the active control arm left this study before the week-2 ACT. The remaining 262 participants contributed to our modified ITT analysis ( Figure 1 ). Further attrition occurred between week 2 and week 8: a total of 66 (49.6%) out of 133 remaining participants left the intervention group, and 44 (34.1%) out of 129 remaining participants left the active control group. The remaining 152 participants contributed to our per-protocol analysis ( Figure 1 ). Participants included in the modified ITT and per-protocol analyses were similar in terms of baseline characteristics (see Table 1 and Multimedia Appendix 1 ).

Table 1 footnotes: ACT: Asthma Control Test (possible range 5-25); SF-12 physical and mental health subscales: 12-Item Short Form Health Survey subscales (possible range 0-100).

Per-Protocol Analysis

The 152 participants in the per-protocol analysis were mostly female (n=122, 80.3%), had been diagnosed by a physician more than 5 years earlier (n=115, 75.7%), and had ongoing contact with a doctor about their asthma (n=134, 88.2%; Table 1 ). Participants had a mean baseline ACT score of 12.84 (SD 4.00).

Intervention group participants had a mean ACT score of 17.93 (SD 4.72) compared with 16.24 (SD 5.78) in the control group after 8 weeks (see Figure 2 ). After adjusting for baseline ACT score, the intervention group showed a greater improvement in symptom scores at 8 weeks than those in the control group (adjusted coefficient 1.91, 95% CI 0.31-3.51; P =.02; Table 2 ). After adjusting for imbalanced baseline characteristics, the improvement was 2.01 (95% CI 0.48-3.53; P =.01) points on the ACT. Using Poisson regression rather than linear regression did not alter our results.

The chance of being in remission by week 8 did not differ between the intervention and control groups after accounting for baseline asthma. However, participants in the intervention group were more likely to experience an MCID (adjusted OR 2.38, 95% CI 1.20-4.70; P =.01) than those in the control group. This effect was consistent across the 2-, 4-, 6-, and 8-week assessments ( Table 2 ). The odds of worsening symptoms were similar in both arms (adjusted OR 0.55, 95% CI 0.23-1.32, P =.18). There were no between-group differences in SF-12 mental or physical component scores.


Table 2 footnotes: ACT: Asthma Control Test; coefficient: regression coefficient; OR: odds ratio; SF-12: 12-Item Short Form Health Survey; MCID: minimal clinically important difference; ITT: intention-to-treat.

ITT Analysis

The baseline characteristics of participants in the intervention and control groups were similar to those in the per-protocol analysis. Following multiple imputation of missing outcomes, there was a greater improvement in ACT scores in the intervention group than in the active control group (adjusted coefficient 1.56, 95% CI 0.32-2.79; P =.01; Table 2 ). MCID was more common in the intervention group than the control group (adjusted OR 2.17, 95% CI 1.25-3.78, P =.006). Both arms had similar odds of remission and worsening of symptoms, and similar SF-12 scores. The results from the last observation carried forward analyses were consistent with the per-protocol and multiply imputed results.

Principal Findings

Our primary analysis showed that juli users had a greater improvement in asthma symptoms at 8 weeks compared to an attention-placebo control. The mean improvement in the intervention group was 5.33 (SD 5.33) compared with 3.20 (SD 5.26) in the control group. This total improvement and the difference between arms are consistent with a clinically important effect of juli on asthma control [ 30 ]. Participants assigned to juli had more than twice the odds of a 3-point (MCID) or greater improvement on the ACT. However, the mean ACT score at 8 weeks in both arms fell below the established cut point for “well-controlled” asthma, and there was no difference between arms in terms of odds of remission. The results from our multilevel models covering outcomes from 2 to 8 weeks and the modified ITT analysis with all individuals who were randomized and used the app for at least 2 weeks were consistent with these primary findings.

Participants entering our trial had a mean baseline ACT score of 12.84 (SD 4.00), indicating asthma that was "very poorly controlled" (a mean score of about 13), well below the score of 19 or lower that defines GINA uncontrolled asthma [ 20 ], and most reported having had asthma for several years with routine physician contact, suggesting difficulties with long-term asthma control. The results of this trial indicate that juli can augment the treatment of uncontrolled asthma, as indicated by improved ACT scores over 8 weeks. There is consistent evidence that low ACT scores are associated with rescue medication use, asthma exacerbations, reduced lung function, and reduced asthma-specific quality of life, sleep, work, and productivity [ 6 ]. Increases in ACT scores are associated with decreased health care usage and health care costs [ 6 ].

It is unclear which component of juli resulted in improved ACT scores, but participants likely chose elements that suited them, which is a strength of juli's design, allowing for a degree of self-personalization. Previous research into asthma app functionality has highlighted symptom tracking, clinical questionnaires, goal setting, performance feedback, medication reminders, and tracking as valuable to patients [ 17 ]. Gamification and contingent rewards are also important features incorporated into juli [ 17 ]. Positive affect journaling is a novel, evidence-based addition to juli's functionality [ 36 ]. Other commercially available apps for adult asthma self-management use similar behavior change techniques, health education, symptom recording, environmental data, medication reminders, and data presentation. A recent review identified over 500 asthma-related mobile and inhaler-based monitoring apps [ 37 ]. However, only a small number of these had undergone any degree of scientific evaluation, and positive, fully powered trials were rare [ 37 ]. An additional problem for patients is the high rate of failure of the companies providing these apps, so that only a small number of evidence-based apps are currently available. These include AsthmaMD (AsthmaMD Inc) [ 14 ], Kiss myAsthma (University of Sydney, the Woolcock Institute of Medical Research, and The University of Melbourne) [ 38 ], ASTHMAXcel (ASTHMAXcel) [ 39 ], and eAMS (EAPOC [Evidence at the Point-of-Care]) [ 40 ], each having positive pilot data.

The juli app is available in Android and iOS formats globally. It is a highly accessible platform for people with asthma, and our trial provides methodologically robust evidence of its efficacy in managing asthma. Additional research is required to understand the most cost-effective support procedures to improve adherence to digital self-management tools and how best to integrate them into clinical practice. The majority of the early attrition in our RCT was in participants who never began to use the app. To reduce this, future RCTs of digital interventions may benefit from a run-in period, in which participants become familiarized with the app before randomization [ 41 ].

Strengths and Limitations

There were several strengths and limitations to this RCT. We successfully and remotely recruited, screened, randomized, treated, and assessed participants worldwide. People could easily participate in the trial as our modified version of the juli app allowed consent, randomization, and assessments to occur within the platform. This facilitated a low-cost global recruitment strategy and a pragmatic trial design with good external validity. However, our focus on reducing participant burden limited the types and richness of the data we were able to collect at baseline. For example, we lack relevant information on income, education, and other social determinants of health. Despite this, we did achieve a postrandomization balance in recorded characteristics at baseline, indicating successful randomization. Most of the participants were female, reflecting established differences in sex-specific rates of asthma [ 42 ], health behaviors, and health care use in adults [ 43 ].

Participants completed the ACT, which is a recommended primary end point in clinical trials for asthma [ 6 ]. We also preregistered our primary and secondary outcome measures along with a full analysis plan, which we adhered to. However, we lacked a broader battery of outcome measures that could have further contextualized our findings and identified possible mechanisms of action.

Attrition was greater than we predicted. The attrition in our trial followed a similar pattern to other digital RCTs, including those for asthma apps, where it mostly occurs between randomization and week 2. Dropout rates in previous RCTs have ranged from 20% to 60% [ 10 ]. However, studies recruiting via social media have had low retention at 30 days (<20%) [ 44 ], and a similar, all-remote RCT of mobile health support for asthma had an attrition of 62% at 9 weeks [ 45 ]. To manage attrition, we continued recruiting and randomizing participants until we had a sufficient number completing the week-8 outcome measures to meet our sample size calculation. We examined differences between completers and noncompleters (see Table 1 and Multimedia Appendix 1 ). Based on their baseline characteristics, including asthma severity, those who dropped out of this study were unlikely to differ from those who completed it. Our modified ITT and primary analysis findings were similar, suggesting the intervention would have had a similar effect on those who dropped out. The ITT analysis used 2 imputation methods that make different assumptions [ 34 ], and results were consistent using both methods. Despite this, it is impossible to rule out attrition bias, and our results should be seen as reflecting the effect in people motivated and able to use juli.

Conclusions

In this trial, the juli app decreased asthma symptoms within 8 weeks, with an increased chance of achieving the MCID but no difference in the odds of remission. As such, juli represents a low-risk and low-cost adjunct to the care regimen of individuals with asthma.

Acknowledgments

AK is supported by the UK Research and Innovation (UKRI) Digital Youth program award (MRC project reference MR/W002450/1), which is part of the Arts and Humanities Research Council/Economic and Social Research Council/Medical Research Council (AHRC/ESRC/MRC) Adolescence, Mental Health and the Developing Mind program. JH is supported by the UK Research and Innovation grant MR/V023373/1, the University College London Hospitals NIHR Biomedical Research Centre, and the NIHR North Thames Applied Research Collaboration. This study was funded by juli Health.

Data Availability

All data produced in this study are available upon reasonable request to the authors.

Authors' Contributions

JH conceived this study. JH, AK, BD, BH, and JS designed this study. AK, BD, and BH collected the data. KE analyzed the data. JH wrote the initial draft. All authors edited and approved the final paper.

Conflicts of Interest

AK, BD, BH, JS, and JH are shareholders in juli Health. AK has received consultancy fees from juli Health and Wellcome Trust. BD, BH, JS, and JH are cofounders of juli Health. JH has received consultancy fees from juli Health and Wellcome Trust. KE has no conflicts of interest. The funders played no part in the analysis of the data.

Multimedia Appendix 1: Baseline characteristics of individuals in the modified intention-to-treat analysis.

CONSORT-eHealth checklist (V 1.6.1).

  • Nunes C, Pereira AM, Morais-Almeida M. Asthma costs and social impact. Asthma Res Pract. 2017;3:1. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Song P, Adeloye D, Salim H, Dos Santos JP, Campbell H, Sheikh A, et al. Global, regional, and national prevalence of asthma in 2019: a systematic analysis and modelling study. J Glob Health. 2022;12:04052. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Global strategy for asthma management and prevention. Global Initiative for Asthma. 2023. URL: https://ginasthma.org/2023-gina-main-report/ [accessed 2024-03-29]
  • Yaghoubi M, Adibi A, Safari A, FitzGerald JM, Sadatsafavi M. The projected economic and health burden of uncontrolled asthma in the United States. Am J Respir Crit Care Med. 2019;200(9):1102-1112. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Global Initiative for Asthma Executive Committee. Global strategy for asthma management and prevention (revised 2002). NHLBI/WHO Workshop Report. 2006. URL: https://cir.nii.ac.jp/crid/1572543025541173888 [accessed 2024-03-29]
  • van Dijk BCP, Svedsater H, Heddini A, Nelsen L, Balradj JS, Alleman C. Relationship between the Asthma Control Test (ACT) and other outcomes: a targeted literature review. BMC Pulm Med. 2020;20(1):79. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Larsson K, Kankaanranta H, Janson C, Lehtimäki L, Ställberg B, Løkke A, et al. Bringing asthma care into the twenty-first century. NPJ Prim Care Respir Med. 2020;30(1):25. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chan A, De Simoni A, Wileman V, Holliday L, Newby CJ, Chisari C, et al. Digital interventions to improve adherence to maintenance medication in asthma. Cochrane Database Syst Rev. 2022;6(6):CD013030. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hui CY, Walton R, McKinstry B, Jackson T, Parker R, Pinnock H. The use of mobile applications to support self-management for people with asthma: a systematic review of controlled studies to identify features associated with clinical effectiveness and adherence. J Am Med Inform Assoc. 2017;24(3):619-632. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Unni E, Gabriel S, Ariely R. A review of the use and effectiveness of digital health technologies in patients with asthma. Ann Allergy Asthma Immunol. 2018;121(6):680-691.e1. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ainsworth B, Greenwell K, Stuart B, Raftery J, Mair F, Bruton A, et al. Feasibility trial of a digital self-management intervention 'my breathing matters' to improve asthma-related quality of life for UK primary care patients with asthma. BMJ Open. 2019;9(11):e032465. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ljungberg H, Carleborg A, Gerber H, Öfverström C, Wolodarski J, Menshi F, et al. Clinical effect on uncontrolled asthma using a novel digital automated self-management solution: a physician-blinded randomised controlled crossover trial. Eur Respir J. 2019;54(5):1900983. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Newhouse N, Martin A, Jawad S, Yu LM, Davoudianfar M, Locock L, et al. Randomised feasibility study of a novel experience-based internet intervention to support self-management in chronic asthma. BMJ Open. 2016;6(12):e013401. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Murphy J, McSharry J, Hynes L, Molloy GJ. A smartphone app to support adherence to inhaled corticosteroids in young adults with asthma: multi-methods feasibility study. JMIR Form Res. 2021;5(9):e28784. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Tinschert P, Jakob R, Barata F, Kramer JN, Kowatsch T. The potential of mobile apps for improving asthma self-management: a review of publicly available and well-adopted asthma apps. JMIR mHealth uHealth. 2017;5(8):e113. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ramsey RR, Caromody JK, Voorhees SE, Warning A, Cushing CC, Guilbert TW, et al. A systematic evaluation of asthma management apps examining behavior change techniques. J Allergy Clin Immunol Pract. 2019;7(8):2583-2591. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Camacho-Rivera M, Vo H, Huang X, Lau J, Lawal A, Kawaguchi A. Evaluating asthma mobile apps to improve asthma self-management: user ratings and sentiment analysis of publicly available apps. JMIR mHealth uHealth. 2020;8(10):e15076. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Van Lieshout RJ, Macqueen G. Psychological factors in asthma. Allergy Asthma Clin Immunol. 2008;4(1):12. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gibson PG, Powell H, Wilson A, Abramson MJ, Haywood P, Bauman A, et al. Self-management education and regular practitioner review for adults with asthma. Cochrane Database Syst Rev. 2002;2010(3):CD001117.
  • Thomas M, Kay S, Pike J, Williams A, Rosenzweig JRC, Hillyer EV, et al. The Asthma Control Test (ACT) as a predictor of GINA guideline-defined asthma control: analysis of a multinational cross-sectional survey. Prim Care Respir J. 2009;18(1):41-49. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kandola A, Edwards E, Muller MAE, Dührkoop B, Hein B, Straatman J, et al. Digitally managing depression: a fully remote randomized attention-placebo controlled trial. medRxiv. Preprint posted online on April 11 2023. [ FREE Full text ] [ CrossRef ]
  • Lee SW, Yon DK, James CC, Lee S, Koh HY, Sheen YH, et al. Short-term effects of multiple outdoor environmental factors on risk of asthma exacerbations: age-stratified time-series analysis. J Allergy Clin Immunol. 2019;144(6):1542-1550.e1. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lehrer PM, Irvin CG, Lu SE, Scardella A, Roehmheld-Hamm B, Aviles-Velez M, et al. Heart rate variability biofeedback does not substitute for asthma steroid controller medication. Appl Psychophysiol Biofeedback. 2018;43(1):57-73. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Scheer FAJL, Hilton MF, Evoniuk HL, Shiels SA, Malhotra A, Sugarbaker R, et al. The endogenous circadian system worsens asthma at night independent of sleep and other daily behavioral or environmental cycles. Proc Natl Acad Sci U S A. 2021;118(37):e2018486118. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sánchez-Ramos JL, Pereira-Vega AR, Alvarado-Gómez F, Maldonado-Pérez JA, Svanes C, Gómez-Real F. Risk factors for premenstrual asthma: a systematic review and meta-analysis. Expert Rev Respir Med. 2017;11(1):57-72. [ CrossRef ] [ Medline ]
  • Engelkes M, Janssens HM, de Jongste JC, Sturkenboom MCJM, Verhamme KMC. Medication adherence and the risk of severe asthma exacerbations: a systematic review. Eur Respir J. 2015;45(2):396-407. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Boggiss AL, Consedine NS, Brenton-Peters JM, Hofman PL, Serlachius AS. A systematic review of gratitude interventions: effects on physical health and health behaviors. J Psychosom Res. 2020;135:110165. [ CrossRef ] [ Medline ]
  • Schatz M, Sorkness CA, Li JT, Marcus P, Murray JJ, Nathan RA, et al. Asthma control test: reliability, validity, and responsiveness in patients not previously followed by asthma specialists. J Allergy Clin Immunol. 2006;117(3):549-556. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Jenkinson C, Layte R, Jenkinson D, Lawrence K, Petersen S, Paice C, et al. A shorter form health survey: can the SF-12 replicate results from the SF-36 in longitudinal studies? J Public Health Med. 1997;19(2):179-186. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Schatz M, Kosinski M, Yarlas AS, Hanlon J, Watson ME, Jhingran P. The minimally important difference of the Asthma Control Test. J Allergy Clin Immunol. 2009;124(4):719-723.e1. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McLean G, Murray E, Band R, Moffat KR, Hanlon P, Bruton A, et al. Interactive digital interventions to promote self-management in adults with asthma: systematic review and meta-analysis. BMC Pulm Med. 2016;16(1):83. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hayes J. Randomised control trial of a digital asthma management application: juli: protocol and analysis plan. UCL Discovery. 2021. URL: https://discovery.ucl.ac.uk/id/eprint/10129351/ [accessed 2023-08-22]
  • Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. J Pharmacol Pharmacother. 2010;1(2):100-107. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Cro S, Morris TP, Kenward MG, Carpenter JR. Sensitivity analysis for clinical trials with missing continuous outcome data using controlled multiple imputation: a practical guide. Stat Med. 2020;39(21):2815-2842. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vink G, Frank LE, Pannekoek J, van Buuren S. Predictive mean matching imputation of semicontinuous variables. Stat Neerl. 2014;68(1):61-90. [ CrossRef ]
  • Cook KA, Woessner KM, White AA. Happy asthma: improved asthma control with a gratitude journal. J Allergy Clin Immunol Pract. 2018;6(6):2154-2156. [ CrossRef ] [ Medline ]
  • Himes BE, Leszinsky L, Walsh R, Hepner H, Wu AC. Mobile health and inhaler-based monitoring devices for asthma management. J Allergy Clin Immunol Pract. 2019;7(8):2535-2543. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Davis SR, Peters D, Calvo RA, Sawyer SM, Foster JM, Smith LD. A consumer designed smartphone app for young people with asthma: pilot of engagement and acceptability. J Asthma. 2021;58(2):253-261. [ CrossRef ] [ Medline ]
  • Hsia B, Mowrey W, Keskin T, Wu S, Aita R, Kwak L, et al. Developing and pilot testing ASTHMAXcel, a mobile app for adults with asthma. J Asthma. 2021;58(6):834-847. [ CrossRef ] [ Medline ]
  • Gupta S, Price C, Agarwal G, Chan D, Goel S, Boulet LP, et al. The electronic Asthma Management System (eAMS) improves primary care asthma management. Eur Respir J. 2019;53(4):1802241. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Pratap A, Neto EC, Snyder P, Stepnowsky C, Elhadad N, Grant D, et al. Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants. NPJ Digit Med. 2020;3:21. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chowdhury NU, Guntur VP, Newcomb DC, Wechsler ME. Sex and gender in asthma. Eur Respir Rev. 2021;30(162):210067. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Boulet LP, Lavoie KL, Raherison-Semjen C, Kaplan A, Singh D, Jenkins CR. Addressing sex and gender to improve asthma management. NPJ Prim Care Respir Med. 2022;32(1):56. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hui CY, McKinstry B, Walton R, Pinnock H. Strategies to promote adoption and usage of an application to support asthma self-management: a qualitative observational study. J Innov Health Inform. 2018;25(4):243-253. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Koufopoulos JT, Conner MT, Gardner PH, Kellar I. A web-based and mobile health social support intervention to promote adherence to inhaled asthma medications: randomized controlled trial. J Med Internet Res. 2016;18(6):e122. [ FREE Full text ] [ CrossRef ] [ Medline ]


Edited by T de Azevedo Cardoso; submitted 19.07.23; peer-reviewed by Anonymous; comments to author 17.08.23; revised version received 23.08.23; accepted 26.03.24; published 29.04.24.

©Aaron Kandola, Kyra Edwards, Joris Straatman, Bettina Dührkoop, Bettina Hein, Joseph Hayes. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 29.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

Isr J Health Policy Res

Improving the quality of care in health systems: towards better strategies

Jennifer Dixon

The Health Foundation, 8 Salisbury Square, London, EC4Y 8AP UK

Associated Data

Not applicable.

Improving the quality of health care across a nation is complex and hard. Countries often rely on multiple single national-level programmes to make progress, but the key is to use a framework to develop a balanced overall strategy and to evaluate its main elements continuously and over time. Achieving that requires a critical mass of leaders who collectively can see the bigger picture now, envision a roadmap for the future to chart an intelligent course, and course correct regularly. This is a long-term agenda requiring commitment, careful stewardship, different perspectives, trust, and the building of knowledge and experience over time. It is also almost completely at odds with much current policymaking, which is short term, reactive and demands hard results. Many countries are making progress. But the rapid introduction of new types of care during the COVID-19 pandemic, such as online and digital care, the use of new technologies which could soon revolutionise the way care is delivered, experienced and evaluated, and the huge pressures on health care spending in future mean we will have to do better. For the Israeli health system, the recent IJHPR article by Dreiher et al. will help, but it will be important, in the future, to analyse how Israel measures up against a framework such as the one outlined in this commentary. This ideally would be supplemented with a survey of key leaders for their assessment; both would be a regular (say 5-yearly) exercise and would help inform future strategies.

Introduction

Most governments in developed countries want to ensure their populations have accessible, high quality, affordable health care. Building blocks for achieving this objective include providing the population with some form of universal coverage of comprehensive benefits, investing enough to allow a decent level of quality of care to be provided, and regulating care providers, in particular the medical profession.

As good quality care does not necessarily flow from these basic ingredients, most countries have developed approaches to try to ensure it. Put simply, at one end of the spectrum are those that largely seek to mitigate the worst safety risks to health, perhaps in response to significant and well-publicised lapses in quality. In the middle is an extensive range of initiatives that each seek to improve care in specific high-priority areas. And at the other end are countries with a comprehensive and coherent strategy composed of multiple approaches. Many countries aspire to the latter but sit in the middle part of the spectrum. Designing a comprehensive strategy is difficult, but delivering it is more so, given the historical context, assets and power structures within countries that can make or break progress. Some countries lack the ability to make needed changes because power over different levers is widely distributed across different parties or different levels of the system, so agreeing and implementing a national strategy is possible but considerably more difficult.

It is refreshing when occasionally, as in Dreiher et al's report with respect to the health care system in Israel [ 1 ], there is an attempt to lay out the key approaches used in a particular health care system to improve quality and assess progress. The feat is exceptionally challenging because quality of care is a slippery, multifaceted concept and difficult to measure. And initiatives cover a multitude of dimensions, from regulation to measurement to financial incentives to public reporting to patient choice and more. How individual initiatives are meant to impact on quality may not be particularly clear, still less how they might interact with others. Some initiatives may have indirect and lagged effects and may not be seen as quality initiatives at all. While direct and significant national initiatives may be well described, how they stack up as a whole is often not.

The easier job is to compare with other countries – is one country’s set of initiatives missing anything big being tried somewhere else? Are there glaring differences in outcomes? But the more difficult task is to assess whether, taken as a whole, policies in a country represent a coherent and balanced strategy. This is a tall order for any group of national leaders to assess, be they in a ministry, university, a quality institute or professionals in the health care system itself. And yet it is important to try, and keep trying, because doing so gives the best chance to make progress.

Concepts to consider in developing a coherent strategy

One way forward is to identify some basic concepts within a strategy, before categorising policies under them to assess balance, identify gaps, and point to where efforts should best be directed. Here I draw heavily on the work by Sutherland and Leatherman [ 2 , 3 ], Molloy et al. [ 4 ], Darzi [ 5 ] and others for the NHS in England. As in other countries, in England there have been several attempts to produce an overall strategy for quality of care in the National Health Service, seen most recently in the policy High Quality Care for All, led by Lord Ara Darzi and published in 2008, which attempted to put quality at the centre of policymaking [ 5 ].

The obvious first step is to be clear about what is meant by quality of care and which are the objectives to achieve in any strategy. Many countries use the Institute of Medicine’s (IOM’s) definition of six domains: safety, effectiveness, patient-centredness, timeliness, efficiency and equity (equal access for equal need) [ 6 ].

The second is to consider in a strategy the balance of the three core functions for achieving high quality in any industry, as outlined in the Juran trilogy: planning, improvement and control. In the context of health care this means effective strategic planning for quality at national level; support for organisations and professionals to improve care (for example using quality improvement techniques [ 7 ]); and control mechanisms to ensure progress and mitigate risks (including regulation and inspection, and also accountability through, for example, management and the use of metrics). These three core functions are clearly linked, and Juran thought it important not to rely on any single one. For example, a country relying heavily on regulation and inspection might drive out professional motivation to improve care, or perversely encourage behaviour which may reduce quality.

The third is to use a framework to classify and organise quality-related activity to spot potential gaps or weaknesses in a national strategy, as modified [ 4 ] from High Quality Care for All and shown in Table  1 .

Table 1. A practical strategic framework for improving quality. Source: Molloy A, Martin S, Gardner T, Leatherman S. A Clear Road Ahead. The Health Foundation, 2016

The fourth is, in any strategy, to pay attention to building capacity to improve quality at different levels in a country, for example at different geopolitical or administrative levels or in different institutions (providers or professional membership institutions, for example). The aim here is to ensure that capability to improve quality, and 'ownership' of it, is developed at each level, and that there is synergy between activities across levels.

In a thoughtful essay comparing quality strategies in the UK and US, Ferlie and Shortell [ 8 ] describe four levels as being those operating at: individual level (such as staff education); group or team level (such as team development and pathway redesign); organisation level (such as approach to quality improvement and assurance); and larger system level (such as regulation, and public reporting of performance and outcomes).

Molloy et al [ 4 ] have a slightly different approach to categorising the multiple levels where action is needed to improve quality, illustrated in the pyramid in Fig.  1 .

Fig. 1. Multi-level model for building capacity for a national quality strategy. Source: Molloy A, Martin S, Gardner T, Leatherman S. A Clear Road Ahead. The Health Foundation, 2016

In the pyramid the four levels refer broadly to the following:

  • national – policy formulation, resourcing, infrastructure and accountability to the public
  • regional/local – translating national policy into the local context, macro-management and monitoring
  • institutional – good governance, competent operational management and continuous quality improvement
  • individual – this is the level of encounter between patients and health professionals where the key attributes of quality must be actualised through individual behaviours.

The fifth concept (adapted from Leatherman and Sutherland) is to consider initiatives according to who or what is their intended target: people individually or collectively involved in health care delivery, or organisations at national, regional and local level that form part of the health system. Given the myriad of initiatives, some only indirectly targeting quality, it is important for any strategy to define the scope of what might be included. For example, to what extent are, say, criteria for capital investment important for improving quality, or initiatives to improve the coding of data used to measure quality?

Putting it all together

In 2016, a comprehensive independent assessment of the main approaches to improving the quality of care in the NHS in England, using these five concepts, was published [ 4 ]. In brief, the findings revealed a very large number of national initiatives aimed directly at improving quality (179 announced by the government alone over the previous 4 years), many in response to hospital-based lapses in care and heavily focused on patient safety (70% of initiatives). Given this, many were skewed towards Juran's 'control' (regulation and reporting metrics, such as the introduction of national chief inspectors of care and a publicly reported system for rating the quality of primary, social and hospital care) rather than 'improvement' (supporting clinicians, for example by developing quality improvement skills). The government initiatives, and many more (for example coming from national public agencies), were aimed at all levels of the pyramid shown in Fig.  1 .

More initiatives were targeted at the 'system' and at patients and the public, and far fewer at the clinical staff delivering care, yet an accompanying survey of national leaders showed that workforce-focused initiatives were thought to be among the 'best bets' for protecting and enhancing quality. A significant step forward was the introduction of relicensure of physicians (known in the UK as 'revalidation') every 5 years by the General Medical Council in 2012 ( https://www.gmc-uk.org/registration-and-licensing/managing-your-registration/revalidation ), linked to a formal annual appraisal.

The evidence supporting the design and introduction of initiatives was frequently weak or absent, and the extent of consensus among leaders was not always clear when evidence ran short. There were examples, however, where the government made a conscious effort to set out the rationale for a policy and to gather and challenge key stakeholder views in order to find a way forward when evidence was incomplete, for example in the work on whether to introduce a controversial national system of ratings of providers in health and social care [ 9 ].

Implementation and evaluation

Clearly, whatever strategy is ultimately designed, what is implementable, and when, involves a complex set of choices, depending in part on context (what is possible) as well as on a selection of priorities. But given the dynamic interplay between different elements of a quality strategy when being implemented, and the length of time needed to show impact, monitoring progress and formal evaluation of impact are key. The assessment of quality initiatives in the NHS in England showed that accountability for their implementation (as opposed to accountability for other managerial and clinical must-dos) and monitoring was not strong. Many initiatives were introduced at different times and overlapped, in what has been described elsewhere as a 'policy thicket' [ 10 ]. Overall, implementation was found to be monitored for one-third of initiatives, although only the biggest, highest-profile national initiatives were both monitored and formally evaluated. The long lag time in implementation meant that new initiatives were often overlain on older ones before their effect could be seen. The fundamental point here, too often repeated, is the importance of monitoring implementation, the need for a stronger system of independent evaluation, and the need to design a system in which enough people see this information to modify the course of implementation or the overall strategy.

Building a long-term commitment

Clearly, achieving high quality care is highly complex, and a moving target. Factors that will help progress include clarity and balance in the elements of a multi-level strategy, wise choice of do-able initiatives, investment, competent and well-monitored implementation, solid evaluation and patience in the pace of progress. As Ferlie and Shortell noted in their analysis of quality strategies in the UK and US, efforts to date have relied on 'relatively narrow single-level programmatic strategies' and 'well intentioned efforts will fail to realise their potential unless both policymakers and practitioners consider and implement a more comprehensive multi-level approach to change' [ 8 ].

Achieving that surely requires having a critical mass of leaders who collectively can see the bigger picture now, envision a roadmap for the future to chart a balanced intelligent course, and course correct regularly. Dreiher et al’s contribution will help, but it will be important, in the future, to analyse how Israel measures up on a systematic framework such as the one outlined above. This ideally would be supplemented with a survey of key leaders for their assessment, and both would be a regular (say 5 yearly) exercise and would help inform future strategies.

It is worth emphasising that, as quality of care will never be fully measurable, particularly the more intangible human aspects of care like empathy, kindness and understanding, any strategy must also nurture professionals' core values to do what is in the best interests of their patients. This is a long-term agenda in and of itself, requiring commitment, careful stewardship, different perspectives, trust, and the building of knowledge and experience over time. It is also almost completely at odds with much current policymaking, which is short term, reactive and demands hard results.

Countries have made, and continue to make, huge progress, as clearly demonstrated by Dreiher et al in Israel [ 1 ], in England [ 2 , 4 ] and internationally by the OECD [ 11 ] among others. But the question on the table now is: can we move faster? The agenda is more urgent given the rapid introduction of new types of care during the COVID-19 pandemic, such as online and digital care, the crowding on the horizon of new technologies which could soon revolutionise the way care is delivered, experienced and evaluated, the huge pressures on spending on health care by governments, employers and individuals, and the changing burden of risk and ill health in the population. We will have to do better.

Dr. Jennifer Dixon is the chief executive of The Health Foundation, an independent philanthropic foundation based in London. She originally trained in medicine, is a specialist in public health and health policy research, and has published widely.

Acknowledgements

Author’s contributions

The author(s) read and approved the final manuscript.

Availability of data and materials

Ethics approval and consent to participate, consent for publication

Fully given by the author.

Competing interests

None. The Health Foundation funded some of the research cited in the paper (the paper by Molloy et al).

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
