A literature review about usability evaluation methods for e-learning platforms


  • 1 Department of Production and Systems Engineering, University of Minho, Guimarães, Portugal. [email protected]
  • PMID: 22316857
  • DOI: 10.3233/WOR-2012-0281-1038

The usability analysis of information systems has been the target of several research studies over the past thirty years. These studies have highlighted a great diversity of points of view, involving researchers from different scientific areas such as Ergonomics, Computer Science, Design, and Education. Within the domain of information ergonomics, the study of tools and methods used for usability evaluation of E-learning shows that E-learning systems are in continuous and dynamic evolution, in many different contexts, both academic and corporate. These systems, also known as LMS (Learning Management Systems), can be classified according to their educational goals and their technological features. In these systems, however, usability issues concern the relationship and interactions between the user and the system in the user's context. This review is a synthesis of a research project on Information Ergonomics and embraces three dimensions, namely the methods, models, and frameworks that have been applied to evaluate LMS. The study also covers the main usability criteria and heuristics used. The results show a notable change in the paradigms of usability, which makes it possible to discuss the studies carried out by different researchers focused on ergonomic usability principles for E-learning.

Publication types

  • Computer-Assisted Instruction / standards*
  • Evaluation Studies as Topic*
  • User-Computer Interface*



A Review of Usability Evaluation Methods and Their Use for Testing eHealth HIV Interventions.

Author information

  • Davis R 1
  • Gardner J 2
  • Schnall R 3

ORCIDs linked to this article

  • Schnall R | 0000-0003-2184-4045

Current HIV/AIDS Reports, 01 Jun 2020, 17(3): 203-218. https://doi.org/10.1007/s11904-020-00493-3. PMID: 32390078. PMCID: PMC7367140


Rindcy Davis

1 Gertrude H. Sergievsky Center, College of Physicians and Surgeons, Columbia University Medical Center, 630 W 168th Street, New York, NY 10032 USA. [email protected]

Jessica Gardner

2 Department of Epidemiology, Mailman School of Public Health, Columbia University Medical Center, 630 W 168th Street, New York, NY 10032 USA.

Rebecca Schnall

3 School of Nursing, Columbia University, New York, NY 10032 USA.

Purpose of review:

To provide a comprehensive review of usability testing of eHealth interventions for HIV.

Recent Findings:

We identified 28 articles that assessed the usability of eHealth interventions for HIV, most of which were published within the past 3 years. The majority of the eHealth interventions for HIV were developed on a mobile platform and focused on HIV prevention as the intended health outcome. Usability evaluation methods included: eye-tracking, questionnaires, semi-structured interviews, contextual interviews, think-aloud protocols, cognitive walkthroughs, heuristic evaluations and expert reviews, focus groups, and scenarios.

Summary:

A wide variety of methods are available to evaluate the usability of eHealth interventions. Employing multiple methods may provide a more comprehensive assessment of the usability of eHealth interventions than any single evaluation method alone.


Approximately two-thirds of the population worldwide are connected by mobile devices, and more than three billion people are smartphone users [ 1 , 2 ]. Even in limited-resource settings, there is growing use of the internet and increasing access to internet-capable technologies such as computers, tablets, and smartphones [ 3 , 4 ]. eHealth takes advantage of this proliferation of technology users by delivering health information and interventions through information and communication technologies. eHealth interventions can be delivered through a variety of technology platforms, including mobile phones (mHealth), internet-based websites, tablets, electronic devices, and desktop computers [ 5 ]. With substantially rising numbers of internet and electronic device users, eHealth can reach patients across the HIV care cascade, from HIV prevention and testing to medication adherence for people living with HIV (PLWH) [ 6 – 11 ].

While there have been many promising eHealth HIV interventions, many of them have no reports of being developed through a rigorous design process or of being rigorously evaluated through usability testing prior to deployment. Lack of formative evaluation may result in a failure to achieve usability, which is broadly defined as 'the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use' [ 12 ]. The core metrics of effectiveness, efficiency, and satisfaction can be measured to determine the usability of a health information technology intervention [ 13 , 14 ]. In sum, usability is a critical determinant of successful use and implementation of eHealth interventions [ 15 ]. Without evidence of usability, an eHealth intervention may result in frustrated users, reduced efficiency, increased costs, interruptions in workflow, and increases in healthcare errors, all of which can hinder adoption [ 16 ]. Given the importance of assessing the usability of health information technology interventions and the growing development of HIV-related eHealth interventions, this paper presents a review of the published literature on usability evaluations conducted during the development of eHealth HIV interventions.

Our team conducted a comprehensive search of usability evaluations of eHealth HIV interventions using PubMed, Embase, CINAHL, IEEE, Web of Science, and Google Scholar (first 10 pages of results). The search was limited to English-language articles published from January 2005 to September 2019. An informationist assisted with tailoring search strategies for the online reference databases. The final list of search terms included: eHealth, mHealth, HIV, telemedicine, intervention or implementation science, user testing, user-centered, effectiveness, ease of use, performance speed, error prevention, heuristic, and usability. We included studies that measured and reported usability evaluation methods for eHealth HIV-related interventions. We excluded studies based on the following criteria: (1) did not focus on an eHealth intervention; (2) did not focus on HIV; (3) focused on an eHealth HIV intervention without providing information on the usability of the intervention; (4) articles that were systematic reviews, conference posters, or presentations; (5) articles not published in English.

Two authors (RD, JG) divided the online reference databases and conducted the initial title/article review. All articles recommended for full text review were recorded in an MS Excel spreadsheet. The two investigators then independently reviewed 128 full texts of all selected articles from the title/article review (see Figure 1 ). Any discrepancies regarding article inclusion for the review were discussed by the two investigators until consensus was reached.

Figure 1. Flowchart of article selection

We located 28 studies that included usability evaluations of eHealth HIV interventions (see Table 1), the majority of which (71%, n=20) were published within the past 3 years. More than half of the studies (57%, n=16) used more than one evaluation method to assess the usability of the eHealth interventions. Platforms for the delivery of the interventions varied: mobile applications (68%, n=19), websites (25%, n=7), and desktop-based programs (7%, n=2). Two included articles evaluated the mVIP app, using different usability methods in each article [ 17 , 18 ], and two included articles evaluated MyPEEPS Mobile, likewise using different usability evaluation methods in each article [ 19 , 20 ].

Table 1. Studies evaluating eHealth HIV interventions using usability evaluation methods

ART: Antiretroviral Therapy; CSUQ: Computer System Usability Questionnaire; Health-ITUES: Health Information Technology (IT) Usability Evaluation Scale; HIV: Human Immunodeficiency Virus; MSM: Men who have Sex with Men; PLWH: People living with HIV; PMTCT: Prevention of mother-to-child transmission; PSSUQ: Post-Study System Usability Questionnaire; SMS: Short message service; STI: Sexually Transmitted Infections; SUS: System Usability Scale; WAMMI: Website Analysis and MeasureMent Inventory; YMSM: Young Men who have Sex with Men

The target populations for the eHealth interventions included healthy youth participants (39%, n=11), people living with HIV (39%, n=11), healthy adults, including men who have sex with men (MSM) (21%, n=6), and health professionals (7%, n=2). The eHealth interventions also focused on a variety of topics, including HIV prevention (54%, n=15), ART medication adherence (21%, n=6), and health management for PLWH (21%, n=6).

Our findings are organized by usability evaluation methods. The methodological approach is detailed in Table 2 . The narrative describes how each study operationalized the usability evaluation method.

Table 2. Overview of Usability Evaluation Methods


Eye-Tracking

Eye-tracking was utilized by Cho and colleagues to evaluate the usability of mVIP, a health management mobile app. Gaze plots illustrating eye movements of participants were reviewed along with notes of critical incidents during task completion. Participants were asked to watch the recording of their task performance and verbalize their thoughts retrospectively. Participant difficulty with a task in the app was characterized by long eye fixations or distracted eye movements. For further insight into these unusual eye movements, a retrospective think-aloud protocol was conducted among participants. This combination of methods allowed Cho and colleagues to decipher eye movements and further understand participants' expectations of where information should be in the app. For example, one identified usability problem was the placement of the 'Continue' button when the app was displayed on a mobile device. Due to the small screen, participants had to scroll down to find the button. To resolve the placement issue, Cho and colleagues transitioned the mVIP app from a native app to a mobile-responsive web app [ 20 ].

In another study by Cho and colleagues, they evaluated the MyPEEPS Mobile intervention using eye-tracking and a retrospective think-aloud. The combination of eye-tracking and a retrospective think-aloud allowed for the identification of critical errors with the system and the time spent on each task. By analyzing participant fixations on the problem areas of the app, the study team was able to identify critical usability problems [ 19 ].
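The fixation-based difficulty signal described above can be sketched in a few lines of code. This is a minimal illustration, not the analysis pipeline used by Cho and colleagues: the record format, element names, and the 1,500 ms threshold are all invented assumptions.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    element: str        # UI element under the gaze point (hypothetical names)
    duration_ms: float  # fixation duration in milliseconds

def flag_difficult_elements(fixations, threshold_ms=1500):
    """Return UI elements whose cumulative fixation time exceeds the
    threshold, a rough proxy for participant difficulty."""
    totals = {}
    for f in fixations:
        totals[f.element] = totals.get(f.element, 0.0) + f.duration_ms
    flagged = [e for e, t in totals.items() if t > threshold_ms]
    return sorted(flagged, key=lambda e: -totals[e])

# Hypothetical gaze data: repeated long looks at a hard-to-find button
gaze = [Fixation("continue_button", 900),
        Fixation("continue_button", 800),
        Fixation("nav_menu", 300)]
print(flag_difficult_elements(gaze))  # ['continue_button']
```

In practice, the flagged elements would be the starting point for the retrospective think-aloud: participants are shown the recording and asked to explain what they were looking for at those moments.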


Questionnaires

The majority of studies (68%, n=19) included questionnaires as part of their usability evaluation of the eHealth intervention [ 26 , 27 , 29 , 35 , 36 , 39 , 40 , 42 , 43 , 21 , 20 , 10 , 38 , 41 , 24 , 30 , 33 , 34 , 19 ]. The complete list of validated questionnaires is described in Table 3. Among the studies that utilized only a single usability assessment (32%, n=9), a questionnaire was always used [ 26 , 27 , 29 , 35 , 36 , 39 , 40 , 42 , 43 ]. Many different types of questionnaires were used, including the Health Information Technology Usability Evaluation Scale (Health-ITUES) [ 21 , 20 , 10 , 38 , 19 ], the Computer System Usability Questionnaire (CSUQ) [ 41 ], the Website Analysis and MeasureMent Inventory (WAMMI) [ 24 ], the System Usability Scale (SUS) [ 39 , 30 , 40 ], and the Post-Study System Usability Questionnaire (PSSUQ) [ 38 , 33 , 34 , 20 ]. Notably, a study by Stonbraker and colleagues used two different surveys, Health-ITUES and PSSUQ, among end-users in combination with heuristic evaluation, think-aloud, and scenario methods to evaluate the Video Information Provider-HIV-associated non-AIDS (VIP-HANA) app. This approach provided feedback on overall usability, and the end-users rated the app with high usability scores on both questionnaires [ 38 ].

Table 3. Types of validated questionnaires commonly used to evaluate usability of eHealth interventions
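As a concrete example of how such questionnaires are scored, the System Usability Scale uses a fixed arithmetic rule: odd-numbered items contribute the response minus one, even-numbered items contribute five minus the response, and the sum is scaled by 2.5 to a 0-100 range. A minimal sketch of that standard scoring:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    1-5 Likert responses, following the standard SUS scoring rule."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All odd items answered 5 (agree), all even items 1 (disagree): best case
print(sus_score([5, 1] * 5))  # 100.0
```

Scores from a questionnaire like this give a single usability index per participant, which is why (as discussed below) they are usually paired with qualitative methods that can explain what drives a low score.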

Semi-structured Interviews

Semi-structured interviews were conducted by 18% (n=5) of the included studies [ 22 , 24 , 31 , 25 , 42 ]. Interviews were conducted to evaluate a variety of technological platforms, including mobile applications, websites, and a desktop-based curriculum. This usability evaluation method was primarily conducted with end-users [ 22 , 24 , 31 , 42 ]. One unique study by Musiimenta and colleagues conducted semi-structured interviews to evaluate an SMS reminder intervention with both study participants and social supporters encouraging ART adherence [ 25 ]. This method provided in-depth details of an end-user's experience with the intervention. One participant reported feeling motivated when getting text message notifications: "I also like it [SMS notification] because when I have many people reminding me it gives great strength. My sister calls me when she receives an SMS reminder and asks why I didn't swallow." Findings from the semi-structured interviews led to the conclusion that the eHealth intervention was generally acceptable and feasible in a resource-limited country.

Contextual Interviews

Contextual interviews were conducted in only three studies [ 44 , 37 , 22 ], in all cases with end-users. Two of these studies were conducted in a low-resource setting [ 44 , 22 ]. Coppock and colleagues conducted two rounds of contextual interviews to observe pharmacists using a mobile application during clinical sessions [ 22 ]. Ybarra and colleagues used the method to evaluate a website targeting risk reduction among adolescents [ 44 ]. Skeels and colleagues observed end-users working through CARE+, a tablet-based application [ 37 ].


Think-Aloud Protocol

The think-aloud method was used by 43% of studies (n=12) [ 38 , 21 , 20 , 18 , 24 , 25 , 28 , 33 , 34 , 41 , 44 , 19 ]. This method was used to evaluate the usability of websites and mobile applications. The study by Beauchemin and colleagues used the think-aloud method to evaluate a mobile app linked to an electronic pill bottle [ 21 ]. All studies conducted the think-aloud protocol among end-users, and five studies conducted the method with both end-users and experts.

Cognitive Walkthrough

One study by Beauchemin and colleagues conducted a cognitive walkthrough in combination with a think-aloud and heuristic evaluation to assess the usability of the WiseApp, a health management mobile application linked to an electronic pill bottle [ 21 ]. There were 31 tasks in total, and 61% were easy to complete, requiring fewer than two steps on average. The more difficult tasks were related to finding a specific item within the mobile application. For example, participants reported that the "To-Do" list was hard to locate on the home screen. This feedback was incorporated as iterative updates to the app and onboarding procedures for future end-users.
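A cognitive-walkthrough tally of this kind can be summarized with a short helper. The task names and step counts below are invented for illustration; the "easy" cutoff mirrors the fewer-than-two-steps criterion reported for WiseApp.

```python
def summarize_walkthrough(task_steps, easy_threshold=2):
    """Given {task name: steps to complete}, return the share of tasks
    completed in fewer steps than the threshold, plus those task names."""
    easy = [t for t, steps in task_steps.items() if steps < easy_threshold]
    return len(easy) / len(task_steps), easy

# Hypothetical walkthrough results for three tasks
tasks = {"open to-do list": 3, "log dose": 1, "view reminder": 1}
share, easy = summarize_walkthrough(tasks)
print(f"{share:.0%} of tasks were easy: {easy}")
```

Tasks that fall outside the "easy" set are the ones worth revisiting in design iterations, as Beauchemin and colleagues did with the hard-to-locate "To-Do" list.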

Heuristic Evaluation and Expert Reviews

Multiple studies (21%, n=6) conducted a heuristic evaluation with experts in combination with other usability evaluation methods [ 21 , 18 , 33 , 34 , 38 , 20 ]. All of these studies used heuristic evaluation to measure the usability of mobile applications, and all paired it with a think-aloud protocol in which five experts completed tasks using the eHealth program. The results from this method included feedback focused mainly on interface design, navigability, and functionality issues, along with expertise-based recommendations to resolve those issues.
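Heuristic findings are commonly consolidated by averaging severity ratings across the experts, for example on Nielsen's 0-4 severity scale. The sketch below shows that aggregation step only; the heuristic names and ratings are invented, not data from the reviewed studies.

```python
from collections import defaultdict
from statistics import mean

def aggregate_severity(ratings):
    """Average each heuristic's severity (0-4 scale) across experts and
    return heuristics sorted from most to least severe."""
    by_heuristic = defaultdict(list)
    for expert_ratings in ratings:
        for heuristic, severity in expert_ratings.items():
            by_heuristic[heuristic].append(severity)
    return sorted(((h, mean(s)) for h, s in by_heuristic.items()),
                  key=lambda hs: -hs[1])

# Hypothetical ratings from two experts
experts = [
    {"visibility of status": 3, "consistency": 1},
    {"visibility of status": 4, "consistency": 2},
]
print(aggregate_severity(experts))
```

Sorting by mean severity gives the design team a prioritized fix list, which matches how the reviewed studies translated expert feedback into interface revisions.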

Focus groups

Focus groups were conducted by 18% (n=5) of all included studies [ 17 , 23 , 32 , 28 , 44 ]. Four studies evaluated mobile applications [ 17 , 23 , 32 , 28 ] and one study evaluated a website [ 44 ]. The studies conducted between two and four focus groups, each with 5 to 12 participants. Sabben and colleagues conducted focus groups with participants and their parents to evaluate a risk-reduction mobile application for healthy adolescents, dividing parents into groups by the age of their children [ 32 ]. The results from this method revealed positive feedback and acceptability among participants and an absence of parental safety concerns about the application.

Scenarios

Five studies used scenarios to evaluate the usability of mobile applications with end-users and experts [ 20 , 18 , 33 , 34 , 38 ]. These studies employed case scenarios that reflected the main functions of the system and used the same scenarios for both end-users and experts. This evaluation method was consistently used in the context of heuristic evaluation and think-aloud methods to obtain qualitative data on usability from experts and end-users. It would not be possible to execute with methods that do not involve direct interaction with the system, such as a questionnaire or a focus group discussion taking place after system use.

This paper provides a broad overview of some of the most frequently employed usability evaluation methods. This summary offers a compilation of methods that others can consider in the future development of eHealth interventions. Most of the studies used multiple usability evaluation methods for the evaluation of eHealth HIV interventions. Questionnaires were the most frequently used method of usability evaluation, and in cases where only one usability evaluation was conducted, a questionnaire was the preferred method.

Questionnaires can be quick and cost-effective tools to quantitatively assess one or two aspects of usability and are therefore frequently used. However, they cannot provide a comprehensive evaluation of usability issues; they simply provide a score indicating the level of usability of an eHealth tool. Therefore, both quantitative and qualitative methods are recommended for evaluating complex interventions, such as eHealth interventions targeting HIV [ 55 ]. Questionnaires should be used in conjunction with other validated methods, such as a cognitive walkthrough, as part of a multistep process to evaluate usability. If questionnaires are used alone, overall usability can be determined, but it is nearly impossible to identify the issues in the technology that need to be changed in response.

The cognitive walkthrough was an underutilized evaluation method among the studies in our review. This method specifically evaluates end-user learnability and ease of use through a series of tasks performed using the system, and it can pinpoint challenging tasks or complicated features of an eHealth intervention [ 21 , 50 , 49 ].

Future research should consider incorporating multiple methods as part of their overall usability evaluation of eHealth interventions.

When using multiple usability evaluation methods, there is potential to get varying results. One study by Beauchemin and colleagues administered the Health-ITUES questionnaire to both end-users and experts to evaluate WiseApp, a mobile application linked to an electronic pill bottle [ 21 ]. The experts gave the eHealth intervention a lower score than end-users, emphasizing design issues [ 20 ]. The authors then used a think-aloud method and cognitive walkthrough for further clarification of the cited issues [ 21 ].

Another study by Stonbraker and colleagues assessed the usability of the VIP-HANA app, a mobile application targeting symptom management for PLWH, with both end-users and experts. The researchers used multiple usability evaluation methods, including heuristic evaluation, think-aloud, scenarios, and two questionnaires. The heuristic evaluation with experts indicated design issues; the areas needing the most improvement were navigation between sections in the app and the addition of a help feature. In contrast, end-users did not comment on the lack of a back button; rather, they indicated that app features needed to be more clearly marked, without specifying a need for a help feature. The combination of multiple usability methods allowed for detailed identification of usability concerns, and the researchers were able to refine the app to make it more usable while reconciling the experts' and end-users' feedback [ 33 ].

Several limitations should be considered when reading this review. Measures were taken to build comprehensive search strategies and were created under the guidance of an informationist. However, the results from the search strategies may not include all eligible studies. In addition, publication bias should be considered when conducting a systematic review as we may have missed relevant unpublished work.


In summary, this paper provides a review of the usability evaluation methods employed in the assessment of eHealth HIV interventions. eHealth is a growing platform for the delivery of HIV interventions, and there is a need to critically evaluate the usability of these tools before deployment. Each usability evaluation method has its own set of advantages and disadvantages. Cognitive walkthroughs and eye-tracking are underutilized usability evaluation methods; both are useful approaches that provide detailed information on usability violations and guidance on key factors that need to be fixed to ensure the efficacious use of eHealth tools. Further, given the limitations of any single usability evaluation method, technology developers will likely need to employ multiple methods to gain a comprehensive understanding of the usability of an eHealth tool.

Human and Animal Rights

All reported studies/experiments with human or animal subjects performed by authors have been previously published and complied with all applicable ethical standards (including Helsinki declaration and its amendments, institutional/national research committee standards, and international/national/institutional guidelines).


We would like to acknowledge the contributions of John Usseglio, an Informationist at Columbia University Irving Medical Center. Mr. Usseglio provided his expertise on constructing comprehensive search strategies for this review. RD is funded by the Mervyn W. Susser Post-doctoral Fellowship Program at the Gertrude H. Sergievsky Center. RS is supported by the National Institute of Nursing Research of the National Institutes of Health under award number K24NR018621. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Conflict of Interest

Rindcy Davis, Jessica Gardner and Rebecca Schnall declare that they have no conflict of interest.

Publisher's Disclaimer: This Author Accepted Manuscript is a PDF file of an unedited peer-reviewed manuscript that has been accepted for publication but has not been copyedited or corrected. The official version of record published in the journal is kept up to date and may therefore differ from this version.


Sustainability and Usability Evaluation of E-Commerce Portals

  • Conference paper
  • First Online: 11 May 2024


  • José A. García-Berná 14 ,
  • Sofia Ouhbi 15 ,
  • Juan M. Carrillo de Gea 14 ,
  • Joaquín Nicolás 14 &
  • José L. Fernández-Alemán 14  

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 985)

Included in the following conference series:

  • World Conference on Information Systems and Technologies


Information and communication technologies (ICT) are essential to achieving energy sustainability objectives. With their substantial daily traffic, e-commerce websites are especially important. This study looked into the connection between the most popular e-commerce websites' sustainability and usability. Usability was evaluated employing accepted web-based user experience criteria. Furthermore, a framework for assessing website elements that support sustainability was put forth. Through an analysis of the features of four tools that quantify sustainability-related factors, such as page load time, browser cache usage, JavaScript usage, HTTP request volume, and use of green servers, we gathered a total of 39 sustainability criteria for software. Finally, we evaluated whether usability and sustainability were correlated. According to our research, there was no significant correlation between usability and sustainability. Still, the framework this paper develops is a useful tool for sustainability evaluations.
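A correlation check of the kind the abstract describes could use a rank correlation such as Spearman's rho; the abstract does not name the statistic actually used, so both the choice of statistic and the per-site scores below are illustrative assumptions.

```python
def rankdata(xs):
    """1-based ranks of the values in xs, with ties averaged."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-site usability and sustainability scores
usability = [72, 65, 80, 58, 90]
sustainability = [41, 55, 38, 60, 35]
print(round(spearman(usability, sustainability), 2))  # -1.0
```

A rho near zero, with a non-significant p-value from an accompanying test, would correspond to the "no significant correlation" finding reported in the abstract.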




This research is part of the OASSIS-UMU project (PID2021-122554OB-C32) and the Network of Excellence in Software Quality and Sustainability (RED2022-134656-T), all funded by MCIN/AEI/10.13039/501100011033 and by ERDF "A way to make Europe".

Author information

Authors and affiliations.

Department of Computer Science and Systems, University of Murcia, Murcia, Spain

José A. García-Berná, Juan M. Carrillo de Gea, Joaquín Nicolás & José L. Fernández-Alemán

Department of Information Technology, Uppsala University, Uppsala, Sweden

Sofia Ouhbi


Corresponding author

Correspondence to José A. García-Berná .

Editor information

Editors and affiliations.

ISEG, Universidade de Lisboa, Lisbon, Portugal

Álvaro Rocha

College of Engineering, The Ohio State University, Columbus, OH, USA

Hojjat Adeli

Institute of Data Science and Digital Technologies, Vilnius University, Vilnius, Lithuania

Gintautas Dzemyda

DCT, Universidade Portucalense, Porto, Portugal

Fernando Moreira

Institute of Information Technology, Lodz University of Technology, Łódź, Poland

Aneta Poniszewska-Marańda


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper.

García-Berná, J.A., Ouhbi, S., de Gea, J.M.C., Nicolás, J., Fernández-Alemán, J.L. (2024). Sustainability and Usability Evaluation of E-Commerce Portals. In: Rocha, Á., Adeli, H., Dzemyda, G., Moreira, F., Poniszewska-Marańda, A. (eds) Good Practices and New Perspectives in Information Systems and Technologies. WorldCIST 2024. Lecture Notes in Networks and Systems, vol 985. Springer, Cham. https://doi.org/10.1007/978-3-031-60215-3_20


DOI : https://doi.org/10.1007/978-3-031-60215-3_20

Published : 11 May 2024

Publisher Name : Springer, Cham

Print ISBN : 978-3-031-60214-6

Online ISBN : 978-3-031-60215-3

eBook Packages : Intelligent Technologies and Robotics (R0)


  • Open access
  • Published: 21 May 2024

Evaluation of usability and user feedback to guide telepharmacy application development in Indonesia: a mixed-methods study

  • Sofa D. Alfian 1 , 2 , 3 ,
  • Jihan A. Sania 1 ,
  • Dzulfah Q. Aini 1 ,
  • Qisty A. Khoiry 1 ,
  • Meliana Griselda 2 ,
  • Yudisia Ausi 1 , 2 ,
  • Neily Zakiyah 1 , 2 ,
  • Irma M. Puspitasari 1 , 2 ,
  • Auliya A. Suwantika 1 , 2 , 3 ,
  • Mariska Mahfud 4 ,
  • Saktian Aji 4 ,
  • Rizky Abdulah 1 , 2 &
  • Angelos P. Kassianos 5  

BMC Medical Informatics and Decision Making, volume 24, Article number: 130 (2024)


In Indonesia, the adoption of telepharmacy was propelled by the COVID-19 pandemic, prompting the need for a user-friendly application to support both the general population and pharmacists in accessing healthcare services. Therefore, this study aimed to evaluate usability and user feedback of a pioneering telepharmacy application known as Tanya Obat (translating to “Ask about Medications”) in Indonesia, from the perspectives of the general population and pharmacists.

A mixed-methods sequential study was conducted with the early-stage Tanya Obat application in Bandung City. Participants, including members of the general population and pharmacists, were instructed to use the application for a week. Questionnaires for the general population and pharmacists were distributed from March to May and from February to June 2023, respectively. The System Usability Scale questionnaire was adopted to describe the usability of the developed application. To explore the quantitative results further, open-ended feedback was collected to assess participants' impressions, difficulties encountered, and desired features for enhanced user-friendliness. The collected statements were summarized and clustered using thematic analysis. Subsequently, the association between participant characteristics and perceived usability was determined with the Chi-square test.

A total of 176 participants, comprising 100 individuals from the general population and 76 pharmacists, engaged in this study. In terms of usability, the questionnaire showed that Tanya Obat application was on the borderline of acceptability, with mean scores of 63.4 and 64.1 from the general population and pharmacists, respectively. Additionally, open-ended feedback targeted at achieving a more compelling user experience was categorized into two themes, including concerns regarding the functionality of certain features and recommendations for improved visual aesthetics and bug fixes. No significant associations were observed between the characteristics of participants and perceived usability (p-value > 0.05).

The results showed that the perceived usability of Tanya Obat developed for telepharmacy was below average. Therefore, feature optimizations should be performed to facilitate usability of this application in Indonesia.


The development of telepharmacy, with technology incorporation into pharmaceutical care services, was propelled by the COVID-19 pandemic. Telepharmacy provides remote counseling, medication information, online purchases, adverse effects monitoring, and therapy tracking through digital platforms [ 1 , 2 ]. Moreover, it creates a virtual channel for pharmaceutical care, connecting pharmacists and patients remotely to facilitate easy health evaluation [ 3 ]. Other benefits include reducing direct interaction between healthcare professionals and patients, improving pharmaceutical service quality, and minimizing medication errors and adverse effects through various channels such as application, text messaging, video, and voice calls [ 2 ]. Telepharmacy improves cost-effectiveness, healthcare access, and after-hours availability, as well as reduces travel time to healthcare facilities [ 4 , 5 ], breaking geographical barriers and enhancing healthcare accessibility [ 6 , 7 ]. However, implementing this system in clinical practice may be challenging due to legal considerations, operational costs, and patient trust-building [ 8 , 9 ].

Recent guidelines in Indonesia mainly address general aspects of delivering telehealth services during the COVID-19 pandemic, particularly focusing on definitions and procedures [ 10 ]. Specific regulations for telemedicine are necessary to ensure compliance with ethics and other regulations [ 11 ]. Additionally, evaluation of usability and applicability is crucial for optimal implementation and adoption in real-world settings. Gaining insights into usability from the perspectives of the general population and pharmacists during the design and testing stages can enhance application effectiveness, which is essential for successful implementation and scalability [ 12 , 13 , 14 ]. Despite the rapid growth of telepharmacy in the country [ 15 , 16 , 17 , 18 ], only a few of the available applications have been tested [ 19 , 20 , 21 ], leading to varied usability experiences depending on the features and limitations encountered [ 19 , 20 , 21 ]. Many Indonesian telepharmacy applications lack appropriate study designs and methods to address engagement issues faced by users and pharmacists. Considering the interconnection between program usability and user engagement, usability studies are essential for understanding and improving engagement [ 22 ]. Usability is defined as a user interface characteristic that facilitates application adoption, effectiveness, efficiency, and satisfaction in a targeted environment [ 23 ]. In telepharmacy implementation, poor usability can hinder technology acceptance [ 24 ]. Therefore, the products, systems, processes, and procedures constituting telepharmacy must be designed and implemented to be usable, useful [ 25 ], accessible, and user-friendly for both healthcare providers and patients [ 26 ]. Engaging these users in usability testing can help to address specific needs and preferences, promoting successful technology acceptance and adoption [ 27 ].

An Indonesian telepharmacy application known as Tanya Obat , translating to “Ask about Medication”, was developed through a design-based study [ 28 ] by a team of pharmacists and academicians. The application provides a comprehensive ecosystem connecting pharmacies, pharmacists, and technicians with the general population. Furthermore, it comprises features such as locating nearby pharmacies, online medication purchasing, consultation services, medication use information, and educational resources including webinars, e-modules, and coaching clinics for pharmacists and technicians. Tanya Obat differs from other applications in Indonesia by offering a dedicated ecosystem for pharmacists, including online medication consultation and educational opportunities to enhance competency in delivering health services. Therefore, this study aimed to evaluate Tanya Obat usability and obtain user feedback on the application at an early development stage in Bandung City from the perspectives of the general population and pharmacists. The association between participant characteristics and perceived usability was also explored.

Study design

A mixed-methods sequential study design was used to assess usability and obtain user feedback on Tanya Obat application at the early development stage before launching. Furthermore, questionnaires for the general population and pharmacists were distributed from March to May and February to June 2023, respectively. Approval with contract number 670/UN6.KEP/EC/2022 was received from the Research Ethics Commission of Universitas Padjadjaran, Indonesia, and all recruited participants provided written informed consent.

Study population and settings

Participants recruited in Bandung City comprised untrained and first-time Tanya Obat users from the general population, irrespective of being health system users, as well as pharmacists registered in the application or not. One week was provided for Tanya Obat usage without supervision to permit full exploration of the application, and recruiting was performed online through a convenience sampling method. Inclusion criteria for the general population consisted of individuals with (i) age above 18 years, (ii) ability to use a smartphone, (iii) lack of prior application experience, and (iv) willingness to participate. Meanwhile, the inclusion criteria for pharmacists were (i) current employment at a Community Health Center (CHC), hospital, or pharmacy, (ii) ability to use a smartphone, (iii) no prior experience with the tested application, and (iv) willingness to participate. All participants received an explanation regarding the study stages, application installation, and features.

Tanya Obat application

Tanya Obat application was developed by a team of pharmacists and pharmacy academicians in collaboration with software engineers using Dart ( https://dart.dev/ ) programming language. As shown in Fig.  1 , the application offers five main features including:

figure 1

Tanya Obat Application Features

Finding Nearby Pharmacies

This feature allows users to conveniently search for the precise locations of nearby pharmacies and view available medication lists, saving time spent on locating the physical buildings.

Medication Purchasing

The medication purchasing feature extends the reach of pharmacies to patients through a digital platform. Users can freely browse categories of non-prescription and prescription medications which can be ordered with the delivery service available on the application.

Ask a Pharmacist

This flagship feature enables digital consultation with pharmacists regarding self-medication, regular medications currently in use, and other concerns about consumption and side effects. Additionally, it offers pharmacists the opportunity to gain consultation experience, which can subsequently be converted into credit units for pursuing professional competency certification. Recruitment for this feature was performed by distributing information through webinars, scientific events, social media and ads, personal and group chat applications, as well as direct visits to pharmacies. Users are charged IDR 20,000 (about USD 1.5) for 15-minute consultations, based on regulations established by the Indonesian Pharmacists Association in 2019.

Health Article

Tanya Obat application allows both the general population and pharmacists to access accurate medication information in a clear and easily understandable manner. The articles were written by accredited pharmacists, pharmacy graduates, and students with competencies and excellent knowledge in writing health articles. Each published article was curated and reviewed by professionals and an editorial team.

Sustainable Educational Development for Pharmacists

This feature can only be accessed by pharmacists, serving as a channel for enhancing educational knowledge and professional competency through webinars, e-modules, and coaching clinics.

Each feature of Tanya Obat application is available to both the general population and pharmacists. However, “Ask a Pharmacist” has slightly different user interfaces tailored to each group. For the general population, the feature shows a list of online pharmacists. Pharmacist accounts, in contrast, cannot request consultations as patients; instead, they provide access to current and completed consultation history.


Perceived usability was evaluated based on a standardized quantitative System Usability Scale (SUS). Meanwhile, additional feedback from users engaged in this study was assessed through open-ended qualitative questions.

Usability measurement

Usability, often defined as a user interface characteristic that facilitates application adoption, effectiveness, efficiency, and satisfaction in a targeted environment [ 23 ], was assessed with the Indonesian version of the standardized SUS questionnaire [ 29 , 30 ]. This evaluation was conducted by identifying potential user interface, functionality, and design issues from the perspective of untrained, first-time users among the general population and pharmacists. The SUS questionnaire comprised 10 questions rated on a 5-point Likert scale ranging from 1 (“Strongly Disagree”) to 5 (“Strongly Agree”). Participants responded based on their subjective assessment of usability, as presented in Table S1, Supplementary data. Each question contributed a score, with different scoring conversions for odd (1, 3, 5, 7, 9) and even (2, 4, 6, 8, 10) questions, reflecting positively and negatively worded items, respectively. Summing all the points yielded a maximum score of 40, which was converted to a scale ranging from 0 to 100 through multiplication by 2.5. Higher scores suggested favorable user perceptions of the application, while lower scores signified low usability [ 31 ]. A previous study showed a significant association between SUS scores reported as continuous or dichotomous data and outcomes [ 32 ]. Reporting as dichotomous data may be more practical in real-world scenarios due to the ease of interpretation [ 33 ]. Therefore, in this study, a SUS score exceeding 68 points was considered above average in perceived usability [ 34 ], calculated based on a published formula [ 33 ].
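The standard SUS scoring conversion described above can be sketched as follows; this is an illustrative implementation of the published scoring rules, not code from the study:

```python
def sus_score(responses):
    """Convert ten SUS Likert responses (1-5) to a 0-100 score.

    Odd-numbered questions are positively worded (contribution = response - 1);
    even-numbered questions are negatively worded (contribution = 5 - response).
    The summed contributions (maximum 40) are multiplied by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# A uniformly neutral respondent (all 3s) scores 50.0; under the dichotomous
# reading used in the study, only scores above 68 count as above average.
print(sus_score([3] * 10))        # 50.0
print(sus_score([3] * 10) > 68)   # False
```

Note how a respondent who answers "Neutral" throughout lands at 50, well below the 68-point threshold, which is why mid-60s averages such as those reported here read as borderline.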

The questionnaire was tested for construct validity and internal consistency using 30 participants who were not included in the main analyses. Construct validity was assessed with the Pearson Product Moment correlation by correlating the score of each question with the total score [ 35 ]. The questionnaire was deemed valid when the correlation coefficient (r-value) exceeded the critical value [ 28 ]. The values for each question were > 0.361, signifying correlation with the respective dimension and affirming the construct as a measurement instrument. Additionally, the Cronbach's Alpha coefficient was 0.721, meeting the criterion established for internal consistency, namely a value > 0.60 [ 36 ].
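A minimal sketch of the two checks described, item-total Pearson correlation (compared against the critical r of 0.361 for n = 30) and Cronbach's alpha; the data below are illustrative, not the study's responses:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(items):
    """items[i] holds question i's responses, respondents in the same order."""
    k = len(items)
    def var(v):  # sample variance
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(var(q) for q in items) / var(totals))

# Illustrative check with two perfectly aligned items: item-total r = 1.0
# and alpha = 1.0, both clearing the 0.361 and 0.60 thresholds.
items = [[1, 2, 3, 4], [1, 2, 3, 4]]
totals = [sum(col) for col in zip(*items)]
print(pearson_r(items[0], totals))  # 1.0
print(cronbach_alpha(items))        # 1.0
```

In practice each of the ten SUS items would be correlated against the total in this way, and alpha computed over all ten item columns.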

Additional user feedback

After completing the SUS questionnaire, participants provided additional feedback through open-ended questions about their impressions following one week of Tanya Obat usage. Three questions were used to assess participants' perceptions of the application, potential confusion or difficulties encountered, and desired features for enhanced user-friendliness: “What was your first impression of this application?”, “What would you say was the most challenging aspect of using this application?”, and “What are your recommendations to improve the application?”. Subsequently, a summary of interpretations made by the study team was sent to all participants for verification.

Demographic characteristics

The survey collected sociodemographic information about the participants, including gender (male, female), age, highest education level (junior high school, senior high school, bachelor's degree, registered pharmacist, master's degree, doctorate), practice settings, and years of practice (for pharmacists). Additionally, participants provided information on their subjective experience with smartphone applications (amateur, beginner, skilled, highly skilled), daily duration of smartphone usage in hours (< 1, 1−2, 3−4, and > 4), and internet accessibility at home (available, unavailable).

Sample size calculation

The sample size was calculated using Slovin's formula [ 37 ] due to a lack of prior knowledge about outcome distributions. To achieve a 95% confidence interval (CI) and a margin of error of 0.15 with a statistical power of 80%, a minimum sample size of 100 participants from the general population and 64 pharmacists was required.
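Slovin's formula is n = N / (1 + N·e²), where N is the population size and e the margin of error. A small sketch; the population figures below are hypothetical, since the source populations behind the study's own calculation are not stated in the text:

```python
import math

def slovin_sample_size(population, margin_of_error):
    """Slovin's formula: n = N / (1 + N * e^2), rounded up."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# Hypothetical inputs for illustration only; the paper's own N values
# are not reported here.
print(slovin_sample_size(10_000, 0.10))  # 100
print(slovin_sample_size(500, 0.10))     # 84
```

As N grows large, n approaches 1/e², so the formula is most sensitive to the chosen margin of error rather than the exact population size.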

Data collection

The link to both the application and survey was disseminated through a digital leaflet across social media, as well as online personal and group chat platforms. The application was available for download on the Google Play store, and data were collected using the Qualtrics platform. To prevent duplicate entries, participants were restricted to filling out the questionnaire only once from a single email address. For pharmacists, a convenience sample was recruited formally or personally through the Indonesian Pharmacists Association network. The estimated time for questionnaire completion was 10 min, and participants were initially reminded that the study aimed to assess the user-friendliness of the application, not their ability to use the device correctly. Following one week of exploring the application, participants were instructed to complete the questionnaire immediately to avoid bias introduced by repeated usage and becoming overly familiar with the application. Additionally, screenshots of the features found were requested as evidence of exploring the application.

Data analysis

Basic demographic characteristics were examined using descriptive statistics, and mean SUS scores were analyzed for each subgroup of the general population and pharmacists. Quantitative survey data were tabulated and presented as frequencies, and a Chi-square test was used to determine the association between participant characteristics and perceived usability. Open-ended questions were summarized and clustered with thematic analysis, following the steps of familiarization with the data, initial code generation, theme development, theme review, and theme definition, with independent qualitative analysis conducted by JAS and DQA using NVivo software version 12. Any disagreements between JAS and DQA were resolved through discussion with a third study team member (SDA). To ensure content analysis reliability, continuous discussion and negotiation regarding the content of keywords, broader concepts, and units of meaning were performed among the team.
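The Chi-square test of association used here can be sketched as a hand-rolled Pearson statistic on a contingency table; the statistic is compared against the critical value for (r−1)(c−1) degrees of freedom (3.841 at α = 0.05 for a 2×2 table). The table below uses made-up counts, not the study's data:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table: rows = daily smartphone use (<4h vs >4h),
# columns = SUS above vs below 68. With df = 1, a statistic under 3.841
# corresponds to p > 0.05, i.e. no significant association.
table = [[20, 30], [25, 25]]
print(chi_square_stat(table))
```

A library implementation such as `scipy.stats.chi2_contingency` would return the p-value directly; the manual version above makes the expected-count arithmetic explicit.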

Quantitative results

A total of 176 participants (100 individuals from the general population and 76 pharmacists) engaged in this study. Those from the general population were recruited from 26 districts in Bandung City, mostly comprising participants aged 18−25 years (59%), females (71%), and holders of high school qualifications (71%). Furthermore, most participants were ‘skilled’ at using smartphone applications (56%), used smartphones for over four hours daily (65%), and had Wi-Fi at home (72%), as shown in Table  1 .

Subsequently, a total of 76 responses were obtained from pharmacists in Bandung City, Indonesia. These mostly included females (73.7%) working in community pharmacies (79.0%), with over six years of job experience (38.2%), considered ‘skilled’ in using mobile applications (64.5%), possessing internet access (89%), and using smartphones for more than four hours daily (63.2%).

Associations between the characteristics of participants and perceived usability

The characteristics of the general population and pharmacists are presented in Tables  2 and 3 , respectively. This study identified no significant associations between these characteristics and perceived application usability (p-value > 0.05).

Usability results

According to the calculation matrix in Table S2 Supplementary data, the average SUS score of the general population was 63.4, representing a below-average perceived usability. A significant variation in scores was observed, ranging from a high of 87.5 to a low of 30. The average SUS score of pharmacists was 64.1 (Table S3 , Supplementary data), which was also below average, with a significant variation in scores ranging from 95 to 42.5.

Qualitative results

Two themes, including concerns and recommendations, were identified from the responses provided to the open-ended questions. These were further divided into subthemes comprising feature functionality for concerns as well as visual enhancements, feature improvements, system improvements, and other parameters for recommendations, as presented in Table  4 along with examples of supporting quotes.

User feedback: the general population

Concerns identified for Tanya Obat involved features such as consulting a pharmacist, searching for online medication purchases, and locating nearby pharmacies. Users reported that the application was not functioning optimally and that they required assistance in using features including purchasing medication, redeeming prescribed medicines, and consulting a pharmacist. Additionally, inconsistencies in color schemes and delays or errors when accessing features within the application were noted (Table  4 ).

Potential enhancements for the application included creating more visually appealing displays in terms of color combinations, layout, and font styles. Additionally, it was suggested to ensure that features function as intended, provide relevant descriptions for each feature, and enhance filters. System-related recommendations comprised addressing bugs or errors such as loading failures, adding a user guide, improving maps, and enhancing search speed. Other suggestions included introducing Tanya Obat on iOS devices and promoting the application among a wider demographic (Table  4 ).

User feedback: pharmacists

Concerns observed in using Tanya Obat application were related to medication purchase, address configuration, and specific hurdles in the registration of pharmacists. Another concern was identified with the “consult a pharmacist” feature, where the need for more assistance was reported (Table  5 ).

Several suggestions obtained from feedback in open-ended questions were mostly related to features. Regarding the medication purchase feature, the requirement for expanded stock availability and the incorporation of new functionalities was reiterated. Furthermore, pharmacists stated the need for an online Continuing Professional Development (CPD) feature, which would be incorporated by the Indonesian Pharmacists Association as the issuer of competency certificates. This suggestion was provided to support consistent education through seminars, articles, and learning modules, along with the integration of counseling history or medical records of patients that can be accessed by pharmacists in Indonesia and claimed as credit points for CPD. The registration feature for pharmacists should be enhanced, while modification of the color palette was recommended to increase visibility and maintain a satisfactory display and font choices.

Principal observations

In this study, the usability and user feedback of the innovative Tanya Obat application were evaluated. Perceived usability was categorized as below average, with mean scores of 63.4 and 64.1 for the general population and pharmacists, respectively. The identified concerns primarily involved the functionality of some features, while opportunities for improvement were observed in the areas of visual, feature, and system enhancements. These concerns and recommendations differed slightly between the general population and pharmacists.

From the general population perspective, the below-average score obtained might be attributed to disparities in digital literacy, defined as the ability to acknowledge and use information from various sources presented through computers [ 38 ]. A national survey conducted among 10,000 Indonesians showed a digital literacy index of 3.49 in 2021 and 3.54 on a scale of 5 in 2022 [ 39 , 40 , 41 ], and despite the improvement, digital literacy in the country remained at a moderate level. Previous results showed that the general population and pharmacy students in Indonesia had a positive perception and were willing to use telepharmacy services [ 42 , 43 ], providing an opportunity for successful implementation of the application.

Several areas for improvement were identified to enhance and adapt the future application to the understanding and capabilities of users, thereby leading to a more usable and useful system [ 25 ]. The primary existing problems included complications in using the features, attributed to unfamiliarity with technology, lack of confidence in using electronic devices, or fear of committing mistakes [ 39 ]. A survey reported that 69% of 10,000 Indonesians have not accessed health services through digital platforms [ 39 ]. Complications while running the application can be reduced by simplifying the log-in process, reducing required tasks, and displaying fewer buttons on the screen [ 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 ]. Selecting the appropriate design, wording, and development language in a user-centered and participatory design process is crucial and may have an important impact on engagement [ 56 ]. These improvements tend to increase satisfaction, which plays a vital role in the implementation and continual use of the application [ 57 , 58 ]. Therefore, developers must prioritize maximizing application performance to be more user-friendly [ 44 , 59 , 60 , 61 , 62 , 63 , 64 ].

The similar marginally acceptable score obtained from the feedback of pharmacists signified that more work was needed for usability improvement, as these professionals were expected to be more exposed to telepharmacy applications than the general population. This unsatisfying score could be partly explained by varying levels of readiness to use telepharmacy applications [ 65 ]. Pharmacists were forced to implement telehealth services without an adequate readiness assessment, particularly in settings without well-established telehealth services before the COVID-19 pandemic, such as Indonesia [ 66 ]. Furthermore, the main concerns that require adjustment to enhance application usability and reduce errors include expanding the network of available pharmacies and pharmacists with specialties comprising traditional medication and cosmetics. Specialization among pharmacists has various important benefits such as higher adherence and persistence, better clinical outcomes, monetary benefits for both patients and the healthcare system, and higher patient satisfaction [ 67 ]. Therefore, expanding the network of the pharmacy sector to include traditional medication specialists and pharmacist cosmetologists will significantly improve the application.

Recommendations of pharmacists regarding giving details about all medications were consistent with a previous study that reported an increase in adherence among patients after providing simple and brief written medication information [ 68 ]. Additionally, offering live interaction during counseling led to care quality improvement. Specific concerns need to be addressed, such as ensuring properly configured camera and audio settings, appropriate light quality, a stable internet connection, and readiness to assist patients unfamiliar with the technology [ 69 ]. Furthermore, pharmacists recommended interactive video learning for application enhancement. This was consistent with another study that showed the effectiveness of interactive video [ 70 , 71 ], with an 89.7% reported increase in learning outcomes [ 70 ]. Interactive virtual content had significant effectiveness compared to the online class group method [ 71 ]. Another recommendation was related to visualization enhancement, which plays an important role in attracting user attention [ 44 ] and tends to influence perceived usability even when no differences exist in the functionality offered [ 72 ]. Users preferred simplicity, favoring displays with more graphics than crowded text and a consistent style using combinations of colors [ 45 , 46 , 47 , 73 , 74 , 75 ]. Consequently, telepharmacy application usability should be optimized based on the results of this study before proceeding to evaluate effectiveness, because the standards of health professionals need to be met [ 76 ].

The characteristics of the general population and pharmacists were found to have no significant association with perceived usability. These results showed that the application could be used across all age groups irrespective of educational background, familiarity with smartphone operation, and internet accessibility level [ 77 ]. However, certain studies reported inconsistent results on the association between sociodemographic factors and perceived usability among the general population and pharmacists, respectively [ 77 , 78 , 79 ]. Significant associations were previously shown between demographic-related variables and usability [ 80 ] in an investigation conducted among participants with years of mobile application experience [ 80 ]. The lesser experience of participants using mobile health applications in Indonesia, and their subjectivity, might lead to under- or overestimation of their experience with smartphones, which could explain the non-significant associations observed in this study [ 81 ]. Moreover, some unmeasured factors tended to be associated with perceived usability, such as readiness [ 81 ], experience in using mobile health applications [ 80 ], digital literacy [ 38 ], and health literacy [ 82 , 83 ].

Implications and future directions

The mixed-methods evaluation conducted in this study and its results represent the first step in optimizing the development and evaluation of Tanya Obat application. A previous investigation assessed satisfaction with a telehealth application using the SUS method and showed improvement from 71.8 to 82.5 following enhancements based on the first round of testing [ 84 ]. Therefore, the SUS score tends to increase when adequate improvements are made to Tanya Obat application. These results support the importance of incorporating usability studies as part of the digital health intervention design process [ 85 ].

Continued exploration to obtain post-refinement data, with a larger sample drawn from a multicenter study across different provinces in Indonesia, is essential to provide more representative results. Additional investigations are required to evaluate the impact of the Tanya Obat application, particularly among patients with chronic diseases who regularly take medications, as usability may vary between clinical groups. This can help identify context-related issues in the future, such as patient adherence, access to healthcare, and satisfaction.

Strengths and limitations

The strengths of this study include the recruitment of two different user groups, namely the general population and pharmacists, representing potential and relevant end users. Furthermore, the quantitative and qualitative data provided by real-world users are crucial for ensuring that their views are incorporated into application development to improve usability. The inclusion of real-world participants revealed important usability problems and solutions that were not identified during the development or expert panel review stages. However, one main limitation of convenience sampling is the risk of bias due to the lack of random selection. Certain groups within the population may be over- or underrepresented because the distribution of the questionnaire link could not be controlled. This skew may produce results that do not accurately represent the entire population. For example, the majority of participants were aged below 50 years and had a moderate to high educational level, both factors associated with greater technological proficiency and willingness to adopt new technology [ 86 , 87 ]. In future investigations, combining multiple methods, such as random, stratified, or systematic sampling, can improve the quality and representativeness of a convenience sample, producing more accurate and reliable results and thereby enhancing generalizability. Furthermore, the subjectivity of respondents may have resulted in under- or overestimation of their experience in using smartphones, which may explain the non-significant association with the usability score. Ensuring full exploration of all Tanya Obat application features was challenging, and the reliance on screenshots from participant feedback without direct verification was a further limitation. Future investigations should use real-time observation or screen-activity recording as more objective measurements [ 88 ].
The method of this study was limited to user-based usability evaluation, focusing solely on the satisfaction aspect and not incorporating the think-aloud method [ 89 ]. Additionally, expert usability evaluation through heuristic testing [ 90 ] was not performed, limiting the comprehensiveness of the results. Despite these limitations, the usability of the Tanya Obat application was improved before launch by making changes based on the concerns and recommendations of the target user groups.

In conclusion, the results showed that the developed Tanya Obat application scored below average in perceived usability. Therefore, specific feature optimizations should be performed, particularly in terms of visual appeal, features, and system functionality, to improve potential acceptance and usability and facilitate successful adoption in Indonesia.

Data availability

The datasets used and/or analyzed are available from the corresponding authors upon reasonable request.

Ahmed NJ, Almalki ZS, Alsawadi AH, Alturki AA, Bakarman AH, Almuaddi AM, et al. Knowledge, perceptions, and readiness of Telepharmacy among Hospital pharmacists in Saudi Arabia. Healthcare. 2023;11(8):1087. https://doi.org/10.3390/healthcare11081087


Farid AF, Firdausy AZ, Sulaiman AM, Simangunsong DE, Sulistyani FE. Efektivitas Penggunaan Layanan Telefarmasi Di Era Pandemi COVID-19 Dari Perspektif Masyarakat. Jurnal Farmasi Komunitas. 2022;9(2):152–7. https://doi.org/10.20473/jfk.v9i2.32924


Ridho A, Alfian SD, van Boven JFM, Levita J, Yalcin EA, Le L, et al. Digital Health Technologies to improve medication adherence and treatment outcomes in patients with tuberculosis: systematic review of Randomized controlled trials. J Med Internet Res. 2022;24(2):e33062. https://doi.org/10.2196/33062

Baldoni S, Amenta F, Ricci G. Telepharmacy services: Present Status and Future perspectives: a review. Med (B Aires). 2019;55(7):327. https://doi.org/10.3390/medicina55070327

Ameri A, Salmanizadeh F, Keshvardoost S, Bahaadinbeigy K. Investigating pharmacists’ views on Telepharmacy: prioritizing Key relationships, barriers, and benefits. J Pharm Technol. 2020;36(5):171–8. https://doi.org/10.1177/8755122520931442

Alfian SD, Insani WN, Puspitasari IM, Wawruch M, Abdulah R. Effectiveness and Process Evaluation of Using Digital Health Technologies in Pharmaceutical Care in Low- and Middle-Income Countries: A Systematic Review of Quantitative and Qualitative Studies. Telemed J E Health. 2023;29(9):1289–303. https://doi.org/10.1089/tmj.2022.0406


Atmojo JT, Sudaryanto WT, Widiyanto A, Ernawati E, Arradini D. Telemedicine, cost effectiveness, and patients satisfaction: a systematic review. J Health Policy Manage. 2020;5(2):103–7. https://doi.org/10.26911/thejhpm.2020.05.02.02

Poudel A, Nissen L. Telepharmacy: a pharmacist’s perspective on the clinical benefits and challenges. Integr Pharm Res Pract. 2016;5:75–82. https://doi.org/10.2147/IPRP.S101685

Elbeddini A, Yeats A. Pharmacist intervention amid the coronavirus disease 2019 (COVID-19) pandemic: From direct patient care to telemedicine. J Pharm Policy Pract. 2020;13(1):1–4. https://doi.org/10.1186/s40545-020-00229-z .

Menteri Kesehatan Republik Indonesia. Pedoman Pelayanan Kesehatan Melalui Telemedicine Pada Masa Pandemi Corona Virus Disease 2019 (COVID-19) [Internet]. 2021 [cited 2022 Oct 4]. https://jdih.kemkes.go.id/ .

Sulistiyono A, Budiyanti RT, Sriatmi. A regulatory framework for telemedicine in Indonesia. Eubios Journal of Asian and International Bioethics [Internet]. 2019 [cited 2023 Nov 17];29(4). https://philpapers.org/rec/SULARF-3 .

Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A’Court C, et al. Beyond Adoption: A New Framework for Theorizing and Evaluating Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies. J Med Internet Res. 2017;19(11). https://doi.org/10.2196/jmir.8775

Jones RB, Ashurst EJ, Trappes-Lomax T. Searching for a sustainable process of service user and health professional online discussions to facilitate the implementation of e-health. Health Informatics J. 2016;22(4):948–61. https://doi.org/10.1177/1460458215599024

Zapata BC, Fernández-Alemán JL, Idri A, Toval A. Empirical studies on usability of mHealth apps: a systematic literature review. J Med Syst. 2015;39(2):1–19. https://doi.org/10.1007/s10916-014-0182-2

Fitrina Andiani A, Taruna B, Putra W, Khoiri A. The Future of Telemedicine in Indonesia During Covid-19 Pandemic Era: Literature Review. Health Technology Assessment in Action. 2022;6(2). https://doi.org/10.18502/htaa.v6i2.12198

Komalasari R. Telemedicine in Pandemic Times in Indonesia: Healthcare Professional’s Perspective [Internet]. [cited 2023 Nov 17];138–53. https://www.igi-global.com/chapter/telemedicine-in-pandemic-times-in-indonesia/314113 . https://doi.org/10.4018/978-1-6684-5499-2.ch008

Antarsih NR, Setyawati SP, Ningsih S, Deprizon, Sulaiman E, Pujiastuti N. Telehealth Business Potential in Indonesia. Proceedings of the International Conference on Social, Economics, Business, and Education (ICSEBE 2021). 2022;205:73–8. https://doi.org/10.2991/aebmr.k.220107.015

Dentons. - The Rise of Telemedicine in Indonesia [Internet]. [cited 2023 Oct 31]. https://www.dentons.com/en/insights/articles/2020/july/20/the-rise-of-telemedicine-in-indonesia .

Novrianda D, Herini ES, Haryanti F, Supriyadi E, Lazuardi L. Chemo assist for children mobile health application to manage chemotherapy-related symptoms in acute leukemia in Indonesia: a user-centered design approach. BMC Pediatr. 2023;23(1). https://doi.org/10.1186/s12887-023-04076-0

Rahayu SR, Zainafree I, Merzistya ANA, Cahyati WH, Farida E, Wandastuti AD et al. Development of the SIKRIBO Mobile Health Application for Active Tuberculosis Case Detection in Semarang, Indonesia. Healthc Inform Res. 2022;28(4):297–306. https://doi.org/10.4258/hir.2022.28.4.297

Fitria N, Idrus L, Putri AR, Sari YO. The usability testing of the integrated electronic healthcare services for diabetes mellitus patients during the pandemic in Indonesia. Digit Health. 2023;9. https://doi.org/10.1177/20552076231173227

Nitsch M, Dimopoulos CN, Flaschberger E, Saffran K, Kruger JF, Garlock L et al. A Guided Online and Mobile Self-Help Program for Individuals With Eating Disorders: An Iterative Engagement and Usability Study. J Med Internet Res. 2016;18(1). https://doi.org/10.2196/jmir.4972

IEC 62366-1. 2015(en), Medical devices — Part 1: Application of usability engineering to medical devices [Internet]. [cited 2023 Oct 30]. https://www.iso.org/obp/ui/#iso:std:iec:62366:-1:ed-1:v1:en .

Scholtz B, Mahmud I, Ramayah T. Does usability matter? An analysis of the impact of usability on technology acceptance in ERP settings. Interdisciplinary Journal of Information, Knowledge, and Management. 2016;11:309–330. https://doi.org/10.28945/3591

Kayser L, Kushniruk A, Osborne RH, Norgaard O, Turner P. Enhancing the effectiveness of consumer-focused health information technology systems through ehealth literacy: a framework for understanding users’ needs. JMIR Hum Factors. 2015;2(1). https://doi.org/10.2196/humanfactors.3696

Senjam SS, Manna S, Bascaran C. Smartphones-based assistive technology: accessibility features and apps for people with visual impairment, and its usage, challenges, and usability testing. Clinical Optometry. 2021;13:311–22. https://doi.org/10.2147/OPTO.S336361

Sripathi V, Sandru V. Effective usability testing-knowledge of user centered design is a key requirement. Int J Emerg Technol Adv Eng [Internet]. 2013;3(1):627–35. Available from: www.ijetae.com.


Jayatilleke BG, Ranawaka GR, Wijesekera C, Kumarasinha MCB. Development of mobile application through design-based research. Asian Association Open Universities J. 2019;13(2):145–68. https://doi.org/10.1108/AAOUJ-02-2018-0013

Nugroho HNIA, Ferdiana PI. R. Pengujian Usability Website Menggunakan System Usability Scale. JURNAL IPTEKKOM: Jurnal Ilmu Pengetahuan & Teknologi Informasi. 2015;17(1):31.

Brooke J. SUS: a quick and dirty usability scale. Usability Evaluation Ind. 2020;207–12.

Bangor A, Staff T, Kortum P, Miller J, Staff T. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud. 2009;4(3):114–23.

Bloom BM, Pott J, Thomas S, Gaunt DR, Hughes TC. Usability of electronic health record systems in UK EDs. Emerg Med J. 2021;38(6):410–415 https://doi.org/10.1136/emermed-2020-210401

Sauro J, Lewis JR. Quantifying the User Experience. Quantifying the User Experience [Internet]. 2012 [cited 2023 Nov 20]; https://doi.org/10.1016/C2010-0-65192-3

Sauro J. A practical guide to measuring usability. Should you use 5 or 7 point scales. Denver; 2010. 2–13 p.

e Silva JL, de Sousa Mata M, Câmara SMA, do Céu Clara Costa Í, de Medeiros KS, Cobucci RN. Validity and reliability of the Lederman Prenatal Self-Evaluation Questionnaire (PSEQ) in Brazil. BMC Pregnancy Childbirth. 2021;21(1):481. https://doi.org/10.1186/s12884-021-03959-3

Firdaus MM. Metodologi Penelitian Kuantitatif: Dilengkapi Analisis Regresi IBM SPSS Statistics Version 26.0. Riau: CV. Dotplus; 2021.

Almeda JV, Capistrano TG, Sarte GM. Elementary statistics. Diliman Quezon City: University of the Philippines; 2010.

Spante M, Hashemi SS, Lundin M, Algers A. Digital competence and digital literacy in higher education research: systematic review of concept use. Cogent Educ. 2018;5(1):1519143. https://doi.org/10.1080/2331186X.2018.1519143

Kominfo. Status Literasi Digital di Indonesia 2022 [Internet]. 2022 [cited 2023 Oct 19]. https://web.kominfo.go.id/sites/default/files/ReportSurveiStatusLiterasiDigitalIndonesia2022.pdf .

Nurhayati-Wolff H. Digital literacy index in Indonesia from 2020 to 2022, by type [Internet]. 2023 [cited 2023 Oct 19]. https://www.statista.com/statistics/1337349/indonesia-digital-literacy-index-by-type/#statisticContainer .

Harmoko DD. Digital literacy as a solution to improve the quality of Indonesia’s Human resources. Res Dev J Educ. 2021;7(2):413. https://doi.org/10.30998/rdje.v7i2.10569

Alfian SD, Khoiry QA, Andhika A, Pratama M, Pradipta IS, Kristina SA, Zairina E et al. Knowledge, perception, and willingness to provide telepharmacy services among pharmacy students: a multicenter cross-sectional study in Indonesia. BMC Medical Education 2023;23(1):1–9. https://doi.org/10.1186/s12909-023-04790-4 .

Tjiptoatmadja NN, Alfian SD. Knowledge, Perception, and Willingness to Use Telepharmacy Among the General Population in Indonesia. Front Public Health. 2022;10. https://doi.org/10.3389/fpubh.2022.825554

Wei Y, Zheng P, Deng H, Wang X, Li X, Fu H. Design features for improving Mobile Health intervention user Engagement: systematic review and thematic analysis. J Med Internet Res. 2020;22(12):e21687. https://doi.org/10.2196/21687

Perski O, Blandford A, Ubhi HK, West R, Michie S. Smokers’ and drinkers’ choice of smartphone applications and expectations of engagement: a think aloud and interview study. BMC Med Inf Decis Mak. 2017;17(1):25. https://doi.org/10.1186/s12911-017-0422-8

Lazard AJ, Pikowski J, Horrell L, Ross JC, Noar SM, Sutfin EL. Adolescents’ and young adults’ aesthetics and Functionality preferences for Online Tobacco Education. J Cancer Educ. 2020;35(2):373–9. https://doi.org/10.1007/s13187-019-1475-4

Ledel Solem IK, Varsi C, Eide H, Kristjansdottir OB, Mirkovic J, Børøsund E, et al. Patients’ needs and requirements for eHealth Pain Management interventions: qualitative study. J Med Internet Res. 2019;21(4):e13205. https://doi.org/10.2196/13205

Milward J, Deluca P, Drummond C, Watson R, Dunne J, Kimergård A. Usability testing of the BRANCH Smartphone App designed to reduce Harmful drinking in young adults. JMIR Mhealth Uhealth. 2017;5(8):e109. https://doi.org/10.2196/mhealth.7836

Rabin C, Bock B. Desired features of Smartphone Applications promoting physical activity. Telemedicine e-Health. 2011;17(10):801–3. https://doi.org/10.1089/tmj.2011.0055

Coyne I, Prizeman G, Sheehan A, Malone H, While AE. An e-health intervention to support the transition of young people with long-term illnesses to adult healthcare services: design and early use. Patient Educ Couns. 2016;99(9):1496–504. https://doi.org/10.1016/j.pec.2016.06.005


Peng W, Yuan S, Holtz BE. Exploring the Challenges and Opportunities of Health Mobile Apps for individuals with type 2 diabetes living in Rural communities. Telemedicine e-Health. 2016;22(9):733–8. https://doi.org/10.1089/tmj.2015.0180

Gkatzidou V, Hone K, Sutcliffe L, Gibbs J, Sadiq ST, Szczepura A, et al. User interface design for mobile-based sexual health interventions for young people: design recommendations from a qualitative study on an online Chlamydia clinical care pathway. BMC Med Inf Decis Mak. 2015;15(1):72. https://doi.org/10.1186/s12911-015-0197-8

Nathalie Lyzwinski L, Caffery L, Bambling M, Edirippulige S. University Students’ perspectives on mindfulness and mHealth: a qualitative exploratory study. Am J Health Educ. 2018;49(6):341–53. https://doi.org/10.1080/19325037.2018.1502701

Phillips SM, Courneya KS, Welch WA, Gavin KL, Cottrell A, Nielsen A, et al. Breast cancer survivors’ preferences for mHealth physical activity interventions: findings from a mixed methods study. J Cancer Surviv. 2019;13(2):292–305. https://doi.org/10.1007/s11764-019-00751-3

Herbeć A, Perski O, Shahab L, West R. Smokers’ views on Personal Carbon Monoxide Monitors, Associated Apps, and their use: an interview and think-Aloud Study. Int J Environ Res Public Health. 2018;15(2):288. https://doi.org/10.3390/ijerph15020288

Ludden GDS, Van Rompay TJL, Kelders SM, Van Gemert-Pijnen JEWC. How to increase reach and adherence of web-based interventions: A design research viewpoint. J Med Internet Res. 2015;17(7):e4201. https://doi.org/10.2196/jmir.4201

Assael H. Consumers behavior. 6th ed. Ohio: Southwestern College Publishing.; 1995.

Dulkhatif HAT, Warso MM. Pengaruh Kualitas Pelayanan, Kepuasan Pelanggan Dan Lokasi Terhadap Loyalitas Pelanggan Pada Penyedia Jasa Internet Study PT Noken Mulia Tama Semarang. J Manag. 2016;2(2):1–34.

Partridge SR, McGeechan K, Hebden L, Balestracci K, Wong AT, Denney-Wilson E, et al. Effectiveness of a mHealth Lifestyle Program With Telephone support (TXT2BFiT) to prevent Unhealthy Weight Gain in Young adults: Randomized Controlled Trial. JMIR Mhealth Uhealth. 2015;3(2):e66. https://doi.org/10.2196/mhealth.4530

Recio-Rodriguez J, Agudo Conde C, Calvo-Aponte M, Gonzalez-Viejo N, Fernandez-Alonso C, Mendizabal-Gallastegui N, et al. The effectiveness of a Smartphone application on modifying the intakes of Macro and micronutrients in Primary Care: a Randomized Controlled Trial. The EVIDENT II study. Nutrients. 2018;10(10):1473. https://doi.org/10.3390/nu10101473

Nguyen Thanh V, Guignard R, Lancrenon S, Bertrand C, Delva C, Berlin I, et al. Effectiveness of a fully automated internet-based Smoking Cessation Program: a randomized controlled trial (STAMP). Nicotine Tob Res. 2019;21(2):163–72. https://doi.org/10.1093/ntr/nty016

Free C, Phillips G, Galli L, Watson L, Felix L, Edwards P, et al. The effectiveness of Mobile-Health Technology-Based Health Behaviour Change or Disease Management Interventions for Health Care consumers: a systematic review. PLoS Med. 2013;10(1):e1001362. https://doi.org/10.1371/journal.pmed.1001362

Michie S, Richardson M, Johnston M, Abraham C, Francis J, Hardeman W, et al. The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an International Consensus for the reporting of Behavior Change interventions. Ann Behav Med. 2013;46(1):81–95. https://doi.org/10.1007/s12160-013-9486-6

Schwarzer R. Modeling Health Behavior Change: how to predict and modify the Adoption and Maintenance of Health Behaviors. Appl Psychol. 2008;57(1):1–29. https://doi.org/10.1111/j.1464-0597.2007.00325.x

Farha RA, Gharaibeh L, Alzoubi KH, Alhamad H. Exploring Community Pharmacists’ Perception and Readiness Toward Telepharmacy Implementation in Jordan: A Cross-Sectional Study. Telemed J E Health. 2023;30(3):816–24. https://doi.org/10.1089/tmj.2023.0264

Elawady A, Khalil A, Assaf O, Toure S, Cassidy C. Telemedicine during COVID-19: a survey of Health Care professionals’ perceptions. Monaldi Arch Chest Dis. 2020;90(4):576–81. https://doi.org/10.4081/monaldi.2020.1528

Zuckerman AD, Whelchel K, Kozlicki M, Simonyan AR, Donovan JL, Gazda NP, et al. Health-system specialty pharmacy role and outcomes: a review of current literature. Am J Health-System Pharm. 2022;79(21):1906–18. https://doi.org/10.1093/ajhp/zxac212

Grime J, Blenkinsopp A, Raynor DK, Pollock K, Knapp P. The role and value of written information for patients about individual medicines: a systematic review. Health Expect. 2007;10(3):286–98. https://doi.org/10.1111/j.1369-7625.2007.00454.x

Barnett N, Jubray B. Remote consultations: how pharmacy teams can practise them successfully. Pharm J. 2020.

Sholikhah R, Krisnawati M, Sudiyono. Effectiveness of the Use of Interactive Video Learning Media in Fashion Technology courses. Adv Social Sci Educ Humanit Res. 2019;379:172–6. https://doi.org/10.2991/assehr.k.191217.029

Vakilian A, Ranjbar E, Hassanipour M, Ahmadinia H, Hasani H. The effectiveness of virtual interactive video in comparison with online classroom in the stroke topic of theoretical neurology in COVID-19 pandemic. J Educ Health Promot. 2022;11(1):219. https://doi.org/10.4103/jehp.jehp_1297_21

Tractinsky N. Aesthetics and apparent usability: empirically assessing cultural and methodological issues. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems; 1997. https://doi.org/10.1145/258549.258626

Crane D, Garnett C, Brown J, West R, Michie S. Factors influencing usability of a smartphone app to reduce excessive alcohol consumption: think aloud and interview studies. Front Public Health. 2017;5. https://doi.org/10.3389/fpubh.2017.00039

Su MC, Chen WC, Liu CY, Jou HJ, Hsiao YC, Tsao LI. The design requirements for an E-Health Management platform: addressing the needs of adolescent girls at high risk of metabolic syndrome. Hu Li Za Zhi. 2015;62(5):51–60. https://doi.org/10.6224/jn62.5.51

Peters D, Deady M, Glozier N, Harvey S, Calvo RA. Worker Preferences for a Mental Health App within male-dominated industries: participatory study. JMIR Ment Health. 2018;5(2):e30. https://doi.org/10.2196/mental.8999

Johnson CM, Johnson TR, Zhang J. A user-centered framework for redesigning health care interfaces. J Biomed Inform. 2005;38(1):75–87. https://doi.org/10.1016/j.jbi.2004.11.005

Rezaee R, Asadi S, Yazdani A, Rezvani A, Kazeroon AM. Development, usability and quality evaluation of the resilient mobile application for women with breast cancer. Health Sci Rep. 2022;5(4). https://doi.org/10.1002/hsr2.708

Padrini-Andrade L, Balda RCX, Areco KCN, Bandiera-Paiva P, Nunes MV, Marba STM, et al. Evaluation of usability of a neonatal health information system according to the user’s perception. Revista Paulista de Pediatria. 2019;37(1):90–6. https://doi.org/10.1590/1984-0462/;2019;37;1;00019

Fitria N, Idrus L, Putri AR, Sari YO. The usability testing of the integrated electronic healthcare services for diabetes mellitus patients during the pandemic in Indonesia. Digit Health. 2023;9:205520762311732. https://doi.org/10.1177/20552076231173227

Mkpojiogu EOC, Hashim NL, Adamu R. Observed Demographic Differentials in User Perceived Satisfaction on the Usability of Mobile Banking Applications. Knowledge Management International Conference (KMICe). 2016;263–8.

Handayani PW, Indriani R, Pinem AA. Mobile health readiness factors: from the perspectives of mobile health users in Indonesia. Inf Med Unlocked. 2021;24:100590. https://doi.org/10.1016/j.imu.2021.100590

Schillinger D. Association of Health Literacy with Diabetes Outcomes. JAMA. 2002;288(4):475. https://doi.org/10.1001/jama.288.4.475

Soemitro DH. Analisis Tingkat Health Literacy Dan Pengetahuan Pasien Hipertensi Di Puskesmas Kabupaten Malang. Calyptra: Jurnal Ilmiah Mahasiswa Universitas Surabaya. 2014;3(1):1–13.

Immanuel SS. Usability Testing Pada Aplikasi Klikdokter Mobile Berdasarkan ISO 9241-11. Universitas Diponegoro; 2023.

Horvath KJ, Ecklund AM, Hunt SL, Nelson TF, Toomey TL. Developing Internet-based health interventions: a guide for public health researchers and practitioners. J Med Internet Res. 2015;17(1):e28. https://doi.org/10.2196/jmir.3770

Yap YY, Tan SH, Choon SW. Elderly’s intention to use technologies: A systematic literature review. Heliyon. 2022;8(1):e08765. https://doi.org/10.1016/j.heliyon.2022.e08765

Rochmawati E, Kamilah F, Iskandar AC. Acceptance of e-health technology among older people: A qualitative study. Nurs Health Sci. 2022;24(2):437–46. https://doi.org/10.1111/nhs.12939

Richter Lagha R, Burningham Z, Sauer BC, Leng J, Peters C, Huynh T et al. Usability Testing a Potentially Inappropriate Medication Dashboard: A Core Component of the Dashboard Development Process. Appl Clin Inform. 2020;11(4):528–34. https://doi.org/10.1055/s-0040-1714693

Cho H, Powell D, Pichon A, Kuhns LM, Garofalo R, Schnall R. Eye-tracking retrospective think-aloud as a novel approach for a usability evaluation. Int J Med Inf. 2019;129:366–73. https://doi.org/10.1016/j.ijmedinf.2019.07.010

Wahyuningrum T, Kartiko C, Wardhana AC. Exploring e-Commerce Usability by Heuristic Evaluation as a Complement of System Usability Scale. In: 2020 International Conference on Advancement in Data Science, E-learning and Information Systems (ICADEIS). 2020. pp. 1–5.



The authors are grateful to all respondents for their efforts and contributions.

Open access funding provided by University of Padjadjaran. Financial assistance was received through a grant-in-aid from The Ministry of Education, Culture, Research, and Technology of Indonesia. This funding body did not play any role in the design, writing, and publication of this study.


Author information

Authors and affiliations

Department of Pharmacology and Clinical Pharmacy, Faculty of Pharmacy, Universitas Padjadjaran, Jatinangor, Indonesia

Sofa D. Alfian, Jihan A. Sania, Dzulfah Q. Aini, Qisty A. Khoiry, Yudisia Ausi, Neily Zakiyah, Irma M. Puspitasari, Auliya A. Suwantika & Rizky Abdulah

Drug Utilization and Pharmacoepidemiology Research Group, Center of Excellence for Pharmaceutical Care Innovation, Universitas Padjadjaran, Jatinangor, Indonesia

Sofa D. Alfian, Meliana Griselda, Yudisia Ausi, Neily Zakiyah, Irma M. Puspitasari, Auliya A. Suwantika & Rizky Abdulah

Center for Health Technology Assessment, Universitas Padjadjaran, Jatinangor, Indonesia

Sofa D. Alfian & Auliya A. Suwantika

Dienggo Kreasi Nusantara Company, Jakarta, Indonesia

Mariska Mahfud & Saktian Aji

Department of Nursing, School of Health Sciences, Cyprus University of Technology, Limassol, Cyprus

Angelos P. Kassianos



SDA, QAK, YA, NZ, IMP, AAS, MM, SA, and RA were involved in protocol development; SDA and QAK were responsible for gaining ethical approval; SDA, QAK, JAS, DQA, MG, and APK were involved in patient recruitment and data analysis. SDA wrote the first draft of the manuscript. All authors reviewed and edited the manuscript and approved the final version.

Corresponding author

Correspondence to Sofa D. Alfian .

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Research Ethics Commission of Universitas Padjadjaran, Indonesia (number 670/UN6.KEP/EC/2022) and all participants provided written informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Alfian, S.D., Sania, J.A., Aini, D.Q. et al. Evaluation of usability and user feedback to guide telepharmacy application development in Indonesia: a mixed-methods study. BMC Med Inform Decis Mak 24 , 130 (2024). https://doi.org/10.1186/s12911-024-02494-3


Received : 28 November 2023

Accepted : 27 March 2024

Published : 21 May 2024

DOI : https://doi.org/10.1186/s12911-024-02494-3


  • Telemedicine
  • User-centered design
  • Mobile application

BMC Medical Informatics and Decision Making

ISSN: 1472-6947


JMIR Med Educ. 2022;8(2)

Usability Methods and Attributes Reported in Usability Studies of Mobile Apps for Health Care Education: Scoping Review

Susanne Grødem Johnson

1 Faculty of Health and Function, Western Norway University of Applied Sciences, Bergen, Norway

Thomas Potrebny

Lillebeth Larun

2 Division of Health Services, Norwegian Institute of Public Health, Oslo, Norway

Donna Ciliska

3 Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada

Nina Rydland Olsen

Associated data

PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist for reporting scoping reviews.

The search strategies for the 10 databases.

Data extraction sheet.

Mobile devices can provide extendable learning environments in higher education and motivate students to engage in adaptive and collaborative learning. Developers must design mobile apps that are practical, effective, and easy to use, and usability testing is essential for understanding how mobile apps meet users’ needs. No previous reviews have investigated the usability of mobile apps developed for health care education.

The aim of this scoping review is to identify usability methods and attributes in usability studies of mobile apps for health care education.

A comprehensive search was carried out in 10 databases, reference lists, and gray literature. Studies were included if they dealt with health care students and usability of mobile apps for learning. Frequencies and percentages were used to present the nominal data, together with tables and graphical illustrations. Examples include a figure of the study selection process, an illustration of the frequency of inquiry usability evaluation and data collection methods, and an overview of the distribution of the identified usability attributes. We followed the Arksey and O’Malley framework for scoping reviews.

Our scoping review collated 88 articles involving 98 studies, mainly related to medical and nursing students. The studies were conducted in 22 countries and published between 2008 and 2021. Field testing was the main usability experiment used, and the usability evaluation methods were either inquiry-based or based on user testing. Inquiry methods were predominantly used: 1-group design (46/98, 47%), control group design (12/98, 12%), randomized controlled trials (12/98, 12%), mixed methods (12/98, 12%), and qualitative methods (11/98, 11%). The user testing methods applied were all think-aloud (5/98, 5%). A total of 17 usability attributes were identified; of these, satisfaction, usefulness, ease of use, learning performance, and learnability were reported most frequently. The most frequently used data collection method was questionnaires (83/98, 85%), but only 19% (19/98) of studies used a psychometrically tested usability questionnaire. Other data collection methods included focus group interviews, knowledge and task performance testing, and user data collected from apps, interviews, written qualitative reflections, and observations. Most of the included studies used more than one data collection method.


Experimental designs were the most commonly used methods for evaluating usability, and most studies used field testing. Questionnaires were frequently used for data collection, although few studies used psychometrically tested questionnaires. The usability attributes identified most often were satisfaction, usefulness, and ease of use. The results indicate that combining different usability evaluation methods, incorporating both subjective and objective usability measures, and specifying which usability attributes to test seem advantageous. The results can support the planning and conduct of future usability studies for the advancement of mobile learning apps in health care education.



Mobile devices can provide extendable learning environments and motivate students to engage in adaptive and collaborative learning [ 1 , 2 ]. Mobile devices offer various functions, enable convenient access, and support the ability to share information with other learners and teachers [ 3 ]. Most students own a mobile phone, which makes mobile learning easily accessible [ 4 ]. However, there are some challenges associated with mobile devices in learning situations, such as small screen sizes, connectivity problems, and multiple distractions in the environment [ 5 ].

Developers of mobile learning apps need to consider usability to ensure that apps are practical, effective, and easy to use [ 1 ] and to ascertain that mobile apps meet users’ needs [ 6 ]. According to the International Organization for Standardization, usability is defined as “the extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use” [ 7 ]. Better mobile learning usability will be achieved by focusing on user-centered design and attention to context, ensuring that the technology corresponds to the user’s requirements and putting the user at the center of the process [ 8 , 9 ]. In addition, it is necessary to be conscious of the interrelatedness between usability and pedagogical design [ 9 ].

A variety of usability evaluation methods exists to test the usability of mobile apps; Weichbroth [ 10 ] categorized them into 4 groups: inquiry, user testing, inspection, and analytical modeling. Inquiry methods are designed to gather data from users through questionnaires (quantitative data) and through interviews and focus groups (qualitative data). User testing methods include think-aloud protocols, question-asking protocols, performance measurements, log analysis, eye tracking, and remote testing. Inspection methods, in contrast, involve experts evaluating apps through heuristic evaluation, cognitive walk-throughs, perspective-based inspections, and guideline reviews. Analytical modeling methods include cognitive task analysis and task environment analysis [ 10 ]. Across these 4 categories, the most commonly used data collection methods are controlled observations and surveys, whereas eye tracking, think-aloud methods, and interviews are applied less often [ 10 ].

Usability evaluations are normally performed in a laboratory or in the field. Previous reviews have reported that usability evaluation methods are mainly applied in a laboratory, that is, in a controlled environment [ 1 , 11 ]. By contrast, field testing is conducted in real-life settings. There are pros and cons to the 2 approaches: field testing allows data collection within a dynamic environment, whereas in a laboratory the data collection and conditions are easier to control [ 1 ]. A variety of data collection methods are appropriate for usability studies; in laboratories, for instance, participants often perform predefined tasks while questionnaires and observations are applied [ 1 ]. In field testing, logging mechanisms and diaries have been applied to capture user interaction with mobile apps [ 1 ].

In all, 2 systematic reviews examined various psychometrically tested usability questionnaires as a means of enhancing the usability of apps. Sousa and Lopez [ 12 ] identified 15 such questionnaires and Sure [ 13 ] identified 13. In all, 5 of the questionnaires have proven to be applicable in usability studies in general: the System Usability Scale (SUS), Questionnaire for User Interaction Satisfaction, After-Scenario Questionnaire, Post-Study System Usability Questionnaire, and Computer System Usability Questionnaire [ 12 ]. The SUS questionnaire and After-Scenario Questionnaire are most widely applied [ 13 ]. The most frequently reported usability attributes of these 5 questionnaires are learnability, efficiency, and satisfaction [ 12 ].

Usability attributes are features that measure the quality of mobile apps [ 1 ]. The most commonly reported usability attributes are effectiveness, efficiency, and satisfaction [ 5 ], which are part of the usability definition [ 7 ]. In the review by Weichbroth [ 10 ], 75 different usability attributes were identified. Given the wide selection of usability attributes, choosing appropriate attributes depends on the nature of the technology and the research question in the usability study [ 14 ]. Kumar and Mohite [ 1 ] recommended that researchers present and explain which usability attributes are being tested when mobile apps are being developed.

Previous reviews have examined the usability of mobile apps in general [ 5 , 10 , 11 , 14 , 15 ], but only one systematic review has specifically explored the usability of mobile learning apps [ 1 ], and it did not include studies from health care education. Similarly, usability has not been widely explored in medical education apps [ 16 ]. Thus, there is a need to develop a better understanding of how the usability of mobile learning apps developed for health care education has been evaluated and conceptualized in previous studies.

The aim of this scoping review has therefore been to identify usability methods and attributes in usability studies of mobile apps for health care education.

We have used the framework for scoping reviews developed by Arksey and O'Malley [ 17 ] and further developed by Levac et al [ 18 ] and Khalil et al [ 19 ]. We adopted the following five stages of this framework: (1) identifying the research question, (2) identifying relevant studies, (3) selecting studies, (4) charting the data, and (5) summarizing and reporting the results [ 17 - 19 ]. A detailed presentation of each step can be found in the published protocol for this scoping review [ 20 ]. We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist for reporting scoping reviews ( Multimedia Appendix 1 [ 21 ]).

Stage 1: Identifying the Research Question

The following two research questions were formulated:

  • Which usability methods are used to evaluate the usability of mobile apps for health care education?
  • Which usability attributes are reported in the usability studies of mobile apps for health care education?

Stage 2: Identifying Relevant Studies

A total of 10 electronic databases covering technology, education, and health care were searched for the period January 2008 to October 2021, with an updated search in February 2022. These databases were Engineering Village, Scopus, ACM Digital Library, IEEE Xplore, Education Resource Information Center, PsycINFO, CINAHL, MEDLINE, EMBASE, and Web of Science. The search string was developed by the first author and a research librarian and then peer reviewed by another research librarian. The search terms used in Web of Science, in addition to all relevant subject headings, included ((student* or graduate* or undergraduate* or postgraduate*) NEAR/3 nurs*) . This clause was repeated for other types of students and combined with the Boolean operator OR. The search string for all types of health care students was then combined with various search terms for mobile apps and mobile learning using the Boolean operator AND. Similar search strategies were adapted for all 10 databases, as shown in Multimedia Appendix 2 . In addition, a citation search in Google Scholar, screening of the reference lists of included studies, and a search for gray literature in OpenGrey were conducted.
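The Boolean logic described above can be illustrated with a short sketch. The discipline terms and app-related terms below are illustrative placeholders, not the full peer-reviewed search strategy; only the nursing clause is quoted from the text:

```python
# Sketch of how the Boolean search string described above could be
# assembled programmatically. The discipline and concept terms are
# illustrative examples, not the review's complete search strategy.

student_types = ["nurs*", "medic*", "pharmac*", "dentist*"]  # illustrative
learner_terms = "(student* or graduate* or undergraduate* or postgraduate*)"

# One proximity clause per discipline, eg, the nursing clause from the text
proximity_clauses = [f"({learner_terms} NEAR/3 {d})" for d in student_types]

# Combine the discipline clauses with OR, then AND with the mobile-learning
# concept block, mirroring the strategy described above
population_block = " OR ".join(proximity_clauses)
concept_block = '("mobile app*" OR "mobile learning" OR "m-learning")'  # illustrative

search_string = f"({population_block}) AND {concept_block}"
print(search_string)
```

The exact proximity and truncation syntax (NEAR/3, *) shown here is Web of Science style and would need adapting per database, as the text notes.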

Stage 3: Selecting Studies

Two of the authors independently screened titles and abstracts using Rayyan web-based management software [ 22 ]. Studies deemed eligible by one of the authors were included for full-text screening and imported into the EndNote X9 (Clarivate) reference management system [ 23 ]. Eligibility for full-text screening was determined independently by two of the authors and disagreements were resolved by consensus-based discussions. Research articles with different designs were included, and there were no language restrictions. As mobile apps started appearing in 2008, this year was set as the starting point for the search. Eligibility criteria are presented in Table 1 .

Study eligibility.

Stage 4: Charting the Data (Data Abstraction)

The extracted data included information about the study (eg, authors, year of publication, title, and country), population (eg, number of participants), concepts (usability methods, usability attributes, and usability phase), and context (educational setting). The final data extraction sheet can be found in Multimedia Appendix 3 [ 24 - 111 ]. One review author extracted the data from the included studies using Microsoft Excel software [ 21 ], which was checked by another researcher.

Descriptions of usability attributes have not been standardized, making categorization challenging. Therefore, a review author used deductive analysis to interpret the usability attributes reported in the included studies. This interpretation was based on a review of usability attributes as defined in previous literature. These definitions were assessed on the basis of the results of the included studies. This analysis was reviewed and discussed by another author. Disagreements were resolved through a consensus-based discussion.

Stage 5: Summarizing and Reporting the Results

Frequencies and percentages were used to present nominal data, together with tables and graphical illustrations. For instance, a figure showing the study selection process, an illustration of the frequency of inquiry-based usability evaluation and data collection methods, and an overview of the distribution of identified usability attributes were provided.
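As a minimal sketch of this kind of summary, frequencies and percentages for nominal data can be computed as follows. The category labels mirror the evaluation methods reported in the Results, but the input list is constructed here purely for illustration:

```python
from collections import Counter

# Sketch of the frequency/percentage summaries used to report nominal data.
# The list below reconstructs the method counts reported in the Results
# (n=98 studies) for illustration; real input would be one code per study.

methods = (["1-group design"] * 46 + ["control group design"] * 12 +
           ["randomized controlled trial"] * 12 + ["mixed methods"] * 12 +
           ["qualitative methods"] * 11 + ["think aloud"] * 5)

counts = Counter(methods)
n = len(methods)
for method, freq in counts.most_common():
    print(f"{method}: {freq}/{n} ({round(100 * freq / n)}%)")
```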

Eligible Studies

Database searches yielded 34,369 records, and 2796 records were identified using other methods. After removing duplicates, 28,702 records remained. A total of 626 reports were examined in full text. In all, 88 articles were included in the scoping review [ 24 - 111 ] ( Figure 1 ). A total of 8 articles comprised results from several studies in the same article, presented as study A, study B, or study C in Multimedia Appendix 3 . Therefore, a total of 98 studies were reported in the 88 articles included.
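As a quick consistency check on these figures, the number of duplicates removed can be derived, assuming the 28,702 figure covers records from both sources:

```python
# Consistency check on the study selection counts reported above.
# The duplicate count is derived, not stated in the text, and assumes
# the post-deduplication figure covers both record sources.

identified_db = 34_369      # records from database searches
identified_other = 2_796    # records identified using other methods
after_dedup = 28_702        # records remaining after duplicate removal

duplicates_removed = identified_db + identified_other - after_dedup
print(duplicates_removed)
```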


PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart of study selection process.

The included studies comprised a total sample population of 7790, with participant numbers ranging from 5 to 736 per study. Most of the studies included medical students (34/88, 39%) or nursing students (25/88, 28%). Other participants included students from the following disciplines: pharmacy (9/88, 10%), dentistry (5/88, 6%), physiotherapy (5/88, 6%), health sciences (3/88, 3%), and psychology (2/88, 2%). Further information is provided in Multimedia Appendix 3 . Studies were published from 22 countries, with most being from the United States (22/88, 25%), Spain (9/88, 10%), the United Kingdom (8/88, 9%), Canada (7/88, 8%), and Brazil (7/88, 8%), and with an increasing number of publications from 2014. Table 2 provides an overview and characteristics of the included articles.

Characteristics of included articles.

a Performances measured, comparing paper and app results, quiz results, and exam results.

b Reported use of validated questionnaires.

Usability Evaluation Methods

The usability evaluation methods found were either inquiry-based or based on user testing. The following inquiry methods were used: 1-group design (46/98, 47%), control group design (12/98, 12%), randomized controlled trials (12/98, 12%), mixed methods (12/98, 12%), and qualitative methods (11/98, 11%). Several studies that applied inquiry-based methods used more than one data collection method, with questionnaires being used most often (80/98, 82%), followed by task and knowledge performance testing (17/98, 17%), focus groups (15/98, 15%), collection of user data from the app (10/98, 10%), interviews (5/98, 5%), written qualitative reflections (4/98, 4%), and observations (3/98, 3%). Additional information can be found in the data extraction sheet ( Multimedia Appendix 3 ). Figure 2 illustrates the frequency of the inquiry-based usability evaluation methods and data collection methods.


Inquiry usability evaluation methods and data collection methods.

The only user testing methods found were think-aloud methods (5/98, 5%), and 4 (80%) of these studies applied more than one data collection method. The data collection methods used included interviews (4/98, 4%), questionnaires (3/98, 3%), task and knowledge performance (3/98, 3%), focus groups (1/98, 1%), and collection of user data from the app (1/98, 1%).

A total of 19 studies used a psychometrically tested usability questionnaire, including the SUS, Technology Acceptance Model, Technology Satisfaction Questionnaire, and Technology Readiness Index. SUS [ 112 ] was used in most (9/98, 9%) of the studies.
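The SUS [ 112 ], the most widely applied of these questionnaires, yields a 0-100 score from ten 5-point items using a standard scoring rule; a minimal sketch:

```python
def sus_score(responses):
    """Compute a System Usability Scale (SUS) score from ten item
    responses, each on a 1-5 scale (1 = strongly disagree).

    Standard SUS scoring: odd-numbered (positively worded) items
    contribute (response - 1); even-numbered (negatively worded) items
    contribute (5 - response); the summed contributions are multiplied
    by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# All-neutral responses (3 on every item) yield the midpoint score
print(sus_score([3] * 10))  # 50.0
```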

Field testing was the most frequent type of usability experiment, accounting for 72% (71/98) of usability experiments. A total of 22 (22%) studies performed laboratory testing, and 5 (5%) studies did not indicate the type of experiment performed. Multimedia Appendix 3 provides an overview of the type of experiment conducted in each study. The usability testing of the mobile apps took place in a classroom setting (41/98, 42%), in clinical placement (29/98, 30%), during simulation training (14/98, 14%), other (7/98, 7%), or the setting was not specified (5/98, 5%).

Usability Attributes

A total of 17 usability attributes have been identified among the included studies. The most frequently identified attributes were satisfaction, usefulness, ease of use, learning performance, and learnability. The least frequent were errors, cognitive load, comprehensibility, memorability, and simplicity. Table 3 provides an overview of the usability attributes identified in the included studies.

Distribution of usability attributes (n=17) and affiliated reports (N=88).

Principal Findings

This scoping review sought to identify the usability methods and attributes reported in usability studies of mobile apps for health care education. A total of 88 articles, with a total of 98 studies reported in these 88 articles, were included in this review. Our findings indicate a steady increase in publications from 2014, with studies being published in 22 different countries. Field testing was used more frequently than laboratory testing. Furthermore, the usability evaluation methods applied were either inquiry-based or based on user testing. Most of the inquiry-based methods were experiments that used questionnaires as a data collection method, and all of the studies with user testing methods applied think-aloud methods. Satisfaction, usefulness, ease of use, learning performance, and learnability were the most frequently identified usability attributes.

Comparison With Prior Work

The studies included in this scoping review mainly applied inquiry-based methods, primarily the collection of self-reported data through questionnaires. This is congruent with the results of Weichbroth [ 10 ], in which controlled observations and surveys were the most frequently applied methods. Asking users to respond to a usability questionnaire may provide relevant and valuable information. Among the 83 studies that used questionnaires in our review, only 19 (23%) used a psychometrically tested usability questionnaire; of these, the SUS questionnaire [ 112 ] was used most frequently. In line with the review on usability questionnaires [ 12 ], we recommend using a psychometrically tested usability questionnaire to support the advancement of usability science. As questionnaires address only certain usability attributes, mainly learnability, efficiency, and satisfaction [ 12 ], it would be helpful to also include additional methods, such as interviews or mixed methods, and to incorporate additional open-ended questions when using questionnaires.

Furthermore, applying usability evaluation methods other than inquiry methods, such as user testing and inspection methods [ 10 ], could be beneficial and lead to more objective measures of app usability. Typically, subjective data are collected via self-reported questionnaires, whereas objective data are based on measures such as task completion rates [ 40 ]. For example, in one of the included studies, participants reported through subjective measures that the usability of the app was satisfactory, yet they did not actually use the app [ 75 ]. Another study reported a lack of coherence between subjective and objective data; these results indicate the importance of not relying solely on subjective measures of usability [ 40 ]. Therefore, it is suggested that various usability evaluation methods, including both subjective and objective usability measures, be used in future usability studies.

Our review found that most of the included studies in health care education (71/98, 72%) performed field testing, whereas previous literature suggests that usability experiments in other fields are more often conducted in a laboratory [ 1 , 113 ]. For instance, Kumar and Mohite [ 1 ] found that 73% of the studies included in their review of mobile learning apps used laboratory testing. Mobile apps in health care education have been developed to support students’ learning, on-campus and during clinical placement, in various settings and on the move. Accordingly, it is especially important to test how the apps are perceived in specific environments [ 5 ]; hence, field testing is required. However, many usability issues can be discovered in a laboratory. Particularly in the early phases of app development, testing an app with several participants in a laboratory may make it more feasible to test and improve the app [ 8 ]. Usability testing in a laboratory can provide rapid feedback on usability issues, which can then be addressed before testing the app in a real-world environment. Therefore, it may be beneficial to conduct small-scale laboratory testing before field testing.

Previous systematic reviews of mobile apps in general identified satisfaction, efficiency, and effectiveness as the most common usability attributes [ 5 , 10 ]. In this review, efficiency and effectiveness were explored to a limited extent, whereas satisfaction, usefulness, and ease of use were the most frequently identified usability attributes. Our results coincide with those from a previous review on the usability of mobile learning apps [ 1 ], possibly because satisfaction, usefulness, and ease of use are usability attributes of particular importance when examining mobile learning apps.

Learning performance was assessed frequently in the included studies. To ensure that apps are valuable in a given learning context, it is also relevant to test additional usability attributes such as cognitive load [ 9 ]. However, few studies included in our review examined cognitive load [ 68 , 80 , 108 ]. Mobile apps are often used in environments with multiple distractions, which may contribute to an increased cognitive load [ 5 ] and thereby affect learning performance. Testing both learning performance and app users’ cognitive load may improve the understanding of an app’s usability.

We found that several of the included studies did not use terminology from usability literature to describe which usability attributes they were testing. For instance, studies that tested satisfaction often used words such as “likes and dislikes” and “recommend use to others” and did not specify that they tested the usability attribute satisfaction. Specifying which usability attributes are investigated will be important when performing a usability study of mobile apps, as this will influence transparency and enable comparison between different studies. In addition, evaluating a wider range of usability attributes may enable researchers to expand their perspective regarding the app’s usability problems and ensure quicker improvement of the app. Defining and presenting different usability attributes in a reporting guideline can assist in deciding on and reporting relevant usability attributes. As such, a reporting guideline would be beneficial for researchers planning and conducting usability studies, a point that is also supported by the systematic review conducted by Kumar and Mohite [ 1 ].

Future Directions

Combining different usability evaluation methods that incorporate both subjective and objective usability measures can add varied and important perspectives when developing apps. In future studies, it would be advantageous to use psychometrically tested usability questionnaires to support the advancement of usability science. In addition, developers of mobile apps should determine which usability attributes are relevant before conducting usability studies (eg, by registering a protocol). Incorporating these perspectives into the development of a reporting guideline would be beneficial for future usability studies.

Strengths and Limitations

First, the search strategy was designed in collaboration with a research librarian and peer reviewed by another research librarian and included 10 databases and other sources. This broad search strategy resulted in a high number of references, which may be associated with a lower level of precision. To ensure the retrieval of all potentially pertinent articles, two of the authors independently screened titles and abstracts; studies deemed eligible by one of the authors were included for full-text screening.

Second, the full-text evaluation was challenging because the term usability has multiple meanings that do not always relate to usability testing. For instance, the term was used when testing students’ experience of a commercially developed app but not in connection with the app’s further development. In addition, many studies did not explicitly state that a mobile app was being investigated, which also created a challenge when deciding whether they satisfied the eligibility criteria. Nevertheless, reading the full-text articles independently by 2 reviewers and solving disagreements through consensus-based discussions ensured the inclusion of relevant articles.

This scoping review was performed to provide an overview of the usability methods used and the attributes identified in usability studies of mobile apps in health care education. Experimental designs were the most commonly used methods for evaluating usability, and most studies used field testing. Questionnaires were frequently used for data collection, although few studies used psychometrically tested questionnaires. The usability attributes identified most often were satisfaction, usefulness, and ease of use. The results indicate that combining different usability evaluation methods, incorporating both subjective and objective usability measures, and specifying which usability attributes to test seem advantageous. The results can support the planning and conduct of future usability studies for the advancement of mobile learning apps in health care education.


The research library at Western Norway University of Applied Sciences provided valuable assistance in developing and performing the search strategy for this scoping review. Gunhild Austrheim, a research librarian, provided substantial guidance in the planning and performance of the database searches. Marianne Nesbjørg Tvedt peer reviewed the search string. Malik Beglerovic also assisted with database searches. The authors would also like to thank Ane Kjellaug Brekke Gjerland for assessing the data extraction sheet.


Multimedia Appendix 1
Multimedia Appendix 2
Multimedia Appendix 3

Authors' Contributions: SGJ, LL, DC, and NRO proposed the idea for this review. SGJ, DC, and NRO contributed to the screening of titles and abstracts, and SGJ and TP decided on eligibility based on full-text examinations. SGJ extracted data from the included studies. SGJ, TP, LL, DC, and NRO contributed to the drafts of the manuscript, and all authors approved the final version for publication.

Conflicts of Interest: None declared.


  1. Usability evaluation methods a literature review.pdf

    literature review about usability evaluation methods

  2. (PDF) Usability Evaluation Methods: How Usable Are They?

    literature review about usability evaluation methods

  3. Experimental Comparisons of Usability Evaluation Methods (ebook

    literature review about usability evaluation methods

  4. Usability Evaluation Methods

    literature review about usability evaluation methods

  5. (PDF) Usability Methods and Evaluation Criteria for Published Clinical

    literature review about usability evaluation methods

  6. (PDF) Usability evaluation methods in practice: Understanding the

    literature review about usability evaluation methods


  1. Interview Session on the topic Usability Evaluation of Daily Mail Australia

  2. HCI 460 Usability Evaluation Methods Test Interview

  3. Customer review, Usability Evaluation, Feedback Rating system. Motion graphics

  4. Usability evaluation(Questionnaires) with participants

  5. Measuring Your User Experience Design

  6. Usability Evaluation on SHEIN Website. Post Question Interview. Participant no. 4


  1. Usability evaluation methods: a literature review

    Usability is defined as 'the ease with whic h a user can learn to operate, prepare inputs for, and. interpret outputs of a system or compone nt' (IEEE Std. 1061, 1992). Usability correlates ...

  2. Potential effectiveness and efficiency issues in usability evaluation

    A systematic literature review of usability evaluation studies, published by (academic) practitioners between 2016 and April 2023, was conducted. 610 primary articles were identified and analysed, utilising five major scientific databases. ... Usability evaluation methods like the traditional heuristic evaluation method, often used for general ...

  3. A Review of Usability Evaluation Methods and their Use for Testing

    Conclusions: In summary, this paper provides a review of the usability evaluation methods employed in the assessment of eHealth HIV eHealth interventions. eHealth is a growing platform for delivery of HIV interventions and there is a need to critically evaluate the usability of these tools before deployment.


    Various methods are available in the literature for usability evaluation like Inspection, DRUM, QUIS, SUMI MUSIC, Empirical testing. 4.1Inspection. This method is proposed by Boehm et al.(1976 ...

  5. Agile, Easily Applicable, and Useful eHealth Usability Evaluations

    Background Electronic health (eHealth) usability evaluations of rapidly developed eHealth systems are difficult to accomplish because traditional usability evaluation methods require substantial time in preparation and implementation. This illustrates the growing need for fast, flexible, and cost-effective methods to evaluate the usability of eHealth systems.

  6. [PDF] A literature review about usability evaluation methods for e

    This review is a synthesis of research project about Information Ergonomics and embraces three dimensions, namely the methods, models and frameworks that have been applied to evaluate LMS and shows a notorious change in the paradigms of usability. The usability analysis of information systems has been the target of several research studies over the past thirty years.

  7. A Systematic Literature Review of Usability Evaluation ...

    The existing usability evaluation methods seems not to concern all the aspects about a mobile educational game. ... Lin Gao, X.W., Murillo, B., Paz, F. (2019). A Systematic Literature Review of Usability Evaluation Guidelines on Mobile Educational Games for Primary School Students. In: Marcus, A., Wang, W. (eds) Design, User Experience, and ...

  8. A systematic literature review of mobile application usability

    A Data Extraction Form (see Appendix A) was used to manage the review results, including (1) author(s) of the paper; (2) year of publication; (3) suggested usability evaluation methods, approaches, and models; (4) category of the mobile app; (5) usability attributes and features used for mobile app design and evaluation; and (6) usability ...

  9. Usability: An introduction to and literature review of usability

    Various testing methods were used, including questionnaires, think aloud studies and heuristic evaluation. Usability testing comprised a range of single cycle through to several rounds of testing. ... Methods. A literature review was carried out to assess the reported use of usability testing in the radiation oncology education literature.

  10. Usability: An introduction to and literature review of usability

    Evaluation Method Description Benefits (+)/Limitations (-) Example Study; Direct observation - live or recorded evaluation: Heuristic evaluation [19]: Usability experts examine an interface against a set of pre-defined characteristics - "heuristics" - such as simple language, consistency and shortcuts in order to identify usability flaws and severity

  11. A literature review about usability evaluation methods for e-learning

    Within the domain of information ergonomics, the study of tools and methods used for usability evaluation dedicated to E-learning presents evidence that there is a continuous and dynamic evolution of E-learning systems, in many different contexts -academics and corporative. These systems, also known as LMS (Learning Management Systems), can be ...

  12. IEA 2012

    A literature review about usability evaluation methods for e-learning platforms Freire, Luciana Lopesa,, Arezes, Pedro Miguelb and Campos, José Creissacc abDeparment of Production and Systems Engineering - University of Minho - Guimarães, Portugal - c Deparment of Computer Science - Gualtar - Un iversity of Minho - Gualtar, Portugal -

  13. Usability and User Experience: Design and Evaluation

    It reviews the major methods of usability assessment, focusing on usability testing. The concept of UX casts a broad net over all of the experiential aspects of use, primarily subjective experience. User-centered design and design thinking are methods used to produce initial designs, after which they typically use iteration for design improvement.

  14. A Systematic Literature Review of Usability Evaluation Guidelines on

    However, despite the fact that a lot of usability evaluation methods exist, most of them are focused on traditional computer usage and those are not 100% compatible with mobile phone usage. Therefore, a systematic literature review was conducted in order to identify usability evaluation guidelines for mobile educational games, which are ...

  15. Evaluation of Usability and Accessibility of Mobile Application for

    Usability evaluation is the evaluation of the product or system context of use, which is determined by the users, environment, tasks, and equipment. As the field of usability evaluation research has evolved, researchers have developed a variety of ways to apply the evaluation of usability methods. This systematic review aims to identify topics, trends, categories, methods and to answer ...

  16. Usability Evaluation of Dashboards: A Systematic Literature Review of

    The exclusion criteria were as follows: (1) non-English studies, (2) focusing on only dashboard design or dashboard evaluation, (3) use of evaluation methods other than questionnaires to evaluate usability, and (4) lack of access to the full text of articles. 2.3. Study Selection, Article Evaluation, and Data Extraction.

  17. Development of Usability Guidelines: A Systematic Literature Review

    The development of usability guidelines was undertaken in many different domains, where the medical domain recorded the highest number of activities. Literature survey is the dominant data collection technique for developing usability guidelines, while usability evaluation is the most common technique for validating newly developed guidelines.

  18. A Review of Usability Evaluation Methods and Their Use for Testing

    Usability evaluation methods included eye-tracking, questionnaires, semi-structured interviews, contextual interviews, think-aloud protocols, cognitive walkthroughs, heuristic evaluations and expert reviews, focus groups, and scenarios. A wide variety of methods is available to evaluate the usability of eHealth interventions.

  19. A literature review about usability evaluation methods for e-learning

    This review is a synthesis of a research project on Information Ergonomics and embraces three dimensions, namely the methods, models and frameworks that have been applied to evaluate LMS. The study also includes the main usability criteria and heuristics used. The obtained results show a notable change in the paradigms of usability, with ...

  20. Sustainability and Usability Evaluation of E-Commerce Portals

    The database utilised by this benchmark was derived from a usability analysis of the top 60 B2C retail e-commerce websites. Every assessment considers 702 distinct UX components. For each web portal, a score ranging from 0 to 702 is thus displayed. For this study, these grades have been normalised on a scale of 0 to 10.
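The normalisation described in this entry (a raw score out of 702 UX components rescaled to a 0-10 grade) can be sketched as a simple linear rescaling. This is a minimal illustration assuming plain proportional scaling; the function name and parameters are illustrative, not taken from the study itself.

```python
def normalize_score(raw: int, max_score: int = 702, scale: float = 10.0) -> float:
    """Linearly rescale a raw score in [0, max_score] to a grade in [0, scale]."""
    if not 0 <= raw <= max_score:
        raise ValueError(f"raw score must be in [0, {max_score}]")
    return raw / max_score * scale

# A portal satisfying all 702 components gets 10.0; half of them gets 5.0.
```

Under this assumption, a portal's normalised grade is simply its fraction of satisfied UX components scaled by ten.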

  21. Evaluation of usability and user feedback to guide telepharmacy

    The mixed-methods evaluation conducted in this study and the results represent the ... The method of this study was limited to user-based usability evaluation, focusing solely on the ... Fernández-Alemán JL, Idri A, Toval A. Empirical studies on usability of mHealth apps: a systematic literature review. J Med Syst. 2015;39(2):1-19. ...

  22. Usability Methods and Attributes Reported in Usability Studies of

    Therefore, it is suggested that various usability evaluation methods, including subjective and objective usability measures, are used in future usability studies. Our review found that most of the included studies in health care education (71/98, 72%) performed field testing, whereas previous literature suggests that usability experiments in ...