• Research article
  • Open access
  • Published: 02 December 2020

Integrating students’ perspectives about online learning: a hierarchy of factors

  • Montgomery Van Wart 1,
  • Anna Ni 1,
  • Pamela Medina 1,
  • Jesus Canelon 1,
  • Melika Kordrostami 1,
  • Jing Zhang 1 &

International Journal of Educational Technology in Higher Education, volume 17, Article number: 53 (2020)


Abstract

This article reports on a large-scale (n = 987) exploratory factor analysis study incorporating various concepts identified in the literature as critical success factors for online learning from the students' perspective, and then determines their hierarchical significance. Seven factors (Basic Online Modality, Instructional Support, Teaching Presence, Cognitive Presence, Online Social Comfort, Online Interactive Modality, and Social Presence) were identified as significant and reliable. Regression analysis indicates that the minimal factors for enrollment in future classes, when students consider convenience and scheduling, were Basic Online Modality, Cognitive Presence, and Online Social Comfort. Students who accepted or embraced online courses on their own merits wanted a minimum of Basic Online Modality, Teaching Presence, Cognitive Presence, Online Social Comfort, and Social Presence. Students who preferred face-to-face classes and demanded a comparable experience valued Online Interactive Modality and Instructional Support more highly. Recommendations for online course design, policy, and future research are provided.

Introduction

While the learning process can be viewed from different perspectives, such as learning achievement and faculty perspectives, students' perspectives are especially critical since students are ultimately the raison d'être of the educational endeavor (Chickering & Gamson, 1987). More pragmatically, students' perspectives provide invaluable, first-hand insights into their experiences and expectations (Dawson et al., 2019). The student perspective is especially important when new teaching approaches are used and when new technologies are being introduced (Arthur, 2009; Crews & Butterfield, 2014; Van Wart, Ni, Ready, Shayo, & Court, 2020). With the renewed interest in "active" education in general (Arruabarrena, Sánchez, Blanco, et al., 2019; Kay, MacDonald, & DiGiuseppe, 2019; Nouri, 2016; Vlachopoulos & Makri, 2017) and the flipped classroom approach in particular (Flores, del-Arco, & Silva, 2016; Gong, Yang, & Cai, 2020; Lundin et al., 2018; Maycock, 2019; McGivney-Burelle, 2013; O'Flaherty & Phillips, 2015; Tucker, 2012), along with extraordinary shifts in technology, the student perspective on online education is profoundly important. Students' perceptions of quality integrate their own sense of learning achievement, satisfaction with the support they receive, the technical proficiency of the process, intellectual and emotional stimulation, comfort with the process, and a sense of learning community. The factors that students perceive as constituting quality online teaching, however, have not been as clear as they might be, for at least two reasons.

First, it is important to note that the overall online learning experience for students is also composed of non-teaching factors, which we briefly mention. Three such factors are (1) convenience, (2) learner characteristics and readiness, and (3) antecedent conditions that may foster teaching quality but are not directly responsible for it. (1) Convenience is an enormous non-quality factor for students (Artino, 2010) which has driven up online demand around the world (Fidalgo, Thormann, Kulyk, et al., 2020; Inside Higher Education and Gallup, 2019; Legon & Garrett, 2019; Ortagus, 2017). This is important since satisfaction with online classes is frequently somewhat lower than with face-to-face classes (Macon, 2011). However, the literature generally supports the relative equivalence of face-to-face and online modes regarding learning achievement criteria (Bernard et al., 2004; Nguyen, 2015; Ni, 2013; Sitzmann, Kraiger, Stewart, & Wisher, 2006; see Xu & Jaggars, 2014 for an alternate perspective). These contrasts are exemplified in a recent study of business students, in which online students using a flipped classroom approach outperformed their face-to-face peers but, ironically, rated instructor performance lower (Harjoto, 2017). (2) Learner characteristics, such as self-regulation in an active learning model, comfort with technology, and age, among others, affect both receptiveness to and readiness for online instruction (Alqurashi, 2016; Cohen & Baruth, 2017; Kintu, Zhu, & Kagambe, 2017; Kuo, Walker, Schroder, & Belland, 2013; Ventura & Moscoloni, 2015). (3) Finally, numerous antecedent factors may lead to improved instruction but are not themselves directly perceived by students, such as instructor training (Brinkley-Etzkorn, 2018) and the sources of faculty motivation (e.g., incentives, recognition, social influence, and voluntariness) (Wingo, Ivankova, & Moss, 2017). Important as these factors are, mixing them with perceptions of quality tends to obfuscate the quality factors directly perceived by students.

Second, while student perceptions of quality are used in innumerable studies, our overall understanding still needs to integrate them more holistically. Many studies use student perceptions of the quality and overall effectiveness of individual tools and strategies in online contexts, such as mobile devices (Drew & Mann, 2018), small groups (Choi, Land, & Turgeon, 2005), journals (Nair, Tay, & Koh, 2013), simulations (Vlachopoulos & Makri, 2017), video (Lange & Costley, 2020), etc. Such studies, however, cannot provide the overall context and comparative importance. Some studies have examined the overall learning experience of students with exploratory lists, but have mixed non-quality factors with quality-of-teaching factors, making it difficult to discern the instructor's versus contextual roles in quality (e.g., Asoodar, Vaezi, & Izanloo, 2016; Bollinger & Martindale, 2004; Farrell & Brunton, 2020; Hong, 2002; Song, Singleton, Hill, & Koh, 2004; Sun, Tsai, Finger, Chen, & Yeh, 2008). The application of technology adoption studies also falls into this category by essentially aggregating all teaching quality into the single category of performance (Al-Gahtani, 2016; Artino, 2010). Some studies have used high-level teaching-oriented models, primarily the Community of Inquiry model (le Roux & Nagel, 2018), but empirical support has been mixed (Arbaugh et al., 2008), and its elegance (i.e., relying on only three factors) has not provided much insight to practitioners (Anderson, 2016; Cleveland-Innes & Campbell, 2012).

Research questions

Although the number of empirical studies related to student perceptions of quality factors has increased, the integration of the studies and concepts explored continues to be fragmented and confusing. It is important to have an empirical view of what students value in a single comprehensive study and, also, to know whether there is a hierarchy of factors, ranging from students who are least to most critical of the online learning experience. This research study has two research questions.

The first research question is: What are the significant factors in creating a high-quality online learning experience from students' perspectives? That is important to know because it should have a significant effect on the instructor's design of online classes. The goal of this research question is to identify a more articulated and empirically supported set of factors capturing the full range of student expectations.

The second research question is: Is there a priority or hierarchy of factors related to students' perceptions of online teaching quality that relates to their decisions to enroll in online classes? For example, is it possible to distinguish which factors are critical for enrollment decisions when students are primarily motivated by convenience and scheduling flexibility (a minimum threshold)? Do these factors differ for students with a genuine acceptance of the general quality of online courses (a moderate threshold)? And what factors are important for the students who are the most critical of online course delivery (the highest threshold)?

This article next reviews the literature on online education quality, focusing on the student perspective, and reviews eight factors derived from it. The research methods section discusses the study's structure and methods. Demographic data related to the sample are presented next, followed by the results, discussion, and conclusion.

Literature review

Online education is much discussed (Prinsloo, 2016; Van Wart et al., 2019; Zawacki-Richter & Naidu, 2016), but its perception is substantially influenced by where one stands and what one values (Otter et al., 2013; Tanner, Noser, & Totaro, 2009). Accrediting bodies care about meeting technical standards, proof of effectiveness, and consistency (Grandzol & Grandzol, 2006). Institutions care about reputation, rigor, student satisfaction, and institutional efficiency (Jung, 2011). Faculty care about subject coverage, student participation, faculty satisfaction, and faculty workload (Horvitz, Beach, Anderson, & Xia, 2015; Mansbach & Austin, 2018). For their part, students care about learning achievement (Marks, Sibley, & Arbaugh, 2005; O'Neill & Sai, 2014; Shen, Cho, Tsai, & Marra, 2013), but also view online education as a function of their enjoyment of classes, instructor capability and responsiveness, and comfort in the learning environment (e.g., Asoodar et al., 2016; Sebastianelli, Swift, & Tamimi, 2015). It is this last perspective, that of students, upon which we focus.

It is important to note that students do not sign up for online classes based solely on perceived quality. Perceptions of quality derive from notions of the capacity of online learning when ideal, relative to both learning achievement and satisfaction/enjoyment, and from perceptions about the likelihood of classes living up to those expectations. Students also sign up because of convenience and flexibility, and because of personal notions of suitability about learning. Convenience and flexibility are enormous drivers of online registration (Lee, Stringer, & Du, 2017; Mann & Henneberry, 2012). Even when students say they prefer face-to-face classes to online ones, many enroll in online classes, and they re-enroll in the future if the experience meets minimum expectations. This study examines the threshold expectations of students when they are considering taking online classes.

When discussing students' perceptions of quality, there is little clarity about the actual range of concepts because no integrated empirical studies exist comparing the major factors found throughout the literature. Rather, there are practitioner-generated lists of micro-competencies, such as the Quality Matters consortium for higher education (Quality Matters, 2018), or broad frameworks encompassing many aspects of quality beyond teaching (Open and Distance Learning Quality Council, 2012). While checklists are useful for practitioners and accreditation processes, they do not provide robust theoretical bases for scholarly development. Overarching frameworks are heuristically useful, but not for pragmatic purposes or for theory building. The most prominent theoretical framework used in the online literature is the Community of Inquiry (CoI) model (Arbaugh et al., 2008; Garrison, Anderson, & Archer, 2003), which divides instruction into teaching, cognitive, and social presence. As with many deductive theories, however, the supportive evidence is mixed (Rourke & Kanuka, 2009), especially regarding the importance of social presence (Annand, 2011; Armellini & De Stefani, 2016). Conceptually, the problem is not so much with the narrow articulation of cognitive or social presence; cognitive presence is how the instructor provides opportunities for students to interact with material in robust, thought-provoking ways, and social presence refers to building a community of learning that incorporates student-to-student interactions. However, teaching presence includes everything else the instructor does: structuring the course, providing lectures, explaining assignments, creating rehearsal opportunities, supplying tests, grading, answering questions, and so on. These challenges become even more prominent in the online context. While the lecture as a single medium is paramount in face-to-face classes, it fades as the primary vehicle in online classes, with increased use of detailed syllabi, electronic announcements, recorded and synchronous lectures, 24/7 communications related to student questions, etc. Amassing the pedagogical and technological elements related to teaching under a single concept provides little insight.

In addition to the CoI model, numerous concepts are suggested in single-factor empirical studies focusing on quality from a student's perspective, with overlapping conceptualizations and nonstandardized naming conventions. Seven distinct factors are derived here from the literature on student perceptions of online quality: Instructional Support, Teaching Presence, Basic Online Modality, Social Presence, Online Social Comfort, Cognitive Presence, and Interactive Online Modality.

Instructional support

Instructional Support refers to students' perceptions of the techniques used by the instructor for input, rehearsal, feedback, and evaluation. Specifically, this entails providing detailed instructions, the designed use of multimedia, and a balance between repetitive class features for ease of use and varied techniques to prevent boredom. Instructional Support is often included as an element of Teaching Presence, but is also labeled "structure" (Lee & Rha, 2009; So & Brush, 2008) or instructor facilitation (Eom, Wen, & Ashill, 2006). A prime example of the difference between face-to-face and online education is the extensive use of the "flipped classroom" (Maycock, 2019; Wang, Huang, & Schunn, 2019), in which students move to rehearsal activities faster and more frequently than in traditional classrooms, with less instructor lecture (Jung, 2011; Martin, Wang, & Sadaf, 2018). Instructional Support has been consistently supported as an element of student perceptions of quality (Espasa & Meneses, 2010).

Teaching presence

Teaching Presence refers to students' perceptions about the quality of communication in lectures, directions, and individual feedback, including encouragement (Jaggars & Xu, 2016; Marks et al., 2005). Specifically, instructor communication is clear, focused, and encouraging, and instructor feedback is customized and timely. If Instructional Support is what an instructor plans before the course begins, then Teaching Presence is what the instructor does while the class is conducted, in carrying out those plans and in responding to specific circumstances. For example, a course could be well designed but poorly delivered because the instructor is distracted; or a course could be poorly designed but an instructor might make up for the deficit by spending time and energy on elaborate communications and ad hoc teaching techniques. Teaching Presence is especially important to student satisfaction (Sebastianelli et al., 2015; Young, 2006) and is also referred to as instructor presence (Asoodar et al., 2016), learner-instructor interaction (Marks et al., 2005), and staff support (Jung, 2011). As with Instructional Support, it has been consistently supported as an element of student perceptions of quality.

Basic online modality

Basic Online Modality refers to the competent use of basic online class tools: online grading, navigation methods, the online grade book, and the announcements function. It is frequently clumped with instructional quality (Artino, 2010), service quality (Mohammadi, 2015), instructor expertise in e-teaching (Paechter, Maier, & Macher, 2010), and similar terms. As a narrowly defined concept, it is sometimes called technology (Asoodar et al., 2016; Bollinger & Martindale, 2004; Sun et al., 2008). The only empirical study that did not find Basic Online Modality (as technology) significant was Sun et al. (2008). Because Basic Online Modality is addressed with basic instructor training, some studies assert the importance of training (e.g., Asoodar et al., 2016).

Social presence

Social Presence refers to students' perceptions of the quality of student-to-student interaction. Social Presence focuses on the quality of shared learning and collaboration among students, such as in threaded discussion responses (Garrison et al., 2003; Kehrwald, 2008). Much emphasized but challenged in the CoI literature (Rourke & Kanuka, 2009), it has mixed support in the online literature. While some studies found Social Presence or related concepts to be significant (e.g., Asoodar et al., 2016; Bollinger & Martindale, 2004; Eom et al., 2006; Richardson, Maeda, Lv, & Caskurlu, 2017), others found Social Presence insignificant (Joo, Lim, & Kim, 2011; So & Brush, 2008; Sun et al., 2008).

Online social comfort

Online Social Comfort refers to the instructor's ability to provide an environment in which anxiety is low, and students feel comfortable interacting even when expressing opposing viewpoints. While numerous studies have examined anxiety (e.g., Liaw & Huang, 2013; Otter et al., 2013; Sun et al., 2008), only one found anxiety insignificant (Asoodar et al., 2016); many others have not examined the concept.

Cognitive presence

Cognitive Presence refers to the engagement of students such that they perceive they are stimulated by the material and the instructor to reflect deeply and critically, and to seek to understand different perspectives (Garrison et al., 2003). The instructor provides instructional materials and facilitates an environment that piques interest, encourages reflection, and enhances inclusiveness of perspectives (Durabi, Arrastia, Nelson, Cornille, & Liang, 2011). Cognitive Presence includes enhancing the applicability of material to students' potential or current careers. Cognitive Presence is supported as significant in many online studies (e.g., Artino, 2010; Asoodar et al., 2016; Joo et al., 2011; Marks et al., 2005; Sebastianelli et al., 2015; Sun et al., 2008). Further, while many instructors perceive that cognitive presence is diminished in online settings, neuroscientific studies indicate this need not be the case (Takamine, 2017). And while numerous studies have failed to examine Cognitive Presence, this review found no studies in which its significance for students was discounted.

Interactive online modality

Interactive Online Modality refers to the "high-end" use of online functionality. That is, the instructor makes good use of interactive online class tools such as video lectures, videoconferencing, and small group discussions. It is often included in concepts such as instructional quality (Artino, 2010; Asoodar et al., 2016; Mohammadi, 2015; Otter et al., 2013; Paechter et al., 2010) or engagement (Clayton, Blumberg, & Anthony, 2018). While individual methods have been investigated (e.g., Durabi et al., 2011), high-end engagement methods as a group have not.

Other independent variables affecting perceptions of quality include age, undergraduate versus graduate status, gender, ethnicity/race, discipline, the educational motivation of students, and previous online experience. While the effects of age have been found to be small or insignificant, more notable effects have been reported for level of study, with graduate students reporting higher "success" (Macon, 2011) and community college students having greater difficulty with online classes (Legon & Garrett, 2019; Xu & Jaggars, 2014). The effects of ethnicity and race have also been small or insignificant. Some situational variations and student preferences can be captured by paying attention to disciplinary differences (Arbaugh, 2005; Macon, 2011). Motivation levels of students have been reported to be significant in completion and achievement, with better students doing equally well across face-to-face and online modes, and weaker students having greater completion and achievement challenges (Clayton et al., 2018; Lu & Lemonde, 2013).

Research methods

To examine the various quality factors, we apply a critical success factor methodology, initially introduced to business school research in the 1970s. In 1981, Rockart and Bullen codified an approach embodying principles of critical success factors (CSFs) as a way to identify the information needs of executives, detailing steps for the collection and analysis of data to create a set of organizational CSFs (Rockart & Bullen, 1981). CSFs describe the underlying or guiding principles that must be incorporated to ensure success.

Utilizing this methodology, CSFs in the context of this paper define key areas of instruction and design essential for an online class to be successful from a student’s perspective. Instructors implicitly know and consider these areas when setting up an online class and designing and directing activities and tasks important to achieving learning goals. CSFs make explicit those things good instructors may intuitively know and (should) do to enhance student learning. When made explicit, CSFs not only confirm the knowledge of successful instructors, but tap their intuition to guide and direct the accomplishment of quality instruction for entire programs. In addition, CSFs are linked with goals and objectives, helping generate a small number of truly important matters an instructor should focus attention on to achieve different thresholds of online success.

After a comprehensive literature review, an instrument was created to measure students' perceptions about the importance of techniques and indicators leading to quality online classes. Items were designed to capture the major factors in the literature. The instrument was piloted during the 2017–18 academic year with a sample of 397 students, facilitating an exploratory factor analysis that led to important preliminary findings (reference withheld for review). Based on the pilot, survey items were added and refined to include seven groups of quality teaching factors and two groups of items related to students' overall acceptance of online classes, as well as a variable on their future online class enrollment. Demographic information was gathered to determine the effects of age, year in program, major, distance from the university, number of online classes taken, high school experience with online classes, and communication preferences on students' levels of acceptance of online classes.

This paper draws evidence from a sample of students enrolled in educational programs at the Jack H. Brown College of Business and Public Administration (JHBC), California State University San Bernardino (CSUSB). The JHBC offers a wide range of online courses for undergraduate and graduate programs. To ensure comparable learning outcomes, online and face-to-face classes in a given subject are similar in size (undergraduate classes are generally capped at 60 and graduate classes at 30) and are often taught by the same instructors. Students sometimes have the option to choose between face-to-face and online modes of learning.

A Qualtrics survey link was sent out by 11 instructors to students who were unlikely to be cross-enrolled in classes during the 2018–19 academic year. Approximately 2500 students were contacted, with some instructors providing class time to complete the anonymous survey. All students, whether they had taken an online class or not, were encouraged to respond. Nine hundred eighty-seven students responded, representing a 40% response rate. Although drawn from a single business school, it is a broad sample representing students from several disciplines (management, accounting and finance, marketing, information decision sciences, and public administration) as well as both graduate and undergraduate programs of study.

The sample is young, with 78% of students under 30. It includes almost no lower-division students (i.e., freshmen and sophomores), 73% upper-division students (i.e., juniors and seniors), and 24% graduate students (master's level). Only 17% reported having taken a hybrid or online class in high school. There was a wide range of exposure to university-level online courses, with 47% reporting having taken one to four classes and 21% reporting no online class experience. Reflecting a Hispanic-serving institution, 54% self-identified as Latino, 18% as White, and 13% as Asian and Pacific Islander. The five largest majors were accounting and finance (25%), management (21%), master of public administration (16%), marketing (12%), and information decision sciences (10%). Seventy-four percent work full- or part-time. See Table 1 for demographic data.

Measures and procedure

To increase the reliability of evaluation scores, composite evaluation variables were formed after an exploratory factor analysis of individual evaluation items. A principal component method with quartimin (oblique) rotation was applied to explore the factor structure of student perceptions of online teaching CSFs. Items with loading coefficients greater than .30 were retained, a commonly accepted threshold in factor analysis. A least-squares regression analysis was then applied to test the significance levels of the factors on students' impressions of online classes.
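As an illustrative sketch only (not the authors' actual code), the extraction step described above can be approximated in Python with the open-source factor_analyzer package; the file name and item labels below are hypothetical placeholders.

```python
# Sketch of the exploratory factor analysis described above, assuming
# survey responses sit in a CSV with one Likert item per column.
# "survey_items.csv" and its column labels are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("survey_items.csv")  # one row per respondent

# Principal component extraction with an oblique (quartimin) rotation,
# mirroring the method reported in the study.
fa = FactorAnalyzer(n_factors=7, method="principal", rotation="quartimin")
fa.fit(items)

# Keep only loadings above the .30 threshold used in the study.
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings[loadings.abs() > 0.30].round(2))

# Cumulative variance explained (the study reports 67% for seven factors).
_, _, cum_var = fa.get_factor_variance()
print("cumulative variance explained:", round(float(cum_var[-1]), 2))
```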

Exploratory factor constructs

Using a threshold loading of 0.3 for items, 37 items loaded on seven factors. All factors were logically consistent. The first factor, with eight items, was labeled Teaching Presence. Items included providing clear instructions, staying on task, clear deadlines, and customized feedback on strengths and weaknesses. The Teaching Presence items all related to instructor involvement during the course as a director, monitor, and learning facilitator. The second factor, with seven items, aligned with Cognitive Presence. Items included stimulating curiosity, opportunities for reflection, helping students construct explanations posed in online courses, and the applicability of material. The third factor, with six items, aligned with Social Presence, defined as providing student-to-student learning opportunities. Items included getting to know course participants for a sense of belonging, forming impressions of other students, and interacting with others. The fourth factor, with six new items as well as two ("interaction with other students" and "a sense of community in the class") shared with the third factor, was Instructional Support, which related to the instructor's role in providing students a cohesive learning experience. Items included providing sufficient rehearsal, structured feedback, techniques for communication, a navigation guide, a detailed syllabus, and coordinating student interaction and creating a sense of online community. This factor also included enthusiasm, which students generally interpreted as a robustly designed course rather than animation in a traditional lecture. The fifth factor was labeled Basic Online Modality and focused on the basic technological requirements for a functional online course. Three items included allowing students to make online submissions, the use of online gradebooks, and online grading. A fourth item was the use of online quizzes, viewed by students as mechanical practice opportunities rather than small tests, and a fifth was navigation, a key component of Basic Online Modality. The sixth factor, loading on four items, was labeled Online Social Comfort. Items here included comfort discussing ideas online, comfort disagreeing, developing a sense of collaboration via discussion, and considering online communication an excellent medium for social interaction. The final factor was called Interactive Online Modality because it included items for "richer" communications or interactions, whether one- or two-way. Items included videoconferencing, instructor-generated videos, and small group discussions. Taken together, these seven factors explained 67% of the variance, which is within the range considered acceptable in social science research for a robust model (Hair, Black, Babin, & Anderson, 2014). See Table 2 for the full list.

To test for factor reliability, the Cronbach's alpha of each variable was calculated. All produced values greater than 0.7, the standard threshold for reliability, except for system trust, which was therefore dropped. To gauge students' sense of factor importance, all items were mean averaged. Factor means (lower means indicating higher importance to students) ranged from 1.5 to 2.6 on a 5-point scale. Basic Online Modality was most important, followed by Instructional Support and Teaching Presence. Students deemed Cognitive Presence, Online Social Comfort, and Online Interactive Modality less important. The least important for this sample was Social Presence. Table 3 arrays the critical success factor means, standard deviations, and Cronbach's alphas.
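A minimal sketch of these reliability and importance checks, implementing the standard Cronbach's alpha formula directly; the factor-to-item mapping below is hypothetical and continues the data layout assumed in the previous sketch.

```python
# Cronbach's alpha and factor means for each composite variable.
import pandas as pd

def cronbach_alpha(df: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of scale total)
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

items = pd.read_csv("survey_items.csv")  # hypothetical file, as above

# Hypothetical grouping of survey columns by factor (labels illustrative).
factors = {
    "teaching_presence": ["q1", "q2", "q3", "q4", "q5", "q6", "q7", "q8"],
    "cognitive_presence": ["q9", "q10", "q11", "q12", "q13", "q14", "q15"],
    # ... remaining five factors
}

for name, cols in factors.items():
    alpha = cronbach_alpha(items[cols])           # retain factors above 0.7
    importance = items[cols].mean(axis=1).mean()  # lower mean = more important
    print(f"{name}: alpha={alpha:.2f}, mean={importance:.2f}")
```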

To determine whether particular subgroups of respondents viewed factors differently, a series of ANOVAs was conducted using factor means as dependent variables. Six demographic variables were used as independent variables: graduate vs. undergraduate status, age, work status, ethnicity, discipline, and past online experience. To determine the strength of association of the independent variables with each of the seven CSFs, eta squared was calculated for each ANOVA. Eta squared indicates the proportion of variance in the dependent variable explained by the independent variable. Eta squared values greater than .01, .06, and .14 are conventionally interpreted as small, medium, and large effect sizes, respectively (Green & Salkind, 2003). Table 4 summarizes the eta squared values for the ANOVA tests, with values less than .01 omitted.
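One such ANOVA, with its eta-squared effect size, could be computed as in the sketch below (a hedged illustration assuming a dataframe of factor means and demographics; all column names are invented, not the authors' variable names):

```python
# One ANOVA per factor/demographic pair, with eta squared as effect size.
# "factor_scores.csv" and its columns are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

scores = pd.read_csv("factor_scores.csv")  # one row per student

model = ols("teaching_presence ~ C(work_status)", data=scores).fit()
aov = sm.stats.anova_lm(model, typ=2)

# Eta squared = effect sum of squares / total sum of squares;
# .01 / .06 / .14 are the conventional small/medium/large cutoffs.
eta_sq = aov.loc["C(work_status)", "sum_sq"] / aov["sum_sq"].sum()
print(aov)
print(f"eta squared = {eta_sq:.3f}")
```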

While there were no significant differences in factor means among students in different disciplines in the College, the five other independent variables all had small effects on some or all CSFs. Graduate students tended to rate Online Interactive Modality, Instructional Support, Teaching Presence, and Cognitive Presence higher than undergraduates. Older students valued Online Interactive Modality more. Full-time working students rated all factors, except Online Social Comfort, slightly higher than part-timers and non-working students. Latino and White students rated Basic Online Modality and Instructional Support higher; Asian and Pacific Islander students rated Social Presence higher. Students who had taken more online classes rated all factors higher.

In addition to the factor scores, two composite variables were constructed to capture students' resulting impressions of the online experience. Both were logically consistent, with a Cronbach's α greater than 0.75. The first variable, with six items, labeled "online acceptance," included items such as "I enjoy online learning," "My overall impression of hybrid/online learning is very good," and "the instructors of online/hybrid classes are generally responsive." The second variable was labeled "face-to-face preference" and combined four items, including enjoying, learning, and communicating more in face-to-face classes, as well as perceiving greater fairness and equity there. In addition to these two constructed variables, a one-item variable was also used subsequently in the regression analysis: "online enrollment." That question asked: if hybrid/online classes are well taught and available, how much of your entire course selection going forward would be online?

Regression results

As noted above, two constructed variables and one single-item variable were used as dependent variables for purposes of regression analysis: online acceptance, face-to-face preference, and online enrollment. In addition to the seven quality-of-teaching factors identified by the factor analysis, control variables included level of education (graduate versus undergraduate), age, ethnicity, work status, distance to the university, and the number of online/hybrid classes taken in the past. See Table 5.
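As a sketch of one of these regressions (the online-enrollment model), under the assumption that the factor scores and controls are columns of a dataframe; all names below are illustrative rather than the authors' variable names:

```python
# OLS regression of online enrollment on the seven CSFs plus controls.
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

scores = pd.read_csv("factor_scores.csv")

formula = (
    "online_enrollment ~ basic_online_modality + instructional_support"
    " + teaching_presence + cognitive_presence + online_social_comfort"
    " + interactive_online_modality + social_presence"
    " + C(grad_status) + age + C(ethnicity) + C(work_status)"
    " + distance_to_university + n_online_classes"
)
result = smf.ols(formula, data=scores).fit()
print(result.summary())  # repeat with online_acceptance / f2f_preference
```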

When the eta squared values for ANOVA significance were examined for the control factors, only one was close to a medium effect: graduate versus undergraduate status had a .05 effect (approaching medium) on Online Interactive Modality, meaning graduate students were more sensitive to interactive modality than undergraduates. Multiple regression analyses of the critical success factors and online impressions were conducted to compare the conditions under which factors were significant. The only consistently significant control factor was the number of online classes taken: the more classes students had taken online, the more inclined they were to take future classes. Level of program, age, ethnicity, and work status did not significantly affect students' choice or overall acceptance of online classes.

The least restrictive condition was online enrollment (Table 6). That is, students might not feel online courses are ideal, but because of convenience and scheduling they might enroll in them if minimum threshold expectations are met. When considering online enrollment, three factors were significant and positive (at the 0.1 level): Basic Online Modality, Cognitive Presence, and Online Social Comfort. These least-demanding students expected classes to have basic technological functionality, to provide good opportunities for knowledge acquisition, and to provide comfortable interaction in small groups. Students who demand good Instructional Support (e.g., rehearsal opportunities, standardized feedback, a clear syllabus) were less likely to enroll.

Online acceptance was more restrictive (see Table 7). This variable captured the idea that students enrolled in online classes not only out of necessity, but with an appreciation of the positive attributes of online instruction, which balanced the negative aspects. When this standard was applied, students expected not only Basic Online Modality, Cognitive Presence, and Online Social Comfort, but also expected their instructors to be highly engaged virtually as the course progressed (Teaching Presence) and to create strong student-to-student dynamics (Social Presence). Students who rated Instructional Support higher were less accepting of online classes.

The most restrictive condition was catering to the needs of students who preferred face-to-face classes (see Table 8). That is, they preferred face-to-face classes even when online classes were well taught. Unlike students more accepting of, or more likely to enroll in, online classes, this group rated Instructional Support as critical to enrolling, rather than as a negative factor when absent. Again different from the other two groups, these students demanded appropriate interactive mechanisms (Online Interactive Modality) to enable richer communication (e.g., videoconferencing). Student-to-student collaboration (Social Presence) was also significant. This group also rated Cognitive Presence and Online Social Comfort as significant, but only in their absence. That is, these students were most attached to direct interaction with the instructor and other students rather than to specific teaching methods. Interestingly, Basic Online Modality and Teaching Presence were not significant. Our interpretation is that this student group, the most critical of online classes because of the loss of physical interaction, is beyond being concerned with mechanical technical interaction and demands higher levels of interactivity and instructional sophistication.

Discussion and study limitations

Some past studies have used robust empirical methods to identify a single factor or a small number of factors related to quality from a student's perspective, but have not sought to be relatively comprehensive. Others have used longer series of itemized factors, but have used less robust methods and have not tied those factors back to the literature. This study has used the literature to develop a relatively comprehensive list of items focused on quality teaching in a single rigorous protocol. That is, while a beta test had identified five coherent factors, substantial changes were made to the current survey that sharpened the focus on quality factors rather than antecedent factors, as well as better articulating the array of factors often lumped under the mantle of "teaching presence." In addition, the study examined these factors based on threshold expectations: from minimal, such as when flexibility is the driving consideration; to moderate, such as when students want a "good" online class; to high, when students demand an interactive virtual experience equivalent to face-to-face.

Exploratory factor analysis identified seven factors that were reliable, coherent, and significant under different conditions. When considering students' overall sense of importance, they are, in order: Basic Online Modality, Instructional Support, Teaching Presence, Cognitive Presence, Online Social Comfort, Interactive Online Modality, and Social Presence. Students are most concerned with the basics of a course first, that is, with technological and instructor competence. Next they want engagement and virtual comfort. Social Presence, while valued, is the least critical from this overall perspective.

The factor analysis is quite consistent with the range of factors identified in the literature, pointing to the fact that students can differentiate among aspects of what have been clumped together as larger concepts, such as teaching presence. Essentially, the instructor's role in quality can be divided into her/his command of basic online functionality, good design, and good presence during the class. The instructor's command of basic functionality is paramount. Because so much of an online class must be built in advance of the class, the quality of the class design is rated more highly than the instructor's role in facilitating the class. Taken as a whole, the instructor's role in the traditional teaching elements is primary, as we would expect it to be. Cognitive Presence, especially in the pertinence of the instructional material and its applicability to student interests, has always been found significant when studied, and it was highly rated here as a single factor as well. Finally, the degree to which students feel comfortable with the online environment and enjoy the learner-learner aspect has had less support in empirical studies; it was found significant here, but was rated lowest among the quality factors by students.

Regression analysis paints a more nuanced picture, depending on student focus. It also helps explain some of the heterogeneity of previous studies, depending on what the dependent variables were. If convenience and scheduling are critical and students are less demanding, the minimum requirements are Basic Online Modality, Cognitive Presence, and Online Social Comfort. That is, students expect an instructor who knows how to use the online platform, delivers useful information, and provides a comfortable learning environment. However, they do not expect to get poor design. They simply do not expect much in terms of quality teaching presence, learner-to-learner interaction, or interactive teaching.

When students are signing up for critical classes, or when they have both face-to-face and online options, they have a higher standard. That is, they expect not only the factors underlying decisions about enrolling in noncritical classes, but also good Teaching Presence and Social Presence. Students who simply need a class may be willing to teach themselves a bit more, but students who want a good class expect a highly present instructor in terms of responsiveness and immediacy. "Good" classes must not only create a comfortable atmosphere but, in social science classes at least, must also provide strong learner-to-learner interactions. At the time of the research, most students believed that one could have a good class without high interactivity via pre-recorded video and videoconferencing. That may, or may not, change over time as the technology thresholds of various video media become easier to use, more reliable, and more commonplace.

The most demanding students are those who prefer face-to-face classes because of learning style preferences, poor past experiences, or both. Such students (seem to) assume that a worthwhile online class has basic functionality and that the instructor provides a strong presence. They are also critical of the absence of Cognitive Presence and Online Social Comfort. They want strong Instructional Support and Social Presence. But in addition, and uniquely, they expect Online Interactive Modality, which provides the greatest possible verisimilitude to the traditional classroom. More than the other two groups, these students crave human interaction in the learning process, both with the instructor and with other students.

These findings shed light on the possible ramifications of the COVID-19 aftermath. Many universities around the world jumped from relatively low levels of online instruction at the beginning of spring 2020 to nearly 100% by mandate by the end of the spring term. The question becomes: what will happen after the mandate is removed? Will demand resume pre-crisis levels, will it increase modestly, or will it skyrocket? Time will be the best judge, but the findings here suggest that the ability and interest of instructors and institutions in "rising to the occasion" with quality teaching will have as much effect on demand as students becoming more acclimated to online learning. If, in the rush to get classes online, many students experience shoddy basic functional competence, poor instructional design, sporadic teaching presence, and poorly implemented cognitive and social aspects, they may be quite willing to return to the traditional classroom. If faculty, and the institutions supporting them, are able to increase the quality of classes despite time pressures, then most students may be interested in more hybrid and fully online classes. If instructors are able to introduce high-quality interactive teaching, nearly the entire student population will be interested in more online classes. Of course students will have a variety of experiences, but this analysis suggests that those instructors, departments, and institutions that put greater effort into the temporary adjustment (and that resist it less) will be substantially more likely to see increases in demand beyond the modest national trajectory of the last decade or so.

There are several study limitations. First, the study does not include a sample of non-respondents, who may have a somewhat different profile. Second, the study draws from a single college and university; the profile derived here may vary significantly by type of student. Third, some survey statements may have led respondents to rate quality based upon their experience rather than to assess the general importance of online course elements; for example, "I felt comfortable participating in the course discussions" could be revised to "comfort in participating in course discussions." The authors weighed the differences among subgroups (e.g., among majors) as small and statistically insignificant. However, it is possible that differences between, say, biology and marketing students would be significant, leading the factors to be ordered differently. Emphasis and ordering might also vary at a community college versus a research-oriented university (Gonzalez, 2009).

Availability of data and materials

We will make the data available.

References

Al-Gahtani, S. S. (2016). Empirical investigation of e-learning acceptance and assimilation: A structural equation model. Applied Computing and Informatics, 12(1), 27–50.

Alqurashi, E. (2016). Self-efficacy in online learning environments: A literature review. Contemporary Issues in Education Research, 9(1), 45–52.

Anderson, T. (2016). A fourth presence for the Community of Inquiry model? Retrieved from https://virtualcanuck.ca/2016/01/04/a-fourth-presence-for-the-community-of-inquiry-model/ .

Annand, D. (2011). Social presence within the community of inquiry framework. The International Review of Research in Open and Distributed Learning , 12 (5), 40.

Arbaugh, J. B. (2005). How much does “subject matter” matter? A study of disciplinary effects in on-line MBA courses. Academy of Management Learning & Education , 4 (1), 57–73.

Arbaugh, J. B., Cleveland-Innes, M., Diaz, S. R., Garrison, D. R., Ice, P., Richardson, J. C., & Swan, K. P. (2008). Developing a community of inquiry instrument: Testing a measure of the Community of Inquiry framework using a multi-institutional sample. Internet and Higher Education , 11 , 133–136.

Armellini, A., & De Stefani, M. (2016). Social presence in the 21st century: An adjustment to the Community of Inquiry framework. British Journal of Educational Technology , 47 (6), 1202–1216.

Arruabarrena, R., Sánchez, A., Blanco, J. M., et al. (2019). Integration of good practices of active methodologies with the reuse of student-generated content. International Journal of Educational Technology in Higher Education , 16 , #10.

Arthur, L. (2009). From performativity to professionalism: Lecturers’ responses to student feedback. Teaching in Higher Education , 14 (4), 441–454.

Artino, A. R. (2010). Online or face-to-face learning? Exploring the personal factors that predict students’ choice of instructional format. Internet and Higher Education , 13 , 272–276.

Asoodar, M., Vaezi, S., & Izanloo, B. (2016). Framework to improve e-learner satisfaction and further strengthen e-learning implementation. Computers in Human Behavior , 63 , 704–716.

Bernard, R. M., et al. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research , 74 (3), 379–439.

Bollinger, D., & Martindale, T. (2004). Key factors for determining student satisfaction in online courses. International Journal on E-Learning, 3(1), 61–67.

Brinkley-Etzkorn, K. E. (2018). Learning to teach online: Measuring the influence of faculty development training on teaching effectiveness through a TPACK lens. The Internet and Higher Education , 38 , 28–35.

Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin , 3 , 7.

Choi, I., Land, S. M., & Turgeon, A. J. (2005). Scaffolding peer-questioning strategies to facilitate metacognition during online small group discussion. Instructional Science , 33 , 483–511.

Clayton, K. E., Blumberg, F. C., & Anthony, J. A. (2018). Linkages between course status, perceived course value, and students’ preferences for traditional versus non-traditional learning environments. Computers & Education , 125 , 175–181.

Cleveland-Innes, M., & Campbell, P. (2012). Emotional presence, learning, and the online learning environment. The International Review of Research in Open and Distributed Learning , 13 (4), 269–292.

Cohen, A., & Baruth, O. (2017). Personality, learning, and satisfaction in fully online academic courses. Computers in Human Behavior , 72 , 1–12.

Crews, T., & Butterfield, J. (2014). Data for flipped classroom design: Using student feedback to identify the best components from online and face-to-face classes. Higher Education Studies , 4 (3), 38–47.

Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E. (2019). What makes for effective feedback: Staff and student perspectives. Assessment & Evaluation in Higher Education , 44 (1), 25–36.

Drew, C., & Mann, A. (2018). Unfitting, uncomfortable, unacademic: A sociological reading of an interactive mobile phone app in university lectures. International Journal of Educational Technology in Higher Education , 15 , #43.

Durabi, A., Arrastia, M., Nelson, D., Cornille, T., & Liang, X. (2011). Cognitive presence in asynchronous online learning: A comparison of four discussion strategies. Journal of Computer Assisted Learning , 27 (3), 216–227.

Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education , 4 (2), 215–235.

Espasa, A., & Meneses, J. (2010). Analysing feedback processes in an online teaching and learning environment: An exploratory study. Higher Education , 59 (3), 277–292.

Farrell, O., & Brunton, J. (2020). A balancing act: A window into online student engagement experiences. International Journal of Educational Technology in Higher Education, 17, #25.

Fidalgo, P., Thormann, J., Kulyk, O., et al. (2020). Students' perceptions on distance education: A multinational study. International Journal of Educational Technology in Higher Education, 17, #18.

Flores, Ò., del-Arco, I., & Silva, P. (2016). The flipped classroom model at the university: Analysis based on professors’ and students’ assessment in the educational field. International Journal of Educational Technology in Higher Education , 13 , #21.

Garrison, D. R., Anderson, T., & Archer, W. (2003). A theory of critical inquiry in online distance education. Handbook of Distance Education , 1 , 113–127.

Gong, D., Yang, H. H., & Cai, J. (2020). Exploring the key influencing factors on college students’ computational thinking skills through flipped-classroom instruction. International Journal of Educational Technology in Higher Education , 17 , #19.

Gonzalez, C. (2009). Conceptions of, and approaches to, teaching online: A study of lecturers teaching postgraduate distance courses. Higher Education , 57 (3), 299–314.

Grandzol, J. R., & Grandzol, C. J. (2006). Best practices for online business Education. International Review of Research in Open and Distance Learning , 7 (1), 1–18.

Green, S. B., & Salkind, N. J. (2003). Using SPSS: Analyzing and understanding data (3rd ed.). Upper Saddle River: Prentice Hall.

Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2014). Multivariate data analysis: Pearson new international edition . Essex: Pearson Education Limited.

Harjoto, M. A. (2017). Blended versus face-to-face: Evidence from a graduate corporate finance class. Journal of Education for Business , 92 (3), 129–137.

Hong, K.-S. (2002). Relationships between students’ instructional variables with satisfaction and learning from a web-based course. The Internet and Higher Education , 5 , 267–281.

Horvitz, B. S., Beach, A. L., Anderson, M. L., & Xia, J. (2015). Examination of faculty self-efficacy related to online teaching. Innovation Higher Education , 40 , 305–316.

Inside Higher Education and Gallup. (2019). The 2019 survey of faculty attitudes on technology. Author .

Jaggars, S. S., & Xu, D. (2016). How do online course design features influence student performance? Computers and Education , 95 , 270–284.

Joo, Y. J., Lim, K. Y., & Kim, E. K. (2011). Online university students’ satisfaction and persistence: Examining perceived level of presence, usefulness and ease of use as predictor in a structural model. Computers & Education , 57 (2), 1654–1664.

Jung, I. (2011). The dimensions of e-learning quality: From the learner’s perspective. Educational Technology Research and Development , 59 (4), 445–464.

Kay, R., MacDonald, T., & DiGiuseppe, M. (2019). A comparison of lecture-based, active, and flipped classroom teaching approaches in higher education. Journal of Computing in Higher Education , 31 , 449–471.

Kehrwald, B. (2008). Understanding social presence in text-based online learning environments. Distance Education , 29 (1), 89–106.

Kintu, M. J., Zhu, C., & Kagambe, E. (2017). Blended learning effectiveness: The relationship between student characteristics, design features and outcomes. International Journal of Educational Technology in Higher Education , 14 , #7.

Kuo, Y.-C., Walker, A. E., Schroder, K. E., & Belland, B. R. (2013). Interaction, internet self-efficacy, and self-regulated learning as predictors of student satisfaction in online education courses. Internet and Education , 20 , 35–50.

Lange, C., & Costley, J. (2020). Improving online video lectures: Learning challenges created by media. International Journal of Educational Technology in Higher Education , 17 , #16.

le Roux, I., & Nagel, L. (2018). Seeking the best blend for deep learning in a flipped classroom – viewing student perceptions through the Community of Inquiry lens. International Journal of Educational Technology in Higher Education, 15, #16.

Lee, H.-J., & Rha, I. (2009). Influence of structure and interaction on student achievement and satisfaction in web-based distance learning. Educational Technology & Society , 12 (4), 372–382.

Lee, Y., Stringer, D., & Du, J. (2017). What determines students’ preference of online to F2F class? Business Education Innovation Journal , 9 (2), 97–102.

Legon, R., & Garrett, R. (2019). CHLOE 3: Behind the numbers . Published online by Quality Matters and Eduventures. https://www.qualitymatters.org/sites/default/files/research-docs-pdfs/CHLOE-3-Report-2019-Behind-the-Numbers.pdf

Liaw, S.-S., & Huang, H.-M. (2013). Perceived satisfaction, perceived usefulness and interactive learning environments as predictors of self-regulation in e-learning environments. Computers & Education , 60 (1), 14–24.

Lu, F., & Lemonde, M. (2013). A comparison of online versus face-to-face students teaching delivery in statistics instruction for undergraduate health science students. Advances in Health Science Education , 18 , 963–973.

Lundin, M., Bergviken Rensfeldt, A., Hillman, T., Lantz-Andersson, A., & Peterson, L. (2018). Higher education dominance and siloed knowledge: a systematic review of flipped classroom research. International Journal of Educational Technology in Higher Education , 15 (1).

Macon, D. K. (2011). Student satisfaction with online courses versus traditional courses: A meta-analysis. Dissertation, Northcentral University, CA.

Mann, J., & Henneberry, S. (2012). What characteristics of college students influence their decisions to select online courses? Online Journal of Distance Learning Administration , 15 (5), 1–14.

Mansbach, J., & Austin, A. E. (2018). Nuanced perspectives about online teaching: Mid-career senior faculty voices reflecting on academic work in the digital age. Innovative Higher Education , 43 (4), 257–272.

Marks, R. B., Sibley, S. D., & Arbaugh, J. B. (2005). A structural equation model of predictors for effective online learning. Journal of Management Education , 29 (4), 531–563.

Martin, F., Wang, C., & Sadaf, A. (2018). Student perception of facilitation strategies that enhance instructor presence, connectedness, engagement and learning in online courses. Internet and Higher Education , 37 , 52–65.

Maycock, K. W. (2019). Chalk and talk versus flipped learning: A case study. Journal of Computer Assisted Learning , 35 , 121–126.

McGivney-Burelle, J. (2013). Flipping Calculus. PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies, 23(5), 477–486.

Mohammadi, H. (2015). Investigating users’ perspectives on e-learning: An integration of TAM and IS success model. Computers in Human Behavior , 45 , 359–374.

Nair, S. S., Tay, L. Y., & Koh, J. H. L. (2013). Students’ motivation and teachers’ teaching practices towards the use of blogs for writing of online journals. Educational Media International , 50 (2), 108–119.

Nguyen, T. (2015). The effectiveness of online learning: Beyond no significant difference and future horizons. MERLOT Journal of Online Learning and Teaching , 11 (2), 309–319.

Ni, A. Y. (2013). Comparing the effectiveness of classroom and online learning: Teaching research methods. Journal of Public Affairs Education , 19 (2), 199–215.

Nouri, J. (2016). The flipped classroom: For active, effective and increased learning – Especially for low achievers. International Journal of Educational Technology in Higher Education , 13 , #33.

O’Neill, D. K., & Sai, T. H. (2014). Why not? Examining college students’ reasons for avoiding an online course. Higher Education , 68 (1), 1–14.

O'Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education , 25 , 85–95.

Open and Distance Learning Quality Council (2012). ODLQC standards. England: Author. https://www.odlqc.org.uk/odlqc-standards.

Ortagus, J. C. (2017). From the periphery to prominence: An examination of the changing profile of online students in American higher education. Internet and Higher Education , 32 , 47–57.

Otter, R. R., Seipel, S., Graef, T., Alexander, B., Boraiko, C., Gray, J., … Sadler, K. (2013). Comparing student and faculty perceptions of online and traditional courses. Internet and Higher Education , 19 , 27–35.

Paechter, M., Maier, B., & Macher, D. (2010). Online or face-to-face? Students’ experiences and preferences in e-learning. Internet and Higher Education , 13 , 292–329.

Prinsloo, P. (2016). (re)considering distance education: Exploring its relevance, sustainability and value contribution. Distance Education , 37 (2), 139–145.

Quality Matters (2018). Specific review standards from the QM Higher Education rubric (6th ed.). MD: MarylandOnline.

Richardson, J. C., Maeda, Y., Lv, J., & Caskurlu, S. (2017). Social presence in relation to students’ satisfaction and learning in the online environment: A meta-analysis. Computers in Human Behavior , 71 , 402–417.

Rockhart, J. F., & Bullen, C. V. (1981). A primer on critical success factors . Cambridge: Center for Information Systems Research, Massachusetts Institute of Technology.

Rourke, L., & Kanuka, H. (2009). Learning in Communities of Inquiry: A review of the literature. The Journal of Distance Education / Revue de l'éducation à distance, 23(1), 19–48. Athabasca University Press. Retrieved August 2, 2020 from https://www.learntechlib.org/p/105542/

Sebastianelli, R., Swift, C., & Tamimi, N. (2015). Factors affecting perceived learning, satisfaction, and quality in the online MBA: A structural equation modeling approach. Journal of Education for Business , 90 (6), 296–305.

Shen, D., Cho, M.-H., Tsai, C.-L., & Marra, R. (2013). Unpacking online learning experiences: Online learning self-efficacy and learning satisfaction. Internet and Higher Education , 19 , 10–17.

Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of web-based and classroom instruction: A meta-analysis. Personnel Psychology , 59 (3), 623–664.

So, H. J., & Brush, T. A. (2008). Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: Relationships and critical factors. Computers & Education , 51 (1), 318–336.

Song, L., Singleton, E. S., Hill, J. R., & Koh, M. H. (2004). Improving online learning: Student perceptions of useful and challenging characteristics. The Internet and Higher Education , 7 (1), 59–70.

Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D. (2008). What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers & Education , 50 (4), 1183–1202.

Takamine, K. (2017). Michelle D. Miller: Minds online: Teaching effectively with technology. Higher Education, 73, 789–791.

Tanner, J. R., Noser, T. C., & Totaro, M. W. (2009). Business faculty and undergraduate students’ perceptions of online learning: A comparative study. Journal of Information Systems Education , 20 (1), 29.

Tucker, B. (2012). The flipped classroom. Education Next , 12 (1), 82–83.

Van Wart, M., Ni, A., Ready, D., Shayo, C., & Court, J. (2020). Factors leading to online learner satisfaction. Business Education Innovation Journal, 12(1), 15–24.

Van Wart, M., Ni, A., Rose, L., McWeeney, T., & Worrell, R. A. (2019). Literature review and model of online teaching effectiveness integrating concerns for learning achievement, student satisfaction, faculty satisfaction, and institutional results. Pan-Pacific Journal of Business Research, 10(1), 1–22.

Ventura, A. C., & Moscoloni, N. (2015). Learning styles and disciplinary differences: A cross-sectional study of undergraduate students. International Journal of Learning and Teaching , 1 (2), 88–93.

Vlachopoulos, D., & Makri, A. (2017). The effect of games and simulations on higher education: A systematic literature review. International Journal of Educational Technology in Higher Education , 14 , #22.

Wang, Y., Huang, X., & Schunn, C. D. (2019). Redesigning flipped classrooms: A learning model and its effects on student perceptions. Higher Education , 78 , 711–728.

Wingo, N. P., Ivankova, N. V., & Moss, J. A. (2017). Faculty perceptions about teaching online: Exploring the literature using the technology acceptance model as an organizing framework. Online Learning , 21 (1), 15–35.

Xu, D., & Jaggars, S. S. (2014). Performance gaps between online and face-to-face courses: Differences across types of students and academic subject areas. Journal of Higher Education , 85 (5), 633–659.

Young, S. (2006). Student views of effective online teaching in higher education. American Journal of Distance Education , 20 (2), 65–77.

Zawacki-Richter, O., & Naidu, S. (2016). Mapping research trends from 35 years of publications in Distance Education. Distance Education, 37(3), 245–269.


Acknowledgements

No external funding.

Author information

Authors and Affiliations

Development for the JHB College of Business and Public Administration, 5500 University Parkway, San Bernardino, California, 92407, USA

Montgomery Van Wart, Anna Ni, Pamela Medina, Jesus Canelon, Melika Kordrostami, Jing Zhang & Yu Liu


Contributions

Equal. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Montgomery Van Wart.

Ethics declarations

Competing interests

We have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Van Wart, M., Ni, A., Medina, P. et al. Integrating students’ perspectives about online learning: a hierarchy of factors. Int J Educ Technol High Educ 17 , 53 (2020). https://doi.org/10.1186/s41239-020-00229-8


Received : 29 April 2020

Accepted : 30 July 2020

Published : 02 December 2020

DOI : https://doi.org/10.1186/s41239-020-00229-8


Keywords

  • Online education
  • Online teaching
  • Student perceptions
  • Online quality
  • Student presence


A systematic review of online examinations: A pedagogical innovation for scalable authentication and integrity

Kerryn Butler-Henderson

a College of Health and Medicine, University of Tasmania, Locked Bag 1322, Launceston, Tasmania, 7250, Australia

Joseph Crawford

b Academic Division, University of Tasmania, Locked Bag 1322, Launceston, Tasmania, 7250, Australia

Digitization and automation across all industries have resulted in improvements in the efficiency and effectiveness of systems and processes, and the higher education sector is not immune. Online learning, e-learning, electronic teaching tools, and digital assessments are not innovations. However, there has been limited implementation of online invigilated examinations in many countries. This paper provides a brief background on online examinations, followed by the results of a systematic review on the topic to explore the challenges and opportunities. We follow on with an explication of results from thirty-six papers, exploring nine key themes: student perceptions, student performance, anxiety, cheating, staff perceptions, authentication and security, interface design, and technology issues. While the literature on online examinations is growing, there is still a dearth of discussion at the pedagogical and governance levels.

  • There is a lack of score variation between examination modalities.
  • Online exams offer various methods for mitigating cheating.
  • Students rate online examinations favorably.
  • Staff preferred online examinations for their ease of completion and logistics.
  • The interface of a system continues to be an enabler or barrier of online exams.

1. Introduction

Learning and teaching is transforming away from the conventional lecture theatre designed to seat 100 to 10,000 passive students towards more active learning environments. In our current climate, this is exacerbated by COVID-19 responses (Crawford et al., 2020), where thousands of students are involved in online adaptations of face-to-face examinations (e.g. online Zoom rooms with all microphones and videos locked on). This evolution has grown from the need to recognize that students now rarely study exclusively, and have commitments that conflict with their university life (e.g. work, family, social obligations). Students have more diverse digital capabilities (Margaryan et al., 2011) and greater age and gender diversity (Eagly & Sczesny, 2009; Schwalb & Sedlacek, 1990). Continual change in the demographic and profile of students creates a challenge for scholars seeking to develop a student experience that demonstrates quality and maintains financial and academic viability (Gross et al., 2013; Hainline et al., 2010).

Universities are developing extensive online offerings to grow their international loads and facilitate the massification of higher learning. These protocols, informed by growing policy targets to educate a larger quantity of graduates (e.g. Kemp, 1999 ; Reiko, 2001 ), have challenged traditional university models of fully on-campus student attendance. The development of online examination software has offered a systematic and technological alternative to the end-of-course summative examination designed for final authentication and testing of student knowledge retention, application, and extension. As a result of the COVID-19 pandemic, the initial response in higher education across many countries was to postpone examinations ( Crawford et al., 2020 ). However, as the pandemic continued, the need to move to either an online examination format or alternative assessment became more urgent.

This paper is a timely exploration of the contemporary literature related to online examinations in the university setting, with the hopes to consolidate information on this relatively new pedagogy in higher education. This paper begins with a brief background of traditional examinations, as the assumptions applied in many online examination environments build on the techniques and assumptions of the traditional face-to-face gymnasium-housed invigilated examinations. This is followed by a summary of the systematic review method, including search strategy, procedure, quality review, analysis, and summary of the sample.

Print-based educational examinations designed to test knowledge have existed for hundreds of years. The New York State Education Department has "the oldest educational testing service in the United States" and has been delivering entrance examinations since 1865 (Johnson, 2009, p. 1; NYSED, 2012). In pre-Revolution Russia, it was not possible to obtain a diploma to enter university without passing a high-stakes graduation examination (Karp, 2007). These high school examinations assessed and assured learning of students under rigid and high-security conditions. Under traditional classroom conditions, these were likely a reasonable practice to validate knowledge. Authenticating learning was not a consideration at this stage, as students were taught face to face only. For many high school jurisdictions, these examinations are designed to strengthen the accountability of teachers and assess student performance (Mueller & Colley, 2015).

In tertiary education, the use of an end-of-course summative examination as a form of validating knowledge has been informed significantly by accreditation bodies and by streamlined, financially viable assessment options. The American Bar Association has required a final course examination for law schools to remain accredited (Sheppard, 1996). Law examinations typically contained brief didactic questions focused on assessing rote memory through to problem-based assessment to evaluate students' ability to apply knowledge (Sheppard, 1996). In accredited courses, there are significant parallels. Alternatives to traditional gymnasium-sized classroom paper-and-pencil invigilated examinations have been developed as educators recognize the limitations associated with single-point summative examinations (Butt, 2018).

The objective structured clinical examinations (OSCE) incorporate multiple workstations with students performing specific practical tasks from physical examinations on mannequins to short-answer written responses to scenarios ( Turner & Dankoski, 2008 ). The OSCE has parallels with the patient simulation examination used in some medical schools ( Botezatu et al., 2010 ). Portfolios assess and demonstrate learning over a whole course and for extracurricular learning ( Wasley, 2008 ).

The inclusion of online examinations, e-examinations, and bring-your-own-device models have offered alternatives to the large-scale examination rooms with paper-and-pencil invigilated examinations. Each of these offer new opportunities for the inclusion of innovative pedagogies and assessment where examinations are considered necessary. Further, some research indicates online examinations are able to discern a true pass from a true fail with a high level of accuracy ( Ardid et al., 2015 ), yet there is no systematic consolidation of the literature. We believe this timely review is critical for the progression of the field in first stepping back and consolidating the existing practices to support dissemination and further innovation. The pursuit of such systems may be to provide formative feedback and to assess learning outcomes, but a dominant rationale for final examinations is to authenticate learning. That is, to ensure the student whose name is on the student register, is the student who is completing the assessed work. The development of digitalized examination pilot studies and case studies are becoming an expected norm with universities developing responses to a growing online curriculum offering (e.g. Al-Hakeem & Abdulrahman, 2017 ; Alzu'bi, 2015 ; Anderson et al., 2005 ; Fluck et al., 2009 ; Fluck et al., 2017 ; Fluck, 2019 ; Seow & Soong, 2014 ; Sindre & Vegendla, 2015 ; Steel et al., 2019 ; Wibowo et al., 2016 ).

As many scholars highlight, cheating is a common component of the contemporary student experience ( Jordan, 2001 ; Rettinger & Kramer, 2009 ) despite that it should not be. Some are theorizing responses to the inevitability of cheating from developing student capacity for integrity ( Crawford, 2015 ; Wright, 2011 ) to enhancing detection of cheating ( Dawson & Sutherland-Smith, 2018 , 2019 ) and legislation to ban contract cheating ( Amigud & Dawson, 2020 ). We see value in the pursuit of methods that can support integrity in student assessment, including during rapid changes to the curriculum. The objective of this paper is to summarize the current evidence on online examination methods, and scholarly responses to authentication of learning and the mitigation of cheating, within the confines of assessment that enables learning and student wellbeing. We scope out preparation for examinations (e.g. Nguyen & Henderson, 2020 ) to enable focus on the online exam setting specifically.

2. Material and methods

2.1. Search strategy

To address the objective of this paper, a systematic literature review was undertaken, following the PRISMA approach for article selection (Moher et al., 2009). The keyword string was developed incorporating the U.S. National Library of Medicine (2019) MeSH (Medical Subject Headings) terms: [("online" OR "electronic" OR "digital") AND ("exam*" OR "test") AND ("university" OR "educat*" OR "teach" OR "school" OR "college")]. The following databases were queried: A + Education (Informit), ERIC (EBSCO), Education Database (ProQuest), Education Research Complete (EBSCO), Educational Research Abstracts Online (Taylor & Francis), Informit, and Scopus. These search phrases enabled the collection of a broad range of literature on online examinations as well as terms often used synonymously, such as e-examination/eExams and BYOD (bring-your-own-device) examinations. The eligibility criteria included peer-reviewed journal articles or full conference papers on online examinations in the university sector, published between 2009 and 2018, available in English. As other sources (e.g. dissertations) are not peer-reviewed, and we aimed to identify rigorous best-practice literature, we excluded these. We subsequently conducted a general search in Google Scholar and found no additional results. All records returned from the search were extracted and imported into the Covidence® online software by the first author.
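For readers who wish to adapt the strategy, the keyword string above can be assembled programmatically. The following is a minimal sketch, not part of the original method: the three term groups are taken from the text, while the `or_group` helper and the plain-text join are our own illustrative assumptions rather than the query syntax of any particular database.

```python
# Illustrative reconstruction of the review's Boolean search string.
mode_terms = ["online", "electronic", "digital"]
exam_terms = ["exam*", "test"]
setting_terms = ["university", "educat*", "teach", "school", "college"]

def or_group(terms):
    """Join synonyms into a parenthesized OR clause."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# AND the three concept groups together, as in the published string.
query = " AND ".join(or_group(g) for g in [mode_terms, exam_terms, setting_terms])
print(query)
# ("online" OR "electronic" OR "digital") AND ("exam*" OR "test")
#   AND ("university" OR "educat*" OR "teach" OR "school" OR "college")
```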

2.2. Selection procedure and quality assessment

The online Covidence® software facilitated article selection following the PRISMA approach. Each of the 1906 titles and abstracts was double-screened by the authors based on the eligibility criteria. We also excluded non-higher education examinations, given the context around student demographics is often considerably different from vocational education, primary schools, and high schools. Where there was discordance between the authors on the inclusion or exclusion of a title or abstract, consensus discussions were undertaken. The screening reduced the volume of papers significantly because numerous papers related to a different education context or involved online or digital forms of medical examinations. Next, the full text of each selected abstract was double-reviewed, with discordance managed through a consensus discussion. The papers selected following the double full-text review were accepted for this review. Each accepted paper was reviewed for quality using the MMAT system (Hong et al., 2018) and the scores were calculated as high, medium, or low quality based on the matrix (Hong et al., 2018). A summary of this assessment is presented in Table 1.

Summary of article characteristics.

QAS, quality assessment score.

2.3. Thematic analysis

Following the process described by Braun and Clarke (2006) , an inductive thematic approach was undertaken to identify common themes identified in each article. This process involves six stages: data familiarization, data coding, theme searching, theme review, defining themes, and naming themes. Familiarization with the literature was achieved during the screening, full-text, and quality review process by triple exposure to works. The named authors then inductively coded half the manuscripts each. The research team consolidated the data together to identify themes. Upon final agreement of themes and their definitions, the write-up was split among the team with subsequent review and revision of ideas in themes through independent and collaborative writing and reviewing ( Creswell & Miller, 2000 ; Lincoln & Guba, 1985 ). This resulted in nine final themes, each discussed in-depth during the discussion.

3. Results

There were thirty-six (36) articles identified that met the eligibility criteria and were selected following the PRISMA approach, as shown in Fig. 1.

Fig. 1

PRISMA results.

3.1. Characteristics of selected articles

The selected articles are from a wide range of discipline areas and countries. Table 1 summarizes the characteristics of the selected articles. The United States of America held the largest share (14, 38.9%) of the publications on online examinations, followed by Saudi Arabia (4, 11.1%), China (2, 5.6%), and Australia (2, 5.6%). When aggregated at the region level, there were equal numbers of papers from North America and Asia (14, 38.9% each), with Europe (6, 16.7%) and Oceania (2, 5.6%) least represented in the selection of articles. There has been considerable growth in publications concerning online examinations in the past five years. Publications between the years 2009 and 2015 represented a third (12, 33.3%) of the total number of selected papers; the majority (24, 66.7%) were published in the last three years. Papers that described a system but did not include empirical evidence scored a low quality rank, as they did not meet many of the criteria that relate to the evaluation of a system.

When examining the types of papers, the majority (30, 83.3%) were empirical research, with the remainder commentary papers (6, 16.7%). Most papers reported a quantitative study design (32, 88.9%), compared to two (5.6%) qualitative study designs and two (5.6%) that used a mixed method. For quantitative studies, there was a range between nine and 1800 student participants ( x ̄  = 291.62) across 26 studies, and a range between two and 85 staff participants ( x ̄  = 30.67) in one study. The most common quantitative methods were self-administered surveys and analysis of numerical examination student grades (38% each). Qualitative and mixed methods studies only adopted interviews (6%). Only one qualitative study reported a sample of students ( n  = 4), with two qualitative studies reporting a sample of staff ( n  = 2, n  = 5).
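As a quick arithmetic check of the breakdown above, note that the study-design percentages are computed over all 36 selected papers. A minimal sketch (the counts come from the text; the code itself is ours):

```python
# Verify the reported study-design percentages against the paper counts.
total = 36
designs = {"quantitative": 32, "qualitative": 2, "mixed methods": 2}

for design, n in designs.items():
    print(f"{design}: {n} ({n / total:.1%})")
# quantitative: 32 (88.9%)
# qualitative: 2 (5.6%)
# mixed methods: 2 (5.6%)
```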

3.2. Student perceptions

Today's students prefer online examinations to paper exams (68.75% preference for online over paper-based examinations: Attia, 2014; 56–62.5%: Böhmer et al., 2018; no percentage: Schmidt, Ralph & Buskirk, 2009; 92%: Matthíasdóttir & Arnalds, 2016; no percentage: Pagram et al., 2018; 51%: Park, 2017; 84%: Williams & Wong, 2009). Two reasons provided for the preference are the increased speed and ease of editing responses (Pagram et al., 2018), with one study finding two-thirds (67%) of students reported a positive experience in an online examination environment (Matthíasdóttir & Arnalds, 2016). Students believe online examinations allow a more authentic assessment experience (Williams & Wong, 2009), with 78 percent of students reporting consistencies between the online environment and their future real-world environment (Matthíasdóttir & Arnalds, 2016).

Students perceive that online examinations save time (75.0% of students surveyed) and are more economical (87.5%) than paper examinations (Attia, 2014). They provide greater flexibility for completing examinations (Schmidt et al., 2009), with faster access to remote student papers (87.5%), and students trust the results of online over paper-based examinations (78.1%: Attia, 2014). The majority of students (59.4%: Attia, 2014; 55.5%: Pagram et al., 2018) perceive that the online examination environment makes it easier to cheat. More than half (56.25%) of students believe that a lack of information and communication technology (ICT) skills does not adversely affect performance in online examinations (Attia, 2014). In one study of interface preferences (Abdel Karim & Shukur, 2016), the most preferred font face was Arial (23% of students), a font also recommended by Vision Australia (2014) in their guidelines for online and print inclusive design and legibility considerations. Most students (87%) preferred black text on a white background. With regard to onscreen time counters, a countdown counter was the most preferred option (42%) compared to a traditional analogue clock (30%) or an ascending counter (22%). Many systems allow students to set their preferred remaining-time reminder or alert, including 15 min remaining (preferred by 35% of students), 5 min remaining (26%), mid-examination (15%), or 30 min remaining (13%).
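These reminder preferences translate naturally into a countdown component. Below is a minimal sketch, assuming a simple one-second polling loop and print-based alerts; the function name and defaults are illustrative, not drawn from any system in the sample.

```python
import time

def run_countdown(duration_s: int, reminders_s: tuple = (15 * 60, 5 * 60)) -> None:
    """Count down an exam, announcing the student's chosen alert points."""
    pending = sorted(reminders_s, reverse=True)   # e.g. 15 min, then 5 min
    end = time.monotonic() + duration_s
    while (remaining := end - time.monotonic()) > 0:
        if pending and remaining <= pending[0]:
            print(f"Reminder: about {pending[0] // 60} minutes remaining")
            pending.pop(0)
        time.sleep(1)
    print("Time is up")

# run_countdown(60 * 60)  # a one-hour examination
```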

3.3. Student performance

Several studies in the sample referred to a lack of score variation between the results of examinations across different administration methods. For example, final examination scores did not differ significantly across online and traditional examination modalities (Gold & Mozes-Carmel, 2017). This is reinforced by a test of the validity and reliability of computer-based and paper-based assessment that demonstrated no significant difference (Oz & Ozturan, 2018), and by the equality of grades identified across the two modalities (Stowell & Bennett, 2010).

When considering student perceptions, there tended to be favorable ratings of online examinations across the studies documented in our sample. In a small sample of 34 postgraduate students, the respondents had positive perceptions towards online learning assessments (67.4%). The students also believed these contributed to improved learning and feedback (67.4%), and 77 percent had favorable attitudes towards online assessment (Attia, 2014). In a pre-examination survey, students indicated they preferred to type rather than to write, felt more confident about the examination, and had limited issues with software and hardware (Pagram et al., 2018). In the same sample's post-examination survey, within the design and technology examination, students felt the software and hardware were simple to use, yet many did not feel at ease with their use of an e-examination.

Rios and Liu (2017) compared proctored and non-proctored online examinations across several aspects, including test-taking behavior. Their study did not identify any difference in the test-taking behavior of students between the two environments. There was no significant difference between omitted items and not-reached items, nor with regard to rapid guessing. A negligible difference existed for students older than thirty-five years, and gender was a nonsignificant factor.

3.4. Anxiety

Scholars have an increasing awareness of the role that test anxiety has in reducing student success in online learning environments (Kolski & Weible, 2018). The manuscripts identified by the literature scan showed inconsistent results for the effect that examination modalities have on student test anxiety. A study of 69 psychology undergraduates identified that students who typically experienced high anxiety in traditional test environments had lower anxiety levels when completing an online examination (Stowell & Bennett, 2010). In a quasi-experimental study (n = 38 nursing students), when baseline anxiety was controlled, students in computer-based examinations had higher degrees of test anxiety.

In 34 postgraduate student interviews, only three opposed online assessment based on a perceived lack of technical skill (e.g. typing; Attia, 2014). Around two-thirds of participants identified some form of fear based on internet disconnection, electricity, slow typing, or family disturbances at home. A 37-participant community college study used proximal indicators (e.g. lip licking and biting, furrowed eyebrows, and seat squirming) to assess the rate of test anxiety under webcam-based examination proctoring (Kolski & Weible, 2018). Teacher strategies to reduce anxiety include enabling students to consider, review, and acknowledge their anxieties (Kolski & Weible, 2018). Responses such as students writing about their anxiety, or responding to a multiple-choice questionnaire on test anxiety, reduced anxiety; students in test groups who were provided anxiety items or expressive writing exercises performed better (Kumar, 2014).

3.5. Cheating

Cheating was the most prevalent area among all the themes identified. Cheating in asynchronous, objective, and online assessments is argued by some to be at unconscionable levels ( Sullivan, 2016 ). In one survey, 73.6 percent of students felt it was easier to cheat on online examinations than regular examinations ( Aisyah et al., 2018 ). This is perhaps because students are monitored in paper and pencil examinations, compared to online examinations where greater control of variables is required to mitigate cheating. Some instructors have used randomized examination batteries to minimize cheating potential through peer-to-peer sharing ( Schmidt et al., 2009 ).

Scholars identify various methods for mitigating cheating. One paper lists: identifying the test taker, preventing examination theft, preventing unauthorized use of textbooks/notes, preparing a set-up for the online examination, preventing unauthorized student access to a test bank, preventing the use of devices (e.g. phone, Bluetooth, and calculators), limiting access to other people during the examination, ensuring equitable access to equipment, identifying computer crashes, and addressing inconsistency of proctoring methods (Hearn Moore et al., 2017). In another view, the problem of cheating is social as well as technological. While technology is considered the current norm for reducing cheating, these tools have been mostly ineffective (Sullivan, 2016). Access to multiple question banks through effective quiz design and delivery is a mechanism to reduce the propensity to cheat, by reducing the stakes through multiple delivery attempts (Sullivan, 2016). Question and answer randomization, continuous question development, multiple examination versions, open book options, time stamps, and diversity in question formats, sequences, types, and frequency are used to manage the perception and potential for cheating. In the study with MBA students, perception of the ability to cheat seemed to be critical for the development of a safe online examination environment (Sullivan, 2016).
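To make the randomization tactic concrete, consider the sketch below. It is illustrative only: the bank contents and the per-student seeding scheme are our own assumptions, but the idea of drawing a different subset and ordering for each student mirrors the question-battery strategies described above.

```python
import random

QUESTION_BANKS = {
    "topic_a": ["A1", "A2", "A3", "A4"],
    "topic_b": ["B1", "B2", "B3", "B4"],
}

def build_exam(student_id: str, per_topic: int = 2) -> list:
    """Draw and order questions deterministically per student."""
    rng = random.Random(student_id)               # reproducible per-student draw
    exam = []
    for bank in QUESTION_BANKS.values():
        exam.extend(rng.sample(bank, per_topic))  # random subset per topic
    rng.shuffle(exam)                             # randomize question order too
    return exam

print(build_exam("s123"))  # each student ID yields a distinct paper
```

Because the draw is seeded on the student identifier, the same student can be re-issued an identical paper after a technical failure, while peers still receive different versions.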

Dawson (2016), in a review of bring-your-own-device examinations, identified potential hacks including:

  • Copying contents of USB to a hard drive to make a copy of the digital examination available to others,
  • Use of a virtual machine to maintain access to standard applications on their device,
  • USB keyboard hacks to allow easy access to other documents (e.g. personal notes),
  • Modifying software to maintain complete control of their own device, and
  • A cold boot attack to maintain a copy of the examination.

The research on cheating has focused mainly on technical challenges (e.g. hardware to support cheating), rather than ethical and social issues (e.g. behavioral development to curb future cheating behaviors). The latter has been researched in more depth in traditional assessment methods (e.g. Wright, 2015 ). In a study on Massive Open Online Courses (MOOCs), motivations for students to engage in optional learning stemmed from knowledge, work, convenience, and personal interest ( Shapiro et al., 2017 ). This provides possible opportunities for future research to consider behavioral elements for responding to cheating, rather than institutional punitive arrangements.

3.6. Staff perception

Schmidt et al. (2009) also examined the perceptions of academics with regard to online examination. Academics reported that their biggest concern with using online examinations is the potential for cheating. There was a perception that students may get assistance during an examination. The reliability of the technology is the second most critical concern of academic staff. This includes concerns about internet connectivity as well as computer or software issues. The third concern is ease of use, both for the academic and for students. Academics want a system in which examinations are easy and quick to create, manage, and mark, and which students with proficient ICT skills can use (Schmidt et al., 2009). Furthermore, staff reported in a different study that marking digital work was easier, and preferred it over paper examinations because of the reduction in paper (Pagram et al., 2018). They believe preference should be given to using university machines instead of students using their own computers, mainly due to issues around operating system compatibility and data loss.

3.7. Authentication and security

Authentication was recognized as a significant issue for examinations. Some scholars indicate that the primary reason for requiring physical attendance at proctored examinations is to validate and authenticate the student taking the assessment (Chao et al., 2012). Importantly, the validity of online proctored examination administration procedures is argued to be lower than that of proctored on-campus examinations (Rios & Liu, 2017). Most responses to online examinations use bring-your-own-device models in which laptops are brought to traditional lecture theatres, software is used on personal devices in any desired location, or prescribed devices are used in a classroom setting. The primary goal of each is to balance the authentication of students with maintaining the integrity and value of achieving learning outcomes.

In a review of current authentication options (AbuMansour, 2017), the use of fingerprint reading, streaming media, and follow-up identifications were used to authenticate small cohorts of students. Some learning management systems (LMS) have developed subsidiary products (e.g. Weaver within Moodle) to support authentication processes. Some biometric software authenticates at several levels, such as keystrokes for motor control, stylometry for linguistics, and application behavior for semantics; this involves capturing physical or behavioral samples, extracting unique data, comparing distance measures, and recording decision-making. Development of online examinations should be oriented towards the same theory as open-book examinations.
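As a concrete illustration of one such layer, a keystroke-dynamics check might compare a session's typing rhythm against an enrolled profile. The sketch below is a toy, assuming a single feature (mean inter-key interval) and an invented threshold; production biometric systems use far richer feature sets and classifiers.

```python
import statistics

def keystroke_distance(enrolled_ms: list, session_ms: list) -> float:
    """Absolute difference between mean inter-key intervals (milliseconds)."""
    return abs(statistics.mean(enrolled_ms) - statistics.mean(session_ms))

enrolled = [182.0, 175.0, 190.0, 171.0]   # intervals captured at enrollment
session = [240.0, 251.0, 236.0, 244.0]    # intervals from the current exam

THRESHOLD_MS = 40.0                       # hypothetical review threshold
if keystroke_distance(enrolled, session) > THRESHOLD_MS:
    print("Flag session for identity review")  # human review, not auto-fail
```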

A series of models are proposed in our literature sample. AbuMansour (2017) proposes a series of processes to develop examinations that minimize cheating (e.g. question batteries), deploy authentication techniques (e.g. keystrokes and fingerprints), and conduct post-hoc assessments to search for cheating. The Aisyah et al. (2018) model identifies two perspectives from which to conceptualize authentication systems: examinee and admin. From the examinee perspective, authentication occurs at the pre-, intra-, and post-examination periods. From the administrative perspective, accessing photographic authentication from the pre- and intra-examination periods can be used to validate the examinee. The open book open web (OBOW: Mohanna & Patel, 2016) model uses the application of authentic assessment to place the learner in the role of a decision-maker and expert witness, with validation achieved by avoiding any question that could have a generic answer.

The Smart Authenticated Fast Exams (SAFE: Chebrolu et al., 2017 ) model uses application focus (e.g. continuously tracking focus of examinee), logging (phone state, phone identification, and Wi-Fi status), visual password (a password that is visually presented but not easily communicated without photograph), Bluetooth neighborhood logging (to check for nearby devices), ID checks, digitally signed application, random device swap, and the avoidance of ‘bring your own device’ models. The online comprehensive examination (OCE) was used in a National Board Dental Examination to test knowledge in a home environment with 200 multiple choice questions, and the ability to take the test multiple times for formative knowledge development.

Some scholars recommend online synchronous assessments as an alternative to traditional proctored examinations while maintaining the ability to manually authenticate ( Chao et al., 2012 ). In these assessments: quizzes are designed to test factual knowledge, practice for procedural, essay for conceptual, and oral for metacognitive knowledge. A ‘cyber face-to-face’ element is required to enable the validation of students.

3.8. Interface design

The interface of a system will influence whether a student perceives the environment to be an enabler of or barrier to online examinations. Abdel Karim and Shukur (2016) summarized the potential interface design features that emerged from a systematic review of the literature on this topic, as shown in Table 2. The incorporation of navigation tools has also been identified by students and staff as an essential design feature (Rios & Liu, 2017), as is an auto-save functionality (Pagram et al., 2018).

Potential interface design features ( Abdel Karim & Shukur, 2016 ).

3.9. Technology issues

None of the studies that included technological problems in their design reported any significant issues (Böhmer et al., 2018; Matthíasdóttir & Arnalds, 2016; Schmidt et al., 2009). One study stated that 5 percent of students reported some problem, ranging from a slow system through to the system not working well with the computer's operating system; however, the authors stated that no technical problems resulting in the inability to complete the examination were reported (Matthíasdóttir & Arnalds, 2016). In a separate study, students reported that they would prefer to use university technology to complete the examination, due to distrust that the system would work with their home computer or laptop operating system, or fear of losing data during the examination (Pagram et al., 2018). While the study did not report any problems loading on desktop machines, some student laptops from workplaces had firewalls, and as such had to load the system from a USB.

4. Discussion

This systematic literature review sought to assess the current state of literature concerning online examinations and its equivalents. For most students, online learning environments created a system more supportive of their wellbeing, personal lives, and learning performance. Staff preferred online examinations for their workload implications and ease of completion, and basic evaluation of print-based examination logistics could identify some substantial ongoing cost savings. Not all staff and students preferred the idea of online test environments, yet studies that considered age and gender identified only negligible differences ( Rios & Liu, 2017 ).

While the literature on online examinations is growing, there is still a dearth of discussion at the pedagogical and governance levels. Our review and new familiarity with the papers lead us to point researchers in two principal directions: accreditation and authenticity. We acknowledge that there are many possible pathways to consider, with reference to the consistency of application, the validity and reliability of online examinations, and whether online examinations enable better measurement and greater student success. There are also opportunities to synthesize the online examination literature with other innovative digital pedagogical devices: for example, immersive learning environments (Herrington et al., 2007), mobile technologies (Jahnke & Liebscher, 2020), social media (Giannikas, 2020), and web 2.0 technologies (Bennett et al., 2012). The literature examined acknowledges key elements of the underlying needs for online examinations from student, academic, and technical perspectives. These include the need for online examinations to be accessible, able to distinguish a true pass from a true fail, secure, designed to minimize opportunities for cheating, able to authenticate the student accurately, able to reduce marking time, and resilient to software or technological failure.

We now turn our attention to areas of need in future research, focusing on accreditation and authenticity over these alternatives, given there is a real need for more research before knowledge on the latter pathways can be synthesized.

4.1. The accreditation question

The influence of external accreditation bodies was named frequently and ominously among the sample group, but lacked clarity surrounding exact parameters and expectations. Rios and Liu (2017, p. 231) identified that a specific measure was used "for accreditation purposes". Hylton et al. (2016, p. 54) specified that the US Department of Education requires that "appropriate procedures or technology are implemented" to authenticate distance students. Gehringer and Peddycord (2013) empirically found that online/open-web examinations provided more significant data for accreditation. Underlying university decisions to use face-to-face invigilated examination settings is the need to authenticate learning – a requirement of many governing bodies globally. The continual refinement of rules has enabled a degree of assurance that students are who they say they are.

Nevertheless, sophisticated networks have been established globally to support direct student cheating, from completing quick assessments and providing calculators with secret search engine capability through to full completion of a course, inclusive of attending on-campus invigilated examinations. The authentication process in invigilated examinations does not typically account for distance students who have a forged student identification card that enables a contract service to complete their examinations. Under the requirement to assure authentication of learning, invigilated examinations will require revision to meet contemporary environments. The inclusion of a broader range of big data, from keystroke patterns and linguistic analysis to whole-of-student analytics over a student lifecycle, is necessary to identify areas of risk from the institutional perspective. Where a student shows a significantly different method of typing or sentence structure, review is necessary.

An experimental study on the detection of cheating in a psychology unit found teachers could detect cheating 62 percent of the time (Dawson & Sutherland-Smith, 2018). Automated algorithms could be used to support the pre-identification of this process, given lecturers and professors are unlikely to be explicitly coding for cheating propensity when grading multiple hundreds of papers on the same topic. Future scholars should consider the innate differences that exist among test-taking behaviors that could be codified to create pattern recognition software. Even in traditional invigilated examinations, linguistics and handwriting evaluations could be used to identify cheating.
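A minimal version of such pre-identification could flag scripts whose features deviate strongly from the cohort for human review. The sketch below assumes a single stylometric feature (words per minute) and an invented cutoff; it is a starting point, not a validated detector.

```python
import statistics

def flag_outliers(features: dict, z_cutoff: float = 3.0) -> list:
    """Return students whose feature z-score exceeds the cutoff."""
    values = list(features.values())
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [s for s, x in features.items() if abs(x - mu) / sigma > z_cutoff]

wpm = {"s01": 41.0, "s02": 38.5, "s03": 44.0, "s04": 39.0, "s05": 95.0}
print(flag_outliers(wpm, z_cutoff=1.5))  # ['s05'] under these toy numbers
```

Any such flag should trigger review by a marker rather than an automated accusation, consistent with the detection-rate evidence above.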

4.2. Authentic assessments and examinations

The literature identified in the sample discussed, with limited depth, the role of authentic assessment in examinations. The evolution of pedagogy and teaching principles (e.g. constructive alignment; Biggs, 1996) has paved the way for revised approaches to assessment and student learning. In the case of invigilated examinations, universities have been far slower to progress innovative solutions despite growing evidence that students prefer the flexibility and opportunities afforded by digitalizing exams. University commitments to the development of authentic assessment environments will require a radical revision of current examination practice to incorporate real-life learning processes and unstructured problem-solving (Williams & Wong, 2009). While traditional examinations may be influenced by financial efficacy, accreditation, and authentication pressures, there are upward pressures from student demand, student success, and student wellbeing to create more authentic learning opportunities.

The online examination setting offers greater connectivity to the kinds of environments graduates will be expected to engage in on a regular basis. The development of time management skills to plan times to complete a fixed time examination is reflected in the business student's need to pitch and present at certain times of the day to corporate stakeholders, or a dentist maintaining a specific time allotment for the extraction of a tooth. The completion of a self-regulated task online with tangible performance outcomes is reflected in many roles from lawyer briefs on time-sensitive court cases to high school teacher completions of student reports at the end of a calendar year. Future practitioner implementation and evaluation should be focused on embedding authenticity into the examination setting, and future researchers should seek to understand better the parameters by which online examinations can create authentic learning experiences for students. In some cases, the inclusion of examinations may not be appropriate; and in these cases, they should be progressively extracted from the curriculum.

4.3. Where to next?

As institutions begin to provide greater learning flexibility to students with digital and blended offerings, there is a scholarly need to consider the efficacy of the examination environments associated with these settings. Home computers and high-speed internet are becoming commonplace (Rainie & Horrigan, 2005), though such an assumption has implications for student equity. As Warschauer (2007, p. 41) puts it, "the future of learning is digital". Our task as educators will be to understand how we can create high-impact learning opportunities while responding to an era of digitalization. Research considering digital fluency in students will be pivotal (Crawford & Butler-Henderson, 2020). Important, too, is the scholarly imperative to examine the implementation barriers and successes associated with online examinations in higher education institutions, given the lack of clear cross-institutional case studies. There is also a symbiotic question that requires addressing by scholars in our field, beginning with understanding how online examinations can enable higher education, and likewise how higher education can shape and inform the implementation and delivery of online examinations.

4.4. Limitations

This study adopted a rigorous PRISMA method for preliminary identification of papers for inclusion, the MMAT protocol for identifying the quality of papers, and an inductive thematic analysis for analyzing the included papers. These processes respond directly to limitations of subjectivity and assure breadth and depth of literature. However, the systematic literature review method limits the papers included by the search criteria used. While we opted for a broad set of terms, it is possible we missed papers that would typically have been identified in other manual and critical identification processes. The lack of published research provided a substantial opportunity to develop a systematic literature review to summarize the state of the evidence, but each comment is limited by the availability of data. A meta-analysis of quantitative research in this area of study would be complicated by the lack of replication in this infant field. Indeed, our ability to unpack which institutions currently use online examinations (and variants thereof) relied on scholars publishing on such implementations, many of whom have not. The systematic literature review was, in our opinion, the most appropriate method to summarize the current state of literature despite the above limitations, and it provides a strong foundation for an evidence-based future of online examinations. We also acknowledge the deep connection that this research may have to the contemporary COVID-19 climate in higher education, with many universities opting for online forms of examinations to support physically distanced education and emergency remote teaching. There were 138 publications on broad learning and teaching topics during the first half of 2020 (Butler-Henderson et al., 2020). Future research may consider how this has changed or influenced the nature of rapid innovation for online examinations.

5. Conclusion

This systematic literature review considered the contemporary literature on online examinations and their equivalents. We discussed student, staff, and technological research as it was identified in our sample. The dominant focus of the literature is still oriented on preliminary evaluations of implementation. These include what processes changed at a technological level, and how students and staff rated their preferences. There were some early attempts to explore the effect of online examinations on student wellbeing and student performance, along with how the changes affect the ability for staff to achieve.

Higher education needs this succinct summary of the literature on online examinations to understand the barriers and how they can be overcome, encouraging greater uptake of online examinations in tertiary education. One of the largest barriers is perceptions of using online examinations. Once students have experienced online examinations, there is a preference for this format due to its ease of use. The literature reported no significant difference in final examination scores across online and traditional examination modalities. Student anxiety decreased once students had used the online examination software. This information needs to be provided to students to change their perceptions and decrease anxiety when implementing an online examination system. Similarly, the information summarized in this paper needs to be provided to staff, such as the data related to cheating, reliability of the technology, ease of use, and reduction in time for establishing and marking examinations. When selecting a system, institutions should seek one that includes biometrics with a high level of precision, such as user authentication, and movement, sound, and keystroke monitoring (reporting deviations so the recording can be reviewed). These features reduce the need for online examinations to be invigilated. Other system features should include locking the system or browser, cloud-based technology so local updates are not required, and an interface design that makes using the online examination intuitive. Institutions should also consider how they will address technological failures and digital disparities, such as literacy and access to technology.

We recognize the need for substantially more evidence surrounding the post-implementation stages of online examinations. The current use of online examinations across disciplines, institutions, and countries needs to be examined to understand the successes and gaps. Beyond questions of 'do students prefer online or on-campus exams', serious questions of how student mental wellbeing, employability, and achievement of learning outcomes can be improved as a result of an online examination pedagogy are critical. In conjunction is the need to break down the facets and types of digitally enhanced examinations (e.g. online, e-examination, BYOD examinations, and similar) and compare each of these for their respective efficacy in enabling student success against institutional implications. While this paper was only able to capture the literature that exists, we believe the next stage of literature needs to consider broader implications than immediate student perceptions, toward the achievement of institutional strategic imperatives that may include student wellbeing, student success, student retention, financial viability, staff enrichment, and student employability.

Author statement

Both authors Kerryn Butler-Henderson and Joseph Crawford contributed to the design of this study, literature searches, data abstraction and cleaning, data analysis, and development of this manuscript. All contributions were equal.

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

  • Abdel Karim N., Shukur Z. Proposed features of an online examination interface design and its optimal values. Computers in Human Behavior. 2016;64:414–422. doi: 10.1016/j.chb.2016.07.013.
  • AbuMansour H. Proposed bio-authentication system for question bank in learning management systems. In: 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA); 2017. pp. 489–494.
  • Aisyah S., Bandung Y., Subekti L.B. Development of continuous authentication system on Android-based online exam application. In: 2018 International Conference on Information Technology Systems and Innovation (ICITSI); 2018. pp. 171–176.
  • Al-Hakeem M.S., Abdulrahman M.S. Developing a new e-exam platform to enhance the university academic examinations: The case of Lebanese French University. International Journal of Modern Education and Computer Science. 2017;9(5):9. doi: 10.5815/ijmecs.2017.05.02.
  • Alzu'bi M. The effect of using electronic exams on students' achievement and test takers' motivation in an English 101 course. In: Proceedings of Conference of the International Journal of Arts & Sciences; 2015. pp. 207–215.
  • Amigud A., Dawson P. The law and the outlaw: Is legal prohibition a viable solution to the contract cheating problem? Assessment & Evaluation in Higher Education. 2020;45(1):98–108. doi: 10.1080/02602938.2019.1612851.
  • Anderson H.M., Cain J., Bird E. Online course evaluations: Review of literature and a pilot study. American Journal of Pharmaceutical Education. 2005;69(1):34–43. doi: 10.5688/aj690105.
  • Ardid M., Gómez-Tejedor J.A., Meseguer-Dueñas J.M., Riera J., Vidaurre A. Online exams for blended assessment: Study of different application methodologies. Computers & Education. 2015;81:296–303. doi: 10.1016/j.compedu.2014.10.010.
  • Attia M. Postgraduate students' perceptions toward online assessment: The case of the Faculty of Education, Umm Al-Qura University. In: Wiseman A., Alromi N., Alshumrani S., editors. Education for a knowledge society in Arabian Gulf countries. Bingley, United Kingdom: Emerald Group Publishing Limited; 2014. pp. 151–173.
  • Bennett S., Bishop A., Dalgarno B., Waycott J., Kennedy G. Implementing web 2.0 technologies in higher education: A collective case study. Computers & Education. 2012;59(2):524–534.
  • Biggs J. Enhancing teaching through constructive alignment. Higher Education. 1996;32(3):347–364. doi: 10.1007/bf00138871.
  • Böhmer C., Feldmann N., Ibsen M. E-exams in engineering education—online testing of engineering competencies: Experiences and lessons learned. In: 2018 IEEE Global Engineering Education Conference (EDUCON); 2018. pp. 571–576.
  • Botezatu M., Hult H., Tessma M.K., Fors U.G. Virtual patient simulation for learning and assessment: Superior results in comparison with regular course exams. Medical Teacher. 2010;32(10):845–850. doi: 10.3109/01421591003695287.
  • Braun V., Clarke V. Using thematic analysis in psychology. Qualitative Research in Psychology. 2006;3(2):77–101. doi: 10.1191/1478088706qp063oa.
  • Butler-Henderson K., Crawford J., Rudolph J., Lalani K., Sabu K.M. COVID-19 in Higher Education Literature Database (CHELD V1): An open access systematic literature review database with coding rules. Journal of Applied Learning and Teaching. 2020;3(2). doi: 10.37074/jalt.2020.3.2.11. Advance online publication.
  • Butt A. Quantification of influences on student perceptions of group work. Journal of University Teaching and Learning Practice. 2018;15(5).
  • Chao K.J., Hung I.C., Chen N.S. On the design of online synchronous assessments in a synchronous cyber classroom. Journal of Computer Assisted Learning. 2012;28(4):379–395. doi: 10.1111/j.1365-2729.2011.00463.x.
  • Chebrolu K., Raman B., Dommeti V.C., Boddu A.V., Zacharia K., Babu A., Chandan P. SAFE: Smart Authenticated Fast Exams for student evaluation in classrooms. In: Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education; 2017. pp. 117–122.
  • Chen Q. An application of online exam in discrete mathematics course. In: Proceedings of the ACM Turing Celebration Conference – China; 2018. pp. 91–95.
  • Chytrý V., Nováková A., Rícan J., Simonová I. Comparative analysis of online and printed form of testing in scientific reasoning and metacognitive monitoring. In: 2018 International Symposium on Educational Technology (ISET); 2018. pp. 13–17.
  • Crawford J. Authentic leadership in student leaders: An empirical study in an Australian university. Honours Dissertation, University of Tasmania, Australia; 2015.
  • Crawford J., Butler-Henderson K. Digitally empowered workers and authentic leaders: The capabilities required for digital services. In: Sandhu K., editor. Leadership, management, and adoption techniques for digital service innovation. Hershey, Pennsylvania: IGI Global; 2020. pp. 103–124.
  • Crawford J., Butler-Henderson K., Rudolph J., Malkawi B., Glowatz M., Burton R., Magni P., Lam S. COVID-19: 20 countries' higher education intra-period digital pedagogy responses. Journal of Applied Learning and Teaching. 2020;3(1):9–28. doi: 10.37074/jalt.2020.3.1.7.
  • Creswell J., Miller D. Determining validity in qualitative inquiry. Theory into Practice. 2000;39(3):124–130. doi: 10.1207/s15430421tip3903_2.
  • Daffin L. Jr., Jones A. Comparing student performance on proctored and non-proctored exams in online psychology courses. Online Learning. 2018;22(1):131–145. doi: 10.24059/olj.v22i1.1079.
  • Dawson P. Five ways to hack and cheat with bring-your-own-device electronic examinations. British Journal of Educational Technology. 2016;47(4):592–600. doi: 10.1111/bjet.12246.
  • Dawson P., Sutherland-Smith W. Can markers detect contract cheating? Results from a pilot study. Assessment & Evaluation in Higher Education. 2018;43(2):286–293. doi: 10.1080/02602938.2017.1336746.
  • Dawson P., Sutherland-Smith W. Can training improve marker accuracy at detecting contract cheating? A multi-disciplinary pre-post study. Assessment & Evaluation in Higher Education. 2019;44(5):715–725. doi: 10.1080/02602938.2018.1531109.
  • Eagly A., Sczesny S. Stereotypes about women, men, and leaders: Have times changed? In: Barreto M., Ryan M.K., Schmitt M.T., editors. Psychology of women book series. The glass ceiling in the 21st century: Understanding barriers to gender equality. American Psychological Association; 2009. pp. 21–47.
  • Ellis S., Barber J. Expanding and personalizing feedback in online assessment: A case study in a school of pharmacy. Practitioner Research in Higher Education. 2016;10(1):121–129.
  • Fluck A. An international review of eExam technologies and impact. Computers & Education. 2019;132:1–15. doi: 10.1016/j.compedu.2018.12.008.
  • Fluck A., Adebayo O.S., Abdulhamid S.I.M. Secure e-examination systems compared: Case studies from two countries. Journal of Information Technology Education: Innovations in Practice. 2017;16:107–125. doi: 10.28945/3705.
  • Fluck A., Pullen D., Harper C. Case study of a computer based examination system. Australasian Journal of Educational Technology. 2009;25(4):509–533. doi: 10.14742/ajet.1126.
  • Gehringer E., Peddycord B. III. Experience with online and open-web exams. Journal of Instructional Research. 2013;2:10–18. doi: 10.9743/jir.2013.2.12.
  • Giannikas C. Facebook in tertiary education: The impact of social media in e-learning. Journal of University Teaching and Learning Practice. 2020; 17 (1):3. [ Google Scholar ]
  • Gold S.S., Mozes-Carmel A. A comparison of online vs. proctored final exams in online classes. Journal of Educational Technology. 2009; 6 (1):76–81. doi: 10.26634/jet.6.1.212. [ CrossRef ] [ Google Scholar ]
  • Gross J., Torres V., Zerquera D. Financial aid and attainment among students in a state with changing demographics. Research in Higher Education. 2013; 54 (4):383–406. doi: 10.1007/s11162-012-9276-1. [ CrossRef ] [ Google Scholar ]
  • Guillén-Gámez F.D., García-Magariño I., Bravo J., Plaza I. Exploring the influence of facial verification software on student academic performance in online learning environments. International Journal of Engineering Education. 2015; 31 (6A):1622–1628. [ Google Scholar ]
  • Hainline L., Gaines M., Feather C.L., Padilla E., Terry E. Changing students, faculty, and institutions in the twenty-first century. Peer Review. 2010; 12 (3):7–10. [ Google Scholar ]
  • Hearn Moore P., Head J.D., Griffin R.B. Impeding students' efforts to cheat in online classes. Journal of Learning in Higher Education. 2017; 13 (1):9–23. [ Google Scholar ]
  • Herrington J., Reeves T.C., Oliver R. Immersive learning technologies: Realism and online authentic learning. Journal of Computing in Higher Education. 2007; 19 (1):80–99. [ Google Scholar ]
  • Hong Q.N., Fàbregues S., Bartlett G., Boardman F., Cargo M., Dagenais P., Gagnon M.P., Griffiths F., Nicolau B., O'Cathain A., Rousseau M.C., Vedel I., Pluye P. The mixed methods appraisal tool (MMAT) version 2018 for information professionals and researchers. Education for Information. 2018; 34 (4):285–291. doi: 10.3233/EFI-180221. [ CrossRef ] [ Google Scholar ]
  • Hylton K., Levy Y., Dringus L.P. Utilizing webcam-based proctoring to deter misconduct in online exams. Computers & Education. 2016; 92 :53–63. doi: 10.1016/j.compedu.2015.10.002. [ CrossRef ] [ Google Scholar ]
  • Jahnke I., Liebscher J. Three types of integrated course designs for using mobile technologies to support creativity in higher education. Computers & Education. 2020; 146 doi: 10.1016/j.compedu.2019.103782. Advanced Online Publication. [ CrossRef ] [ Google Scholar ]
  • Johnson C. 2009. History of New York state regents exams. Unpublished manuscript. [ Google Scholar ]
  • Jordan A. College student cheating: The role of motivation, perceived norms, attitudes, and knowledge of institutional policy. Ethics & Behavior. 2001; 11 (3):233–247. doi: 10.1207/s15327019eb1103_3. [ CrossRef ] [ Google Scholar ]
  • Karp A. Exams in algebra in Russia: Toward a history of high stakes testing. International Journal for the History of Mathematics Education. 2007; 2 (1):39–57. [ Google Scholar ]
  • Kemp D. Australian Government Printing Service; Canberra: 1999. Knowledge and innovation: A policy statement on research and research training. [ Google Scholar ]
  • Kolagari S., Modanloo M., Rahmati R., Sabzi Z., Ataee A.J. The effect of computer-based tests on nursing students' test anxiety: A quasi-experimental study. Acta Informatica Medica. 2018; 26 (2):115. doi: 10.5455/aim.2018.26.115-118. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kolski T., Weible J. Examining the relationship between student test anxiety and webcam based exam proctoring. Online Journal of Distance Learning Administration. 2018; 21 (3):1–15. [ Google Scholar ]
  • Kumar A. 2014 IEEE Frontiers in Education Conference (FIE) Proceedings. 2014. Test anxiety and online testing: A study; pp. 1–6. [ CrossRef ] [ Google Scholar ]
  • Li X., Chang K.M., Yuan Y., Hauptmann A. Proceedings of the 18th ACM conference on computer supported cooperative work & social computing. 2015. Massive open online proctor: Protecting the credibility of MOOCs certificates; pp. 1129–1137. [ CrossRef ] [ Google Scholar ]
  • Lincoln Y., Guba E. Sage Publications; California: 1985. Naturalistic inquiry. [ Google Scholar ]
  • Margaryan A., Littlejohn A., Vojt G. Are digital natives a myth or reality? University students' use of digital technologies. Computers & Education. 2011; 56 (2):429–440. doi: 10.1016/j.compedu.2010.09.004. [ CrossRef ] [ Google Scholar ]
  • Matthíasdóttir Á., Arnalds H. Proceedings of the 17th international conference on computer systems and technologies 2016. 2016. e-assessment: students' point of view; pp. 369–374. [ CrossRef ] [ Google Scholar ]
  • Mitra S., Gofman M. Proceedings of the twenty-second americas conference on information systems (28) 2016. Towards greater integrity in online exams. [ Google Scholar ]
  • Mohanna K., Patel A. 2015 fifth international conference on e-learning. 2015. Overview of open book-open web exam over blackboard under e-Learning system; pp. 396–402. [ CrossRef ] [ Google Scholar ]
  • Moher D., Liberati A., Tetzlaff J., Altman D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Annals of Internal Medicine. 2009; 151 (4) doi: 10.7326/0003-4819-151-4-200908180-00135. 264-249. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Mueller R.G., Colley L.M. An evaluation of the impact of end-of-course exams and ACT-QualityCore on US history instruction in a Kentucky high school. Journal of Social Studies Research. 2015; 39 (2):95–106. doi: 10.1016/j.jssr.2014.07.002. [ CrossRef ] [ Google Scholar ]
  • Nguyen H., Henderson A. Can the reading load Be engaging? Connecting the instrumental, critical and aesthetic in academic reading for student learning. Journal of University Teaching and Learning Practice. 2020; 17 (2):6. [ Google Scholar ]
  • NYSED . 2012. History of regent examinations: 1865 – 1987. Office of state assessment. http://www.p12.nysed.gov/assessment/hsgen/archive/rehistory.htm [ Google Scholar ]
  • Oz H., Ozturan T. Computer-based and paper-based testing: Does the test administration mode influence the reliability and validity of achievement tests? Journal of Language and Linguistic Studies. 2018; 14 (1):67. [ Google Scholar ]
  • Pagram J., Cooper M., Jin H., Campbell A. Tales from the exam room: Trialing an e-exam system for computer education and design and technology students. Education Sciences. 2018; 8 (4):188. doi: 10.3390/educsci8040188. [ CrossRef ] [ Google Scholar ]
  • Park S. Proceedings of the 21st world multi-conference on systemics, cybernetics and informatics. WMSCI 2017; 2017. Online exams as a formative learning tool in health science education; pp. 281–282. [ Google Scholar ]
  • Patel A.A., Amanullah M., Mohanna K., Afaq S. Third international conference on e-technologies and networks for development. ICeND2014; 2014. E-exams under e-learning system: Evaluation of onscreen distraction by first year medical students in relation to on-paper exams; pp. 116–126. [ CrossRef ] [ Google Scholar ]
  • Petrović J., Vitas D., Pale P. 2017 international symposium ELMAR. 2017. Experiences with supervised vs. unsupervised online knowledge assessments in formal education; pp. 255–258. [ CrossRef ] [ Google Scholar ]
  • Rainie L., Horrigan J. Pew Internet & American Life Project; Washington, DC: 2005. A decade of adoption: How the internet has woven itself into American life. [ Google Scholar ]
  • Reiko Y. University reform in the post-massification era in Japan: Analysis of government education policy for the 21st century. Higher Education Policy. 2001; 14 (4):277–291. doi: 10.1016/s0952-8733(01)00022-8. [ CrossRef ] [ Google Scholar ]
  • Rettinger D.A., Kramer Y. Situational and personal causes of student cheating. Research in Higher Education. 2009; 50 (3):293–313. doi: 10.1007/s11162-008-9116-5. [ CrossRef ] [ Google Scholar ]
  • Rios J.A., Liu O.L. Online proctored versus unproctored low-stakes internet test administration: Is there differential test-taking behavior and performance? American Journal of Distance Education. 2017; 31 (4):226–241. doi: 10.1080/08923647.2017.1258628. [ CrossRef ] [ Google Scholar ]
  • Rodchua S., Yiadom-Boakye G., Woolsey R. Student verification system for online assessments: Bolstering quality and integrity of distance learning. Journal of Industrial Technology. 2011; 27 (3) [ Google Scholar ]
  • Schmidt S.M., Ralph D.L., Buskirk B. Utilizing online exams: A case study. Journal of College Teaching & Learning (TLC) 2009; 6 (8) doi: 10.19030/tlc.v6i8.1108. [ CrossRef ] [ Google Scholar ]
  • Schwalb S.J., Sedlacek W.E. Have college students' attitudes toward older people changed. Journal of College Student Development. 1990; 31 (2):125–132. [ Google Scholar ]
  • Seow T., Soong S. Proceedings of the australasian society for computers in learning in tertiary education, Dunedin. 2014. Students' perceptions of BYOD open-book examinations in a large class: A pilot study; pp. 604–608. [ Google Scholar ]
  • Sheppard S. An informal history of how law schools evaluate students, with a predictable emphasis on law school final exams. UMKC Law Review. 1996; 65 :657. [ Google Scholar ]
  • Sindre G., Vegendla A. NIK: Norsk Informatikkonferanse (n.p.) 2015, November. E-exams and exam process improvement. [ Google Scholar ]
  • Steel A., Moses L.B., Laurens J., Brady C. Use of e-exams in high stakes law school examinations: Student and staff reactions. Legal Education Review. 2019; 29 (1):1. [ Google Scholar ]
  • Stowell J.R., Bennett D. Effects of online testing on student exam performance and test anxiety. Journal of Educational Computing Research. 2010; 42 (2):161–171. doi: 10.2190/ec.42.2.b. [ CrossRef ] [ Google Scholar ]
  • Sullivan D.P. An integrated approach to preempt cheating on asynchronous, objective, online assessments in graduate business classes. Online Learning. 2016; 20 (3):195–209. doi: 10.24059/olj.v20i3.650. [ CrossRef ] [ Google Scholar ]
  • Turner J.L., Dankoski M.E. Objective structured clinical exams: A critical review. Family Medicine. 2008; 40 (8):574–578. [ PubMed ] [ Google Scholar ]
  • US National Library of Medicine . 2019. Medical subject headings. https://www.nlm.nih.gov/mesh/meshhome.html [ Google Scholar ]
  • Vision Australia . 2014. Online and print inclusive design and legibility considerations. Vision Australia. https://www.visionaustralia.org/services/digital-access/blog/12-03-2014/online-and-print-inclusive-design-and-legibility-considerations [ Google Scholar ]
  • Warschauer M. The paradoxical future of digital learning. Learning Inquiry. 2007; 1 (1):41–49. doi: 10.1007/s11519-007-0001-5. [ CrossRef ] [ Google Scholar ]
  • Wibowo S., Grandhi S., Chugh R., Sawir E. A pilot study of an electronic exam system at an Australian University. Journal of Educational Technology Systems. 2016; 45 (1):5–33. doi: 10.1177/0047239516646746. [ CrossRef ] [ Google Scholar ]
  • Williams J.B., Wong A. The efficacy of final examinations: A comparative study of closed‐book, invigilated exams and open‐book, open‐web exams. British Journal of Educational Technology. 2009; 40 (2):227–236. doi: 10.1111/j.1467-8535.2008.00929.x. [ CrossRef ] [ Google Scholar ]
  • Wright T.A. Distinguished Scholar Invited Essay: Reflections on the role of character in business education and student leadership development. Journal of Leadership & Organizational Studies. 2015; 22 (3):253–264. doi: 10.1177/1548051815578950. [ CrossRef ] [ Google Scholar ]
  • Yong-Sheng Z., Xiu-Mei F., Ai-Qin B. 2015 7th international conference on information technology in medicine and education (ITME) 2015. The research and design of online examination system; pp. 687–691. [ CrossRef ] [ Google Scholar ]

Advertisement

Advertisement

Research Trends in Adaptive Online Learning: Systematic Literature Review (2011–2020)

  • Original research
  • Published: 22 July 2022
  • Volume 28, pages 431–448 (2023)

  • Selina Atwani Ochukut   ORCID: orcid.org/0000-0001-8708-5112 1 ,
  • Robert Obwocha Oboko 1 ,
  • Evans Miriti 1 &
  • Elizaphan Maina 2  

877 Accesses

3 Altmetric

With improvements in Information and Communication Technologies (ICTs), online learning has become a viable means of teaching and learning. Nonetheless, online learning still faces challenges, including the lack of support and the loneliness experienced by learners. Adaptive online learning is one means researchers propose to support learners and reduce that loneliness, and research in the area has been on the rise. Although several review studies have summarized research and development in this area, a comprehensive, up-to-date review is still lacking that examines adaptive online learning systems in terms of the learner characteristics being modelled, the domain model, the adaptation model, the techniques used to accomplish the tasks in those models, and the impact adaptive online learning has on learning. This study was initiated to fill that gap, using a systematic literature review methodology. A total of 59 articles were analyzed, drawn from six databases: ScienceDirect, IEEE Xplore, ACM, Emerald, Springer, and Taylor & Francis. The results indicate that learning style is the most used learner characteristic, although the use of learner knowledge is rising; machine learning algorithms are increasingly used in learner modelling; learning content is the most common target of adaptation; rules are the most utilized method in the adaptation model; and most adaptive online learning systems have not been evaluated in terms of learning. There is therefore a need to evaluate the adaptive online learning systems that have been developed, and for more studies that base adaptation on more than one learner characteristic and that use machine learning.
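
To make the recurring learner-model / domain-model / adaptation-model architecture concrete, the following is a minimal, hypothetical sketch of rule-based adaptation of learning content, the pattern the abstract identifies as most common. All class names, rules, and content items here are illustrative assumptions, not taken from any specific system in the review.

```python
# Hypothetical sketch only: a rule-based adaptation model of the kind the
# review finds most common. None of these names, rules, or content items
# come from a specific system in the reviewed literature.
from dataclasses import dataclass

@dataclass
class LearnerModel:
    learner_id: str
    learning_style: str   # e.g. "visual" or "verbal" (assumed style labels)
    knowledge_level: str  # e.g. "novice", "intermediate", "advanced"

# Domain model: each concept is available in several presentation variants.
DOMAIN_MODEL = {
    "recursion": {
        "visual": "recursion_animation.mp4",
        "verbal": "recursion_notes.pdf",
        "practice": "recursion_quiz.html",
    },
}

def adapt_content(learner: LearnerModel, concept: str) -> str:
    """Adaptation model: IF-THEN rules that map the learner model onto
    one content variant for the requested concept."""
    variants = DOMAIN_MODEL[concept]
    if learner.knowledge_level == "advanced":
        return variants["practice"]  # advanced learners skip exposition
    if learner.learning_style == "visual":
        return variants["visual"]
    return variants["verbal"]

print(adapt_content(LearnerModel("s01", "visual", "novice"), "recursion"))
# prints: recursion_animation.mp4
```

In a deployed system the learner model would be updated continuously from interaction data rather than set by hand; the IF-THEN structure is what the review refers to as "rules" in the adaptation model, and the dictionary here stands in for the domain model.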

Abbreviations

  • CNN: Convolutional Neural Networks
  • RL: Reinforcement Learning
  • HMM: Hidden Markov Model
  • ACM: Association for Computing Machinery
  • IEEE: Institute of Electrical and Electronics Engineers
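
Several of the abbreviations above (CNN, RL, HMM) point to the machine-learning side of learner modelling that the review finds on the rise. As an illustration only, the toy sketch below trains a standard decision-tree classifier to predict a learning-style label from interaction counts, loosely in the spirit of studies the review covers (e.g., Aissaoui et al., 2019, who combine unsupervised and supervised learning to predict learning styles); the feature names, data, and labels are fabricated for this example.

```python
# Toy illustration only, loosely in the spirit of ML-based learner modelling
# such as Aissaoui et al. (2019): a decision tree predicts a learning-style
# label from interaction counts. Features, data, and labels are fabricated.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Assumed features per learner: [video_views, forum_posts, pdf_reads, quiz_attempts]
X = np.array([
    [12, 1, 2, 3],   # watches mostly videos -> labelled "visual"
    [10, 0, 1, 4],
    [1, 8, 9, 2],    # reads and discusses  -> labelled "verbal"
    [2, 7, 11, 3],
])
y = np.array(["visual", "visual", "verbal", "verbal"])

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[11, 2, 1, 5]]))  # expected: ['visual']
```

References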

Aboagye, E., Yawson, J. A., & Appiah, K. N. (2020). COVID-19 and E-learning: The challenges of students in tertiary institutions. Social Education Research, 2 (1), 109–115.

Abyaa, A., Idrissi, M. K., & Bennani, S. (2017). An adult learner’s knowledge model based on ontologies and rule reasoning. ACM International Conference Proceeding Series . https://doi.org/10.1145/3175628.3175656

Abyaa, A., Khalidi Idrissi, M., & Bennani, S. (2019). Learner modelling: Systematic review of the literature from the last 5 years. Educational Technology Research and Development. https://doi.org/10.1007/s11423-018-09644-1

Afini Normadhi, N. B., Shuib, L., Md Nasir, H. N., Bimba, A., Idris, N., & Balakrishnan, V. (2019). Identification of personal traits in adaptive learning environment: Systematic literature review. Computers and Education, 130, 168–190. https://doi.org/10.1016/j.compedu.2018.11.005

Ahmadaliev, D., Xiaohui, C., Abduvohidov, M., Medatov, A., & Temirova, G. (2019). An adaptive activity sequencing instrument to enhance e-learning: An integrated application of overlay user model and mathematical programming on the Web. 2019 International Conference on Computer and Information Sciences, ICCIS 2019 , 1–4. https://doi.org/10.1109/ICCISci.2019.8716473

Aissaoui, O. E., El Madani, Y. E. A., Oughdir, L., & Allioui, Y. E. (2019). Combining supervised and unsupervised machine learning algorithms to predict the learners’ learning styles. Procedia Computer Science, 148 , 87–96. https://doi.org/10.1016/j.procs.2019.01.012

Al-Omari, M., Carter, J., & Chiclana, F. (2016). A hybrid approach for supporting adaptivity in e-learning environments. International Journal of Information and Learning Technology, 33 (5), 333–348. https://doi.org/10.1108/IJILT-04-2016-0014

Al-Rajhi, L., Salama, R., & Gamalel-Din, S. (2014). Personalized intelligent assessment model for measuring initial students abilities. ACM International Conference Proceeding Series . https://doi.org/10.1145/2643604.2643606

Al-tarabily, M. M., Abdel-kader, R. F., Azeem, G. A., & Marie, M. I. (2018). Optimizing dynamic multi - agent performance in E - learning environment. IEEE Access . https://doi.org/10.1109/ACCESS.2018.2847334

Al Duhayyim, M., & Newbury, P. (2018). Concept-based and fuzzy adaptive e-learning. ACM International Conference Proceeding Series . https://doi.org/10.1145/3234825.3234832

Allinjawi, A., Alsulami, A., Alsaedy, Y., & Hussein, K. (2018). Proposing an adaptive e-learning system using learners’ knowledge in simulating medical module. In Proceedings - 17th IEEE/ACIS International Conference on Computer and Information Science, ICIS 2018 , 45–50. https://doi.org/10.1109/ICIS.2018.8466514 .

Alshammari, M., Anane, R., & Hendley, R. J. (2015). The impact of learning style adaptivity in teaching computer security. In Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE , 2015 - June , 135–140. https://doi.org/10.1145/2729094.2742614 .

Alsobhi, A. Y., & Alyoubi, K. H. (2019). Adaptation algorithms for selecting personalised learning experience based on learning style and dyslexia type. Data Technologies and Applications, 53 (2), 189–200. https://doi.org/10.1108/DTA-10-2018-0092

Anantharaman, H., Mubarak, A., & Shobana, B. T. (2019). Modelling an Adaptive e-Learning System Using LSTM and Random Forest Classification. In 2018 IEEE Conference on E-Learning, e-Management and e-Services, IC3e 2018 , 29–34. https://doi.org/10.1109/IC3e.2018.8632646 .

Antony, J., Thottupuram, R., Thomas, S., & John, M. V. (2012). Semantic web based adaptive E-learning triggered through short message services. In ICCSE 2012 - Proceedings of 2012 7th International Conference on Computer Science and Education , Iccse , 1860–1863. https://doi.org/10.1109/ICCSE.2012.6295434 .

Atchariyachanvanich, K., Nalintippayawong, S., & Julavanich, T. (2019). Reverse SQL question generation algorithm in the dblearn adaptive e-learning system. IEEE Access, 7 , 54993–55004. https://doi.org/10.1109/ACCESS.2019.2912522

Awais, M., Habiba, U., Khalid, H., Shoaib, M., & Arshad, S. (2019). An adaptive feedback system to improve student performance based on collaborative behavior. IEEE Access, 7 , 107171–107178. https://doi.org/10.1109/ACCESS.2019.2931565

Azzi, I., Jeghal, A., Radouane, A., Yahyaouy, A., & Tairi, H. (2020). A robust classification to predict learning styles in adaptive E-learning systems. Education and Information Technologies, 25 (1), 437–448. https://doi.org/10.1007/s10639-019-09956-6

Bauer, M., Bräuer, C., Schuldt, J., Niemann, M., & Krömker, H. (2019). Application of wearable technology for the acquisition of learning motivation in an adaptive e-learning platform. Advances in Intelligent Systems and Computing, 795 , 29–40. https://doi.org/10.1007/978-3-319-94619-1_4

Beldagli, B., & Adiguzel, T. (2010). Illustrating an ideal adaptive e-learning: A conceptual framework. Procedia - Social and Behavioral Sciences, 2 (2), 5755–5761. https://doi.org/10.1016/j.sbspro.2010.03.939

Birjali, A. M., Beni-hssane, A., & Erritali, M. (2018). A novel adaptive e-learning model based on Big Data by using competence-based knowledge and social learner activities. Applied Soft Computing Journal . https://doi.org/10.1016/j.asoc.2018.04.030

Boussakssou, M., Hssina, B., & Erittali, M. (2020). Towards an adaptive E-learning system based on Q-learning algorithm. Procedia Computer Science, 170 , 1198–1203. https://doi.org/10.1016/j.procs.2020.03.028

Bradac, V., & Walek, B. (2017). A comprehensive adaptive system for e-learning of foreign languages. Expert Systems with Applications, 90 , 414–426. https://doi.org/10.1016/j.eswa.2017.08.019

Brusilovsky, P. (2007). Adaptive navigation support. In P. Brusilovsky, A. Kobsa, & W. Nejdl (Eds.), The adaptive web: Methods and strategies of web personalization (pp. 263–290). Springer.

Brusilovsky, P., & Millán, E. (2007). User models for adaptive hypermedia and adaptive educational systems . Springer.

Chrysafiadi, K., & Virvou, M. (2015). Fuzzy logic for adaptive instruction in an e-learning environment for computer programming. IEEE Transactions on Fuzzy Systems, 23 (1), 164–177. https://doi.org/10.1109/TFUZZ.2014.2310242

Ciloglugil, B., & Inceoglu, M. M. (2018). A learner ontology based on learning style models for adaptive E-learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): Vol. 10961 LNCS . Springer International Publishing. https://doi.org/10.1007/978-3-319-95165-2_14 .

Colchester, K., Hagras, H., & Alghazzawi, D. (2017). A survey of artificial intelligence techniques employed for adaptive educational systems within E-learning platforms. Journal of Artificial Intelligence and Soft Computing Research, 7 (1), 47–64. https://doi.org/10.1515/jaiscr-2017-0004

Deeb, B., Hassan, Z., & Beseiso, M. (2014). An adaptive HMM based approach for improving e-Learning methods. In 2014 World Congress on Computer Applications and Information Systems, WCCAIS 2014 . https://doi.org/10.1109/WCCAIS.2014.6916638 .

Dolenc, K., & Aberšek, B. (2015). TECH8 intelligent and adaptive e-learning system: Integration into Technology and Science classrooms in lower secondary schools. Computers and Education, 82 , 354–365. https://doi.org/10.1016/j.compedu.2014.12.010

Drissi, S., & Amirat, A. (2016). An adaptive e-learning system based on student’s learning styles: An empirical study. International Journal of Distance Education Technologies, 14 (3), 34–51. https://doi.org/10.4018/IJDET.2016070103

El-Sabagh, H. A. (2021). Adaptive e-learning environment based on learning styles and its impact on development students’ engagement. International Journal of Educational Technology in Higher Education . https://doi.org/10.1186/s41239-021-00289-4

El Fazazi, H., Elgarej, M., Qbadou, M., & Mansouri, K. (2021). Design of an adaptive e-learning system based on multi-agent approach and reinforcement learning. Engineering, Technology & Applied Science Research, 11 (1), 6637–6644.

El Guabassi, I., Bousalem, Z., Al Achhab, M., Jellouli, I., & El Mohajir, B. E. (2018). Personalized adaptive content system for context-Aware ubiquitous learning. Procedia Computer Science, 127 , 444–453. https://doi.org/10.1016/j.procs.2018.01.142

Ennouamani, S., & Mahani, Z. (2017). An overview of adaptive e-learning systems. In 2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS) , Icicis , 342–347. https://doi.org/10.1109/INTELCIS.2017.8260060 .

Fatahi, S. (2019). An experimental study on an adaptive e-learning environment based on learner’s personality and emotion. Education and Information Technologies, 24 (4), 2225–2241. https://doi.org/10.1007/s10639-019-09868-5

El Fazazi, H., Samadi, A., Qbadou, M., Mansouri, K., & Elgarej, M. (2019). A learning style identification approach in adaptive e-learning system. Springer.

GopalaKrishnan, T., & Sengottuvelan, P. (2016). A hybrid PSO with Naïve Bayes classifier for disengagement detection in online learning. Program, 50 (2), 215–224. https://doi.org/10.1108/PROG-07-2015-0047

Hassan, M. A., Habiba, U., Majeed, F., & Shoaib, M. (2021). Adaptive gamification in e-learning based on students’ learning styles. Interactive Learning Environments, 29 (4), 545–565. https://doi.org/10.1080/10494820.2019.1588745

Hnatchuk, Y., Hnatchuk, A., Pityn, M., Hlukhov, I., & Cherednichenko, O. (2021). Intelligent decision support agent based on fuzzy logic in athletes’ adaptive e-learning systems. CEUR Workshop Proceedings, 2853 , 258–265.

Hssina, B., & Erritali, M. (2019). A personalized pedagogical objectives based on a genetic algorithm in an adaptive learning system. Procedia Computer Science, 151 (2018), 1152–1157. https://doi.org/10.1016/j.procs.2019.04.164

Hu, P. C., & Kuo, P. C. (2017). Adaptive learning system for E-learning based on EEG brain signals. In 2017 IEEE 6th Global Conference on Consumer Electronics, GCCE 2017 , 2017 - Janua (Gcce), 1–2. https://doi.org/10.1109/GCCE.2017.8229382 .

Ifenthaler, D., & Yau, J. Y. K. (2020). Utilising learning analytics to support study success in higher education: A systematic review. Educational Technology Research and Development, 68 (4), 1961–1990. https://doi.org/10.1007/s11423-020-09788-z

Kabudi, T., Pappas, I., & Olsen, D. H. (2021). AI-enabled adaptive learning systems: A systematic mapping of the literature. Computers and Education: Artificial Intelligence, 2 , 100017. https://doi.org/10.1016/j.caeai.2021.100017

Kibuku, R. N., Ochieng, D. O., & Wausi, A. N. (2020). E-learning challenges faced by universities in Kenya: A literature review. Electronic Journal of E-Learning, 18 (2), 150–161.

Knutov, E., De Bra, P., & Pechenizkiy, M. (2009). AH 12 years later: A comprehensive survey of adaptive hypermedia methods and techniques. New Review of Hypermedia and Multimedia . https://doi.org/10.1080/13614560902801608

Kolekar, S. V., Pai, R. M., & ManoharaPai, M. M. (2018). Adaptive user interface for moodle based e-learning system using learning styles. Procedia Computer Science, 135 , 606–615. https://doi.org/10.1016/j.procs.2018.08.226

Kostadinova, H., Totkov, G., & Indzhov, H. (2012). Adaptive e-learning system based on accumulative digital activities in Revised Bloom’s Taxonomy. ACM International Conference Proceeding Series . https://doi.org/10.1145/2383276.2383330

Kularbphettong, K., Kedsiribut, P., & Roonrakwit, P. (2015). Developing an adaptive web-based intelligent tutoring system using mastery learning technique. Procedia - Social and Behavioral Sciences, 191 , 686–691. https://doi.org/10.1016/j.sbspro.2015.04.619

Lagman, A. C., & Mansul, D. M. (2017). Extracting personalized learning path in adaptive elearning environment using rule based assessment. ACM International Conference Proceeding Series . https://doi.org/10.1145/3176653.3176679

Lancheros-Cuesta, D., Carrillo-Ramos, A., & Pavlich-Mariscal, J. A. (2015). Kamachiy’Mayistru: Adaptive module to support teaching to people with learning difficulties. International Journal of Web Information Systems, 11 (4), 510–526. https://doi.org/10.1108/IJWIS-04-2015-0010

Lancheros-Cuesta, D. J., Carrillo-Ramos, A., & Pavlich-Mariscal, J. A. (2014). Content adaptation for students with learning difficulties: Design and case study. International Journal of Web Information Systems, 10 (2), 106–130. https://doi.org/10.1108/IJWIS-12-2013-0040

Landsberg, C. R., Astwood, R. S., Van Buskirk, W. L., Townsend, L. N., Steinhauser, N. B., & Mercado, A. D. (2012). Review of adaptive training system techniques. Military Psychology, 24 (2), 96–113. https://doi.org/10.1080/08995605.2012.672903

Liu, M., Kang, J., Zou, W., Lee, H., Pan, Z., & Corliss, S. (2017). Using data to understand how to better design adaptive learning. Technology, Knowledge and Learning, 22 (3), 271–298. https://doi.org/10.1007/s10758-017-9326-z

Martin, F., Chen, Y., Moore, R. L., & Westine, C. D. (2020). Systematic review of adaptive learning research designs, context, strategies, and technologies from 2009 to 2018. Educational Technology Research and Development, 68 (4), 1903–1929. https://doi.org/10.1007/s11423-020-09793-2

Mavroudi, A., Hadzilacos, T., & Angeli, C. (2016). An adaptive e-learning strategy to overcome the inherent difficulties of the learning content. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) , 9891 LNCS , 440–445. https://doi.org/10.1007/978-3-319-45153-4_40 .

Megahed, M., & Mohammed, A. (2020). Modeling adaptive E-Learning environment using facial expressions and fuzzy logic. Expert Systems with Applications, 157 , 113460. https://doi.org/10.1016/j.eswa.2020.113460

Micah, E., Colecraft, E., Lartey, A., Aryeetey, R., & Marquis, G. (2018). A model of adaptive e-learning in an ODL environment. Mehran University Research Journal of Food, Agriculture, Nutrition and Development, 12 (1), 5789–5801.

Montazer, G. A., & Rezaei, M. S. (2013). E-learners grouping in uncertain environment using fuzzy ART-Snap-Drift neural network. In 4th International Conference on E-Learning and e-Teaching, ICELET 2013 , 112–116. https://doi.org/10.1109/ICELET.2013.6681656 .

Nurjanah, D. (2018). LifeOn, a ubiquitous lifelong learner model ontology supporting adaptive learning. IEEE Global Engineering Education Conference, EDUCON, 2018-April (pp. 866–871). https://doi.org/10.1109/EDUCON.2018.8363321

Okoli, C. (2015). A guide to conducting a standalone systematic literature review. Communications of the Association for Information Systems, 37 (1), 879–910.

Rani, M., Vyas, R., & Vyas, O. P. (2017). OPAESFH: Ontology-based personalized adaptive e-learning system using FPN and HMM. In IEEE Region 10 Annual International Conference, Proceedings/TENCON , 2017 - Decem , 2441–2446. https://doi.org/10.1109/TENCON.2017.8228271 .

Samia, D., & Abdelkrim, A. (2012). An adaptive educational hypermedia system integrating learning styles: Model and experiment. 2012 International Conference on Education and E-Learning Innovations, ICEELI 2012. https://doi.org/10.1109/ICEELI.2012.6360641

Sethi, M. A., Lomte, S. S., & Shinde, U. B. (2016). Adaptive eLearning system for visual and verbal learners (pp. 2029–2033).

Sidi-Ali, M. A. (2019). Adaptive E-learning: Motivating learners whilst adapting feedback to cultural background. ACM UMAP 2019 - Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization , pp 341–344. https://doi.org/10.1145/3320435.3323464 .

Suryani, M., Santoso, H., & Hasibuan, S. (2014). Learning content personalization based on triple-factor learning type approach in e-learning (pp. 494–501).

Tashtoush, Y. M., Al-Soud, M., Fraihat, M., Al-Sarayrah, W., & Alsmirat, M. A. (2017). Adaptive e-learning web-based English tutor using data mining techniques and Jackson’s learning styles. In 2017 8th International Conference on Information and Communication Systems, ICICS 2017 , pp 86–91. https://doi.org/10.1109/IACS.2017.7921951 .

Trikha, N., & Godbole, A. (2016). Adaptive e-learning system using hybrid approach. In Proceedings of the International Conference on Inventive Computation Technologies, ICICT 2016 , 2 . https://doi.org/10.1109/INVENTIVE.2016.7824844 .

Troussas, C., Chrysafiadi, K., & Virvou, M. (2019). An intelligent adaptive fuzzy-based inference system for computer-assisted language learning. Expert Systems with Applications, 127 , 85–96. https://doi.org/10.1016/j.eswa.2019.03.003

Truong, H. M. (2016). Integrating learning styles and adaptive e-learning system: Current developments, problems and opportunities. Computers in Human Behavior, 55 , 1185–1193. https://doi.org/10.1016/j.chb.2015.02.014

Ueda, H., Furukawa, M., Yamaji, K., & Nakamura, M. (2018). SCORMAdaptiveQuiz: Implementation of adaptive e-learning for moodle. Procedia Computer Science, 126 , 2261–2270. https://doi.org/10.1016/j.procS.2018.07.223

Vandewaetere, M., Desmet, P., & Clarebout, G. (2011). The contribution of learner characteristics in the development of computer-based adaptive learning environments. Computers in Human Behavior, 27 (1), 118–130. https://doi.org/10.1016/j.chb.2010.07.038

Wan, S., & Niu, Z. (2020). A hybrid e-learning recommendation approach based on learners’ influence propagation. IEEE Transactions on Knowledge and Data Engineering, 32 (5), 827–840. https://doi.org/10.1109/TKDE.2019.2895033

Wu, C. H., Chen, T. C., Yan, Y. H., & Lee, C. F. (2017). Developing an adaptive e-learning system for learning excel. In Proceedings of the 2017 IEEE International Conference on Applied System Innovation: Applied System Innovation for Modern Technology, ICASI 2017 , 1973–1975. https://doi.org/10.1109/ICASI.2017.7988583 .

Wu, C. H., Chen, Y. S., & Chen, T. C. (2018). An Adaptive e-learning system for enhancing learning performance: Based on dynamic scaffolding theory. Eurasia Journal of Mathematics, Science and Technology Education, 14 (3), 903–913.

Xie, H., Chu, H. C., Hwang, G. J., & Wang, C. C. (2019). Trends and development in technology-enhanced adaptive/personalized learning: A systematic review of journal publications from 2007 to 2017. Computers and Education, 140 (June), 103599. https://doi.org/10.1016/j.compedu.2019.103599

Yasir, M., & Sharif, S. (2011). An approach to adaptive e-learning hypermedia system based on learning styles ( AEHS-LS ): Implementation and evaluation. International Journal of Library and Information Science, 3 (January), 15–28.

Yel, M. B. (2018). An adaptive e-learning model based on Myers- Briggs type indicator ( MBTI ). Third International Conference on Informatics and Computing (ICIC), 2018 , 1–4.

Acknowledgements

The authors wish to acknowledge the support given by the University of Nairobi’s School of Computing and Informatics faculty.

This research was funded by the Kenya National Research Fund 2016/2017 grant award, awarded to Kenyatta University, University of Nairobi, and The Cooperative University of Kenya in the multidisciplinary-multiinstitutional category.

Author information

Authors and Affiliations

University of Nairobi, Nairobi, Kenya

Selina Atwani Ochukut, Robert Obwocha Oboko & Evans Miriti

Kenyatta University, Nairobi, Kenya

Elizaphan Maina

Contributions

All the authors listed reviewed the manuscript at least once.

Corresponding author

Correspondence to Selina Atwani Ochukut.

Ethics declarations

Conflict of interest

The authors declare that there are no competing interests.

Ethical Approval

No ethical approval was required for this study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Ochukut, S.A., Oboko, R.O., Miriti, E. et al. Research Trends in Adaptive Online Learning: Systematic Literature Review (2011–2020). Tech Know Learn 28, 431–448 (2023). https://doi.org/10.1007/s10758-022-09615-9

Accepted: 30 June 2022

Published: 22 July 2022

Issue Date: June 2023

DOI: https://doi.org/10.1007/s10758-022-09615-9

Keywords

  • Adaptive online learning
  • Personalized learning
  • Learner modelling